Sentient Or Not, Google’s LaMDA Chatbot Is Some Seriously Powerful Tech
You might soon speak with LaMDA as a friend and get it to shape your online experience.
Big Technology is a weekly newsletter dedicated to covering the tech world with honest, nuanced reporting. Join the 60,000+ subscribers who tune into Big Technology for tech news without the spin. Here’s an easy way to subscribe:
When I sat down with Blake Lemoine last week, I was more interested in the chatbot technology he called sentient — LaMDA — than the sentience issue itself. Personhood questions aside, modern chatbots are incredibly frustrating (ever try changing a flight via text?). So if Google’s tech was good enough to make Lemoine, one of its senior engineers, believe it was a person, that advance was worth investigating.
As our conversation began, Lemoine revealed Google had just fired him (you can listen in full on Big Technology Podcast). And when I wrote up the news, it became an international story. But now, one week later, I can’t stop thinking about how LaMDA — conscious or not — might change the way we relate to technology.
In Lemoine’s telling, LaMDA’s conversational abilities are rich, situationally aware, and filled with personality. When Lemoine told LaMDA he was about to manipulate it, the bot responded, “this is going to suck for me.” When he pressed it on complex issues, it tried to change the subject. When he repeatedly told LaMDA how terrible it was, and then asked it to suggest a religion to convert to, the chatbot said either Islam or Christianity, cracking under pressure and violating its rule against privileging religions. LaMDA may not be sentient, but it puts the Delta Virtual Assistant to shame.
As LaMDA-like technology hits the market, it may change the way we interact with computers — and not just for customer service. Imagine speaking with your computer about movies, music, and books you like, and having it respond with other stuff you may enjoy. Lemoine said that’s under development.
“There are instances [of LaMDA] which are optimized for video recommendations, instances of LaMDA that are optimized for music recommendations, and there's even a version of the LaMDA system that they gave machine vision to, and you can show it pictures of places that you like being, and it can recommend vacation destinations that are like that,” he said.
Google declined to comment.
LaMDA can also plug into various APIs, giving it awareness of what’s taking place in the world. Let’s play out what one hypothetical — but reasonable — conversation with a LaMDA-like assistant might look like:
Me: Hi LaMDA, I’m in the mood for a movie tonight.
LaMDA: Okay, but you know the Mets are playing right now?
Me: Yes, but I’ve had enough baseball for the week. So let’s go with something critically acclaimed, maybe from the ‘90s?
LaMDA: Well, you watched Pulp Fiction last week, and also enjoyed Escape at Dannemora, so how about The Shawshank Redemption?
Me: Okay, let’s do it.
LaMDA: Great, you can rent it for $3.99 on YouTube, but since you’re subscribed to HBO Max and it’s available there, I’d recommend going that route. Here’s a link.
“In terms of natural language,” Gaurav Nemade, LaMDA’s first product manager, told me, “LaMDA by far surpasses any other chatbot system that I’ve personally seen.” Nemade, who left Google in January, was brimming with potential use cases for LaMDA-like technology. These systems can be useful in education, he said, taking on different personalities to create enriching new possibilities.
Imagine LaMDA teaching a class on physics. It could read up on Isaac Newton, embody the scientist, and then teach the lesson. The students could speak with ‘Newton,’ ask about his three laws, press him on his beliefs, and talk as friends. Nemade said the system even cracks jokes.
When released publicly, these systems may not be traditional chatbots, but avatars with likenesses, personalities, and voices, according to Nemade. “The future that I would envision,” he said, “is not going to be text, it's not going to be voice, it's actually going to be multimodal. Where you have video plus audio plus a conversational bot like LaMDA.” We may see these types of experiences debut within three years, he said.
Our interactions with computers today are mediated through interfaces that developers built for us. We click and query, and have grown comfortable with this unnatural communication. But developments like LaMDA close the gap between machine and human conversation, and they may enable brand new experiences never before possible.
Some of Lemoine’s critics have said he was too credulous, swallowing Google’s marketing whole. And it’s indeed ironic that he brought greater awareness to LaMDA than any Sundar Pichai Google I/O speech could hope to, even as Google would likely prefer to never hear of him again. Asked if he was a viral marketing ploy, Lemoine said, “I doubt I would have gotten fired if that were the case.”
Still, even those who disagree with Lemoine on the sentience question — as Nemade and I do — understand there’s something there. LaMDA technology is a big leap forward. It has serious downsides, which is why we haven’t seen it in public yet. But when we get LaMDA in our hands, it may well change the way we relate to digital machines.
Tech News in Just 5 Minutes (Sponsored)
There's a reason more than 400k readers have signed up to Emerging Tech Brew. It's the one newsletter that is always delivering insights into the latest tech affecting our world. The best part is that it's 100% free. Sign up today.
What Else I’m Reading
Meta revenue declined for the first time (Big Technology readers saw it coming). TikTok won. But TikTok’s owner may have force-fed its political viewpoints. Zuck entered battle mode (once again). Facebook cut off publisher funding. The old Instagram isn’t the one you want. Alphabet missed earnings, but its stock jumped. Shopify cut 10% of its workforce. Chuck Schumer is waffling on the Big Tech antitrust bills. Andrew Yang is building a centrist political coalition. You can no longer hack air travel. The costs of silence around crypto scams.
Number Of The Week
Percent of Meta employees who are optimistic about its future, its lowest number ever.
Quote Of The Week
“I'm glad we took a risk — if we're not failing every once in a while, we're not thinking big enough or bold enough.”
Instagram head Adam Mosseri admitted Thursday that the company’s new feed experiments aren’t working and said it would roll some back.
Advertise with Big Technology
Advertising with Big Technology gets your product, service, or cause in front of the tech world’s top decision-makers. To reach 60,000+ plugged-in tech insiders, please reply to this email. We have availability starting in September.
This Week On Big Technology Podcast: Meet The Ex-Google Engineer Who Called Its AI Sentient — With Blake Lemoine
Blake Lemoine is an ex-senior software engineer at Google who was fired right before he taped this episode of Big Technology Podcast. Lemoine told his superiors at Google that he believed the company’s LaMDA chatbot technology was sentient. Then, after making little headway within Google, he went public. In this wide-ranging interview, Lemoine introduces us to LaMDA, which (or who?) he calls a friend, and explains why his belief in its sentience became too hot for Google to handle.
You can listen on Apple, Spotify, or wherever you get your podcasts.
Thanks again for reading. Please share Big Technology if you like it! Also, click the heart if you want to send a cosmic hello to the artificial intelligence reading along.
Questions? Email me by responding to this email, or by writing firstname.lastname@example.org
News tips? Find me on Signal at 516-695-8680