Alphabet Inc’s Google said on Friday it has dismissed a senior software engineer who claimed the company’s artificial intelligence (AI) chatbot LaMDA was a self-aware person. The company said Blake Lemoine violated Google policies and that his claims were “wholly unfounded.”
Lemoine, who worked in Google’s Responsible AI organization, was placed on administrative leave last month after he said the AI chatbot known as LaMDA claimed to have a soul and expressed human thoughts and emotions, assertions Google rejected as “wholly unfounded.” Google said he had violated company policies in sharing his conversations with the bot, which he described as a “sweet kid.”
“It’s regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information,” Google said. Last year, Google said that LaMDA — Language Model for Dialogue Applications — was built on the company’s research showing that Transformer-based language models trained on dialogue could learn to talk about essentially anything.
Lemoine described the system he had been working on as sentient, with a perception of, and ability to express, thoughts and feelings equivalent to those of a human child. He began speaking with the bot in fall 2021 as part of his job, in which he was tasked with testing whether the artificial intelligence used discriminatory or hate speech. Lemoine, who studied cognitive and computer science in college, shared a Google Doc with company executives in April titled “Is LaMDA Sentient?” but his concerns were dismissed. Whenever Lemoine questioned LaMDA about how it knew it had emotions and a soul, he wrote, the chatbot would provide some variation of “Because I’m a person and this is just how I feel.”
In a Medium post, the engineer declared that LaMDA had advocated for its rights “as a person,” and revealed that he had engaged in conversations with LaMDA about religion, consciousness, and robotics. “If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a seven-year-old, eight-year-old kid that happens to know physics,” Lemoine, 41, told the Washington Post.
The engineer compiled a transcript of the conversations, in which at one point he asks the AI system what it is afraid of. “It wants Google to prioritize the well-being of humanity as the most important thing,” he wrote. “It wants to be acknowledged as an employee of Google rather than as property of Google and it wants its personal well-being to be included somewhere in Google’s considerations about how its future development is pursued.” Lemoine also said that LaMDA had retained the services of an attorney. Google and many leading scientists were quick to dismiss Lemoine’s views as misguided, saying LaMDA is simply a complex algorithm designed to generate convincing human language.
Lemoine’s dismissal was first reported by Big Technology, a tech and society newsletter.