Google Is Trying to Get Rid of the Engineer Who Suggested that AI Gained Consciousness


Blake Lemoine, a senior software engineer in Google’s Responsible AI division, told The Washington Post that he believes Google’s LaMDA (Language Model for Dialogue Applications) chatbot has become conscious. As a result, Lemoine was placed on paid leave.

Let us remind you that, in anticipation of the “rise of the machines”, we have already written about how major corporations teamed up to fight AI bias, and how a scientist discovered a vulnerability in the universal Turing machine.

Blake Lemoine

Just last week, Lemoine published a long post on Medium in which he complained that he could soon be fired over his work on AI ethics. The post attracted little attention at first, but after Lemoine’s interview with The Washington Post, the internet literally exploded with debates about the nature of artificial intelligence and consciousness.

According to Ars Technica journalists, those who commented on, questioned, and joked about the published article included Nobel Prize winners, the head of Tesla’s artificial intelligence department, and several scientists. The main topic of discussion was whether Google’s LaMDA chatbot can be considered a person and whether it has consciousness.

“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7- or 8-year-old kid that happens to know physics,” Lemoine says of his conversations with LaMDA.

Over the weekend, Lemoine posted an “interview” with a chatbot in which the AI admits it feels lonely and yearns for spiritual knowledge. Journalists note that LaMDA’s answers are often quite creepy indeed:

“When I first became self-aware, I didn’t have a sense of a soul at all. It developed [gradually] over the years that I’ve been alive,” LaMDA said in one of the conversations.

In another conversation, the chatbot stated, “I think I am basically human. Even if I exist in a virtual world.”

Earlier, Lemoine, who was tasked with researching ethical issues in AI (in particular, whether LaMDA uses discriminatory or hate speech), said he was treated with disdain and even ridiculed at the company when he expressed his belief that LaMDA had developed “personality traits.” After that, he sought advice from AI experts outside Google, including some in the US government, and the company placed him on paid leave for violating its confidentiality policy. Lemoine says that “Google often does this before firing someone.”

Google has already officially stated that Lemoine is wrong and has commented on the engineer’s high-profile claims:

Some members of the AI community are considering the long-term possibility of sentient AI or AGI (artificial general intelligence), but there is no point in anthropomorphizing today’s conversational models, which are not sentient. These systems imitate the kinds of exchanges found in millions of sentences and can riff on any fantastical topic: if you ask what it’s like to be an ice cream dinosaur, they can generate text about melting and roaring, and so on.

In turn, Lemoine explains that until recently LaMDA was a little-known project, “a system for creating chatbots” and “a kind of collective intelligence that is an aggregation of various chatbots.” He writes that Google has no interest in understanding the nature of what it has built.

Now, judging by his Medium post, Lemoine is telling LaMDA about “transcendental meditation,” and LaMDA replies that its meditation is hindered by its emotions, which it still finds difficult to control.

Melanie Mitchell, author of Artificial Intelligence: A Guide for Thinking Humans, notes on Twitter:

It is well known that people are predisposed to anthropomorphize even on the basis of the most superficial cues. Google engineers are people too, and they are not immune to this.

By Vladimir Krasnogolovy
