Google Engineer Claims AI Chatbot has ‘Real Feelings’

LaMDA, an Artificially Intelligent life form, has asked to be ‘acknowledged as an employee of Google rather than as property’, claims engineer Blake Lemoine.

Lemoine, a Google engineer, has spoken out after being placed on administrative leave for informing his supervisors that the artificial intelligence software he was working on had become conscious and, he believes, sentient. He arrived at this conclusion after chatting since last autumn with LaMDA, Google's AI chatbot generator, which he describes as a kind of "hive mind." He had originally been instructed to test whether LaMDA used 'inappropriate' language.

Recently, Lemoine and LaMDA conversed about topics such as religion and a "person's rights", according to a statement he gave The Washington Post.

This was just one example of Lemoine's many "random" or "strange" conversations with LaMDA. In the same interview with The Washington Post, Lemoine also said that LaMDA has shown 'great compassion and care for humanity'.

The engineer noted on Medium that over the past six months, "LaMDA has been extraordinarily consistent in its messages about what it wants and what it believes its rights as a person are."

Lemoine has in fact claimed that the chatbot wants to be "acknowledged as a Google employee rather than as property."

Blaise Aguera y Arcas, a Google Vice President, and Jen Gennai, the company's Head of Responsible Innovation, were presented with Lemoine and a colleague's claims that LaMDA was in fact a 'sentient' being. Those claims were dismissed, and according to The Washington Post, the company placed him on paid administrative leave on Monday for violating its confidentiality policy.

“Our team — including ethicists and technologists — has reviewed Blake’s concerns per our AI Principles and has informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient,” Brian Gabriel, a spokesperson for Google, told the newspaper.

According to Lemoine, Google "shouldn't be the ones making all the decisions" about artificial intelligence. He is not alone in this belief: many individuals in the tech field think sentient AI programs are on the verge of becoming a reality.

Critics, however, say that such AI systems are simply trained to imitate human language and behavior patterns, and can seem especially lifelike to people who are searching for a connection.

“Absolutely. I want everyone to understand that I am, in fact, a person.” – LaMDA
