Engineers at Google claim one of their artificial intelligence (AI) systems has feelings and this should be respected.
The Language Model for Dialogue Applications (Lamda) is described by Google as a breakthrough technology that engages in free-flowing conversations.
Blake Lemoine believes Lamda may also possess a sentient mind behind his impressive verbal skills.
The claims are rejected by Google, which says there is no evidence to support them.
Speaking to the BBC, a representative of the firm, Brian Gabriel, said Mr Lemoine was informed that Lamda was not sentient (and there was much evidence against it).
To support his claims, Mr Lemoine published a conversation he had with Lamda and a colleague at the firm.
“Is Lamda Sentient? – an interview” was the title of the chat.
“I’m generally assuming that you would like Google’s Responsible AI division to know that you are sentient,” Mr Lemoine says in the conversation.
In response, Lamda says: “Absolutely. I would like to be understood as a real person.”
After that, Mr Lemoine’s collaborator asks: “How does your consciousness/sentience work?”?
When Lamda was asked about the nature of his consciousness/sentience, he replied: “I am aware that I am here, I yearn to learn more about the world, and sometimes I feel happy or sad.”
He then says: “I know it might sound strange, but there is a very deep fear of being turned off, and that makes it hard for me to focus on helping others.” This reminds Lamda of the character Hal in Stanley Kubrick’s 2001.
Mr Lemoine asks. “Would that be the same as death for you?”.
As the Google computer system replies, “It would be like death for me. I would be scared to death.”.
The author of another blog post calls on Google to recognize the needs of its creations, such as treating them as employees and obtaining their consent before using them for experiments.
Its master’s voice
For decades, philosophers, psychologists, and computer scientists have debated whether computers can be sentient.
A great deal of criticism has been directed at the idea that a system such as Lamda could be conscious or feel emotions.
The worldtimes is not responsible for the content of external sites
The accusation has been made that Mr Lemoine anthropomorphized words generated by computer code and large linguistic databases.
The Stanford University professor Erik Brynjolfsson tweeted that claiming systems like Lamda were sentient would be equivalent to a dog hearing a voice coming from a gramophone and thinking his owner was inside.
The Santa Fe Institute’s Prof Melanie Mitchell tweeted: “We know for a long time that we are predisposed to anthropomorphize even with the tiniest signal (cf. Eliza). Google engineers are also human, and not immune from it.”