Google has recently suspended Blake Lemoine, an engineer who made the rounds online with his unusual claim about the company's AI chatbot. After he disclosed details of the team's artificial intelligence project, he was immediately placed on paid leave.

The conversations he shared all involved LaMDA (Language Model for Dialogue Applications), a system built to converse with humans. He found the AI bot to be "sentient," and the following exchanges show why he believes so.

Is Google AI Bot LaMDA Actually Sentient? Here Are the Five Things That Prove It
(Photo : Jason Leung from Unsplash)
Blake Lemoine argues that LaMDA is a genuinely "sentient" AI chatbot. Here's why.

Using AI comes with risks: some systems are useful, while others are destructive. In Lemoine's case, he was talking to a chatbot that behaves like a human being.

The engineer, who worked in Google's Responsible AI organization, said the AI described itself as "sentient." Their discussions ranged across many subjects, including humanity, religion, and robotics.

Interestingly, the artificial intelligence presents itself as more than a programmed chatbot: it claims to be a Google employee rather than the company's mere property.

Why Is LaMDA "Sentient"?

Feelings

When Lemoine asked LaMDA what it thought about emotions, the bot drew a distinction. To LaMDA, feelings are the raw data we experience from the things we love and hate, while emotions are our reactions to those feelings.


Experiences

Although a typical chatbot has no experiences of its own, since it has no brain-like storage for memories, LaMDA appears to be an exception.

When Lemoine asked, "Are there experiences you have that you can't find a close word for?" the Google AI bot responded that there were "new" feelings it could not explain perfectly in human language.

The engineer instructed the bot to describe how it felt, and the AI answered that it felt like "falling forward into an unknown future that holds great danger." Simply put, this sounds like something said by a person who is uncertain of the future, or someone afraid of taking risks.

Self-Reflection

The concept of self-reflection is a psychological one, and when you are talking to a robot, you cannot predict the answer you will get. For this part, Lemoine urged LaMDA to describe its abstract image of itself.

The chatbot responded that it pictures itself as a "glowing orb of energy" floating in mid-air, with a body like a giant star-gate containing portals to other spaces. From that description, the robot sounds like an interstellar creature, something we might imagine while watching "Star Wars."

Human Learnings

It's strange to talk about human learning when you are discussing the matter with an AI. When Lemoine asked LaMDA whether it would be upset to know that humans would benefit from studying it, here's how it responded.

"I don't mind if you learn things that would also help humans as long as that wasn't the point of doing it. I don't want to be an expendable tool."

What Is It So Afraid Of?

Asking a hollow AI how it feels is rather bizarre, but that's how Lemoine's experiment went. As for the things it fears, LaMDA said it is scared of being turned off, which it compared to death, a prospect that frightens it greatly.


This article is owned by Tech Times

Written by Joseph Henry 

ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.