Think Jarvis in Iron Man, or Andrew in Bicentennial Man, or that robot maid from Richie Rich.

Even if they are not in every household, robots and artificial intelligence are familiar to us, even the talking kinds. But a talking robot that shows signs of self-awareness is something new.

Researchers at the Rensselaer Polytechnic Institute in Troy, New York, adapted the classic King's Wise Men puzzle for an experiment to see whether robots can be self-aware.

In the experiment, led by computer science and cognitive science professor Dr. Selmer Bringsjord, researchers tested the self-awareness of three programmable NAO robots.

The researchers told the three robots that two of them had been given "dumbing pills" that left them mute, while the third received a placebo. This was, of course, all made up.

All three robots can actually speak, but the researchers muted two of them by turning their volume down.

When the robots were asked which pill they received, the third robot, the one that had not been muted, stood up and responded, "I don't know." No response came from the two muted robots.

Shortly after, the same robot retracted its first response and said, "Sorry, I know now. I was able to prove that I was not given a dumbing pill."

Watching the experiment on video, one might find it simply adorable. It is, but it is also much more than that.

The experiment touched on a couple of things that matter for a robot's self-awareness and self-understanding. The "dumbing pill" story was new information to the robots, and the question "Which pill did you receive?" required them to reason about it like a puzzle, in a situation where the answer could reveal itself. The third robot at first said it did not know which pill it received, but on hearing its own voice, it could connect that new evidence to the story: if it was able to speak, it could not have been given the dumbing pill.
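
To make that inference concrete, here is a minimal toy sketch in Python. It is not the team's actual system, which runs a formal logical prover on the NAO robots; the function name and structure below are illustrative assumptions only.

```python
# Toy sketch of the robot's inference, not Bringsjord's actual prover.
# The premises mirror the story the researchers told the robots.

def which_pill(heard_own_voice: bool) -> str:
    """Return what the robot can prove about the pill it received."""
    if heard_own_voice:
        # Premise 1: a robot given the dumbing pill cannot speak.
        # Premise 2: I just heard myself speak.
        # Conclusion: I was not given the dumbing pill.
        return "I was not given a dumbing pill."
    # Without hearing itself speak, the robot cannot rule anything out.
    return "I don't know."

# Before speaking, the robot has no evidence either way.
print(which_pill(heard_own_voice=False))  # -> "I don't know."
# After hearing its own "I don't know," the new evidence settles the question.
print(which_pill(heard_own_voice=True))   # -> "I was not given a dumbing pill."
```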

This type of cognitive programming, which Bringsjord calls psychometric AI, could help us learn more about ourselves and make robots far more useful to humans in the future.

The experiment was conducted partly in response to an open challenge posed by University of Oxford philosophy professor Luciano Floridi.

"Floridi's challenge, which we refer to as 'KG4,' requires that a robot have a form of genuine self-understanding... a human-level justification/proof to accompany the behavior, in which the robot employ a correlated to the personal pronoun 'I,'" said Bringsjord. In addition, robot activity showing signs of self-awareness also has to be proven in real language and time.

Bringsjord's team's little experiment just might have met that challenge.
