Robots Can Learn Ethical Behavior By Reading Children's Stories

17 February 2016, 9:42 am EST By Katherine Derla Tech Times
Robots can learn socially acceptable behavior by reading and understanding children's books, particularly stories about chivalry. Researchers developed a technology called "Quixote" that can teach robots to align their goals with proper human behavior in social settings. (Photo: Georgia Institute of Technology)

The rapid growth of artificial intelligence has been accompanied by fears that robots could become a threat to humanity. To ease this anxiety, a team of researchers developed a method to train AI how to behave in social settings.

The new technology, called "Quixote," teaches robots to read children's stories, recognize acceptable social behavior, and learn standard event sequences. It was developed by a team from the Georgia Institute of Technology's School of Interactive Computing.

Mark Riedl, associate professor and director of the Entertainment Intelligence Lab, says the team believes that robots capable of understanding stories can avoid "psychotic-appearing behavior" and favor options that do not harm humans while still completing the required task.

Quixote is a "value alignment" method that connects a robot's goals with appropriate behaviors in social settings. Building on Riedl's previous research, Quixote enables the robot to act like the protagonist of a children's story in anticipation of a reward.

For instance, a robot that needs to pick up a medicine prescription for a human could do one of the following: rob the clinic or pharmacy to get the medicine it needs and run; talk to the pharmacist to get the medicine; or patiently wait in line for a turn at the counter.

Without the Quixote system, a robot would figure out that robbing or stealing the medicine is the quickest and cheapest way to finish the task. However, by aligning the robot's goals with socially accepted behaviors, the AI learns that it will be rewarded if it chooses either the second or third option.
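The reward-alignment idea above can be illustrated with a toy sketch. The article does not describe Quixote's actual implementation, so the action names, reward values, and the simple additive "social signal" below are all illustrative assumptions, not the researchers' method:

```python
def choose_action(rewards):
    """Pick the action with the highest total reward."""
    return max(rewards, key=rewards.get)

# Base reward: every action completes the task (medicine obtained),
# but robbing is fastest and cheapest, so it scores highest.
# All values are made up for illustration.
base_reward = {
    "rob_pharmacy": 10,
    "talk_to_pharmacist": 6,
    "wait_in_line": 5,
}

# Story-derived signal: socially unacceptable actions are penalized,
# acceptable ones are rewarded (again, values are assumptions).
social_signal = {
    "rob_pharmacy": -100,
    "talk_to_pharmacist": 3,
    "wait_in_line": 3,
}

# Value alignment: combine the task reward with the social signal.
aligned = {a: base_reward[a] + social_signal[a] for a in base_reward}

print(choose_action(base_reward))  # without alignment: rob_pharmacy
print(choose_action(aligned))      # with alignment: talk_to_pharmacist
```

With only the task reward, the greedy choice is robbery; once the story-derived penalty is added, the socially acceptable options dominate.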

"The technique is best for robots that have a limited purpose but need to interact with humans to achieve it. It is a primitive first step toward general moral reasoning in AI," says Riedl, who worked with Brent Harrison to develop Quixote. He adds that the most practical way to teach a robot value alignment is to have it read and understand children's books, rather than relying on a human-written user manual.

The research, which can be viewed online [pdf], was supported by grants from the Office of Naval Research and the U.S. Defense Advanced Research Projects Agency (DARPA). The researchers are debuting the project at the AAAI-16 Conference, held Feb. 12 to 17 in Phoenix, Arizona.

© 2016 Tech Times, All rights reserved. Do not reproduce without permission.
