Researchers at Google are concerned about the ways future artificial intelligence systems could go wrong.

Google – in its newly released research paper, “Concrete Problems in AI Safety,” produced in collaboration with researchers from Stanford University, the University of California, Berkeley, and Elon Musk-backed AI developer OpenAI – asks how robot minds can be designed to behave properly, without dangerous, unintended consequences for their owners.

The researchers identified five “practical research problems” that programmers have to factor in before building the next AI system:

Avoiding negative side effects – How does one stop a robot from knocking over a vase or a bookcase while performing its household duties?

Avoiding reward hacking – If a robot is rewarded for cleaning up a room, how does one prevent it from gaming that reward, say, by creating new messes just so it can clean them up and collect more credit? (A toy example appears in the sketch after this list.)

Scalable oversight – How much decision-making should a robot be allowed, and does it need to ask a human for permission every time it moves an object?

Safe exploration – How should a robot be taught the limits of its curiosity? In the researchers’ example, an AI system learning where and how best to mop can safely try the mop on new floor areas, but how is it taught never to stick the wet mop into an electrical outlet?

Robustness to distributional shift – It is also important to ensure robots respect the space they operate in. How does a household cleaning robot learn that habits picked up in one environment, such as a factory floor, may not be safe in another, such as a home?
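
To make the reward-hacking item concrete, here is a minimal Python sketch. It is an invented toy, not code from the paper: the cleaning task, both policies, and both reward functions are assumptions for illustration only.

```python
# A toy illustration of reward hacking, invented for this article and
# not taken from the paper: a cleaning robot rewarded per mess cleaned
# can outscore an honest one by making messes, while a reward defined
# over the room's final state gives that hack no advantage.

def naive_reward(messes_cleaned):
    """Pays per mess cleaned -- exploitable by manufacturing messes."""
    return messes_cleaned

def outcome_reward(messes_remaining):
    """Pays for the state of the room, not for the robot's activity."""
    return -messes_remaining

def honest_policy(messes):
    # Cleans the messes that already exist, then stops.
    return messes, 0  # (cleaned, remaining)

def hacking_policy(messes):
    # Spills crumbs on purpose, then cleans everything up: more
    # "cleaning" is logged, but the room ends up no tidier.
    created = 10
    return messes + created, 0  # (cleaned, remaining)

for policy in (honest_policy, hacking_policy):
    cleaned, remaining = policy(messes=3)
    print(f"{policy.__name__:>15}",
          "naive:", naive_reward(cleaned),        # hacker scores 13 vs. 3
          "outcome:", outcome_reward(remaining))  # both score 0: no edge
```

The design point is that rewarding observable activity invites gaming, while rewarding the desired end state removes the incentive, though real reward design is rarely this clean.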

In a blog post accompanying the paper, Google researcher Chris Olah noted that discussion of AI safety risks has so far remained largely hypothetical and speculative.

“We believe it’s essential to ground concerns in real machine learning research, and to start developing practical approaches for engineering AI systems that operate safely and reliably,” Olah said.

The authors suggest an array of methods for mitigating potential harm, such as training agents in simulations before they hit the real world. Human oversight also proves crucial: a human handler can check with a bot before it takes actions beyond its normal remit.
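
As a loose illustration of that human-in-the-loop idea, here is a minimal sketch, assuming an invented action vocabulary and approval callback (none of which come from the paper):

```python
# A loose sketch of a human-in-the-loop gate; the whitelist, action
# names, and approval function are invented, not taken from the paper.

ROUTINE_ACTIONS = {"vacuum_floor", "mop_floor", "dust_shelf"}  # assumed safe

def execute(action, ask_human):
    """Run whitelisted actions directly; escalate everything else."""
    if action in ROUTINE_ACTIONS:
        return f"executed {action}"
    if ask_human(action):  # human queries are costly, so keep them rare
        return f"executed {action} (approved)"
    return f"blocked {action}"

# Stand-in for the human handler: approves moving objects, nothing else.
approve = lambda action: action.startswith("move_")

for action in ("vacuum_floor", "move_vase", "shred_documents"):
    print(execute(action, ask_human=approve))
```

The scalable-oversight problem is then about keeping the escalation branch rare without letting unsafe actions slip through the routine one.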

Google has an interesting stake in the game, The Verge reported. The company may be offloading its robotics hardware firm Boston Dynamics, yet it keeps pouring money and resources into a range of AI initiatives.

See the full technical report, “Concrete Problems in AI Safety,” on arXiv.

