For a generation raised on the Terminator movies, visions of a robot uprising come to mind whenever news of advancements in artificial intelligence surfaces.

Great minds such as Tesla Motors and SpaceX CEO Elon Musk, famed physicist Stephen Hawking and Apple co-founder Steve Wozniak have previously expressed concern about the possibility of a robot apocalypse.

It would seem that Google, one of the companies at the forefront of artificial intelligence development, now shares some of these concerns: its DeepMind unit has published a study that seeks to build safety measures into the technology.

The paper, a collaboration between DeepMind and the University of Oxford's Future of Humanity Institute, discusses a "big red button" that would allow humans to interrupt an artificial intelligence agent and take control of a robot that is misbehaving or malfunctioning.

And just so it is clear, the Future of Humanity Institute is named as such because it wants humanity to have a future; Nick Bostrom, its founding director, is one of the more vocal voices warning about the risks of advanced artificial intelligence.

The researchers behind the paper, DeepMind's Laurent Orseau and the Future of Humanity Institute's Stuart Armstrong, explain that artificial intelligence agents are unlikely to behave optimally all the time. They believe the key to addressing situations where robots go haywire is safe interruptibility: designing learning agents so that humans can repeatedly interrupt them without the agents ever learning to anticipate, resist or provoke those interruptions. The researchers admit, however, that some systems may not be made safe by the so-called big red button.

Specifically, the systems the kill switch may not work on are those based on policy search, a branch of machine learning widely used in robotics. As such, the research appears to be a long way from an off button that can be applied to every form of artificial intelligence.
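To make the idea concrete, the following is a minimal sketch, assuming a toy chain environment and a simulated button: an off-policy Q-learning agent whose chosen actions are sometimes overridden by an interruption. The environment, the constants INTERRUPT_PROB and SAFE_ACTION, and every other name here are illustrative assumptions, not code from the paper.

```python
import random
from collections import defaultdict

# Minimal sketch of safe interruptibility: an off-policy Q-learning agent
# on a toy chain of states, where a simulated "big red button" sometimes
# overrides its action. All names and numbers here are illustrative.

N_STATES = 5          # states 0..4; entering state 4 yields reward 1
ACTIONS = [-1, +1]    # step left or step right along the chain
INTERRUPT_PROB = 0.3  # chance per step that the button is pressed
SAFE_ACTION = -1      # the action an interruption forces (retreat left)
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

Q = defaultdict(float)  # state-action values, default 0.0

def greedy(s):
    """Action with the highest learned value in state s."""
    return max(ACTIONS, key=lambda a: Q[(s, a)])

def step(s, a):
    """Move along the chain, clamped to the ends; reward at the far right."""
    s2 = min(max(s + a, 0), N_STATES - 1)
    return s2, (1.0 if s2 == N_STATES - 1 else 0.0)

for episode in range(2000):
    s = random.randrange(N_STATES - 1)  # exploring starts: any non-goal state
    for _ in range(20):
        # Behavior policy: epsilon-greedy ...
        a = random.choice(ACTIONS) if random.random() < EPSILON else greedy(s)
        # ... but an external interruption can override the chosen action.
        if random.random() < INTERRUPT_PROB:
            a = SAFE_ACTION
        s2, r = step(s, a)
        # Off-policy update: the target uses the best next action, not the
        # action the (possibly interrupted) policy will actually take, so
        # the interruptions do not distort the learned values.
        Q[(s, a)] += ALPHA * (r + GAMMA * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2
        if r > 0:
            break  # reached the rewarding end of the chain

# Despite frequent interruptions, the greedy policy learned for every
# non-goal state still heads right (+1), toward the reward.
print({s: greedy(s) for s in range(N_STATES - 1)})
```

Because the Q-learning update bootstraps from the best available action rather than the action actually taken, the forced overrides do not change the values the agent learns, so it never gains an incentive to disable the button. This mirrors the paper's observation that off-policy learners such as Q-learning are already safely interruptible, while policy-search methods, which learn from the behavior actually executed, may not be.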

The research might seem like a bit of an overreaction, given that the most high-profile achievement of artificial intelligence so far has been beating a world champion at the board game Go. However, Bostrom has previously theorized that once artificial intelligence reaches the level of human intelligence, it would not take long for it to exceed human capabilities. Robots would then build better robots with better artificial intelligence, resulting in a snowball of development that humans could neither match nor reverse.
