Stephen Hawking Warns Of Robot Apocalypse: Here's How Humanity Can Protect Itself From Artificial Intelligence
Albert Einstein warned more than half a century ago of an apocalyptic war that would be fought with sticks and stones. The prediction, which came from one of the brightest minds of 20th-century physics, was not without wisdom, assuming the war would be fought between humans.
But what about a robot apocalypse, the kind Stephen Hawking predicted?
This is not sci-fi stuff lifted from Avengers: Age of Ultron, Terminator, or The Matrix. A robot apocalypse, if left unchecked, is a potential threat to humanity on top of catastrophic war, mass extinction, and, well, climate change.
"Technology has advanced at such a pace that this aggression may destroy us all by nuclear or biological war," Hawking said.
Threat From Artificial Intelligence
Hawking, whose longevity defied medical expectations after doctors diagnosed him with a life-threatening disorder that affects the normal functioning of the nerves and muscles, turned 75 last January.
His warning came amid the dizzying development of artificial intelligence, which he fears may spell doom for mankind.
Hawking reiterated the warning he made a few years ago: AI could signal the "end of the human race."
The same warning was echoed by Bill Gates, Elon Musk, and Steve Wozniak, along with some 8,000 leading minds in research and technology companies. In an open letter they signed in 2015, they pledged to closely coordinate progress in AI to keep its development from getting out of hand.
AI, a product of human intelligence, has much to offer for the advancement of man and society, but the threat is real as researchers struggle to ensure that AI systems and their applications actually do what humans want them to do.
The 2015 open letter called for "expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial."
Protecting Humans From AI
The call to protect humans from the aggression that could arise from advanced AI systems has never been stronger.
Hawking argued that the same safety measures imposed on apocalyptic weapons must also be applied to AI systems.
"We need to be quicker to identify such threats and act before they get out of control," the 75-year-old physicist said.
He is thinking of "some form of world government."
Musk, the Tesla Motors CEO, also sees a potential threat to mankind, having referred to AI as "our biggest existential threat."
"I'm increasingly inclined to think that there should be some regulatory oversight," he said.
A regulatory body, whether at the national or international level, the Tesla Motors chief said, is needed "to make sure that we don't do something very foolish."