Stephen Hawking Warns That Artificial Intelligence Will Replace Humans
Amid the rapid growth of artificial intelligence, Stephen Hawking warns that AI could eventually render humans obsolete. He cautions that AI may do more harm than good if humans push its development too aggressively.
AIs Outperforming Humans
This is not the first time the renowned physicist has warned about an AI apocalypse. He has previously said that rapidly advancing technology could spell doom for humanity in the years to come.
At the launch of an artificial intelligence research center at Cambridge University last October, Hawking expressed his fear that AI is a double-edged sword: it could be one of humanity's greatest inventions, but it could also be mankind's worst nightmare. While its development could accelerate technological progress, AI also has the potential to wipe out humanity, a warning he continues to repeat.
In a recent interview, Hawking said the time will come when artificial intelligence becomes more capable than humans, leaving us powerless against it.
"I fear that AI may replace humans altogether," said Hawking. "If people design computer viruses, someone will design AI that improves and replicates itself. This will be a new form of life that outperforms humans."
Technology Is Good But Approach With Caution
Hawking acknowledges that technology offers substantial benefits for humans: it could help cure terminal diseases and curb poverty. Without caution, however, it could also be deadly.
He also warns that mankind should prepare for the possible consequences. Experts and stakeholders should understand AI's potential dangers and learn to manage them before they get out of hand.
SpaceX founder and CEO Elon Musk has echoed Hawking's sentiments. Musk goes further, arguing that governments must take a strong stance on AI by creating laws to regulate it.
In an earlier interview, Microsoft cofounder Bill Gates downplayed the threat.
"The so-called control problem that Elon is worried about isn't something that people should feel is imminent," said Gates. "We shouldn't panic about it."
Hawking stressed the need for everyone to immerse themselves in the study of science. Gaining the knowledge to move beyond the theoretical debate on AI development, he explained, is necessary to prepare for the future.
"You all have the potential to push the boundaries of what is accepted, or expected, and to think big," Hawking shared. "It is an exciting — if precarious — place to be and you are the pioneers."