Visionary Elon Musk says we should be careful when creating artificial intelligence (AI), noting that given the right set of circumstances, it could be more dangerous than even a worldwide nuclear war.

Musk, the founder of both Tesla Motors and SpaceX, took to Twitter with his concerns after reading Nick Bostrom's "Superintelligence: Paths, Dangers, Strategies." The book looks at the potential impact of creating AI that might grow smarter than humans.

Bostrom's novel, which reads a lot like science fiction, says that once artificial intelligence reaches that point, we cannot assume that it will have the same values as humanity, such as curiosity, enjoyment of life's pleasures, selflessness and humility.

"It might be possible through deliberate effort to construct a superintelligence that values such things, or to build one that values human welfare, moral goodness, or any other complex purpose that its designers might want it to serve," Bolstrom says in his book. "But it is no less possible-and probably technically easier-to build a superintelligence that places final value on nothing but calculating the decimals of pi."

Notable science fiction author Isaac Asimov solved this problem in 1942, by recommending three specific laws for AI programming: a robot may not hurt a human or allow a human to be hurt, a robot must obey human orders unless it conflicts with the first law, and a robot must protect itself as long as it doesn't conflict with the other two laws. Of course, Asimov's "I, Robot" shows that these three laws are not completely sufficient to protect us, but they're probably a good place to start.

For example, what if an AI, seeking to protect us through these laws decides that the best way to do that is to lock us all up in a padded room where we cannot be hurt?

Overall, Musk embraces future technologies. Just today, his company, SpaceX, launched a commercial satellite on its Falcon 9 rocket. He is also a major investor in the AI company Vicarious. Perhaps his tweets are merely a warning, a reminder that we should practice care with the technology we create.

ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.
Join the Discussion