According to Tesla CEO and SpaceX founder Elon Musk, artificial intelligence could be more dangerous than nuclear weapons.

Musk shared these thoughts over the weekend in a post on Twitter. It is not the first time Musk has expressed concerns about artificial intelligence.

"Worth reading Superintelligence by Bostrom. We need to be super careful with AI. Potentially more dangerous than nukes," tweeted Musk.

Musk was referring to Superintelligence: Paths, Dangers, Strategies, a book by Nick Bostrom that is set for release next month.

Bostrom is the founding director of the Future of Humanity Institute at Oxford University, which recently forged a partnership with the Centre for the Study of Existential Risk at Cambridge University. The goal of the partnership is to study how artificial intelligence, among other man-made creations, could one day wipe out humanity.

"Basically, just think of machine super-intelligence as something that's really good at achieving the outcomes it prefers," Bostrom said. "So good it could steamroll over human opposition. Everything then depends on what it is that it prefers, so, unless you can engineer its preferences in exactly the right way, you're in trouble."

"Hope we're not just the biological boot loader for digital superintelligence. Unfortunately, that is increasingly probable," Musk also tweeted.

Back in March, it was revealed that Musk was part of a group of investors, including Facebook founder Mark Zuckerberg and actor Ashton Kutcher, that put a total of $40 million into artificial intelligence company Vicarious FPC.

Musk, however, said in June that his involvement in the Vicarious investment was not about financial gain, but rather a way to "keep an eye on" the company's technological progress.

Musk said he thinks a scenario straight out of the Terminator movies could happen in real life through the development of artificial intelligence.

However, according to Vicarious advisor and Berkeley professor Bruno Olshausen, scientists are nowhere near understanding how the human brain works, and are therefore still far from creating an intelligence that can replicate it, let alone one that can defy orders and think on its own.

Olshausen's comments suggest that even if a robot uprising were a risk in the future, it is not an imminent threat. Musk, however, believes the risk is real and that humans are better off being extremely careful in developing artificial intelligence.
