Elon Musk Thinks Artificial Intelligence Is More Threatening Than North Korea: Is He On To Something?
Elon Musk thinks we should fear artificial intelligence more than North Korea.
The celebrated SpaceX and Tesla CEO posted a photo via Twitter late Friday, Aug. 11, showing a gambling addiction poster captioned with "In the end, the machines will win." It's, of course, a comedic way to repurpose an old ad, but Musk seems serious about his sentiments, regardless.
Elon Musk: AI Should Be Regulated
"If you're not concerned about AI safety, you should be. Vastly more risk than North Korea," tweeted Musk. He elaborated further: Since cars, food, planes, and other things that pose a threat to mankind are regulated, Musk believes AI should have some form of regulation as well.
Looking at the replies Musk received, it seems a large number of people agree with his stance on AI and its regulation. One user, Daniel Pedraza, made an interesting point amid Musk's calls for AI safety. Since AI is a field where there's a huge chance for things to change quickly, he expressed the need to develop a regulatory framework that can adapt to the speed of these potential shifts.
"[A]ny fixed set of rules that are incorporated risk being ineffective quite quickly," said Pedraza.
Why AI Is Concerning
Many AI experts fear that we are developing AI too rapidly for our own good. On paper, the concerns sound like they're pulled straight out of a well-crafted science fiction novel, but they might turn out to be valid fears in the end. AI is a largely unexplored terrain, and it's the kind of complex subject that reveals new unknowns the deeper one digs into it. In short: the more we know, the more we don't know.
Musk's recent musings can probably be taken as a response to recent news of OpenAI — a non-profit AI research company with backers such as Peter Thiel, Microsoft, and Musk himself — defeating human players at eSports.
The notion of computers taking over humanity sounds much like the next major Hollywood doom flick, but efforts are already under way to create a type of AI that's ethically aligned — meaning one that won't vaporize humanity when it realizes it can. Researchers and experts from Google, Amazon, Microsoft, Facebook, and more big-name firms have already begun discussions to ensure that AI will benefit the human race, not degrade it.
The world is in an awkward position with regard to safety concerns over AI simply because the kind of advanced AI Musk warns about doesn't exist yet. Hence, its potential effects remain unknown even to experts who have an inkling of the probable repercussions. Either Musk is blowing it out of proportion or he's completely right.