Artificial Intelligence has Great Potential, Threat: Experts Sign Open Letter to Protect Humanity from Deadly Machines

Artificial Intelligence (AI) has immense potential. However, AI experts around the world are signing an open letter to protect humanity from destruction by deadly machines.

The Future of Life Institute has put forth an open letter aimed at ensuring that the progress of AI does not grow beyond humanity's control.

A research priorities document attached to the open letter [pdf] explains that the development of human civilization is the result of human intelligence, and that humanity could achieve even more with the help of the artificial intelligence it has created. The letter highlights AI's considerable potential but urges people to be aware of its dangers as well.

The Future of Life Institute's attached document also points to certain risks that mankind may face as AI advances.

"Success in the quest for artificial intelligence has the potential to bring unprecedented benefits to humanity, and it is therefore worthwhile to research how to maximize these benefits while avoiding potential pitfalls," reads the summary of the attached document.

The open letter has already been signed by many industry experts, including Demis Hassabis, Shane Legg and Mustafa Suleyman, the co-founders of British AI company DeepMind, which was acquired by Google in 2014.

Erik Brynjolfsson and Leslie Pack Kaelbling, professors at the Massachusetts Institute of Technology (MIT), have also signed the open letter, along with many MIT students.

Among the main concerns the Future of Life Institute highlights are self-driving cars, autonomous weapons and machine ethics.

The key focus of the Future of Life Institute is to explain the possible risks that man-made AI poses to humanity. A similar scenario was depicted in the Hollywood film "Terminator," in which the human resistance tries to stop Skynet before it destroys mankind.

Elon Musk, the CEO of Tesla and SpaceX, is a member of the institute's Scientific Advisory Board. Musk has also raised concerns about developments in the field of AI, saying it could be more dangerous than nuclear weapons.

Musk believes that AI should be regulated at the national or international level so that certain restrictions are in place. He, too, has signed the letter.

Professor Stephen Hawking, another signatory, has warned that AI has the potential to cause terrible consequences.

ⓒ 2018 TECHTIMES.com All rights reserved. Do not reproduce without permission.