Scientists at Stanford University have released the first report of their One Hundred Year Study on Artificial Intelligence (AI100), which aims to predict the long-term impacts of artificial intelligence on human life.

Known as "Artificial Intelligence and Life in 2030," the report comes just two years after the researchers began their work. It focuses on the history of AI technology and how it is being used in various fields today, such as the development of robots for medical purposes and self-driving vehicles for transportation.

Experts believe the Stanford study is important not only for researchers but also for policymakers, who may need to create new laws that better address such technological advancements.

AI100

AI100 is the brainchild of Eric Horvitz, managing director of Microsoft Research's Redmond laboratory. It is meant to create a better understanding of how artificial intelligence is being developed and how it will affect the world over the coming century.

Russ Altman, a bioengineering professor and faculty director of AI100 at Stanford, said the study will take some time to complete, but the release of the first report marks a good start.

"Stanford is excited to host this process of introspection," Altman said. "This work makes practical contribution to the public debate on the roles and implications of artificial intelligence."

One of the main purposes of AI100 is to allay fears regarding the possibility of artificial intelligence programs getting out of hand, similar to the premise behind the Terminator film franchise.

While several tech leaders, such as SpaceX's Elon Musk and famed theoretical physicist Stephen Hawking, have expressed concern about such a scenario, AI100 researchers said there is really no need to be afraid of AI programs going rogue.

The report said that there have been no machines developed with the ability to sustain long-term goals and intent on their own. There are also no plans to create such machines in the near future.

The real dangers of artificial intelligence lie not in its potential to become a Skynet-like program, but in the unintended consequences of an otherwise helpful technology, such as the displacement of human labor and the erosion of privacy.

To avoid such outcomes, it is crucial for AI researchers and policymakers to strike a balance between fostering innovation and building social mechanisms that ensure the benefits of the technology are widely distributed.

The AI100 researchers pointed out that if society views artificial intelligence with "fear and suspicion," it could slow the technology's development or drive it underground entirely. It could also impede the work developers are doing to ensure that AI technologies remain safe and reliable.
