OpenAI, the leading AI research organization, anticipates the arrival of "superintelligent" AI systems; consequently, the firm has established a new team to manage powerful AI systems that may exceed human intelligence within the next 10 years.

In a recent blog post, OpenAI's Ilya Sutskever and Jan Leike stressed the need to find methods to control superintelligent AI, since no mechanism currently exists to stop such a system from going rogue. Current strategies for aligning AI depend on human supervision, an approach that may not hold up against AI systems far more intelligent than humans, per TechCrunch.

To tackle this problem, OpenAI is creating the Superalignment team, which will have access to 20% of the company's computing resources. The team will comprise scientists and engineers from OpenAI's previous alignment team along with researchers from other departments across the company. It aims to solve the core technical challenges of superintelligence control within the next four years.

The team intends to train AI systems using human feedback, train AI to assist in human evaluation, and eventually build AI capable of conducting alignment research itself. Alignment research focuses on ensuring that AI systems produce the outcomes their designers intend.
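To make the first of those steps concrete, training with human feedback typically begins with a reward model fitted to pairs of responses that human raters have ranked. The following is a minimal, hypothetical sketch of that reward-modeling step in PyTorch; the RewardModel class, preference_loss function, and random placeholder embeddings are illustrative assumptions, not OpenAI's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardModel(nn.Module):
    """Toy reward model: maps a response embedding to a scalar score."""
    def __init__(self, embed_dim: int = 128):
        super().__init__()
        self.scorer = nn.Sequential(
            nn.Linear(embed_dim, 64),
            nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, response_embedding: torch.Tensor) -> torch.Tensor:
        return self.scorer(response_embedding).squeeze(-1)

def preference_loss(reward_chosen: torch.Tensor,
                    reward_rejected: torch.Tensor) -> torch.Tensor:
    # Bradley-Terry objective: the response humans preferred should
    # receive a higher score than the one they rejected.
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()

# One hypothetical training step on a batch of human preference pairs.
model = RewardModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

chosen = torch.randn(32, 128)    # placeholder embeddings of preferred responses
rejected = torch.randn(32, 128)  # placeholder embeddings of rejected responses

loss = preference_loss(model(chosen), model(rejected))
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

In practice, the scored reward model would then guide reinforcement learning of the AI system itself; this sketch covers only the human-preference step the team's plan starts from.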

Steps Must Be Taken to Manage AI

According to OpenAI, AI can advance alignment research more quickly than humans can. As AI systems develop, they may progressively take on more alignment work, devising, implementing, and studying better alignment methods.

Through this cooperation between AI systems and human researchers, future generations of AI are intended to be better aligned with human values. Rather than conducting alignment research themselves, human researchers will evaluate alignment research carried out by AI systems.

Sutskever and Leike express confidence in AI's ability to tackle the challenges of superintelligence alignment while acknowledging the limitations of their strategy, such as the potential for inconsistencies and biases when AI is used for evaluation. They emphasize that superintelligence alignment is fundamentally a machine learning problem and contend that solving it will depend on the expertise of machine learning professionals.

OpenAI intends to share the developments and results of this initiative broadly. The firm also recognizes how important it is to support the safety and alignment of AI models beyond its own. It stresses that superintelligence alignment should be a top priority worldwide, on par with other major societal risks such as pandemics and nuclear war.

The Serious Threat of AI

As AI develops, worries about its potential dangers and social repercussions have become more prominent. A statement from the Center for AI Safety, endorsed by experts including executives from OpenAI and Google DeepMind, emphasizes how crucial it is to mitigate the risks posed by AI in order to protect humanity, according to a BBC report.

These dangers include AI's capacity to manipulate people, the deployment of lethal autonomous weapons, the toll of widespread unemployment on people's mental health, and the potential for AI-driven information systems to undermine democracy and spark social unrest.

AI technology also raises concerns about job displacement; estimates indicate that tens or even hundreds of millions of jobs could be lost over the next 10 years, according to The Guardian.

