Max Tegmark, the MIT physicist, AI researcher, and prominent AI safety advocate, warns that big tech is deflecting attention from the existential threat posed by AI. Speaking at the AI Summit in Seoul, Tegmark said that framing the debate around broad notions of safety, rather than AI's capacity to wipe out humanity, risks delaying urgently needed regulation.

Tegmark likened the moment to Enrico Fermi's construction of the first self-sustaining nuclear reactor in 1942, a breakthrough that paved the way for nuclear weapons. "AI models passing the Turing test are a similar warning," he remarked, citing concerns raised by Geoffrey Hinton and Yoshua Bengio, according to The Guardian.

After OpenAI's GPT-4 debut last year, Tegmark's Future of Life Institute called for a six-month pause on advanced AI development. Despite backing from experts, no pause followed. AI summits like the one in Seoul have spurred regulatory debate, but Tegmark says corporate pressure has shifted the focus away from the most serious risks.

Tegmark also compared big tech firms' lobbying efforts to "how tobacco companies delayed smoking regulations." He urged prompt action, citing public concern and the need for government-imposed safety standards.

Research: AI Could Lead to Human Extinction

Max Tegmark's warning comes after a government-commissioned assessment indicated that AI might represent an "extinction-level threat to the human species," as reported by Time. 

The publication reported in March that it had obtained a document warning that AI research poses urgent and growing national security dangers, comparing the potential destabilization to that introduced by nuclear weapons. The document points to artificial general intelligence (AGI), a hypothetical technology that could perform most tasks at or beyond human level and that, according to several researchers, might arrive within five years.

Three researchers wrote "An Action Plan to Increase the Safety and Security of Advanced AI" after consulting more than 200 government officials, experts, and employees of OpenAI, Google DeepMind, Anthropic, and Meta. According to the report, skewed incentives in these companies' decision-making are a central problem.

Key recommendations include making it illegal to train AI models above a computing-power threshold set by a new federal AI agency. That agency would also require AI companies to obtain government approval before training and deploying new models. The study further suggests tightening rules on AI chip manufacturing and exports, as well as increasing federal funding for AI safety research.


The $250,000 report, commissioned by the State Department in November 2022, was delivered on February 26. It notes that its recommendations do not represent the positions of the State Department or the US government.

With AI advancing rapidly and public anxiety rising, the report's recommendations are unprecedented in their reach. According to AI Policy Institute polling, more than 80% of Americans believe AI could accidentally cause a catastrophic event, and 77% want tighter government oversight of AI.

(Photo: Amy Sussman/Getty Images for World Science Festival) Physicist Max Tegmark (L) and musician Mark Everett (R) speak at the panel discussion "Parallel Worlds, Parallel Lives" at the World Science Festival, held at the Paley Center for Media in New York City on May 29, 2008.

Challenges in Regulating AI

These initiatives may face political obstacles. Greg Allen, director of the Wadhwani Center for AI and Advanced Technologies at the Center for Strategic and International Studies, remains skeptical that the recommendations will be adopted, since current US policy favors transparency requirements and regulation over limits on AI training.

Notably, OpenAI's Superalignment team, which was responsible for managing existential risks from superhuman AI systems, has been dissolved, as previously reported by Gizmodo. The team's founders, Ilya Sutskever and Jan Leike, resigned around the same time.

The Superalignment team, formed in July 2023, sought to keep future superhuman AI systems from going rogue. Wired reports that OpenAI will fold the team's responsibilities into its broader research efforts, with OpenAI cofounder John Schulman leading research on the risks posed by advanced AI models.

Jan Leike announced his resignation from OpenAI on X, formerly Twitter. He cited repeated disagreements with company leadership over core priorities and the team's struggle to secure computing resources for critical research. Leike said OpenAI must prioritize security, safety, and alignment in AI development.


