A group of ethicists has criticized the Elon Musk-backed call for a "pause" on the development of AI systems, saying that the proposal distracts from the real harm caused by AI systems today.

(Photo: OLIVIER DOULIERY/AFP via Getty Images) The ChatGPT logo displayed on a smartphone in Washington, DC, on March 15, 2023.

Here's What Ethicists Have to Say

In a letter signed by more than 2,000 people, including Musk and Turing Award winner Yoshua Bengio, the Future of Life Institute called for a moratorium of at least six months on "training AI systems more powerful than GPT-4."

However, the group of ethicists, including Timnit Gebru, Emily M. Bender, Angelina McMillan-Major, and Margaret Mitchell, argue that the focus on hypothetical risks of "powerful digital minds" with "human-competitive intelligence" ignores the real harm caused by the deployment of AI systems today.

The letter, they say, addresses none of the ongoing harms from these systems, including worker exploitation, massive data theft, and the concentration of power in the hands of a few people, all of which exacerbate social inequities.

The group of ethicists, who are currently working together at the DAIR Institute to study and expose AI-associated harms, argue that the call for an "AI pause" is dangerous because it distracts from the need for regulation that enforces transparency.

They argue that organizations building these systems should be required to document and disclose the training data and model architectures, and that the onus of creating tools that are safe to use should be on the companies that build and deploy generative systems.


Call for Inclusion

While they agree that "such decisions must not be delegated to unelected tech leaders," they also note that such decisions should not be up to the academics experiencing an "AI summer," who are largely financially beholden to Silicon Valley.

Instead, they claim that those most impacted by AI systems must be heard in this conversation, such as immigrants subjected to "digital border walls," women forced to wear specific clothing, workers experiencing PTSD while filtering the outputs of generative systems, artists seeing their work stolen for corporate profit, and gig workers scraping by to make ends meet.

"The current race towards ever larger 'AI experiments' is not a preordained path where our only choice is how fast to run, but rather a set of decisions driven by the profit motive. The actions and choices of corporations must be shaped by regulation which protects the rights and interests of people," reads the statement.

"We should focus on the very real and very present exploitative practices of the companies claiming to build them, who are rapidly centralizing power and increasing social inequities."

