Consumer groups across 13 European countries are urging regulators to launch investigations into the potential risks posed by generative AI systems such as ChatGPT and to enforce existing legislation to safeguard consumers.

The call for action coincides with the release of a new report by Forbrukerrådet, a Norwegian consumer organization and BEUC member, which outlines the numerous risks associated with generative AI, identifies existing protective regulations, and points to areas where further rules need to be developed.

(Photo: Gerd Altmann from Pixabay)

EU Urged to Investigate Generative AI

Ursula Pachl, Deputy Director General of BEUC, emphasized the concerns surrounding generative AI systems like ChatGPT, noting the potential for deception, manipulation, and harm to individuals. 

There are also worries about the dissemination of disinformation, amplification of biases, and fraudulent activities facilitated by such AI systems.

"We call on safety, data, and consumer protection authorities to start investigations now and not wait idly for all kinds of consumer harm to have happened before they take action. These laws apply to all products and services, be they AI-powered or not and authorities must enforce them," Pachl said in a statement.

The consumer groups stress that authorities should not wait for harm to materialize before acting, and that existing consumer protection rules must be enforced rigorously against AI-powered products and services just as against any other.

The EU's forthcoming AI Act, the comprehensive regulation on AI systems being developed by the bloc, is seen as crucial for ensuring consumer protection. Consumer groups say it should subject all AI systems, including generative AI, to public scrutiny and reaffirm public authorities' control over them.

Lawmakers are urged to mandate that the output generated by any generative AI system is safe, fair, and transparent for consumers.

In April, BEUC wrote to consumer safety and consumer protection authorities urging them to launch investigations, citing the rapid proliferation of generative AI models such as ChatGPT and the potential harms associated with their deployment.

The European Data Protection Board has already established a task force to examine ChatGPT specifically.

Read Also: Texas Federal Judge Implements Measures to Prevent AI-Generated Arguments in Court

Summary of AI Risks

The report published by Forbrukerrådet provides a summary of the current and emerging challenges, risks, and harms posed by generative AI. 

These challenges include concerns about power, transparency, and accountability: certain AI developers have restricted external scrutiny, making it difficult to understand how their systems collect data and reach decisions.

The report also highlights wrong or inaccurate output, noting that generative AI systems may generate content without contextual understanding, which can mislead consumers and cause harm.

Additionally, the report addresses the manipulation or deception of consumers through the use of emotive language and speech patterns by AI chatbots.

Biases and discrimination stemming from biased datasets, as well as privacy, personal integrity, and security vulnerabilities associated with generative AI, are also highlighted as significant concerns.

In light of these findings, consumer groups are urging regulators to take immediate action to investigate and address the risks posed by generative AI while ensuring compliance with existing regulations and developing robust frameworks for future AI systems. 

Related Article: Communication Assistant GrammarlyGO Aims To Save You Time on a Wide Array of Written Tasks
