The rise of artificial intelligence (AI) has ushered in a new era of technological advancement, with AI chatbots such as OpenAI's ChatGPT and Google Bard gaining prominence.

As the development of powerful AI services intensifies, regulators are grappling with the need to control a technology that has the potential to disrupt societal and business norms.

However, these regulators are turning to old laws to govern the new technology, according to a report by Reuters.


Navigating Existing Laws

The European Union (EU) has taken the lead in crafting new AI regulations to address the privacy and safety concerns raised by the rapid progress of generative AI technology, exemplified by OpenAI's ChatGPT. However, enforcement of that legislation is expected to take several years.

In the absence of specific regulations, governments are resorting to applying existing rules. Massimiliano Cimnaghi, a European data governance expert, points out that existing data protection laws are being employed to safeguard personal data, while regulations addressing threats to public safety are also being applied, even though they contain no explicit definitions for AI.

Regulators and industry experts in the United States and Europe are working to apply current regulations covering a wide range of areas, such as copyright, data privacy, and the handling of the data fed into AI models and the content they generate. The aim is to ensure compliance and address the concerns these issues raise.

Suresh Venkatasubramanian, a former technology advisor to the White House, highlighted the need for agencies in both regions to interpret and reinterpret their mandates.

He pointed specifically to the US Federal Trade Commission's ongoing investigation of algorithms for discriminatory practices, which draws on the agency's existing regulatory powers.


Copyright Issue

Within the European Union, proposed measures under the AI Act would require companies like OpenAI to disclose any copyrighted materials, such as books or photographs, used to train their models, a provision that would leave them open to legal challenges over copyright infringement.

However, proving copyright infringement may not be straightforward, as Sergey Lagodinsky, one of the politicians involved in drafting the EU proposals, has acknowledged.

According to Bertrand Pailhes, the technology lead at French data regulator CNIL, the agency has adopted a "creative" approach to exploring how existing laws can be applied to artificial intelligence.

In France, discrimination claims typically fall under the jurisdiction of the Defenseur des Droits (Defender of Rights). However, because that institution lacks expertise in AI bias, CNIL has taken the initiative to address the issue.

Pailhes explained that the regulator is examining the full range of effects, with a primary focus on data protection and privacy. CNIL is considering using a provision of the GDPR that protects individuals from automated decision-making.


