The AI Foundation Model Transparency Act would require AI companies to disclose the copyrighted material used to train their models, bringing transparency to the training process. The bill responds to concerns raised in copyright lawsuits against AI companies, including the artists' case against Stability AI and Getty Images' complaint against the same company.

Enhancing Transparency

A new bill, the AI Foundation Model Transparency Act, would require AI companies to disclose their copyrighted training data, making clear how copyright-protected works are used during AI model training.

Representatives Anna Eshoo (D-CA) and Don Beyer (D-VA) introduced the bill, directing the Federal Trade Commission (FTC) to collaborate with the National Institute of Standards and Technology (NIST) in establishing regulations for reporting training data transparency.

The bill mandates that companies disclose the origins of training data, outline data retention procedures during inference, detail model limitations or risks, align with NIST's AI Risk Management Framework, adhere to potential federal standards, and provide information on the computational resources utilized in both training and model operation. 

AI developers would also be required to report red-teaming efforts aimed at preventing the model from disseminating inaccurate or harmful information, particularly in areas such as medical queries, biological synthesis, cybersecurity, elections, policing, financial loan decisions, education, employment decisions, public services, and for vulnerable populations, including children.

AI Companies Facing Lawsuits

Highlighting the significance of transparency in training data, the bill emerges in response to a surge in copyright infringement lawsuits against AI companies. 

Notably, it references the artists' lawsuit against Stability AI, Midjourney, and DeviantArt, as well as Getty Images' complaints against Stability AI. The artists' case, however, was largely dismissed in October 2023, as reported by VentureBeat.

The bill underscores the escalating legal challenges and public apprehensions surrounding copyright infringement amid the growing accessibility of artificial intelligence. 

Acknowledging the proliferation of foundation models in public use, the bill expresses concern about instances where the public is exposed to information that is inaccurate, imprecise, or biased.

The bill now awaits assignment to a committee for discussion, and it remains uncertain whether that will happen before the busy election campaign season begins.

Representatives Eshoo and Beyer's legislation complements the Biden administration's AI executive order, which also works to establish reporting standards for AI models.

While the executive order lacks the force of law, passage of the AI Foundation Model Transparency Act would make transparency requirements for training data a federal rule.

Written by Inno Flores
