The European Union (EU) is urging signatories to its Code of Practice on Disinformation to take action against deepfakes and other AI-generated content, according to a report by TechCrunch.


Labeling AI Content

During a recent meeting with the Code's signatories, Vera Jourova, the EU's commissioner for values and transparency, emphasized the need for technology that can identify AI-generated content and clearly label it for users.

Jourova acknowledged the positive potential of AI technologies but also highlighted the risks and negative consequences they pose, particularly regarding the creation and dissemination of disinformation. 

These new technologies present challenges for combating disinformation, prompting Jourova to call for a dedicated track within the Code to address these concerns.

The current version of the Code does not explicitly require identifying and labeling deepfakes. However, the Commission intends to update the Code to include mitigation measures for AI-generated content, and it wants adherence to the updated Code to count toward compliance with the legally binding Digital Services Act (DSA).

Two main angles have emerged in discussions about adding mitigation measures for AI-generated content to the Code. The first focuses on services that integrate generative AI, such as Microsoft's new Bing and Google's Bard-augmented search services.

These services should commit to implementing necessary safeguards to prevent malicious actors from using them to generate disinformation. 

The second angle covers signatories whose services could be used to disseminate AI-generated disinformation; they would be required to deploy technology that recognizes such content and labels it for users.

Read Also: EU Official Reveals Twitter's Exit from Voluntary Pact Against Disinformation

Right to Freedom of Speech

Jourova revealed that she had discussed the matter with Google's Sundar Pichai, who told her that Google has technology capable of detecting AI-generated text content but is still developing and improving those capabilities.

The EU commissioner emphasized the need for clear and fast labeling of deepfakes and other AI-generated content so that users can readily distinguish between machine-generated and human-generated content. The Commission expects platforms to implement labeling measures immediately.

While the DSA already includes provisions for labeling manipulated audio and imagery on very large online platforms (VLOPs), the addition of labeling to the disinformation Code aims to ensure an even earlier implementation. 

Jourova stressed that while protecting freedom of speech remains essential, that right belongs to people, not machines; in her view, machines should not be granted freedom of speech.

Furthermore, the Commission expects signatories to report on risks related to AI-generated disinformation next month. Jourova urged relevant signatories to use the July reports as an opportunity to inform the public about the safeguards they are implementing to prevent the misuse of generative AI for spreading disinformation.

The Code currently has 44 signatories, including major tech companies like Google, Facebook, and Microsoft, as well as smaller ad tech entities and civil society organizations.

However, Twitter recently withdrew from the voluntary EU Code, marking a noteworthy development in the ongoing efforts to combat disinformation. 

Related Article: G-7 Leaders Want to Develop an AI Framework Called the 'Hiroshima AI Process' After the Recent Summit
