Italy's data protection authority has initiated a fact-finding investigation into the widespread collection of personal data for training artificial intelligence (AI) algorithms, as announced on Wednesday. 

Renowned for its proactive stance among the 31 national data protection authorities, the Italian watchdog is particularly focused on ensuring compliance with the General Data Protection Regulation (GDPR), a cornerstone of European data privacy regulations.

This move follows an earlier incident this year when Italy briefly barred the popular chatbot ChatGPT over concerns of a potential privacy breach.

The current investigation aims to evaluate whether online platforms are implementing sufficient measures to prevent the excessive collection of personal data, commonly referred to as data scraping, for use in AI algorithms.

Italy Launches Probe into AI Training Practices Over Personal Data Gathering
(Photo: MARCO BERTORELLO/AFP via Getty Images)
A photo taken on March 31, 2023, in Manta, near Turin, shows a computer screen displaying the home page of OpenAI's website with its ChatGPT chatbot.
The Italian data protection authority stated, "Following the fact-finding investigation, the Authority reserves the right to take the necessary steps, also in an urgent manner," as quoted by Reuters. The statement did not name any specific company.

To ensure a comprehensive examination, Italy has invited participation from academics, AI experts, and consumer groups, encouraging them to share their perspectives during a 60-day fact-finding period, according to TVP.

France, Germany, and Italy Agree on AI Regulation

In a broader context, France, Germany, and Italy recently reached an agreement on the regulation of AI. These governments support voluntary but legally binding commitments for AI providers in the European Union, according to a joint paper obtained by Reuters. The agreement is expected to expedite negotiations at the European level.


While the European Parliament presented the AI Act in June, designed to mitigate risks and prevent discriminatory effects, recent discussions faced challenges. During these talks, MEPs walked out due to a deadlock over the proposed approach to foundation models.

France, Germany, and Italy, among the larger member states, expressed reservations about certain regulations. They argued that making the code of conduct binding initially only for major AI providers could give smaller European providers a competitive advantage, but might also reduce trust in those smaller providers.

Regulation Focusing on AI Use

The three governments emphasized that rules of conduct and transparency should be universally binding, irrespective of the provider's size. Initially proposing no sanctions, they suggested the potential establishment of a sanction system if violations were identified over time. The paper envisions a European authority to monitor compliance with these standards.

Germany's Economy Ministry, jointly responsible for the topic with the Ministry of Digital Affairs, emphasized the need to regulate the application of AI rather than the technology itself. Euronews reported that Digital Affairs Minister Volker Wissing expressed satisfaction with the agreement with France and Italy, stressing that regulating AI's use rather than the underlying technology is essential "to play in the top AI league worldwide."

In August, international privacy watchdogs, including the UK's ICO, Canada's OPC, and Hong Kong's OPCPD, urged mainstream social media platforms to safeguard users' public posts from scraping. Their joint statement emphasizes that platforms have a legal responsibility to protect user data in most markets, per TechCrunch.



ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.