The Italian Data Protection Authority, the Garante, has ordered OpenAI to stop processing local users' personal data through its ChatGPT model, according to a report by TechCrunch.

(Photo: MARCO BERTORELLO/AFP via Getty Images) A computer screen in Manta, near Turin, displays the ChatGPT home page on OpenAI's website on March 31, 2023, the day Italy's privacy watchdog said it had blocked the chatbot over concerns that the app did not respect user data and could not verify users' age.

General Data Protection Regulation 

The authority is concerned that the San Francisco-based company is breaching the European Union's General Data Protection Regulation (GDPR) by processing people's data unlawfully. It has given OpenAI 20 days to respond to the order or face fines of up to 20 million euros or 4% of annual global turnover, the maximum the GDPR allows.

The GDPR applies to any European Union user whose personal data is processed. OpenAI's ChatGPT model, which can produce biographies of named individuals in the region on demand, has been crunching this kind of information, and it is not clear how people can ask the company to correct erroneous information.

The Italian regulator is also concerned about the lack of any system to prevent minors from accessing the technology. 

According to TechCrunch, because OpenAI does not have a legal entity established in the EU, any member state's data protection authority can take action under the GDPR if it identifies risks to local users.

The Garante's statement also refers to a data breach the service suffered earlier this month. OpenAI acknowledged that a bug in a conversation history feature leaked users' chats and may have exposed some customers' payment information.

Earlier models were trained using data taken from the Internet, including forums like Reddit, but OpenAI has been reluctant to disclose specifics about the training data used for its most recent generation of the technology, GPT-4.

The GDPR permits a range of legal bases for processing personal data, from consent to the public interest, but the volume of data required to train large language models makes the question of lawfulness more challenging. At the very least, OpenAI does not appear to have informed the individuals whose data it used to train its commercial AIs.

Read Also: Microsoft Security Copilot: GPT-4 Powered Assistant to Help with Safety - What You Need to Know

Question of Lawfulness

At issue is the lawfulness of this processing: the legal basis on which OpenAI processed Europeans' data. Data protection authorities may order the company to remove any personal information relating to Europeans that it has processed improperly.

However, it is unclear if that would require the company to retrain models created using illegally obtained data.

Since OpenAI does nothing to prevent children under the age of 13 from registering to use the chatbot, the Italian authority is also concerned that the firm may be processing minors' data.

The agency has aggressively pursued risks to children's data in recent years, and it recently imposed a similar ban on Replika, an AI chatbot for virtual friendships, over child safety concerns.

Related Article: OpenAI brings ChatGPT Plugins to Download, Helping Expand its Services to the Third Party

