According to OpenAI CEO Sam Altman, the company will no longer use paying customers' data to train its large language AI models, including GPT.

Altman revealed in an interview with CNBC on Friday that OpenAI had not used customer data to train its AI models for some time, as customers had expressed concerns about privacy and data protection. 

(Photo: JASON REDMOND/AFP via Getty Images)
OpenAI CEO Sam Altman speaks during a keynote address announcing ChatGPT integration for Bing at Microsoft in Redmond, Washington, on February 7, 2023.

Privacy Concerns

OpenAI quietly updated its terms of service on March 1 to reflect this change. However, the updated terms note that the company may still use content from services other than its API, which could include text that employees enter into the popular chatbot ChatGPT.

Experts have already raised concerns about the use of AI chatbots, warning users to be careful about what they share with these tools.

They claim that AI chatbots may be able to gather large amounts of data for targeted advertising. Ali Vaziri, a legal director on the data and privacy team of the law firm Lewis Silkin, said that the human-like capabilities of AI tools can be disarming to users.


The White House Addresses AI Risks

The rise of AI tools like ChatGPT has raised concerns in various fields, particularly over how they could impact jobs, security, and privacy.

This prompted White House officials, including Vice President Kamala Harris, to meet on Friday with the CEOs of four leading tech companies, including OpenAI, to discuss the risks associated with AI.

The first key area discussed was transparency: ensuring that individuals understand how AI systems work and how they are being used. The second was the ability to evaluate and verify the safety, security, and efficacy of AI systems.

This underscores the need to establish standards for evaluating AI systems, such as audits, testing, and certification processes. The third area discussed was the need to protect AI systems from malicious actors and attacks.

The CEOs made a commitment to collaborate with the administration to ensure that the American people can benefit from AI innovation while their safety and rights are protected. 

The White House also issued a statement outlining additional actions aimed at promoting responsible innovation and risk mitigation in AI. These actions include the Blueprint for an AI Bill of Rights, executive actions, the AI Risk Management Framework, and a plan to establish a National AI Research Resource.

OpenAI's decision not to use customer data to train its AI models is a step in the right direction for data privacy and protection. However, the use of text entered into services like ChatGPT remains a concern for some.

The meeting between White House officials and tech company CEOs also underscores the importance of addressing the potential risks associated with AI while promoting innovation.

