Artificial intelligence (AI) tools, including the widely-used ChatGPT, have demonstrated vulnerabilities that could potentially be exploited for malicious purposes, a study conducted by researchers at the University of Sheffield warned.

According to TechXplore, this research highlights a previously overlooked threat in AI, illustrating how text-to-SQL systems, which facilitate database searches through natural language queries, can be manipulated to compromise computer systems in practical settings.

Security Weaknesses in AI Tools

The investigation uncovered security weaknesses in six prominent commercial AI tools, successfully exploiting each. Among the tools assessed were BAIDU-UNIT, a leading Chinese intelligent dialogue platform employed across various sectors, and other systems like ChatGPT, AI2SQL, AIHELPERBOT, Text2SQL, and ToolSKE.

FRANCE-TECHNOLOGY-INTERNET-CHATGPT
(Photo : SEBASTIEN BOZON/AFP via Getty Images)
This photograph taken in Mulhouse, eastern France on October 19, 2023, shows figurines next to the ChatGPT logo.

By posing specific questions to these AIs, the researchers claimed that they were able to coerce these AI tools into generating malicious code. 

This code, when executed, could lead to the leakage of sensitive database information, disrupt a database's normal functionality, or even result in its destruction. 

Notably, the study revealed that in the case of Baidu-UNIT, the researchers were able to obtain confidential server configurations and disable one server node.

Xutan Peng, a PhD student at the University of Sheffield and co-leader of the study, emphasized the significance of this discovery. 

He said: "At the moment, ChatGPT is receiving a lot of attention. It's a standalone system, so the risks to the service itself are minimal, but what we found is that it can be tricked into producing malicious code that can do serious harm to other services." 

Furthermore, the research underscored the perils of utilizing AI to learn programming languages for database interaction. For instance, a healthcare professional could ask ChatGPT to generate an SQL command for database interaction. 

However, the SQL code produced could inadvertently lead to significant data management errors without any prior warning, according to the team.

The researchers also revealed the potential for executing backdoor attacks, introducing a "Trojan Horse" into text-to-SQL models by manipulating the training data. Such an attack may not impact the model's overall performance but can be activated at any moment, inflicting real harm on users.

Dr. Mark Stevenson, a Senior Lecturer in the Natural Language Processing research group at the University of Sheffield, emphasized the complexity of large language models (LLMs) and the need for a deeper understanding of their behavior.

Read Also: Can ChatGPT Be Used for Scientific Writing? New Study Offers Answer

Baidu and OpenAI Responds to the Concerns

The researchers presented their findings at the ISSRE conference, a prominent platform for software engineering, and are collaborating with stakeholders in the cybersecurity community to address these vulnerabilities. 

Their work has garnered recognition from industry leaders like Baidu, who promptly addressed and rectified the identified vulnerabilities. OpenAI has also addressed all the specific concerns raised by the Sheffield researchers concerning ChatGPT. 

The researchers anticipate that these revelations will serve as a catalyst for the natural language processing and cybersecurity communities to unite in identifying and mitigating previously overlooked security risks in AI systems. 

They advocate for an ongoing collective effort to stay ahead of the evolving landscape of cyber threats. The findings of the team were published in arXiv.

Related Article: Study Suggests AI Chatbots Can Profile, Learn Personal Information via Normal Conversations

Byline

ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.
Join the Discussion