Since 2022, the world has witnessed a surge in the popularity of Large Language Models (LLMs) such as those powering OpenAI's ChatGPT and Google Bard, prompting companies to invest heavily in their development and sparking an AI race.

These AI tools are frequently integrated into chatbots and are trained on vast swathes of the internet to learn how to generate responses to user prompts.

However, researchers from the AI security startup Mindgard and Lancaster University caution that key parts of these LLMs can be replicated in a matter of days for as little as $50.

The extracted information could be exploited to mount targeted attacks, posing risks such as exposing confidential data, bypassing safeguards, eliciting inaccurate responses, or enabling further focused attacks, TechXplore reported.

Their findings, to be presented at CAMLIS 2023 (Conference on Applied Machine Learning for Information Security), demonstrate that critical aspects of existing LLMs can be replicated cheaply.

Model Leeching in Large Language Models

The researchers employed a tactic known as "model leeching," in which the LLM is queried with carefully crafted prompts so that its responses divulge insights into how the model works.

In their study, centered on ChatGPT-3.5-Turbo, the team reported replicating key elements of the LLM in a model one hundred times smaller. This replica served as a testing ground for uncovering vulnerabilities, which could then be exploited against ChatGPT itself with an 11% higher success rate.
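
The idea behind this kind of extraction is straightforward in outline, even though the researchers' actual methodology is more involved. The minimal, hypothetical Python sketch below illustrates only the first stage such attacks rely on: harvesting prompt/response pairs from a target model's public API, which could then serve as fine-tuning data for a much smaller open-source imitation. The probe prompts, output file name, and model choice are illustrative placeholders, not details from the study.

```python
# Hypothetical sketch only: harvesting prompt/response pairs from a public LLM API.
# These pairs could later be used as supervised fine-tuning data for a much
# smaller open-source model that imitates the target's behavior.
# Requires the `openai` Python package (v1.x) and an API key in OPENAI_API_KEY.

import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder probing prompts; a real extraction attack would use many
# thousands of prompts chosen to cover the behavior being copied.
probe_prompts = [
    "Summarise the plot of Hamlet in two sentences.",
    "Explain what a SQL injection attack is.",
    "Translate 'good morning' into French.",
]

pairs = []
for prompt in probe_prompts:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    pairs.append({
        "prompt": prompt,
        "completion": response.choices[0].message.content,
    })

# Save the harvested pairs as JSONL, a common format for fine-tuning a
# smaller open model on the collected behavior.
with open("leeched_pairs.jsonl", "w") as f:
    for pair in pairs:
        f.write(json.dumps(pair) + "\n")
```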

Dr. Peter Garraghan of Lancaster University, who is also Mindgard's CEO, expressed both fascination and concern regarding the discovery. He highlighted the significance of the work in demonstrating that security vulnerabilities can transfer between closed-source and open-source machine learning models, particularly given the widespread reliance on publicly available models.

"What we discovered is scientifically fascinating, but extremely worrying. This is among the very first works to empirically demonstrate that security vulnerabilities can be successfully transferred between closed source and open source Machine Learning models, which is extremely concerning given how much industry relies on publicly available Machine Learning models hosted in places such as HuggingFace," Garraghan said in a press statement. 

Latent Weaknesses in AI Technologies

The research underscored the existence of latent weaknesses within these AI technologies and suggested that different models may even share common vulnerabilities.

While businesses are poised to invest heavily in creating their own LLMs for diverse applications like smart assistants, financial services, and enterprise solutions, the researchers stressed the importance of acknowledging and addressing the associated cyber risks. 

Despite the technology's transformative potential, Garraghan emphasized the need to carefully consider cyber risks when adopting and deploying LLMs.

"While LLM technology is potentially transformative, businesses and scientists alike will have to think very carefully on understanding and measuring the cyber risks associated with adopting and deploying LLMs," Garraghan noted.
