Artificial intelligence (AI) systems may be more susceptible to targeted malicious attacks than previously believed, according to a recent study. 

The research highlights how widespread vulnerabilities in AI tools are, particularly vulnerabilities to adversarial attacks, in which manipulated data is fed into an AI system to induce faulty decision-making, Tech Xplore reported.


Adversarial Attacks on AI

The study's co-author, Tianfu Wu, an associate professor of electrical and computer engineering at North Carolina State University, explained the concern surrounding adversarial attacks. 

For instance, strategically placing a specific type of sticker on a stop sign could render it virtually invisible to an AI system, posing potential risks in scenarios like autonomous driving. 

The study emphasized the need to address these vulnerabilities, especially in applications with real-life consequences. The research team investigated how common adversarial vulnerabilities are in deep neural networks and found that these weaknesses are far more widespread than previously thought.

The study noted that these vulnerabilities are exploitable: attackers can manipulate how an AI system interprets data to suit their own ends.


Enter QuadAttacK

Wu and his collaborators developed QuadAttacK, a software tool designed to assess deep neural networks for adversarial vulnerabilities. This tool monitors AI decision-making processes, learning how the system interprets data.

QuadAttacK then manipulates data to gauge the AI system's response, identifying vulnerabilities and demonstrating how attackers could deceive the system.
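QuadAttacK itself is available from the researchers, but the probe-and-perturb idea it embodies can be illustrated with a much older, simpler attack. The Python sketch below uses the fast gradient sign method (FGSM), a classic technique that stands in for, and is far cruder than, QuadAttacK's actual algorithm; the ResNet-50 model and the epsilon budget are illustrative choices, not details from the study.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

# A pretrained classifier of the kind the study tested (ResNet-50 here).
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.eval()

def fgsm_attack(image, label, epsilon=0.03):
    """Perturb `image` so the model misreads it, using the fast
    gradient sign method. `epsilon` caps how far each pixel moves,
    keeping the change nearly invisible to a human."""
    image = image.detach().clone().requires_grad_(True)

    # Step 1: watch how the model responds to the clean input.
    loss = F.cross_entropy(model(image), torch.tensor([label]))

    # Step 2: the gradient reveals which pixel changes raise the loss most.
    loss.backward()

    # Step 3: nudge every pixel by epsilon in the harmful direction.
    return (image + epsilon * image.grad.sign()).detach()

# Demo with random data standing in for a real, preprocessed photo.
image = torch.rand(1, 3, 224, 224)
label = model(image).argmax(dim=1).item()  # treat the clean prediction as ground truth
adversarial = fgsm_attack(image, label)
print(model(adversarial).argmax(dim=1).item() == label)  # False when the attack succeeds
```

The design point is the same one Wu describes: observe how the model responds to data, then steer that data in precisely the direction the model is most sensitive to.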

Surprisingly, the study found that widely used deep neural networks, including ResNet-50, DenseNet-121, ViT-B, and DEiT-S, are highly susceptible to adversarial attacks. 

The extent to which these attacks could be fine-tuned to manipulate AI systems was particularly notable, raising concerns about the robustness of AI in practical applications. QuadAttacK has been made publicly available to enable the broader research community to assess neural networks for vulnerabilities. 

While the study sheds light on the existing challenges, the next phase involves finding solutions to minimize these vulnerabilities. Wu acknowledged that potential solutions are in the works but that results are still forthcoming.

"Basically, if you have a trained AI system, and you test it with clean data, the AI system will behave as predicted. QuadAttacK watches these operations and learns how the AI is making decisions related to the data," Wu said in a statement. 

"This allows QuadAttacK to determine how the data could be manipulated to fool the AI. QuadAttacK then begins sending manipulated data to the AI system to see how the AI responds. If QuadAttacK has identified a vulnerability it can quickly make the AI see whatever QuadAttacK wants it to see," he added.

This study underscored the critical importance of enhancing AI systems' resilience against adversarial attacks, especially in applications where the reliability and safety of decisions impact human lives.

"Now that we can better identify these vulnerabilities, the next step is to find ways to minimize those vulnerabilities. We already have some potential solutions-but the results of that work are still forthcoming," Wu noted. 


