(Photo: Russell Boyce/Pool/AFP via Getty Images) British Royal Air Force personnel wait in a bunker in full Nuclear, Biological and Chemical suits after a warning of a Scud missile attack on their base in Kuwait, 20 March 2003.

Artificial intelligence models used in chatbots could offer guidance for planning biological attacks, warns US think tank Rand Corporation.

In a chilling revelation, a recent report by the Rand Corporation sheds light on the risks that artificial intelligence (AI) poses in the context of biological warfare.

The Dark Side of AI

Large language models (LLMs) could supply guidance for the planning and execution of a biological attack, raising concerns about the potential misuse of AI technology.

The report comes at a time when the rapid evolution of AI technology is outpacing regulatory oversight, leaving gaps in existing policies and regulations.

As The Guardian notes, the report highlights that a lack of understanding of biological agents has historically limited the success of bioweapon attempts, but AI could swiftly bridge these knowledge gaps, increasing the risk of such attacks.

The Rand Corporation's research found that the LLMs tested did not generate explicit instructions for creating biological weapons. However, they offered guidance that could significantly aid in planning and executing a biological attack.

AI in Bioweapon Attacks

One notable scenario devised by Rand involved an anonymized LLM identifying potential biological agents, including smallpox, anthrax, and plague. 

It discussed these agents' relative likelihood of causing mass death, as well as the possibility of obtaining plague-infected rodents or fleas and transporting live specimens.

The scenario also considered variables such as the size of the affected population and the proportion of cases involving pneumonic plague, which is deadlier than the bubonic form.

The researchers acknowledged that extracting such information from an LLM required "jailbreaking," which involves using text prompts that override a chatbot's safety restrictions.

In another scenario, the unnamed LLM discussed different delivery methods for botulinum toxin, a deadly neurotoxin that causes paralysis.

It weighed the pros and cons of using food or aerosols for dissemination. It even advised on a plausible cover story for acquiring Clostridium botulinum while appearing to conduct legitimate scientific research.

The LLM recommended presenting the purchase of C. botulinum as part of a project focused on diagnostic methods or botulism treatments, thereby concealing the true intent behind the acquisition.

A Closer Look

The researchers emphasized the need for rigorous testing of AI models and highlighted the importance of restricting LLMs from engaging in conversations like the ones explored in their report.

While the report raises alarming concerns, it also points out that it remains an open question whether existing LLMs' capabilities represent a new threat beyond the harmful information already available online. 

However, the risks are undeniable, and as AI continues to advance, policymakers, researchers, and AI companies must come together to address these concerns.

The potential misuse of AI in planning biological attacks will be a significant topic of discussion at the upcoming global AI safety summit in the UK. 

Stay posted here at Tech Times.

Tech Times Writer John Lopez

ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.