A recently published report by the Nuclear Threat Initiative (NTI) warns that artificial intelligence (AI) could be misused to develop bioweapons unless it is regulated promptly.

The report emphasizes the pressing need for governments to address the potential risks associated with the convergence of AI and life sciences to manage the threat of "global biological catastrophe."

The release of the report, titled "The Convergence of Artificial Intelligence and the Life Sciences: Safeguarding Technology, Rethinking Governance, and Preventing Catastrophe," coincides with the UK government's AI Safety Summit.


NTI Urges Immediate Action in Regulating AI

Amid significant strides in AI, NTI's latest report urged swift and coordinated action from governments, industries, and the scientific community to mitigate the risks associated with AI-enabled capabilities in engineering living organisms.

AI-bio technologies present significant advantages for modern bioscience and bioengineering. NTI acknowledged that they hold the potential to expedite vaccine and therapeutic development, facilitate the creation of novel materials, drive economic progress, and contribute to combating climate change.

However, the same AI tools that enable the manipulation of living systems could also be employed, either accidentally or deliberately, to inflict substantial harm, potentially culminating in a global biological crisis, according to the organization.

"This is uncharted territory," emphasized the report's co-author, Sarah R. Carter, Ph.D. "AI-bio capabilities are developing rapidly, and the rate of change will only increase. To keep up, policymakers will need to consider fundamental new approaches to governance that are more agile and adaptable."

To formulate the recommendations outlined in the report, authors Carter, Nicole Wheeler, Ph.D., Sabrina Chwalek, Christopher R. Isaac, and Jaime M. Yassif, Ph.D., conducted interviews with over 30 experts spanning AI, biosecurity, bioscience research, biotechnology, and governance of emerging technologies.

They aimed to evaluate the hazards linked to this technology, scrutinize its biosecurity ramifications, and devise strategies to safeguard rapidly advancing AI-bio capabilities.

"There is a range of evolving AI tools that could be abused. Information about how to manipulate biological systems is now easily accessible to a wide population via large language models, through applications like ChatGPT, while biological design tools could be misused to create new toxins, components of viruses, or other harmful biological materials," Wheeler said in a statement. 

"AI is also automating elements of scientific work, and this technology is poised to advance dramatically and scale rapidly in the coming years," she added.


Six Recommendations

The authors outlined six immediate steps at both national and international levels to mitigate biological risks tied to emerging AI-bio technologies while still promoting scientific progress:

1. Establish an international "AI-Bio Forum" to develop and share AI model guardrails that reduce biological risks.

2. Develop a substantially new, more adaptable approach to national governance of AI-bio capabilities.

3. Implement promising AI model guardrails at scale.

4. Pursue an ambitious research agenda to explore additional AI guardrail options.

5. Strengthen biosecurity controls at the interface between digital design tools and physical biological systems.

6. Employ AI tools to build next-generation pandemic preparedness and response capabilities.



ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.