As more than 50 countries gear up for national elections in 2024, concerns that malicious actors will exploit artificial intelligence (AI) to spread disinformation have intensified, prompting researchers at George Washington University to conduct a new study.

The study offers a first-of-its-kind quantitative analysis predicting an alarming escalation in daily bad-actor AI activity by mid-2024, heightening the potential threat to election integrity worldwide, according to Tech Xplore.


Countering the Dangers of AI

Lead study author Neil Johnson, a professor of physics at GW, emphasized the necessity of understanding the battlefield to counteract the dangers posed by AI. 

"Everybody is talking about the dangers of AI, but until our study there was no science of this threat. You cannot win a battle without a deep understanding of the battlefield," he noted.

The research seeks to answer critical questions about how bad actors use AI, exploring the "what, where, and when" of its impact and proposing strategies for control.

Notably, the study finds that basic Generative Pre-trained Transformer (GPT) systems are sufficient for bad actors to manipulate information across platforms, underscoring how simple the necessary tools are.

Building on a previously mapped network spanning 23 social media platforms, the researchers highlight that bad actors can establish direct links to billions of users worldwide without those users' knowledge, posing a significant challenge for cybersecurity efforts.


AI-Powered Malicious Activity 

The research anticipates that AI-powered malicious activity will increase in frequency, becoming a daily occurrence by the summer of 2024. This projection is based on proxy data drawn from historical instances in which online information systems were manipulated.

By examining data from automated algorithm attacks on US financial markets in 2008 and Chinese cyberattacks on US infrastructure in 2013, the researchers extrapolated the likely frequency of such incidents. Factoring in ongoing advances in AI technology, the study establishes a timeline for the expected rise in bad-actor AI activity.
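The article does not spell out the researchers' forecasting model, but the general idea of extrapolating a trend from historical incident counts can be illustrated with a short sketch. Everything below is invented for illustration: the yearly counts are hypothetical, and the exponential-growth fit is an assumption, not the study's actual method. The sketch fits a curve to the counts and solves for the year at which incidents would become a daily occurrence.

```python
# Illustrative sketch only; not the study's model. Assumes exponential
# growth in yearly incident counts (hypothetical numbers) drawn from
# historical proxy event series, then extrapolates forward.

import numpy as np

# Hypothetical yearly incident counts for the combined proxy series.
years = np.array([2008, 2013, 2016, 2019, 2022])
incidents_per_year = np.array([2, 6, 15, 45, 120])  # made-up values

# Fit log(count) = a * year + b, i.e., exponential growth over time.
a, b = np.polyfit(years, np.log(incidents_per_year), 1)

# Solve for the year when the fitted rate reaches ~365 incidents/year,
# i.e., one incident per day on average.
year_daily = (np.log(365) - b) / a
print(f"Fitted annual growth factor: {np.exp(a):.2f}x per year")
print(f"Extrapolated year of daily activity: {year_daily:.1f}")
```

With these invented numbers, the fitted trend crosses the daily-occurrence threshold in the mid-2020s; the study's own extrapolation from real proxy data is what places that crossing in mid-2024.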

To counter this impending threat, the study recommends that social media companies deploy strategies to contain disinformation, focusing on removing the larger pockets of coordinated activity while tolerating the smaller, isolated actors.

This approach aims to strike a delicate balance between minimizing the impact of AI-driven disinformation campaigns and preserving the unrestricted flow of information on these platforms.
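As a rough illustration of that recommendation (not the study's actual method), the sketch below models communities as nodes in a link graph using the networkx library and removes only connected clusters above an arbitrary size cutoff, leaving small, isolated actors alone. The graph, threshold, and node labels are all hypothetical.

```python
# Minimal sketch, assuming coordinated bad-actor activity shows up as
# larger connected clusters in a community-to-community link network.

import networkx as nx

# Hypothetical link network of online communities.
G = nx.Graph()
G.add_edges_from([
    (1, 2), (2, 3), (3, 4), (1, 4),   # a large coordinated cluster
    (5, 6),                            # a small pair
    (7, 8), (8, 9),                    # a mid-size cluster
])
G.add_node(10)                         # an isolated actor

SIZE_THRESHOLD = 2  # arbitrary cutoff for "larger pockets"

# Remove whole clusters above the cutoff; tolerate the rest.
for component in list(nx.connected_components(G)):
    if len(component) > SIZE_THRESHOLD:
        G.remove_nodes_from(component)

print(sorted(G.nodes()))  # smaller, isolated actors remain: [5, 6, 10]
```

The design choice mirrors the trade-off described above: pruning by cluster size targets coordinated campaigns without attempting to police every small account.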

The study underscores the urgency for a proactive and strategic response to the growing threat of bad-actor AI activity, particularly in the context of global elections.

"Although this work establishes a foundational framework for addressing bad-actor-AI risks, it also signals the necessity for continuous research in this field, especially considering the rapid advancement of AI technologies and the ever-changing landscape of the online community ecosystem at scale," the researchers concluded.

The findings of the research team were published in the journal PNAS Nexus.

