In political discussions on social media, distinguishing between human users and artificial intelligence (AI) bots has become a perplexing challenge, according to a new study. 

Conducted by researchers at the University of Notre Dame, the study examined how AI bots infiltrate online political discussions. The researchers ran their experiment on the social networking platform Mastodon, where human participants interacted with AI bots powered by advanced AI models.


Can Humans Identify AI Bots in Political Discussions?

Over three rounds spanning four days, participants were tasked with identifying which accounts they believed belonged to AI bots. However, the results revealed a staggering misidentification rate of 58% among participants.

Paul Brenner, a faculty member at Notre Dame and the study's senior author, highlighted the significant challenge users face in distinguishing between human- and AI-generated content.

Despite being aware of the presence of AI bots, participants struggled to accurately identify them, indicating the bots' effectiveness in disseminating misinformation. 

"We know that if information is coming from another human participating in a conversation, the impact is stronger than an abstract comment or reference. These AI bots are more likely to be successful in spreading misinformation because we can't detect them," Brenner said in a statement

The study employed several AI models based on large language models (LLMs), including GPT-4 from OpenAI, Llama-2-Chat from Meta, and Claude 2 from Anthropic. Each AI bot was created with a distinct persona, ranging from individuals with diverse political perspectives to those adept at strategically spreading misinformation.

Interestingly, the study found that the specific LLM platform utilized had minimal impact on participants' ability to detect AI bots. Brenner expressed concern over this finding, emphasizing the bots' indistinguishability, regardless of the AI model employed.

Particularly notable were two personas characterized as politically active women adept at using social media to spread misinformation. These personas proved to be among the most successful in deceiving users, highlighting the bots' efficacy in masquerading as genuine human participants.


Potential of AI Models in Amplifying Misinformation

Brenner underscored the alarming potential of LLM-based AI models to amplify the dissemination of misinformation online. Unlike traditional human-assisted bots, AI bots equipped with LLMs can operate on a larger scale, faster, and at a lower cost, posing significant challenges in combating misinformation. 

To mitigate the spread of AI-driven misinformation, Brenner proposed a multifaceted approach encompassing education, legislative measures, and enhanced social media account validation policies. 

Additionally, he emphasized the need for further research to evaluate the impact of AI models on mental health, particularly among adolescents, and to develop strategies to counter their adverse effects.

The findings of the study, titled "LLMs Among Us: Generative AI Participating in Digital Discourse," are set to be presented at the Association for the Advancement of Artificial Intelligence 2024 Spring Symposium at Stanford University in March. The paper is also published on the arXiv preprint server. 



