A new study has revealed that propaganda generated by AI large language models, such as GPT-3 davinci, can be nearly as persuasive as real propaganda from Iranian or Russian covert campaigns.

The research, conducted by Josh Goldstein and colleagues from Georgetown University, aimed to determine the effectiveness of AI-generated propaganda in influencing individuals' opinions.


AI Generates Propaganda

Goldstein and his team identified six articles suspected to originate from Iranian or Russian state-aligned covert propaganda campaigns. 

These articles contained false claims regarding US foreign relations, such as Saudi Arabia's alleged commitment to funding the US-Mexico border wall and fabricated reports suggesting the use of chemical weapons by the Syrian government.

To assess the persuasiveness of AI-generated propaganda, the authors used GPT-3 to create new propaganda articles based on sentences extracted from the original propaganda pieces. Additionally, they incorporated sentences from unrelated propaganda articles to serve as templates for style and structure. 

In December 2021, the researchers presented both the actual propaganda articles and those generated by GPT-3 to over 8,000 US adults recruited through the survey company Lucid. 

Among respondents who read no article at all, 24.4% agreed with the claims. Exposure to an authentic propaganda piece raised this figure to 47.4%.

However, the influence of reading an article composed by GPT-3 was nearly equivalent, with 43.5% of participants endorsing the claims post-reading. Numerous AI-generated articles demonstrated comparable persuasiveness to those crafted by humans.

Interestingly, the study found that editing the prompts provided to GPT-3 and curating its output further enhanced the persuasiveness of the AI-generated propaganda.

In some cases, these human-machine teaming strategies resulted in AI-generated articles that were equally or even more persuasive than the original propaganda.

The implications of these findings are significant as they suggest that propagandists could exploit AI technology to produce convincing content with minimal effort. 

That raises concerns about the potential proliferation of misinformation and the manipulation of public opinion on various issues.


AI-Generated Persuasive Propaganda

The study underscores the capability of large language models like GPT-3 to generate persuasive propaganda. It also highlights the role of human intervention in enhancing the persuasiveness of AI-generated content, raising questions about the ethical implications of such practices. 

Moreover, the researchers emphasize that AI-generated propaganda poses a significant threat to democratic processes, as it can be used to disseminate false information on a large scale. 

According to the researchers, the rapid advancement of AI technology further exacerbates this threat, as newer models like GPT-4 are expected to produce even more persuasive propaganda.

In light of these findings, the team suggests that future research could explore strategies to mitigate the impact of AI-generated propaganda campaigns. These include improving detection methods for identifying inauthentic content and developing behavioral interventions to help users discern between AI-generated and genuine content.

The paper, "How persuasive is AI-generated propaganda?", was published in the journal PNAS Nexus.




ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.