US cybersecurity firm Mandiant has warned of the growing use of artificial intelligence (AI) to orchestrate manipulative online campaigns.

While bad actors have already harnessed AI for various digital intrusions, its recent surge in powering information manipulation has raised significant concerns.

AI-Powered Attacks On the Rise

Reuters reports that researchers at Virginia-based, Google-owned Mandiant have identified instances since 2019 in which AI-generated content, including fabricated profile pictures, played a central role in politically motivated online influence campaigns.

These campaigns have spanned the globe, involving entities linked to the governments of Russia, China, Iran, Ethiopia, Indonesia, and others.

The dramatic expansion of a pro-China campaign known as Dragonbridge exemplifies the trend. The campaign began in 2019 by targeting pro-democracy protesters in Hong Kong and has since spread to 30 social platforms in ten languages, according to Sandra Joyce, Vice President at Mandiant Intelligence.

Despite their growth, however, these campaigns have had limited effect. As Joyce points out, "From an effectiveness standpoint, not a lot of wins there. They really haven't changed the course of the threat landscape just yet."


AI-Generated Fake Content

The same report notes that the proliferation of generative AI models, including the well-known ChatGPT, has made it far easier to create sophisticated fake content spanning videos, images, text, and even computer code.

Mandiant underscores the potential of generative AI to amplify malicious operations, highlighting two crucial capabilities: scaling activity beyond an actor's inherent means and producing convincing fabricated content.

Together, these capabilities allow threat actors with limited resources to produce higher-quality content at scale.

Fake Images, Videos

One of the most concerning developments is the use of AI-generated images and videos. Mandiant anticipates that these formats will be among the first to see widespread adoption because of their power to provoke emotional responses.

The Mandiant report provides one such case: In May 2023, there was a momentary dip in US stock market prices. This occurred when certain Twitter accounts, such as the Russian state media outlet RT and the verified account @BloombergFeed (which falsely presented itself as affiliated with Bloomberg Media Group), shared an AI-generated image of a supposed explosion near the Pentagon.

Mandiant's research also reveals that information operations actors have employed AI-generated video technology, ranging from customizable AI-generated human avatars to face swap tools, since 2021. These tools have facilitated the promotion of desired narratives, blurring the line between fact and fiction.

Moreover, AI's potential for generating text has also come under scrutiny. Although instances of AI-generated text are currently limited, Mandiant foresees its rapid adoption due to the accessibility and ease of use of available tools.

Stay posted here at Tech Times.


ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.