Microsoft has released a report warning that China intends to use AI-generated content to sabotage elections this year in the US, South Korea, and India.

Research released on Friday by Microsoft's threat intelligence team states that the company expects North Korean and Chinese state-sponsored cyber groups to target high-profile elections in 2024.

Microsoft's cybersecurity researchers expect Chinese and North Korean cyber actors to attempt to influence the upcoming elections in India, South Korea, and the US. At a minimum, Microsoft predicts, China will produce and amplify AI-generated material on social media platforms to strengthen its position in these contests.

Beware of AI-Generated Content

Microsoft cautions that although AI-generated material has not yet proven very effective at swaying audiences, China is increasingly experimenting with enhancing memes, videos, and audio files.

"We assess that China will, at a minimum, create and amplify AI-generated content to benefit its interests," stated Microsoft in its blog. "Although the likelihood of such content affecting election outcomes remains low, China's continued experimentation in enhancing memes, videos, and audio could potentially become more effective over time."

The Microsoft report also notes that China ran an AI-generated disinformation campaign during Taiwan's presidential election in January, the first time a state-backed group had done so. During the Taiwanese election, Storm-1376, a Beijing-backed group, was highly active, distributing AI-generated memes and fake audio clips on YouTube that targeted William Lai, the candidate who ultimately won.

Microsoft also points out that Chinese groups are continuing influence operations in the US by posing contentious questions on social media and identifying the issues that divide American voters. This strategy may give Beijing insight into key voting demographics ahead of the US presidential election.


As AI technology advances, the potential to spread false information grows dramatically. Governments and state-backed cyber actors can now reach more people with convincing fabricated images, videos, and other content.

The Microsoft logo at the International Cybersecurity Forum (FIC) in Lille on January 28, 2020.

(Photo: DENIS CHARLET/AFP via Getty Images)

North Korea to Continue Its Money-Making Schemes

Furthermore, Microsoft stated in its report that North Korea's participation in these operations is more about generating revenue for the regime than influencing elections. The country is well known for software supply chain attacks and cryptocurrency theft, per ZDNet.

Microsoft stated: "Notably, Microsoft and OpenAI have seen how the North Korean actor known as Emerald Sleet uses AI-powered large-language models (LLMs) to power tools that improve their operations' effectiveness and efficiency. Microsoft and OpenAI collaborated to deactivate Emerald Sleet-related accounts and assets."

The tech giant also noted that the UN "estimates that North Korean cyber actors have stolen over $3 billion in cryptocurrency since 2017," adding that "heists totaling between $600 million and $1 billion occurred" in 2023 alone.

In a year of major elections worldwide, leading technology companies have also pledged to combat deceptive AI-generated content.

According to The Guardian, Microsoft was recently criticized by an official review board convened by the White House for "a cascade of errors" that allowed state-backed Chinese cyber operators to access the email accounts of senior US officials. This follows years of accusations by US and UK officials that China-backed hackers have targeted politicians, journalists, corporations, and even the UK's election watchdog.

