As concerns grow over the misuse of artificial intelligence (AI) in politics, experts are calling for action to address the potential threat posed by AI-enhanced images.

The warning comes after Karl Turner, a Labour MP, shared on X a manipulated image of Rishi Sunak pulling a pint of stout at the Great British Beer Festival, prompting criticism from the Conservatives, according to The Guardian.

While it remains unclear whether AI tools were used to manipulate the image, experts emphasized that AI programs have made it increasingly easy to produce convincing fake content, including text, images, and audio. 

As major elections approach in the UK and the US next year, there are mounting fears that such technologies could undermine democratic processes.


Potential of AI to Deceive Voters

Wendy Hall, a computer science professor at the University of Southampton, stressed the need to prioritize AI-related risks in democratic systems, given the potential for AI-generated content to deceive voters.

"I think the use of digital technologies, including AI is a threat to our democratic processes. It should be top of the agenda on the AI risk register with two major elections - in the UK and the US - looming large next year," Hall told The Guardian. 

Shweta Singh, an assistant professor at the University of Warwick, echoed the call for ethical principles to ensure the trustworthiness of news consumed by the public, especially during election periods. 

"We need to act on this now, as it is impossible to imagine fair and impartial elections if such regulations don't exist. It's a serious concern, and we are running out of time," Singh also told the outlet.

Professor Faten Ghosn, the head of the government department at the University of Essex, also urged politicians to be transparent about using manipulated images and suggested implementing regulations requiring AI-generated content in political ads to be clearly marked.

This issue is part of a broader debate on how to regulate AI effectively. Darren Jones, the Labour chair of the business select committee, questioned the ability to identify deepfake photos and called for measures to address the issue before the next election. 

While the UK's science department is seeking input on an AI white paper that sets out general principles for the technology's development, experts are calling for specific measures to combat AI-generated disinformation.


AI Companies Address the Issue

Leading AI companies have recognized the significance of dealing with this issue. Amazon, Google, Meta, Microsoft, and OpenAI, the creator of ChatGPT, have recently agreed to incorporate watermarking into AI-generated visual and audio content. This measure is designed to differentiate manipulated content produced by AI from genuine material.
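The companies have not publicly detailed how their watermarks will work, and production systems (such as the C2PA provenance standard or Google DeepMind's SynthID) are far more robust than anything shown here. Purely as an illustration of the underlying idea, the hypothetical sketch below hides a short machine-readable tag in the least significant bits of an image's pixel values, so a detector can later check whether the tag is present.

```python
# Minimal, illustrative least-significant-bit (LSB) watermarking sketch.
# This is NOT how the companies' systems work; it only demonstrates the
# general concept of embedding an invisible, machine-readable tag in pixels.

WATERMARK = "AI-GENERATED"  # hypothetical tag used for this demo

def embed(pixels, message=WATERMARK):
    """Hide each bit of the message in the lowest bit of successive pixels."""
    bits = [int(b) for byte in message.encode() for b in format(byte, "08b")]
    if len(bits) > len(pixels):
        raise ValueError("image too small to hold the watermark")
    marked = list(pixels)
    for i, bit in enumerate(bits):
        marked[i] = (marked[i] & ~1) | bit  # overwrite the least significant bit
    return marked

def extract(pixels, length=len(WATERMARK)):
    """Read back `length` bytes from the pixels' least significant bits."""
    bits = [p & 1 for p in pixels[: length * 8]]
    return bytes(
        int("".join(map(str, bits[i:i + 8])), 2) for i in range(0, len(bits), 8)
    ).decode(errors="replace")

# Example: a tiny 8-bit grayscale "image" as a flat list of pixel values.
image = [128] * 200
marked_image = embed(image)
print(extract(marked_image))  # -> AI-GENERATED
```

A scheme this simple is trivially destroyed by resizing or re-compressing the image, which is why real watermarking research focuses on marks that survive such transformations.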

In June, Brad Smith, the president of Microsoft, underscored the pressing need to address AI-generated disinformation well before the start of next year to safeguard the integrity of the 2024 elections.

With the potential for AI-enhanced images to disrupt democratic processes and mislead voters, experts stress the need for proactive measures to regulate the use of AI in political contexts.  
