Six big tech companies are reportedly planning to sign a new accord aimed at mitigating AI-generated material designed to deceive the public during elections.

Google, OpenAI, Microsoft, Meta, TikTok, and Adobe are all reportedly set to agree on the self-developed artificial intelligence (AI) safeguard. However, as Microsoft spokesperson David Cuddy implied, more companies beyond those mentioned could join the accord. 

(Photo: OLIVIER TOURON/AFP via Getty Images)
A drop box is pictured ahead of the midterm elections at the City Hall in Mesa, Arizona, on October 25, 2022.

As per the Washington Post, the document reads like a manifesto, asserting that AI-generated content poses risks to fair elections, much of it produced by the companies' own tools and shared on their platforms. It also suggests ways to reduce that risk, such as labeling content suspected of being AI-generated and informing the public about the risks associated with AI. 

One of the accord's motivating factors is reportedly the possibility that the deliberate, covert creation and dissemination of misleading AI content could compromise the integrity of the election process. 

Read Also: Experts Warn AI Misinformation May Get Worse These Upcoming Elections

AI Election Deception

The accord, reportedly still in draft form, comes amid a critical year of elections across many countries and as deceptive generative AI political material has become increasingly prevalent. 

Global political campaigns have already begun to use AI-generated content. Last year, AI was used to impersonate the voice of former president Donald Trump in an advertisement supporting Republican presidential candidate Ron DeSantis. In Pakistan, jailed former prime minister Imran Khan addressed supporters via an AI-generated voice while incarcerated.

A robocall impersonating President Biden was reportedly made in January, urging recipients not to cast ballots in the New Hampshire primary. The calls used an AI-generated imitation of the president's voice. 

AI has also been used to resurrect dead political figures for propaganda, as recently done with the late Indonesian dictator Suharto ahead of the Indonesian elections, where his likeness was deployed to sway the electorate toward the Suharto-affiliated party. 

The US government has likewise acted on the prevalence of AI-generated misinformation, with the FCC prohibiting robocalls that use AI-generated voices.

AI Election Risks

AI continues to be a growing worldwide concern, most notably after the technology was ranked among the top 10 dangers for the next two years in the World Economic Forum's "Global Risks Report 2024." The report placed AI-driven misinformation and disinformation, and their consequences for societal polarization, above climate change, war, and economic instability.

Regulators, AI experts, and political activists have reportedly pressured tech corporations to stop the proliferation of bogus election information. The new agreement is comparable to a voluntary commitment that the same businesses and a few others signed in July following a White House meeting.

In that pledge, the tech firms committed to attempting to detect and label phony AI content appearing on their platforms. In the new agreement, the businesses pledge to inform users about misleading AI material and to be transparent about their efforts to detect deepfakes. 

Related Article: AI Deepfake Brings Back Indonesia's Dead Dictator for Upcoming Elections 

Written by Aldohn Domingo

ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.