With only months before the United States chooses its next government leaders, preparations have begun: OpenAI has released its new election misinformation policy to protect the public and the integrity of the vote. The company's main priority is to strengthen the safeguards on its AI tools and products, which could otherwise be used to spread deepfakes, impersonation, fake news, and more.

OpenAI
(Photo: KIRILL KUDRYAVTSEV/AFP via Getty Images)

OpenAI Election Misinformation Policy is Now Here to 'Protect'

The latest blog post by OpenAI details the steps it is taking toward "protecting the integrity of elections," beginning with the safeguards it will impose on its products and AI tools.

As we prepare for elections in 2024 across the world's largest democracies, our approach is to continue our platform safety work by elevating accurate voting information, enforcing measured policies, and improving transparency.

— OpenAI

As part of preventing the abuse of AI around elections, OpenAI said that DALL-E will have specific "guardrails" that will "decline requests" to create images of real people, including candidates. Moreover, users are not allowed to build applications for political campaigning and lobbying.

The company added that it also places great importance on transparency for its AI tools, particularly the sources of the information that ChatGPT delivers, as well as content provenance for images generated by DALL-E.

Read Also: No Fake Act: New Bipartisan Bill is Against Generative AI Using Artists, Singers to Create Unconsented Content

AI Tools to Combat Deepfake, Fake News, and MORE

OpenAI is going to the root of the issue by imposing safety measures directly on its AI tools to help combat the spread of fake news, AI-generated deepfakes, and the misinformation that is prevalent online.

The company said that with its new GPTs, users can report potential violations directly and easily flag dubious content as it is being created.

The Massive Use of AI for Misinformation

Since last year, concerns over the use of AI to create persuasive but fabricated content, a.k.a. misinformation, have been raised by advocates, politicians, and more.

This was due to the alarming rise of deepfakes flooding the internet and social media platforms with misleading information designed to sway the public toward harmful beliefs and disrupt society.

In response, the Federal Election Commission (FEC) has looked into regulating political ads made with generative AI, particularly over the false statements such ads can spread to the public.

Apart from government agencies, Big Tech companies have also stepped up to combat misinformation online, with Google affirming its commitment to surface legitimate news and information via its Search and YouTube platforms.

The concerns surrounding the upcoming US 2024 elections are real, driven by the widespread spread of misinformation online through multimedia content made with generative AI tools. That being said, OpenAI is now taking a stand against the use of its AI tools to create such content, stemming the production of misleading material and helping protect the right of suffrage for all.

Related Article: Deepfake Porn Rises to Top of Search Results from Google, Bing: Here's How to Submit a Report

Isaiah Richard

ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.