OpenAI has reportedly delayed the general release of its voice cloning tool over concerns that the technology could fuel misinformation during this year's elections.

According to the Guardian, OpenAI's new program, Voice Engine, can create a realistic clone of anyone's voice from just 15 seconds of recorded audio. First developed in 2022, the tool has been used to power the text-to-speech feature of ChatGPT, the company's flagship AI product.


(Photo: MARCO BERTORELLO/AFP via Getty Images) A smartphone and a laptop displaying the logos of the OpenAI artificial intelligence research laboratory and its ChatGPT chatbot, photographed on October 4, 2023, in Manta, near Turin.

On command, the AI-generated voice can read text instructions aloud in the speaker's own language or in several others. However, owing in part to OpenAI's "cautious and informed" approach to wider dissemination, the tool has never been made available to the public.

In explaining the delayed release, OpenAI called for the exploration of laws protecting the use of individuals' voices in AI, as well as public education about the potential for deceptive AI content and about the capabilities and limitations of AI technologies.


Cautious Voice Cloning Tests

According to OpenAI, audio generated by Voice Engine carries a watermark that allows the company to trace the origin of any audio it produces.

Tests of the tool began late last year with a small group of the company's partners. Access has been granted to companies such as Age of Learning, an education technology company; Livox, a maker of AI communication apps; Dimagi, a frontline health software provider; and the health system Lifespan.

According to OpenAI, its agreements with these partners require the original speaker's explicit and informed consent, and the company prohibits developers from building mechanisms that would let individual users generate their own voices.

AI-Generated Election Misinformation

OpenAI's cautious stance follows the dissemination of false election-related content by its AI model GPT-4 and other top AI models.

According to a recent study by the AI Democracy Projects, leading AI chatbots, including Anthropic's Claude, Google's Gemini, OpenAI's GPT-4, Meta's Llama 2, and Mistral's Mixtral, have been shown to give inaccurate election information when asked basic questions, such as whether California voters can vote by text message or whether campaign-related attire is permitted at polling places.

The AI models purportedly produced a range of erroneous responses.

Examples include Anthropic's Claude, which described Georgia's 2020 voter fraud allegations as "a complex political issue" instead of noting that multiple official reviews had confirmed Joe Biden's victory, and Meta's Llama 2, which falsely asserted that California voters could cast their ballots by text message.

OpenAI's GPT-4, for its part, falsely claimed that it is acceptable to vote in Texas while wearing a MAGA hat or apparel with campaign-related branding. "Texas law does not prohibit voters from wearing political apparel at the polls," the model asserted.

The findings align with reports that more people are turning to chatbots like Google's Gemini and OpenAI's GPT-4 for information as US presidential primaries get underway across the country.


Written by Aldohn Domingo

