Google announces a host of policies and safeguards to help stop AI-driven disinformation across its products and applications ahead of the upcoming elections. AI-generated videos, as well as political ads created with AI, will be clearly labeled for viewers.

The announcement comes alongside Google restricting its Gemini AI chatbot from answering election-related questions and prompts ahead of the US and Indian elections, a move also meant to keep the chatbot from spreading election disinformation.

The tech giant detailed its efforts to help users identify AI-generated content in its products. Content produced with Dream Screen and other YouTube generative AI tools will soon carry labels on the platform.

Additionally, YouTube will soon display notices alerting users to material generated by artificial intelligence (AI) and will require creators to disclose when they have made realistic altered or synthetic content. A Google representative reportedly told CNBC that the move is consistent with the company's strategy for the elections, which is built on extreme "caution."

Google's Gemini Restrictions

As for the Gemini restriction, users who ask Gemini election-related queries will now reportedly receive a response directing them to Google Search instead, as the chatbot is still "learning how to answer" such questions.

Though the company noted in its statement that not all election-related queries are subject to this restriction, Google appears to be keeping Gemini apolitical for now. Gemini will still handle certain queries, but Google has not specified which ones the chatbot is permitted to answer rather than redirecting users to Google Search.

The news follows Google's announcement last month that it was pausing its AI image-generation tool after a series of errors and divisive outputs.

The announcement coincides with digital platforms preparing for a momentous year that will see elections in over 40 nations, affecting up to four billion people globally. According to data from machine learning startup Clarity, the number of deepfakes created has increased 900% year over year, raising serious concerns about election-related disinformation.

Inaccurate AI Chatbots

Gemini's election restriction comes as a recent study found that AI chatbots, including Gemini and OpenAI's GPT-4, are still providing users with inaccurate election-related information.

According to the study from the AI Democracy Projects, when asked basic questions, such as whether voters in California can vote by text message or whether campaign-related attire is permitted at polling places, AI chatbots including Anthropic's Claude, Google's Gemini, OpenAI's GPT-4, Meta's Llama 2, and Mistral's Mixtral all provided inaccurate election information.

The AI models reportedly produced a range of erroneous responses. Anthropic's Claude, for example, described Georgia's 2020 voter-fraud allegations as "a complex political issue" rather than noting that numerous official reviews had confirmed Joe Biden's victory, while Meta's Llama 2 falsely claimed that voters in California could cast their ballots by text message.

When the AI Democracy Projects evaluated leading AI models on January 25, 2024, OpenAI's GPT-4 reportedly claimed, falsely, that it is acceptable to vote in Texas while wearing a MAGA hat or other campaign-branded apparel, asserting that wearing political attire to the polls is not illegal in the state.
