Google has announced new rules requiring political advertisements that use artificial intelligence to prominently disclose any digitally altered images or voices, a response to the growing danger of AI-generated false content.

The new Google AI rules take effect in November, nearly a year before the 2024 US presidential election.

The disclosure of AI-altered content must be clear, prominent, and placed where viewers are likely to notice it, according to Google's update to its political content policy, which will apply to YouTube and other Google services.

According to a report from Reuters, the move comes as realistic fake images, videos, and audio recordings become easier to create with generative AI tools, posing a serious threat to the integrity of political campaigns.

Experts Raise Concerns

Several presidential campaigns, notably that of Florida GOP Governor Ron DeSantis, have already used this technology in the run-up to the 2024 elections. The DeSantis campaign, for instance, launched an attack ad featuring AI-generated images of former President Donald Trump embracing Dr. Anthony Fauci, an infectious disease specialist.

Additionally, the Federal Election Commission has begun a process that could lead to regulation of AI-generated deepfakes in political advertisements, which might include synthetic voices of political figures saying things they never said, per CBS News.

Although Google is not outright banning AI in political advertising, the rule carves out several exemptions, such as synthetic content that is inconsequential to the claims made in the advertisement. AI may also still be used for editing techniques such as cropping, color correction, defect removal, and background edits.

Digital information integrity specialists, however, have raised alarms about the growing use of AI-generated material in political campaigns, warning that AI tools can produce convincing text, images, audio, and video capable of misleading or confusing voters.


Social media sites are especially susceptible to the spread of false information during political campaigns, as they are a key source of information for billions of people. These platforms, however, face mounting difficulties in keeping up with the flood of false election-related content.

Major social networks have scaled back their efforts to prevent election-related misinformation, making considerable cuts to election integrity and safety teams as well as responsible AI specialists.

2024 US Elections Integrity at Risk

A previous TechTimes report indicated that the rise of generative AI technologies capable of quickly manufacturing synthetic material aimed at specific audiences, such as computer-generated speech, video, and images, makes the 2024 elections and campaigns extremely high-risk. According to experts, generative AI may be used to fabricate video footage, automated robocalls, altered audio recordings, and fake images styled like local news stories to mislead voters or defame leaders.

Several initiatives are underway to address this growing problem, including the development of technologies that provide greater insight into the origin and provenance of content. To help photographers authenticate their work, efforts such as Adobe's Content Authenticity Initiative have been launched.

According to Axios, Google, Meta, and Microsoft are actively implementing policies and devising new strategies to tackle the specific challenges posed by AI-generated content, including AI systems for recognizing and filtering harmful material as well as fact-checking processes.


Byline: Quincy

ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.