The Federal Communications Commission could soon introduce a policy requiring disclaimers on political advertisements created with artificial intelligence, a measure advocated by FCC Chair Jessica Rosenworcel.

The proposal, announced Wednesday, would begin the FCC's regulatory process, which is expected to take many months to complete. The action aims to close a significant loophole in the rules governing artificial intelligence in political advertising.

(Photo by ALEXANDRA ROBINSON/AFP via Getty Images) An AFP journalist views a video manipulated with artificial intelligence to potentially deceive viewers, known as a "deepfake," at his news desk in Washington, DC, on January 25, 2019.

The proposed regulations would apply to cable and satellite companies as well as broadcast TV and radio. Political advertisers on those platforms who use AI-generated content in their ads would have to disclose it on air.

The FCC does not regulate social media or other internet-based media, such as streaming video services.

Under the proposed rule, political advertisers would also need to include written disclaimers in the public files that broadcasters are required to maintain.

The proposal arrives as the Department of Homeland Security's most recent federal bulletin warns that concerns about generative artificial intelligence will only intensify in the run-up to the elections.

Read Also: Australia Struggles to Combat Political AI Deepfakes with Limited Legal Authority 

Dangers of AI During the Elections

According to assessments the Department of Homeland Security produced and shared with law enforcement partners across the country, foreign and domestic actors may use the technology to create serious disruptions in the run-up to the 2024 election cycle.

Federal bulletins are periodic notices sent to law enforcement partners to alert them to specific threats and concerns. The alert says AI capabilities may facilitate attempts to interfere with the 2024 US election cycle, and a variety of threat actors are expected to try to influence and disrupt the vote.

Because of generative AI techniques, the 2024 election cycle will likely be more vulnerable to manipulation by foreign and domestic threat actors. These techniques could be used to inflame breaking events, tamper with election procedures, or target election infrastructure.

During a recent Senate Intelligence Committee hearing, Director of National Intelligence Avril Haines cautioned Congress about the dangers of generative AI, pointing out that the technology can produce realistic "deepfakes" whose source can be hidden. 

According to the bulletin, the timeliness of AI-generated media relevant to an election can be just as critical as the content itself because it may take some time to refute or contradict the erroneous material spreading online.

The bulletin also highlighted the ongoing international threat, citing a November 2023 incident in which an AI-generated video persuaded voters in a southern Indian state to support a certain candidate on election day, leaving officials little time to dispute the footage.

Proposed US AI Legislation

The Senate Rules Committee also recently advanced three proposals to safeguard elections from the risks posed by artificial intelligence as US Election Day approaches.

Authorities are concerned that campaigns could use AI technology to deceive voters in the 2024 elections as deepfake audio and video become easier to produce.

Before November's elections, the committee voted on a roadmap of AI regulations proposed by Senate Majority Leader Charles Schumer (D-NY) and three bipartisan colleagues.

The proposals, which must be passed by both the House and the Senate to become law, emphasize the urgency of enacting election-related deepfake rules. 

Related Article: Humans Struggle in Identifying AI-Generated Media: New Research 

Written by Aldohn Domingo


ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.