Cybersecurity experts are expressing growing concerns about the possible misuse of artificial intelligence (AI) in the upcoming 2024 US Presidential Election, despite the technology's potential for creating tailored campaign strategies and formulating policies.

Richard Heaton, director of Cartisian Technical Recruitment, recognizes the dual nature of AI, emphasizing that the same technology enhancing election security is a "double-edged sword" that could be "exploited to manipulate data and interfere with electoral processes," as reported by The US Sun.

The capabilities of AI in data analysis and pattern recognition raise fears of malicious actors using the technology to meddle with voting and spread misinformation. Heaton notes that the unethical use of AI in analyzing voter data could lead to targeted disinformation campaigns, while biased AI algorithms might amplify specific viewpoints or fake news on social media platforms, potentially swaying public opinion.

US Election Integrity Threatened

Cybersecurity experts particularly highlight the danger of deepfakes, AI-generated synthetic media that replicates a person's likeness in audio, photos, or video, being misused in the 2024 US Presidential Election. Ben Michael, vice president of operations at Michael & Associates, cites the example of an AI deepfake that impersonated President Joe Biden in a robocall discouraging voters from participating in New Hampshire's Democratic primary. The incident underscores the growing threat deepfakes pose to election integrity as AI technology advances.

TechTimes recently reported a likely connection between the deepfake Biden robocall and ElevenLabs, Silicon Valley's leading voice-cloning startup. Pindrop, a synthetic audio security firm, identified ElevenLabs as the probable source of the deepfake Biden robocall, with independent analysis from the UC Berkeley School of Information supporting a high level of confidence in the attribution.


Experts point out that the accessibility of advanced AI tools capable of producing convincing deepfakes within seconds introduces a new and concerning threat to the 2024 US Presidential Election. They warn of an elevated risk of deceptive, unlabeled content circulating in the days leading up to the vote, such as media falsely portraying political candidates in a health emergency or making statements they never made.

Is There a Looming Disinformation Crisis in The US?

Responding to these concerns about AI misuse and deepfakes, lawmakers in several US states, including Hawaii, South Dakota, Massachusetts, Oklahoma, Nebraska, Indiana, and Wyoming, are introducing legislation to prohibit AI-generated media without proper disclosure within specific periods before elections.

In Nebraska, Democrats propose extending the ban to cover all deepfakes within 60 days of an election, while Arizona Republicans have put forward a bill allowing individuals depicted in a "digital impersonation" to sue for relief or damages, per NBC News. Similar bills were introduced at the end of 2023 in Idaho, Kentucky, Virginia, Ohio, South Carolina, and New Hampshire but have yet to progress, and filing a bill does not guarantee its eventual enactment into law.

Meanwhile, Free Press, a human rights advocacy NGO, revealed that X, Meta, and YouTube have rolled back 17 policies targeting hate speech and disinformation. According to PBS, YouTube said in June 2023 that it would no longer remove videos falsely claiming that the 2020 or earlier US elections were marred by significant fraud, errors, or glitches, framing the change as a way to allow open discussion of unpopular or unproven political ideas.

The three social media giants have also cut staff, including content moderators, raising worries that 2024 could see a worse disinformation crisis than 2020, according to University of Washington misinformation researcher Kate Starbird.



