In response to the rising threat of deepfake technology, a novel tool called AntiFake has been developed by computer scientists from the McKelvey School of Engineering at Washington University in St. Louis.

The tool, developed by a team led by Ning Zhang, an assistant professor of computer science and engineering, aims to protect voice recordings from unauthorized speech synthesis powered by generative artificial intelligence (AI).

(Photo: OLIVIER DOULIERY/AFP via Getty Images) This illustration photo, taken on January 30, 2023, shows a phone screen displaying a statement from the head of security policy at META, with a fake video of Ukrainian President Volodymyr Zelensky calling on his soldiers to lay down their weapons shown in the background, in Washington, DC.

About the AntiFake Tool

In contrast to traditional methods that detect synthetic audio only after an attack, AntiFake takes a preventive approach: it uses adversarial techniques to make it difficult for AI tools to extract the vocal characteristics they need from a recording, blocking the synthesis of deceptive speech before it happens.

Zhang noted that AntiFake makes voice data difficult for criminals to exploit to synthesize deceptive voices or impersonate speakers.

By repurposing adversarial AI techniques originally associated with cybercriminal activity, AntiFake intentionally distorts recorded audio signals just enough that they still sound right to human listeners but appear completely different to AI models.

"The tool uses a technique of adversarial AI that was originally part of the cybercriminals' toolbox, but now we're using it to defend against them. We mess up the recorded audio signal just a little bit, distort or perturb it just enough that it still sounds right to human listeners, but it's completely different to AI," Zhang said in a statement.

To evaluate the effectiveness of AntiFake, Zhang and Zhiyuan Yu, the study's first author and a graduate student, engineered the tool for broad applicability. 

They tested AntiFake against five advanced speech synthesizers, finding a robust protection rate of over 95%, even when confronted with unfamiliar commercial synthesizers. 

The tool's accessibility to a diverse range of users was further confirmed through usability tests involving 24 human participants, according to the research team.

AntiFake is designed to secure brief snippets of speech, addressing the prevalent issue of voice impersonation. Zhang, however, envisioned the potential expansion of the tool's capabilities to safeguard extended recordings or even musical content, underscoring its role in the fight against disinformation. 


Protecting Voice Recordings

Zhang said they eventually want "to fully protect voice recordings." While acknowledging the continuous development of new tools and features in AI voice technology, Zhang believes that using adversaries' techniques against them will remain effective, emphasizing the vulnerability of AI to adversarial attacks in the form of subtle perturbations. 

The study's abstract outlines AntiFake as a defense mechanism employing adversarial examples to prevent unauthorized speech synthesis. An ensemble learning approach enhances the generalizability of the optimization process, ensuring transferability to attackers' unknown synthesis models. 
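To illustrate that ensemble idea, the loss from the earlier sketch could be averaged over several surrogate speaker encoders. This is again a hypothetical sketch rather than the paper's implementation, and the list of encoders is an assumed input; the intuition is that a perturbation which fools many models at once is less likely to overfit to any single one and more likely to transfer to an attacker's unknown synthesizer.

```python
# Hypothetical ensemble loss: average the similarity to the original
# speaker embedding across several surrogate encoders so the perturbation
# transfers better to synthesis models the defender has never seen.
import torch
import torch.nn.functional as F

def ensemble_loss(perturbed, waveform, encoders):
    losses = []
    for encoder in encoders:
        original = encoder(waveform).detach()
        current = encoder(perturbed)
        losses.append(F.cosine_similarity(current, original, dim=-1).mean())
    return torch.stack(losses).mean()
```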

Evaluation against five state-of-the-art synthesizers demonstrated AntiFake's efficacy, achieving over a 95% protection rate, even against unknown black-box models. Usability tests involving 24 human participants affirmed the tool's accessibility to diverse populations. 

The research team's findings were presented in the Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security.

