Computer scientists from the University of Waterloo have uncovered a serious weakness in voice authentication security systems.

They have identified a method of attack that can successfully bypass these systems with an alarming success rate of up to 99% after only six attempts.


Deepfake Voiceprints

Voice authentication has become increasingly popular in various security-critical scenarios, such as remote banking and call centers, where it allows companies to verify the identity of their clients based on their unique "voiceprint."

During enrollment, individuals are asked to repeat a designated phrase, from which a distinct vocal signature, or voiceprint, is extracted and stored on a server.

In subsequent authentication attempts, a different phrase is used, and the features extracted from it are compared against the stored voiceprint to determine whether access should be granted.
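The comparison step described above can be illustrated with a toy sketch. Real systems use learned speaker embeddings, but the core idea of scoring a new utterance's features against a stored voiceprint can be shown with a simple cosine-similarity check (the feature vectors, function names, and threshold here are illustrative assumptions, not the systems' actual internals):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def verify(stored_voiceprint, candidate_features, threshold=0.8):
    """Accept the speaker if the new utterance's features are
    sufficiently similar to the enrolled voiceprint."""
    return cosine_similarity(stored_voiceprint, candidate_features) >= threshold

# Hypothetical feature vectors for illustration only.
enrolled = [0.9, 0.1, 0.4]
print(verify(enrolled, [0.88, 0.12, 0.41]))  # similar voice -> True
print(verify(enrolled, [0.1, 0.9, 0.2]))     # different voice -> False
```

A deepfake attack succeeds when the synthesized audio's extracted features land above this acceptance threshold, which is why the countermeasures described next look for other telltale signs of machine generation.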

However, the researchers at the University of Waterloo have found that voiceprints can be manipulated using machine learning-enabled "deepfake" software, which can generate highly convincing copies of someone's voice using just a few minutes of recorded audio. 

In response, developers introduced "spoofing countermeasures" designed to differentiate human speech from machine-generated speech.

The research team has developed a method that bypasses these spoofing countermeasures, enabling them to deceive most voice authentication systems within just six attempts.

They identified the markers in deepfake audio that expose its computer-generated nature and wrote a program to remove those markers, rendering the fake audio indistinguishable from real recordings.

In an evaluation against Amazon Connect's voice authentication system, the researchers achieved a 10% success rate with a single four-second attack, rising to over 40% in under thirty seconds.

Against less sophisticated voice authentication systems, however, they achieved a success rate of 99% after a mere six attempts.
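The relationship between per-attempt and cumulative success rates is worth making explicit. If each attempt succeeds independently with probability p (an illustrative simplification the study does not necessarily assume), the chance of at least one success in n attempts is 1 - (1 - p)^n:

```python
def cumulative_success(p_single, attempts):
    """Probability of at least one success in `attempts` independent tries."""
    return 1 - (1 - p_single) ** attempts

# Per-attempt rate implied by 99% success within six attempts,
# assuming independent, identically likely attempts:
p = 1 - (1 - 0.99) ** (1 / 6)
print(round(p, 3))                            # ≈ 0.536
print(round(cumulative_success(p, 6), 2))     # ≈ 0.99
```

Under this simplification, a roughly 54% per-attempt rate is enough to reach the 99% figure within six tries, which shows how quickly even an imperfect spoof compounds into near-certain access.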


Thinking Like an Attacker

Andre Kassis, the lead author of the research study and a PhD candidate in Computer Security and Privacy, emphasizes that while voice authentication offers some additional security, the current spoofing countermeasures are fundamentally flawed. 

Kassis suggests that a secure system must be designed by thinking like an attacker, as failure to do so leaves it vulnerable to exploitation.

Urs Hengartner, a computer science professor and Kassis' supervisor, echoes this sentiment, highlighting the importance of deploying additional or stronger authentication measures at companies that rely solely on voice authentication.

By revealing the vulnerabilities of voice authentication, the researchers aim to encourage organizations to enhance their security protocols to better protect against these types of attacks. 

The study was published in the proceedings of the 44th IEEE Symposium on Security and Privacy. 



ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.