Banks and telecommunications companies plan to upgrade biometric authentication by adding voiceprints, but many customers and security experts still find the technology alarming.

Known as "voice biometrics," voiceprint authentication confirms your identity by analyzing the sound patterns in your voice. Companies in the financial and telecommunications industries say it makes accessing your account fast and easy while reducing fraud. But how safe is it?
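Under the hood, these systems typically reduce a caller's speech to a numerical "voiceprint" and compare it against one captured at enrollment. The sketch below is a simplified, hypothetical illustration of that matching step; real deployments use far more sophisticated trained speaker-verification models, and the librosa-based feature extraction and the 0.75 similarity threshold here are assumptions for illustration, not any bank's actual system.

```python
# Simplified sketch of voiceprint matching (illustrative only).
# Assumptions: production systems use trained speaker-verification
# models, not raw MFCC averages; `librosa` and the 0.75 threshold
# are illustrative choices, not any vendor's actual stack.
import numpy as np
import librosa

def voiceprint(wav_path: str) -> np.ndarray:
    """Reduce a recording to a fixed-length 'voiceprint' vector."""
    audio, sr = librosa.load(wav_path, sr=16000)             # load at 16 kHz
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)   # spectral features
    return mfcc.mean(axis=1)                                 # average over time

def is_same_speaker(enrolled: np.ndarray, attempt: np.ndarray,
                    threshold: float = 0.75) -> bool:
    """Accept the caller if the two voiceprints are similar enough."""
    cos = np.dot(enrolled, attempt) / (
        np.linalg.norm(enrolled) * np.linalg.norm(attempt))
    return cos >= threshold

# Enrollment happens once; each later call is compared to that record.
# enrolled = voiceprint("enrollment_call.wav")
# print(is_same_speaker(enrolled, voiceprint("new_call.wav")))
```

That fixed similarity threshold is exactly why mimicry and deepfakes worry researchers: any voice whose features land close enough to the enrolled voiceprint, whether it belongs to a twin or a synthesizer, clears the bar.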

Banks were the first to push the idea, with Barclays adopting voice authentication as early as 2013.

In 2016, HSBC (the Hongkong and Shanghai Banking Corporation) introduced voice authentication, and 1.5 million customers have signed up so far.

HSBC also said the new system has significantly reduced telephone fraud, blocking almost 8,000 fraudulent attempts and protecting more than £200m in funds.

Several firms went so far as to claim that their voice recognition systems can "even distinguish between identical twins."

However, the system was fooled in May 2017, when BBC reporter Dan Simmons' non-identical twin, Joe, mimicked his brother's voice and was allowed to access Dan's account over the phone.

"They certainly can be duped - you don't need to be Mike Yarwood or Rory Bremner ... There have been a number of occasions when these things have been found to be not up to scratch," says Graham Cluley, one of the leading experts in computer security.

Even though voice authentication poses an additional threat to user privacy and security, banks and telecom giants alike continue to adopt the technology. Worse, they are increasingly collecting voiceprint data without giving users a way to opt out.

Recall that in 2018, The Guardian reported that HMRC had collected millions of voiceprints without taxpayers' consent. The data, it turned out, had been gathered by a private company, KCOM (formerly Kingston Communications).

Responding to the concerns raised, HMRC said it had been working closely with the Information Commissioner's Office (ICO) to resolve the problems, adding that the system "is extremely popular with customers as it provides a quick and secure way to access their accounts."


The Security Risks of Voice Recognition

With the rapid rise of automated deepfake systems and the growing amount of user voice data online, the rush toward voice authentication is particularly problematic.

Online creators, smart televisions, and other connected devices are collecting and sharing this data, making voices widely accessible while often failing to secure or encrypt them.

Reports have already concluded that deepfakes are targeting the financial industry.

Earlier this year, CyberCube, a cyber-risk analytics firm serving the insurance industry, released a report charting the rapid progress of artificial intelligence in creating realistic biometric copies. As businesses like banks and telecoms move more of their services online, deepfakes are poised to become a major cyber threat to customers.

At the very least, voice data collection should inform users and give them the option to opt out, yet in many cases they are not even permitted to do so. It is yet another example of why the nation needs privacy protections that, at a minimum, put both the collection and the use of personal data under control.


Written by Thea Felicity
