The recent news of a family losing thousands of dollars to scammers using A.I. voice cloning technology is alarming. It has become increasingly clear that those with bad intentions can leverage the power of artificial intelligence to exploit unsuspecting victims.

Voice Cloning Fraud: The Growing Risk of AI Impersonation 

According to a report by Fortune, advances in artificial intelligence have made it increasingly possible to create highly convincing synthetic voices. A.I.-generated text-to-speech tools now allow people to clone a voice from only a few seconds of initial audio.

Unfortunately, this technology can also be abused for malicious purposes, such as cloning a person's voice for fraudulent gain. The case of Canadian citizen Benjamin Perkin is an example of the danger posed by voice cloning fraud.

Fraudsters Now Using Artificial Intelligence to Create Realistic Voice Clones 

As technology progresses, artificial intelligence is opening up all sorts of new possibilities, including advances in the field of voice synthesis. This means that, with just a few seconds of audio, a scammer can create a highly convincing clone of someone's voice and use it to deceive others.

A prime example of this was recently featured in the Washington Post, which reported on the story of Benjamin Perkin, a Canadian man whose elderly parents were tricked into sending scammers thousands of dollars.

A "lawyer" had called them, claiming that their son was in jail for killing an American diplomat in a car accident and needed money for legal fees. What the elderly parents didn't know was that the scammers had cloned Perkin's voice and used it to convince them that it was their son asking for the money.

Subscription-based Identity Verification to Combat Malicious A.I. Voice Cloning

The majority of these malicious A.I. voice cloning incidents have been linked to the use of free, anonymous accounts. To stop this, one company, ElevenLabs, has decided to release a new subscription-based plan that requires users to verify their identity to use their voice synthesizer. 

Verifying Senders of Suspicious Voicemails, Texts, and Emails 

This should help deter scammers from creating deceptive messages with the synthesizer, as they will be less willing to risk getting caught if they are required to authenticate themselves.

Voice cloning is a dangerous technology, and it's important to be careful when giving out personal information. It's always best to verify the sender of any suspicious voicemail, text, or email before sending money or other valuable information.


The Consequences of A.I. Misuse: An Examination of the Benjamin Perkin Case 

Awareness of new technology can help protect users and their families from getting scammed. As technology is advancing rapidly, the potential for A.I. misuse needs to be kept in mind. 

The case of Benjamin Perkin is a testament to the kind of losses that can result from voice cloning fraud. It is imperative that companies take steps to protect individuals against the dangers posed by this invasive technology.


Tech Times

ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.