ElevenLabs' deepfake voice simulator drew attention this week after the company saw a surge in misuse of the AI tool.

Instead of using it for entertainment, some users have taken it to the extreme: the model is now reportedly being abused to generate homophobic, racist, and transphobic content.

Misuse Cases of the Deepfake Voice Simulator

AI Deepfake Voice Simulator (Photo: Soundtrap from Unsplash) - Increasing number of deepfake voice misuse cases attributed to ElevenLabs' AI beta platform.

According to a report by Gizmodo, ElevenLabs first launched its AI platform in beta on Jan. 23 earlier this year, announcing the tool as Prime Voice AI.

Not long after launch, the company learned that a growing number of users were abusing the deepfake voice simulator.

On Twitter, it saw a surge of misuse cases of the AI tool, which allows users to imitate the voices of their favorite celebrities.

It turns out there has been widespread misuse on 4chan as well. In another report by Motherboard, some users posted clips of AI-generated voices that sound nearly identical to real actors' voices.

Unethical Use of Prime Voice AI

Many of the voice recordings sound exactly like Emma Watson, for instance. While it can be entertaining to hear such a convincing imitation, some people are using the AI platform unethically for their own ends.

According to Engadget, people have taken advantage of the deepfake voice simulator to harass other users on the platform.

In particular, they use Prime Voice AI to voice homophobic or racist comments directed at others.

While ElevenLabs acknowledged that many users were abusing its beta platform, it has not been confirmed whether all the audio clips on 4chan were made with the company's AI voice tool. However, a "wide collection" of these files contained a link directing to ElevenLabs' platform.


ElevenLabs Wants to Address the Issue

To curb widespread misuse of the platform, ElevenLabs decided to gather feedback from users about its deepfake voice software.

The company wants to develop a solution that will stop users from abusing its AI platform. For instance, it could strengthen account verification to limit who can use the voice simulator.

Additionally, it could require users to submit an ID or bank information before they can use the AI tool.

ElevenLabs could also require users to submit a sample for each voice cloning request so it can verify the voices they want to copy.

It's no surprise that deepfake technology can be used or abused. The responsibility still lies with companies to restrict users from using the software maliciously.


Joseph Henry
