ElevenLabs has unveiled a new multilingual voice generation model that can produce emotionally expressive AI audio content in nearly 30 languages, enabling creators to localize content for diverse international markets.  

ElevenLabs Comes Out of Beta and Releases Eleven Multilingual v2 - a Foundational AI Speech Model for Nearly 30 Languages
(Photo: ElevenLabs)

ElevenLabs' Eleven Multilingual v2

The model, called Eleven Multilingual v2, is the outcome of 18 months of in-house research. It incorporates novel mechanisms to comprehend context, convey emotion, and synthesize distinct voices, enhancing the authenticity of the generated speech.

When given text input, the new model can identify nearly 30 written languages and generate speech that preserves the speaker's unique characteristics, including their original accent, allowing for a consistent voice experience across 28 different languages.
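For developers, the model is typically reached through the ElevenLabs text-to-speech API by selecting it via its model identifier. The snippet below is a minimal sketch, assuming the v1 REST endpoint, the "eleven_multilingual_v2" model ID, and placeholder API key and voice ID values; exact field names and defaults should be confirmed against the official ElevenLabs documentation.

```python
import requests

API_KEY = "YOUR_ELEVENLABS_API_KEY"   # placeholder -- replace with your own key
VOICE_ID = "YOUR_VOICE_ID"            # placeholder -- e.g. a premade or cloned voice

# Assumed v1 text-to-speech endpoint; verify against the ElevenLabs docs.
url = f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}"

payload = {
    # The text can be in any supported language; the model detects it automatically.
    "text": "Hola, este es un ejemplo de voz multilingüe.",
    "model_id": "eleven_multilingual_v2",  # selects the multilingual model
    "voice_settings": {"stability": 0.5, "similarity_boost": 0.75},
}
headers = {"xi-api-key": API_KEY, "Content-Type": "application/json"}

response = requests.post(url, json=payload, headers=headers)
response.raise_for_status()

# The response body is audio (MP3 by default) spoken in the chosen voice.
with open("output.mp3", "wb") as f:
    f.write(response.content)
```

The same request shape would apply to a cloned voice: only the voice ID changes, while the multilingual model handles whichever language the input text is written in.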

The launch of Eleven Multilingual v2 follows the public release of Professional Voice Cloning on the platform, a feature that lets users create an accurate digital replica of their own voice. That cloned voice can now be used across the full range of languages supported by the multilingual model.

The supported languages include Chinese, Korean, Dutch, Turkish, Swedish, Indonesian, Filipino, Japanese, Ukrainian, Greek, Czech, Finnish, Romanian, Danish, Bulgarian, Malay, Slovak, Croatian, Classical Arabic, and Tamil. These languages join the previously available ones like English, Polish, German, Spanish, French, Italian, Hindi, and Portuguese.

Alongside these advancements, ElevenLabs has officially transitioned its platform out of beta, a step prompted by the company's commitment to delivering reliable, cutting-edge tools to its global base of more than 1 million users.

In the future, ElevenLabs plans to facilitate voice sharing on the platform, fostering opportunities for collaboration between humans and AI in audio content development.

CEO and co-founder of ElevenLabs, Mati Staniszewski, expressed the company's mission of universal accessibility and the elimination of linguistic barriers through AI voices. He noted the potential for greater creativity, innovation, and diversity resulting from these advances in accessibility.

"Our text-to-speech generation tools help level the playing field and bring top quality spoken audio capabilities to all the creators out there. Those benefits now extend to multilingual applications across almost 30 languages. Eventually we hope to cover even more languages and voices with help of AI, and eliminate the linguistic barriers to content," Staniszewski said in a statement. 


Eleven Multilingual v2's Applications

The release of Eleven Multilingual v2 is set to have far-reaching implications. Independent game developers can now localize and voice gaming experiences and audio content for international players, enhancing engagement.

Educational institutions can give learners instant access to accurate audio content in various languages, supporting language comprehension and pronunciation skills.

Moreover, content creators can utilize ElevenLabs' tools to enhance accessibility for people with visual impairments or distinct learning needs, enriching content by incorporating speech in multiple languages.

ElevenLabs initially introduced AI voice tools, such as synthetic voices and voice cloning, earlier this year. The new multilingual speech synthesis tool aligns with ElevenLabs' vision of making content universally accessible in any language and voice. 

