A group of Chinese researchers discovered a vulnerability affecting each of the world's most popular voice assistants. No one was spared: Microsoft Cortana, Apple Siri, Amazon Alexa, Google Assistant, and even Samsung's now-defunct S Voice have all been found susceptible to the hack.

The World's Top Voice Assistants Were Just Hacked

A team from Zhejiang University used a technique it calls DolphinAttack. The researchers took normal voice commands and shifted them onto ultrasonic frequencies too high for humans to hear but still perfectly clear to the microphones in voice assistants, allowing them to feed the assistants a series of inaudible commands.
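Under the hood, the trick is classic amplitude modulation: the audible command becomes the envelope of an ultrasonic carrier, and slight nonlinearity in a microphone's amplifier and analog-to-digital converter demodulates that envelope back into the audible band. Below is a minimal sketch of the modulation step in Python with numpy and scipy; the input file name, the 25 kHz carrier, and the 192 kHz output rate are illustrative assumptions, not values from the paper.

    # Sketch of the amplitude-modulation idea behind DolphinAttack.
    # File name, carrier frequency, and sample rate are illustrative.
    import numpy as np
    from scipy.io import wavfile

    CARRIER_HZ = 25_000   # ultrasonic carrier, above human hearing (~20 kHz)
    OUT_RATE = 192_000    # sample rate high enough to represent the carrier

    rate, voice = wavfile.read("command.wav")   # baseband voice command
    voice = voice.astype(np.float64)
    if voice.ndim > 1:
        voice = voice.mean(axis=1)              # mix stereo down to mono
    voice /= np.max(np.abs(voice))              # normalize to [-1, 1]

    # Upsample the baseband signal to the output rate (simple linear interp).
    t_old = np.arange(len(voice)) / rate
    t_new = np.arange(int(len(voice) / rate * OUT_RATE)) / OUT_RATE
    voice_up = np.interp(t_new, t_old, voice)

    # Standard AM: the voice rides on the carrier as its envelope. A
    # nonlinear microphone front end recovers the baseband command.
    carrier = np.cos(2 * np.pi * CARRIER_HZ * t_new)
    am = (1.0 + voice_up) * carrier
    am /= np.max(np.abs(am))

    wavfile.write("ultrasonic_command.wav", OUT_RATE, (am * 32767).astype(np.int16))

Played through a speaker or transducer capable of reproducing those frequencies, the resulting file is silent to a human listener but intelligible to the target microphone.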

That kind of vulnerability presents a troubling scenario. Hackers could embed inaudible commands in ordinary audio files to control the voice assistants of unsuspecting victims, which is especially plausible since most assistants are always listening.

The commands the researchers were able to issue were quite complex, too. Simple ones, such as "OK Google" or "Hey Siri," worked. More specific ones, such as telling an iPhone to "call 1234567890" or telling an iPad to FaceTime a certain number, also worked.

Does DolphinAttack Work On All Devices?

The hack works regardless of the device the assistant runs on. It worked on a MacBook and a Nexus 7, opening a malicious website on each. Telling an Amazon Echo to "open the back door" worked, too. It even worked on an Audi Q3, commanding the car's navigation system to change its destination.

All of these possibilities sound scary. Not only could that kind of vulnerability lead to data breaches; it could also pose life-threatening dangers once unauthorized control of internet-of-things and smart home devices is factored in.

The hack has a few shortcomings, however. Right now, the ultrasonic commands only work within about five or six feet of the target device. The assistant would also need to be activated first. What's more, Siri, Google Assistant, and the others almost always notify the user when they are responding to a command, so a person can simply dismiss the assistant if they had not intended to use it in the first place.

For now, the researchers recommend that device makers modify microphones so they do not pick up signals above 20 kHz, or simply ignore voice commands delivered at inaudible frequencies.
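The second suggestion, ignoring inaudible frequencies in software, could look something like the sketch below, which low-passes the captured audio before it reaches the speech recognizer (Python with scipy; the function name, filter order, and 20 kHz cutoff are illustrative choices, not from the paper). One caveat: because the demodulation happens in the microphone's analog front end, a filter applied after capture only strips residual ultrasonic energy, which is why the researchers also point at the microphone hardware itself.

    # Sketch of the software-side mitigation: discard anything above the
    # audible band before the audio reaches the speech recognizer.
    import numpy as np
    from scipy.signal import butter, sosfilt

    def suppress_ultrasound(samples: np.ndarray, sample_rate: int,
                            cutoff_hz: float = 20_000.0) -> np.ndarray:
        """Remove content above cutoff_hz. If the sample rate is too low
        to contain ultrasonic energy anyway, pass the audio through."""
        nyquist = sample_rate / 2
        if cutoff_hz >= nyquist:
            return samples
        sos = butter(8, cutoff_hz / nyquist, btype="low", output="sos")
        return sosfilt(sos, samples)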

The makers of these digital assistants have yet to offer official word. It seems like a gaping security hole to miss, and yet every single one of them missed it. In a world where personal voice-enabled assistants are slowly becoming the norm, this is a massive security concern. For now, users can turn off their assistant's always-on feature to avoid the vulnerability, but there must be a more permanent solution.
