In a revelation that challenges the security of autonomous vehicles, Kevin Fu, a Northeastern University professor who specializes in probing the security of emerging technologies, has discovered how to make self-driving cars hallucinate.

Fu and his team found an entirely new kind of cyberattack: an acoustic form of adversarial machine learning they dubbed "Poltergeist attacks," TechXplore reported. The new technique manipulates the perception of self-driving cars and drones, potentially threatening their safe operation.

(Photo: Alexander Koerner/Getty Images) A driver demonstrates the "Cruising Chauffeur," a hands-free self-driving system designed for motorways, during a Continental media event showcasing new automotive technologies in Hanover, Germany, on June 20, 2017.

Poltergeist Attacks on Self-Driving Cars

Poltergeist attacks diverge from traditional cyber threats such as hacking or jamming. Rather than breaking into a system, they create deceptive visual realities, akin to optical illusions, for machines that rely on machine learning to make decisions.

The attack exploits optical image stabilization, a technology common in contemporary cameras across various devices, from smartphones to autonomous vehicles.

Typically, this technology counters movements or shakiness during image capture, ensuring clear and focused pictures. However, Fu's research uncovers a vulnerability in this system. 

Fu's team successfully manipulated the images by pinpointing the resonant frequencies of the sensors within these cameras, which are generally ultrasonic. This interference leads to misinterpretations by the machine learning algorithms, potentially resulting in significant misjudgments by autonomous systems. 

Fu offers a vivid analogy, comparing the phenomenon to a skilled opera singer shattering a wine glass by hitting its resonant frequency. In this case, by striking exactly the right resonant note, the team induced the sensors to register phantom motion, distorting the images.
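
Continuing the toy simulation above, the sketch below illustrates what "registering misleading information" can look like: a tone at the assumed resonant frequency aliases through the sensor's much slower sampling rate into a smooth phantom rotation, which the stabilizer then "corrects," physically shifting the image. Every rate and gain here is a made-up placeholder, not a measured value.

```python
import numpy as np

# Hypothetical parameters -- illustrative only, not from Fu's research.
F_TONE   = 27_123.0   # acoustic tone at the sensor's assumed resonance (Hz)
F_SAMPLE = 1_000.0    # assumed gyroscope sampling rate (Hz)
DURATION = 0.05       # seconds of simulated exposure

# The tone forces the sensing element to oscillate, so the sensor reports
# angular velocity even though the camera is perfectly still.
t = np.arange(0, DURATION, 1.0 / F_SAMPLE)            # sample instants
phantom_rate = 0.5 * np.sin(2 * np.pi * F_TONE * t)   # rad/s, all spurious

# Because F_TONE >> F_SAMPLE, the tone aliases down to a slow oscillation --
# exactly the kind of "shake" a stabilizer is built to cancel.
alias = abs(F_TONE - round(F_TONE / F_SAMPLE) * F_SAMPLE)
print(f"phantom rotation aliases down to ~{alias:.0f} Hz")

# The stabilizer integrates rate into angle and shifts the lens/sensor to
# cancel it, so the "stabilized" image now moves during the exposure.
phantom_angle = np.cumsum(phantom_rate) / F_SAMPLE    # rad
pixel_shift = np.degrees(phantom_angle) * 50          # assume ~50 px per degree
print(f"peak spurious image shift: {np.abs(pixel_shift).max():.1f} px")
```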

Read Also: US Defense Department Pours $800,000 Into Research Preventing Cyberattacks on Self-Driving Cars, UAVs Networks

Threats to the Safety of Autonomous Systems

The implications of Poltergeist attacks extend far beyond mere inconvenience. According to the team, they pose genuine threats to the safe operation of autonomous systems, particularly those deployed on fast-moving vehicles. 

For example, a manipulated perception could lead a driverless car to recognize a non-existent stop sign, causing an abrupt halt in a potentially hazardous situation. 

Alternatively, it could trick a car into disregarding an actual object, such as a person or another vehicle, resulting in a collision. Fu underscores the urgency for engineers and developers to confront and mitigate these vulnerabilities. 
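
As a toy illustration of that second failure mode (a sketch of the general idea, not a reconstruction of Fu's experiments), the snippet below applies increasing motion blur to a synthetic scene and watches a naive correlation-based detector's confidence in a real object fall; the patterns, blur model, and scoring are all invented for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
template = rng.random((8, 8))   # stand-in for a learned "pedestrian" pattern
scene = template.copy()         # the object really is in front of the car

def motion_blur(img, k):
    # Crude horizontal smear of k pixels, mimicking stabilizer-induced shake.
    return sum(np.roll(img, s, axis=1) for s in range(k)) / k

def confidence(view, pattern):
    # Normalized cross-correlation: 1.0 means a perfect match.
    a = (view - view.mean()).ravel()
    b = (pattern - pattern.mean()).ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

for k in (1, 2, 4):
    score = confidence(motion_blur(scene, k), template)
    print(f"blur of {k} px -> detector confidence {score:.2f}")
```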

"Technologists would like to see consumers embracing new technologies, but if the technologies aren't truly tolerant to these kinds of cybersecurity threats, they're not going to be confident and they're not going to use them," Fu said in a press statement. "Then we're going to see a setback for decades where technologies just don't get used."

The emergence of Poltergeist attacks serves as a stark reminder of the critical need for rigorous cybersecurity measures in the ever-evolving landscape of autonomous systems.

Fu's research sheds light on a hitherto unexplored avenue of cyber threats, prompting the industry to redouble efforts in fortifying these technologies against potential adversities. 

Related Article: 'Futuristic' Self-Driving Cars May Leave People With Disabilities Behind, Researchers Fear
