Artificial Intelligence: Rushed?

At the 2021 International Conference on Learning Representations (ICLR), a team of researchers from the University of Maryland showcased an attack technique that can slow down deep learning models optimized for fast, time-sensitive operation.

The attack, dubbed "DeepSloth," targets adaptive deep neural networks: a family of deep learning architectures designed to cut down computation so that inputs can be processed faster.
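To make that concrete, here is a minimal sketch of the kind of early-exit ("shallow-deep") classifier such architectures describe, written in PyTorch. The two-exit structure, layer sizes, and 0.9 confidence threshold are illustrative assumptions, not details from the paper: the network answers at its first internal classifier whenever that classifier is confident, and only runs the remaining layers otherwise.

```python
# A minimal sketch of an early-exit ("shallow-deep") classifier.
# Layer sizes and the confidence threshold are illustrative assumptions.
import torch
import torch.nn as nn

class EarlyExitNet(nn.Module):
    def __init__(self, num_classes: int = 10, threshold: float = 0.9):
        super().__init__()
        self.threshold = threshold  # confidence required to stop early
        self.block1 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        self.exit1 = nn.Sequential(  # internal classifier (early exit)
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, num_classes))
        self.block2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.exit2 = nn.Sequential(  # final classifier
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes))

    def forward(self, x: torch.Tensor):
        h = self.block1(x)
        logits1 = self.exit1(h)
        conf = torch.softmax(logits1, dim=1).max(dim=1).values
        if bool((conf >= self.threshold).all()):  # confident on the whole batch
            return logits1, 1   # early exit: skip the remaining layers
        h = self.block2(h)
        return self.exit2(h), 2  # full computation

model = EarlyExitNet().eval()
with torch.no_grad():
    logits, exit_taken = model(torch.randn(1, 3, 32, 32))
print(f"answered at exit {exit_taken}")
```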

The Risk of Rushing Artificial Intelligence Systems

The world has seen growing interest in deep learning and machine learning security over the last few years. With it have come numerous papers, techniques, and tools for both attacking and defending a wide range of neural networks.

But what makes DeepSloth all the more interesting?

According to The Next Web, the researchers behind DeepSloth were, in effect, presenting a flaw in a technique that members of the same team had introduced two years earlier, in 2019.

In that sense, DeepSloth reflects a broader pattern: the machine learning community regularly uncovers unintended flaws in its own previous and ongoing work.

However, researchers, developers, and companies keep racing against one another to bring deep learning to numerous PC and mobile applications.


The rush to ship these Artificial Intelligence systems can create critical risks down the road.

For example, the flaw DeepSloth exploits was only discovered two years after the underlying technique was created.

This shows that rushed Artificial Intelligence systems can harbor issues that manufacturers and researchers fail to notice at the time of release.

If faulty AI systems make their way into numerous applications, more and more companies and users will be left grappling with the problems they cause. Worse, irreparable damage could follow, such as data leaks and identity exposure.

Harmful Effects of Artificial Intelligence

Tudor Dumitras, a cybersecurity expert and member of the Maryland Cybersecurity Center, told The Next Web that he had been thinking about the harmful ways shallow-deep networks might be exploited.

Along with Yigitcan Kaya, a Ph.D. student at the University of Maryland, and other colleagues, he started to wonder whether an adversary could force such a system into a state of "overthinking."

To sum it up, they wanted to know whether the energy and latency savings promised by early-exit models such as shallow-deep networks (SDNs) hold up under attack. They named the resulting attack "DeepSloth."

Like classic adversarial attacks, DeepSloth relies on carefully crafted input that manipulates a machine learning system's behavior. But unlike attacks that force a target model to make incorrect predictions, DeepSloth targets the model's computation itself.

A DeepSloth attack can slow down shallow-deep networks by preventing them from taking their early exits, forcing them to run the full computation of every layer.
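The following sketch shows how such a slowdown perturbation might be crafted against the hypothetical EarlyExitNet above. It is not the authors' exact attack; the loss, step size, iteration count, and L-infinity budget are all assumptions. Rather than flipping the predicted label, it nudges the input so the first exit's output approaches a uniform distribution, the confidence check never passes, and every layer has to run.

```python
# A sketch of a DeepSloth-style slowdown perturbation against the
# hypothetical EarlyExitNet above. Hyperparameters are illustrative.
import torch
import torch.nn.functional as F

def slowdown_attack(model, x, steps=20, eps=8/255, alpha=1/255):
    x_adv = x.clone().detach()
    num_classes = 10  # assumed to match the sketch model above
    uniform = torch.full((x.size(0), num_classes), 1.0 / num_classes)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        logits1 = model.exit1(model.block1(x_adv))  # only the early exit matters
        # Pull the early exit's output toward uniform -> low confidence,
        # so the exit never fires and the full network must run.
        loss = F.kl_div(F.log_softmax(logits1, dim=1), uniform,
                        reduction="batchmean")
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv - alpha * grad.sign()       # descend on the loss
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # stay within the budget
    return x_adv.detach()

x = torch.randn(1, 3, 32, 32)
_, exit_taken = model(slowdown_attack(model, x))  # typically forced to exit 2
```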

However, The Next Web reported that the attack could pose even more serious threats.

Dumitras explained that a DeepSloth attack can force a deep learning model running at the edge to send all of its inference work to the cloud. That could waste energy and server resources, among other damaging effects.
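That edge-cloud scenario can be sketched as follows, again using the hypothetical EarlyExitNet, with a placeholder send_to_cloud() stub standing in for a real network call. The edge device answers locally only when the early exit is confident, so a slowdown input pushes every query to the cloud.

```python
# A sketch of split edge-cloud inference with the hypothetical EarlyExitNet.
# send_to_cloud() is a placeholder for a real RPC to a server-side model.
import torch

def send_to_cloud(x: torch.Tensor) -> torch.Tensor:
    return torch.zeros(x.size(0), 10)  # stub: pretend the server answered

def edge_inference(model, x: torch.Tensor, threshold: float = 0.9):
    with torch.no_grad():
        logits1 = model.exit1(model.block1(x))
        conf = torch.softmax(logits1, dim=1).max(dim=1).values
    if bool((conf >= threshold).all()):
        return logits1, "edge"        # cheap, local answer
    return send_to_cloud(x), "cloud"  # costly offload an attacker can force
```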


This article is owned by Tech Times

Written by Fran Sanders

ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.