Artificial intelligence has carved out a big space for itself in the pantheon of revered sci-fi works, from 2001: A Space Odyssey to the more recent Her and Ex Machina, all of which paint a vivid picture of what AI is and what could happen if it becomes too smart for its own good.

SpaceX CEO Elon Musk thinks AI poses a grave danger. In fact, he thinks it could start a third world war and that it is more dangerous than North Korea. Musk is both right and wrong, though. Right, because on paper, AI really is a dangerous sandbox to experiment in. Wrong, because, well, AI is still kind of dumb.

Image Recognition Mistakes Turtle For A Gun

AI is increasingly being relied on for lots of things, including detecting objects, recognizing voices, and telling different faces apart. But as new research shows, it's quite easy to fool these algorithms.

A team of researchers from the Massachusetts Institute of Technology wanted to determine how easy it is to trick a neural network into consistently misidentifying an object. They used "adversarial images": photos laced with carefully crafted patterns designed specifically to fool object recognition programs.
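To get a sense of what such a pattern looks like in practice, here is a minimal sketch of the fast gradient sign method, one well-known way to generate adversarial images. It is not the MIT team's algorithm; the ResNet-50 model, the "turtle.jpg" filename, and the epsilon value are assumptions made purely for illustration.

```python
# A minimal sketch of crafting an adversarial image with the fast gradient
# sign method (FGSM). This is NOT the MIT team's algorithm; the model, the
# "turtle.jpg" filename, and epsilon are assumptions for illustration.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1).eval()

# ImageNet normalization is skipped so pixel values stay in [0, 1].
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])

image = preprocess(Image.open("turtle.jpg")).unsqueeze(0)  # hypothetical photo
image.requires_grad_(True)

# Use the model's current top prediction as the label to push away from.
logits = model(image)
label = logits.argmax(dim=1)

# Take one gradient step in the direction that increases the loss for that label.
loss = F.cross_entropy(logits, label)
loss.backward()
epsilon = 0.02  # perturbation budget (assumed)
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

print("original prediction:  ", label.item())
print("perturbed prediction: ", model(adversarial).argmax(dim=1).item())
```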

Adversarial Images

Adversarial images don't always work. Cropping, zooming, and other sorts of image manipulations can often result in the system correctly identifying the image. What the team wanted was to create an adversarial image that worked every single time.
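The sketch below illustrates the kind of check that goal implies: run an adversarial image through a classifier after everyday manipulations such as cropping, zooming, and rotating, and see whether the fooled prediction survives. The ResNet-50 model, the specific transforms, and the "adversarial.png" file are all assumptions for illustration.

```python
# A minimal sketch of testing whether an adversarial image keeps fooling a
# classifier after ordinary manipulations. Model, transforms, and the
# "adversarial.png" file are assumptions for illustration.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1).eval()
to_tensor = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])

manipulations = {
    "as-is":  transforms.Compose([]),                               # no change
    "crop":   transforms.RandomResizedCrop(224, scale=(0.6, 1.0)),  # random crop
    "zoom":   transforms.Compose([transforms.Resize(320),
                                  transforms.CenterCrop(224)]),     # zoom in
    "rotate": transforms.RandomRotation(20),                        # small rotation
}

adversarial = Image.open("adversarial.png")  # hypothetical adversarial image
with torch.no_grad():
    for name, manipulate in manipulations.items():
        batch = to_tensor(manipulate(adversarial)).unsqueeze(0)
        print(f"{name:>6}: predicted class {model(batch).argmax(dim=1).item()}")
```

If the prediction snaps back to the true object after a crop or a rotation, the adversarial image has failed; the team's goal was one whose fooled prediction survives all of those manipulations.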

They built an algorithm that generates these adversarial patterns for both 2D images and 3D-printed objects, and the results fooled the AI regardless of the angle from which the object was viewed. The team was even able to fool Google's Inception v3 model: it identified a turtle as a rifle.

So how did they do it?

One Pixel Attack

Roughly speaking, image recognition works like this: the software analyzes an image's pixels and compares the patterns it finds against what its neural network has learned a given object looks like. If the patterns match closely enough, it recognizes the object.
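As a rough illustration of that matching step, here is a minimal sketch that runs a photo through the pretrained Inception v3 model mentioned in this article and prints the category it matches best. The "photo.jpg" filename is an assumption for illustration.

```python
# A minimal sketch of ordinary image recognition with pretrained Inception v3.
# The "photo.jpg" filename is an assumption for illustration.
import torch
from torchvision.models import inception_v3, Inception_V3_Weights
from PIL import Image

weights = Inception_V3_Weights.IMAGENET1K_V1
model = inception_v3(weights=weights).eval()
preprocess = weights.transforms()  # resizes and normalizes the photo for Inception v3

image = preprocess(Image.open("photo.jpg")).unsqueeze(0)  # hypothetical photo
with torch.no_grad():
    probabilities = torch.softmax(model(image), dim=1)[0]

best = probabilities.argmax().item()
print(weights.meta["categories"][best], f"({probabilities[best]:.1%} confident)")
```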

The team changed just one pixel, a technique the researchers call the "one pixel attack." It finds a weak spot in how the network reads an image and makes that one slight change, causing the AI to see something else entirely. That's how Google's own algorithm came to think the turtle was a rifle.
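Here is a minimal sketch of the idea at its crudest: try random single-pixel changes until one flips the classifier's prediction. A real attack searches for the pixel and its color far more systematically; the ResNet-50 model, the "photo.jpg" file, and the trial count are assumptions for illustration.

```python
# A minimal, brute-force sketch of a single-pixel attack: try random one-pixel
# changes and keep any that flip the prediction. The model, image file, and
# number of trials are assumptions for illustration.
import random
import torch
from torchvision import models, transforms
from PIL import Image

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1).eval()
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])

image = preprocess(Image.open("photo.jpg")).unsqueeze(0)  # hypothetical photo
with torch.no_grad():
    original = model(image).argmax(dim=1).item()

for _ in range(500):  # number of random trials (assumed)
    candidate = image.clone()
    x, y = random.randrange(224), random.randrange(224)
    candidate[0, :, y, x] = torch.rand(3)  # overwrite one pixel with a random color
    with torch.no_grad():
        prediction = model(candidate).argmax(dim=1).item()
    if prediction != original:
        print(f"pixel ({x}, {y}) flipped the prediction: {original} -> {prediction}")
        break
else:
    print("no single-pixel change flipped the prediction in these trials")
```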

It wasn't perfect, though: the one pixel attack fooled the AI only 74 percent of the time.

So what does this mean? For all the gloom-and-doom stories that cast AI as the harbinger of world destruction and total domination, perhaps the technology just isn't there yet. That's a problem in its own right: object recognition, a form of AI, is quickly becoming a common element in smart policing, as Gizmodo notes, and the researchers just proved that these systems, advanced as they may seem, are entirely exploitable.

Think of it the other way around: what if AI recognized guns as turtles? What would that mean for security and public safety, especially as the world increasingly relies on AI for surveillance and recognition systems?
