Google Brain, Google's in-house deep learning research project, has created new software that could turn an age-old TV trope into reality.

The trope, seen most often on crime shows, has law enforcement or detectives unabashedly bark "Can you enhance that?" at obviously grainy security footage that, in reality, simply can't be enhanced.

Zoom And Enhance

With Google Brain's new project, however, reality is catching up to fiction. The new software can create detailed images from tiny, pixelated source images, potentially making "zoom and enhance" a real capability.

The software starts with a pixelated, almost unrecognizable source image and adds detail to it. Given a source of 8 x 8 pixels, for example, it can patch the image up and enlarge it to 32 x 32. In effect, it conjures a surprising amount of apparent detail from a low-detail source.

As it stands, there's really no way to add more detail to an image than what's already present. Sure, a person can apply clever maneuvers, such as raising brightness or contrast, to dig out detail that's barely visible in the base image. But actually enhancing it, carving out nooks and crannies the source never captured? Impossible.
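To see why, here's a minimal sketch (not Google Brain's code; the 8 x 8 input below is random data standing in for a real photo). Conventional upscaling only copies or interpolates pixels that already exist, so the enlarged image contains nothing the source didn't:

```python
import numpy as np

# A toy 8 x 8 grayscale "source" image; random values stand in for real pixels.
source = np.random.rand(8, 8)

# Conventional nearest-neighbor upscaling: each output pixel is a copy of an
# input pixel, so the 32 x 32 result holds no information the source lacked.
upscaled = source.repeat(4, axis=0).repeat(4, axis=1)

print(source.shape, "->", upscaled.shape)                  # (8, 8) -> (32, 32)
print(np.unique(upscaled).size == np.unique(source).size)  # True: no new values
```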

How It Works

So how does Google Brain do it? With the help of neural networks, of course; two, to be exact.

The first part, called the conditioning network, attempts to map the base 8 x 8 image against other high-resolution images. It shrinks those more detailed images down to 8 x 8 and checks whether they match the source.
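Roughly, that matching step can be sketched as below. This is a simplified illustration with hypothetical helper names; the real conditioning network is a trained convolutional model rather than a brute-force comparison, and average pooling here only stands in for its downsizing:

```python
import numpy as np

def downsize_to_8x8(image_32):
    """Average-pool a 32 x 32 image down to 8 x 8 (a stand-in for real downsizing)."""
    return image_32.reshape(8, 4, 8, 4).mean(axis=(1, 3))

def match_score(source_8, candidate_32):
    """Lower is better: how closely the candidate, shrunk to 8 x 8, matches the source."""
    return np.mean((downsize_to_8x8(candidate_32) - source_8) ** 2)

# Toy data: one 8 x 8 source and a handful of 32 x 32 high-resolution candidates.
source = np.random.rand(8, 8)
candidates = [np.random.rand(32, 32) for _ in range(5)]

# Pick the high-resolution candidate whose shrunken version best matches the source.
best_match = min(candidates, key=lambda c: match_score(source, c))
```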

The second part, called the prior network, uses a PixelCNN implementation to add realistic, high-resolution detail to the base image. The prior network is trained on a sizable collection of high-resolution images, and when the pixelated source is blown up, it tries to add pixels that match what it has learned.
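The idea behind PixelCNN is autoregressive generation: pixels are produced one at a time, each conditioned on the pixels generated so far. The toy sketch below shows only that ordering; predict_pixel is a hypothetical placeholder, whereas the real prior network is a deep model trained on many high-resolution images:

```python
import numpy as np

def predict_pixel(partial_image, row, col):
    """Placeholder for the prior network: guess the next pixel from the pixels
    generated so far (here, a noisy average of the neighbors above and to the left)."""
    above = partial_image[row - 1, col] if row > 0 else 0.5
    left = partial_image[row, col - 1] if col > 0 else 0.5
    return float(np.clip((above + left) / 2 + np.random.normal(0, 0.05), 0.0, 1.0))

# Generate a 32 x 32 image one pixel at a time, in raster order, each pixel
# conditioned on the pixels already produced -- the core idea behind PixelCNN.
output = np.zeros((32, 32))
for row in range(32):
    for col in range(32):
        output[row, col] = predict_pixel(output, row, col)
```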

For example, if the prior network sees a pixelated blob near the top of the base image, it might decide it's looking at an eyebrow and add pixels to make it look like one. In layman's terms, the software fills in the gaps where detail should be.

To arrive at the final product, the outputs of the two networks are blended together, and the rendered image usually contains plausible new detail, as noted by Ars Technica.
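The article doesn't spell out how that blending works. One plausible sketch, assuming both networks emit per-pixel logits over 256 possible intensity values (an assumption about the internals, not something the article states), is to sum the logits and turn them into a per-pixel distribution:

```python
import numpy as np

def softmax(logits, axis=-1):
    """Numerically stable softmax over the chosen axis."""
    shifted = logits - logits.max(axis=axis, keepdims=True)
    exp = np.exp(shifted)
    return exp / exp.sum(axis=axis, keepdims=True)

# Toy per-pixel logits over 256 intensity values for a 32 x 32 image, one set
# from the conditioning network and one from the prior network (random here).
conditioning_logits = np.random.randn(32, 32, 256)
prior_logits = np.random.randn(32, 32, 256)

# Blend by summing the logits, convert to a per-pixel distribution, then pick
# the most likely intensity for each pixel to render the final 32 x 32 image.
blended = softmax(conditioning_logits + prior_logits)
rendered = blended.argmax(axis=-1)    # values 0-255
```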

Real-World Testing

More impressively, Google Brain's technique held up in real-world testing. Human subjects were shown a real, high-resolution photo alongside the "processed" image and were fooled 10 percent of the time. For bedroom images, the rate was higher, at 28 percent.

What's noteworthy is that, in all honesty, the image the software renders isn't even real. The added details are guesses and nothing more. This aspect, while groundbreaking, could cause problems once the software reaches real-world use, raising questions about the accuracy of surveillance footage, forensics and more. Of course, it could help sharpen a photo of, say, a person law enforcement is trying to find, but the result wouldn't be a faithful rendering of the real person.

There's obviously a ways to go before this technique sees mainstream use, but now that we know image enhancement is indeed possible, crime shows at least have something to fall back on when accused of leaning on the trope.
