Google's Pixel 2 lineup reportedly delivers one of the best imaging experiences available on a smartphone. Now, the technology behind its portrait mode is going open-source.

Industry experts largely attribute the smartphone's remarkable imaging capabilities to the company's proprietary artificial intelligence framework. Thanks to its built-in machine-learning technology, the smartphone can identify what the user intends to capture in the image. Now that the underlying model has been made available to the public as DeepLab-V3+, the company hopes developers will test it and improve upon it even further.

Semantic Image Segmentation

The company made the announcement on its research blog, indicating that the source code will be distributed through its TensorFlow machine-learning framework. Unlike most competing flagships, which rely on dual-camera setups, Google's latest model uses semantic image segmentation to categorize each pixel of the photo. It can reportedly assign labels such as sky, dog, person, and road to every pixel, based on what it has learned from its training data, which is continually expanded with new examples.
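To illustrate what per-pixel labeling means in practice, here is a minimal, hypothetical sketch in Python using TensorFlow. It assumes a generic segmentation network (`segmentation_model`) that outputs a map of class scores for each pixel; the label names and the helper function are illustrative stand-ins and are not taken from Google's DeepLab-V3+ release.

```python
# Hypothetical sketch: turning a segmentation model's output into per-pixel labels.
# `segmentation_model` is assumed to be any Keras model that maps an input image
# of shape (H, W, 3) to class scores of shape (H, W, num_classes); it stands in
# for a DeepLab-style network and is not Google's released implementation.
import numpy as np
import tensorflow as tf

# Illustrative label set, similar to the categories mentioned above.
CLASS_NAMES = ["background", "sky", "person", "dog", "road"]

def label_pixels(segmentation_model: tf.keras.Model,
                 image: np.ndarray) -> np.ndarray:
    """Return an (H, W) array of class indices, one label per pixel."""
    # Add a batch dimension and normalize pixel values to [0, 1].
    batch = tf.convert_to_tensor(image[np.newaxis, ...], dtype=tf.float32) / 255.0
    # The model outputs per-pixel scores for every class: (1, H, W, num_classes).
    logits = segmentation_model(batch, training=False)
    # The predicted label for each pixel is the class with the highest score.
    return tf.argmax(logits[0], axis=-1).numpy()

# Example usage with a real photo (commented out, since it needs a trained model):
# image = np.asarray(PIL.Image.open("portrait.jpg"))
# mask = label_pixels(my_model, image)
# print(CLASS_NAMES[mask[100, 100]])  # e.g. "person" for a pixel on the subject
```

A portrait-mode pipeline could then use such a mask to keep pixels labeled "person" sharp while blurring the rest of the scene.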

Outstanding Performance

What makes the Pixel 2's portrait mode unique is its ability to produce stunning images entirely in software. Many smartphones advertised as imaging powerhouses rely on hardware to achieve the desired effect. Now that Google has openly shared its semantic image segmentation code for free, consumers can expect its capabilities to expand in the future.

"We hope that publicly sharing our system with the community will make it easier for other groups in academia and industry to reproduce and further improve upon state-of-the-art systems, train models on new datasets, and envision new applications for this technology," wrote Google on their research blog regarding DeepLab-V3+ going open-source.

Looking back, the company can proudly claim that its imaging framework became what it is today thanks to innovations in processing, hardware, data collection, and sorting.

Focusing On The Right Subject

Images taken by the Pixel 2 tend to stand out because of the results produced by its portrait mode. Google explains that the open-source DeepLab-V3+ release is not the exact technology used in its smartphones, so results are not guaranteed to match the handset's camera, though they should come close to what the semantic image segmentation framework can achieve.

The engineers behind the software describe its results as "accuracy levels that were hard to imagine even five years [ago]."
