Google Allo Can Now Transform Your Selfies Into Stickers, And You Have Neural Networks To Thank For It
With the upswing of emoji usage, online communication has become increasingly peppered with stickers, cartoons, GIFs, and all sorts of animations that are often better equipped to express what we want to get across.
Sometimes they can even replace words and phrases altogether and still deliver the same effect, if not a greater one. Google knows how often people use these forms of communication, so it's upping its sticker game.
New Allo Feature Makes Stickers Based On Your Face
Google is introducing a new feature on its Allo messaging app that can generate cartoon stickers from one's selfies. Cartoon generation based on real photos isn't new: Nintendo has done it before on the 3DS, where players can generate Miis from selfies. But Google is, of course, a notch ahead of the pack because it employs a more complex technique.
Its new feature generates stickers with the use of machine learning and neural networks, mapping out one's facial features to convert them into illustrated versions.
Lamar Abrams, whose credits include storyboarding for Steven Universe, is responsible for the animated stickers, which users can customize even further. The result is an array of different stickers featuring your own animated self that can be used in Allo conversations.
How Google Uses Neural Networks To Turn Selfies Into Cartoons
If you want to know how Google manages to convert selfies into illustrations by virtue of neural networks, the company published a blog post detailing the whole process, which is definitely worth a read. But basically, the engineers at Google asked themselves how they could enable an algorithm to pick out qualitative features of faces the same way we humans do: in addition to analyzing images pixel by pixel, computers must also take note of the surrounding visual context.
This search led to experiments on some of Google's general-purpose computer vision neural networks. The team found that a few neurons among the millions in its networks were adept at focusing on features they weren't explicitly trained to look at. This concept seemed useful for building stickers from selfies.
It was then a simple process of elimination. The neurons already knew how to abstract away the things they didn't need, so all Google had to do was provide human-labeled examples "to teach the classifiers to isolate out the qualities that the neural network already knew about the image."
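To make that idea concrete, here is a minimal sketch of the general approach the article describes: features from a pretrained network already encode face qualities, so a small classifier trained on a handful of human-labeled examples can isolate one quality. Everything here is hypothetical (the embeddings are simulated, and the "wears glasses" attribute is an invented example), not Google's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def fake_pretrained_embedding(wears_glasses: bool) -> np.ndarray:
    """Stand-in for activations from a pretrained vision network.
    Dimension 3 happens to correlate with the 'glasses' quality,
    mimicking a neuron that learned a feature it wasn't trained for."""
    vec = rng.normal(0.0, 1.0, size=16)
    vec[3] = (2.0 if wears_glasses else -2.0) + rng.normal(0.0, 0.3)
    return vec

# A small human-labeled training set.
labels = np.array([True, False] * 50)
X = np.stack([fake_pretrained_embedding(l) for l in labels])
y = labels.astype(float)

# Tiny logistic-regression classifier trained with gradient descent.
w = np.zeros(16)
b = 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probabilities
    w -= 0.5 * (X.T @ (p - y) / len(y))
    b -= 0.5 * float(np.mean(p - y))

# The classifier keys almost entirely on the correlated dimension:
# it has "isolated" the quality the pretrained features already knew about.
strongest = int(np.argmax(np.abs(w)))
print("strongest feature dimension:", strongest)
```

The point of the sketch: no retraining of the big network is needed; a lightweight classifier on top of frozen features is enough to pick out one attribute.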
Google worked with artists to design the illustrations, and it trained the network to find a certain animated counterpart that matches a given selfie the closest. In some instances, the network got it wrong. So artists made more illustrations and added them to the pile.
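The matching step above can be sketched as a nearest-neighbor lookup: embed the selfie and each artist-drawn illustration in the same feature space, then pick the illustration whose vector is closest. The vectors and hairstyle names below are made up for illustration; this is an assumption about how such matching could work, not Google's actual implementation.

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two feature vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Pretend feature vectors for three artist-drawn hairstyle illustrations.
illustrations = {
    "short_hair": np.array([0.9, 0.1, 0.0]),
    "long_hair":  np.array([0.1, 0.9, 0.0]),
    "curly_hair": np.array([0.1, 0.1, 0.9]),
}

# A selfie whose embedding leans toward the "long_hair" illustration.
selfie_embedding = np.array([0.2, 0.85, 0.1])

# Pick the illustration with the highest similarity to the selfie.
best = max(illustrations, key=lambda name: cosine(selfie_embedding, illustrations[name]))
print(best)  # long_hair
```

When the closest match is still wrong, the fix described in the article is simple: artists draw more illustrations, which adds more candidate vectors to the pool.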
Google says the illustrations are not meant to replicate a person's facial features; rather, they're about breaking the rules of representation. To that end, the illustrations look wacky and cartoonish, as opposed to full-on digital renders of selfies.
Noting how popular emoji have become over the years, Google is excited about the reception to its neural network-powered stickers.
"[I]t's not hard to imagine how this technology and language evolves. What will be most exciting is listening to what people say with it."