Microsoft announced a set of updates to its Project Oxford machine learning system at its Future Decoded conference in the UK, including a new API that can detect emotions from the expressions it reads in a still image.

Project Oxford is Microsoft's machine learning service, part of its Azure portfolio, which lets developers create smarter apps using ready-made APIs that include facial-recognition technology, speech processing, language understanding, and visual tools.

Microsoft has previously unveiled a tool that is able to guess a person's age in a photo, but now the company is taking its intelligence one step further by developing emotion detection for images.

In the latest update to Project Oxford's facial-recognition service, Microsoft announced that the software can now "look" at images, identify the faces in the photo, and determine what the subject is feeling by ranking levels of common emotions that include happiness, anger, surprise, sadness, fear and disgust.

The company launched the public beta of Project Oxford's Face API so that developers can try it out for themselves, in the hope that they will create emotion-aware apps. The technology could also be applied to existing apps, such as social networks that could then choose to display only happy pictures. It could also help marketers gauge reactions to a store display or movie.
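For developers experimenting with the beta, the API returns a per-face score for each emotion, and an app typically just picks the highest-scoring one. The sketch below assumes an illustrative response shape (a list of faces, each with a `scores` dictionary); the actual endpoint, keys, and schema are not specified in this article, so treat the field names as hypothetical.

```python
import json

# Hypothetical response for one detected face: a score per emotion.
# The field names ("faceRectangle", "scores") are assumptions for
# illustration, not the documented Project Oxford schema.
sample_response = json.dumps([
    {"faceRectangle": {"left": 68, "top": 97, "width": 64, "height": 64},
     "scores": {"happiness": 0.92, "anger": 0.01, "surprise": 0.03,
                "sadness": 0.01, "fear": 0.01, "disgust": 0.02}}
])

def dominant_emotions(response_text):
    """Return the highest-scoring emotion label for each detected face."""
    faces = json.loads(response_text)
    return [max(face["scores"], key=face["scores"].get) for face in faces]

print(dominant_emotions(sample_response))  # → ['happiness']
```

Because the scores are probabilities rather than a single verdict, an app can also set a threshold (say, only act when the top score exceeds 0.8) to avoid reacting to ambiguous expressions.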

Microsoft released MyMoustache earlier this week in honor of Movember; the app uses the same technology to identify and rate facial hair.

However, it's important to remember that the software is based on machine learning, so there is a chance it won't guess the emotion correctly every time. When there are multiple people in the photo, it will take its best guess at identifying each individual's feelings.

Along with the emotion-detection update, Microsoft also revealed other updates to help developers create smart apps. An update to its speech recognition API, rolling out in December, will let the software better hear and understand people in loud public spaces. Later this year it will also roll out a new video feature that trims smartphone videos down to the moments when people in the shot are moving, as well as a spell-checking API whose dictionary is kept up to date with new slang words and brand names.

Source: Microsoft
Via: Business Insider

ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.