Facial recognition technology is getting a significant improvement in identifying people with darker skin tones. Microsoft claims that advances in its AI made the improvement possible.
The Importance Of Accuracy
Research shows that identification systems still carry a technological bias across the range of skin colors. However, an announcement from the Redmond-based tech company suggests that adjustments are on the way to reduce errors. According to the developers, error rates can be cut by as much as nine times for women, and by up to 20 times for people of both sexes with darker skin tones.
Nowadays, the importance of facial recognition tech is highlighted by the variety of applications that rely on its accuracy. Apple's Face ID system helped the iPhone X stand out from competitors. Law enforcement and security experts, meanwhile, want assurance that the technology is dependable enough for tasks such as identifying criminals. The improvement detailed by Microsoft might provide that assurance.
Racial Bias Gets The Blame
Sources point out that facial recognition software continues to make these mistakes because of the mostly uniform datasets it is trained on. In other words, developers introduce bias through the selection of images fed into the system. The company acknowledges this possibility and says that its AI-powered platform now relies on larger and more diverse datasets.
For example, in 2015, Google Photos' machine learning technology incorrectly labeled two black people as gorillas. The user, a software engineer, posted the result on Twitter and noted that the platform's algorithm itself seemed fine, but suggested that the data used to train the artificial intelligence might be to blame.
Google Photos, y'all fucked up. My friend's not a gorilla. pic.twitter.com/SMkMCsNVX4 — jackyalciné PDX @ IndieWebSummit 2018! (@jackyalcine) June 29, 2015
"If we are training machine learning systems to mimic decisions made in a biased society, using data generated by that society, then those systems will necessarily reproduce its biases," explained Hanna Wallach, a senior researcher from Microsoft.
A Big Step Forward
Microsoft currently has an ongoing contract with the government to use its facial recognition tool to help identify potential threats. The improved accuracy with darker skin tones should mark a great step forward for authorities. However, some worry that the technology might be used to identify and separate even more families at the border.
The company's ties to U.S. Immigration and Customs Enforcement (ICE) explain the speculation. However, Microsoft states that it is not involved in any projects related to the ongoing separation of immigrant families.