Amazon should stop offering its Rekognition facial recognition software to law enforcement agencies due to a major glitch, according to artificial intelligence (AI) experts.

Several AI scholars have called on Amazon and other tech companies to halt all sales of their programs because of an error in how they detect people. It turns out many facial recognition programs struggle to accurately read the faces of dark-skinned individuals.

The experts believe this glitch might result in institutional biases against people of color.

Glitch In Facial Recognition Software

The issue with facial recognition software was first uncovered by Massachusetts Institute of Technology researcher Joy Buolamwini.

Buolamwini launched an extensive review of the ability of such software to recognize human faces accurately. She found that many programs had much higher error rates in identifying the gender of darker-skinned females compared to lighter-skinned males.
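
For illustration only, here is a minimal sketch of the kind of disaggregated evaluation such an audit involves: computing a classifier's error rate separately for each demographic subgroup rather than reporting a single overall accuracy figure. This is not Buolamwini's actual code, and the records, labels, and group names below are hypothetical toy data.

```python
# A minimal sketch of a disaggregated audit: measure error rates
# per demographic subgroup rather than one overall number.
# All records below are hypothetical toy data, not a real benchmark.
from collections import defaultdict

# Each record: (subgroup, true gender label, classifier's prediction).
records = [
    ("lighter-skinned male", "male", "male"),
    ("lighter-skinned male", "male", "male"),
    ("lighter-skinned male", "male", "male"),
    ("darker-skinned female", "female", "male"),    # misclassified
    ("darker-skinned female", "female", "male"),    # misclassified
    ("darker-skinned female", "female", "female"),
]

errors = defaultdict(int)
totals = defaultdict(int)
for group, truth, predicted in records:
    totals[group] += 1
    if predicted != truth:
        errors[group] += 1

# A single overall accuracy figure would hide the gap that this
# per-group breakdown makes obvious.
for group, total in totals.items():
    print(f"{group}: {errors[group] / total:.0%} error rate ({total} samples)")
```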

Her findings have prompted some tech companies, such as Microsoft and IBM, to conduct their own audits and improve their facial recognition software.

However, Amazon has stood by its Rekognition program and has publicly criticized Buolamwini and her findings.

Buolamwini's fellow AI experts, including this year's Turing Award winner Yoshua Bengio, rallied behind her to defend the credibility of her work. They urged Amazon to stop offering its software to police and other law enforcement agencies.

Several politicians have also begun looking into ways to limit the use of such software to analyze people's faces.

Buolamwini argues that individuals should have a say in how they are subjected to computer vision tools.

"There needs to be a choice," the MIT researcher said.

"Right now, what's happening is these technologies are being deployed widely without oversight, oftentimes covertly, so that by the time we wake up, it's almost too late."

Meanwhile, Bengio shared his thoughts on the issue in an email to Bloomberg.

He highlighted the importance of having discussions on what is socially and ethically acceptable when it comes to using new technologies. He said the recent case brings such concerns to light in a clear manner and is a good way to increase people's awareness of the matter.

Bengio hopes that this will lead to tech companies adopting internal rules and government agencies enforcing regulations to ensure that "the best-behaving companies" are not left at a disadvantage compared to their peers.

Possibility Of AI Biases

Many researchers have raised concerns in the past about the possibility of AI being misused. They fear that such software could absorb the biases of whoever developed it. For instance, if a programmer trains their AI mostly on photographs of white men, then the program will work best when scanning the faces of white men, as the sketch below illustrates.
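
As a hedged illustration of that point, the snippet below shows one way a developer might check a training set's demographic composition before training. The manifest, file names, and labels are all invented for the example.

```python
# Hypothetical check of a face-dataset manifest for demographic skew.
# File names and labels are invented for illustration.
from collections import Counter

# Each entry: (image file, demographic label of the pictured person).
manifest = [
    ("img_0001.jpg", "white male"),
    ("img_0002.jpg", "white male"),
    ("img_0003.jpg", "white male"),
    ("img_0004.jpg", "white male"),
    ("img_0005.jpg", "black female"),
]

counts = Counter(label for _, label in manifest)
total = len(manifest)
for label, n in counts.most_common():
    print(f"{label}: {n} images ({n / total:.0%} of the training set)")

# A model trained on a set this lopsided sees darker-skinned faces far
# less often, so it tends to perform worst on exactly those faces.
```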

Such flaws could lead to serious incidents, as in the case of AI programs for driverless cars. A previous study suggested that some computer vision systems have difficulty recognizing pedestrians with darker skin tones.

These issues need to be addressed, especially since more government institutions and businesses are starting to adopt facial recognition programs for their operations.
