Artificial intelligence (AI) has been used extensively across many fields, and concerns about its societal impacts have grown in recent years. At the heart of those concerns is AI's tendency to be biased against particular groups.

Now, Amazon appears to be addressing those worries: its cloud computing division plans to provide warning cards for its AI services, according to a report by Reuters.

(Photo : MARCO BERTORELLO/AFP via Getty Images)
Amazon's logo on the company's premises in Brandizzo, near Turin, March 22, 2021.

AI Service Cards

Amazon's AI Service Cards will be made available to the general public so that its corporate clients can understand the limitations of particular cloud services, such as speech transcription and facial recognition.

According to the company, the initiative aims to safeguard privacy, prevent improper use of its technology, and explain how its systems operate.

However, Reuters noted that Amazon is not the first to undertake such an initiative. Cloud player International Business Machines Corp did so in the past, and Google, a subsidiary of Alphabet Inc., recently released further information about the datasets it used to train some of its AI.

The service cards will be rolled out at the same time as Amazon's annual cloud conference in Las Vegas. 

According to Michael Kearns, a professor at the University of Pennsylvania and an Amazon scholar since 2020, the decision to issue the cards was made in response to privacy and fairness audits of the company's algorithms.  

Kearns noted that the cards would address ethical concerns with AI, especially at a time when tech regulations are imminent.

As a starting point for its service cards, which Kearns anticipates will become more in-depth over time, Amazon selected software that deals with sensitive demographic concerns. 

Read Also: Bias AI? Study Finds that AI Algorithms Can Identify Someone's Racial Identity Based on X-rays

Addressing Racial Bias

In 2019, Amazon disputed a study claiming that its facial recognition service, called "Rekognition," had trouble determining the gender of people with darker skin tones.

However, following the murder of George Floyd in 2020, the company barred law enforcement from using its facial recognition technology.

One of the service cards obtained by Reuters explains that Rekognition cannot support matching "images that are too blurry and grainy for the face to be recognized by a human, or that have large portions of the face occluded by hair, hands, and other objects."

It also cautions against attempting to match the faces of cartoon characters and other nonhuman entities.

Bias has long been a major concern with AI technologies. For instance, in a recent study published in Philosophy and Technology, academics from Cambridge's Centre for Gender Studies claim that AI recruiting tools are superficial and comparable to "automated pseudoscience."

They assert that it is a dangerous example of "technosolutionism": turning to technology to address difficult problems like discrimination without making the required investments or changes to organizational culture.

Related Article: Offensive Robot? Experiment Finds Flawed AI Making Racial and Gender Stereotypes  


ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.