In recent years, there has been an increase in the use of AI tools marketed as a solution to the lack of diversity in the workforce. These tools, ranging from chatbots to CV scrapers, aim to help companies hire employees.

Users of such tools claim they eliminate gender and ethnic biases in hiring by employing algorithms that analyze job applicants' speech patterns, facial expressions, and other characteristics.

However, in a recent report published in Philosophy and Technology, researchers from Cambridge's Centre for Gender Studies contend that AI recruiting tools are superficial and amount to "automated pseudoscience."


"Techsolutionism"

They claim it is a risky instance of "technosolutionism": the use of technology to address complex issues like discrimination without making the necessary investments in, and alterations to, organizational culture.

The researchers collaborated with a group of Cambridge computer science undergraduates to develop an online AI tool that refutes claims that AI eliminates bias in the workplace, according to the university's press release, which was also picked up by the BBC.

The "Personality Machine" shows how random adjustments in facial expression, dress, lighting, and backdrop may provide radically different personality readings, which could be the difference between rejection and advancement for the current crop of job applicants competing for graduate positions. 

The Cambridge team asserts that because the AI is programmed to search for the employer's ideal candidate, it may ultimately encourage uniformity rather than diversity in the workforce when used to narrow candidate pools.


"Insignificant Data Points"

According to the researchers, highly educated and experienced candidates may be able to game the algorithms by imitating the attitudes and behaviors the AI is built to recognize.

Furthermore, they contend that because the algorithms are trained on historical data, the candidates deemed best will likely be those who most closely resemble the current workforce.

"By claiming that racism, sexism, and other forms of discrimination can be stripped away from the hiring process using artificial intelligence, these companies reduce race and gender down to insignificant data points, rather than systems of power that shape how we move through the world," co-author Dr. Eleanor Drage said in a statement. 

The researchers noted that many businesses now use AI to analyze candidate videos, evaluating applicants on the "big five" personality traits: extroversion, openness, agreeableness, conscientiousness, and neuroticism. They said these tools work in much the same way as lie-detection AI.

According to Euan Ong, one of the study's student developers, these tools are trained to predict personality based on recurring themes in photographs of people they have previously encountered. 

As a result, they frequently find spurious correlations between personality and seemingly unrelated aspects of the image, such as brightness.
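Ong's point can be illustrated with a toy example. The following Python sketch is hypothetical, not the researchers' tool or any vendor's actual model: it trains a simple classifier on simulated data in which "extroverted" candidates happened to submit brighter videos, so lighting ends up correlated with the label. The feature names and numbers are invented for illustration.

```python
# Minimal sketch of a spurious-correlation failure mode (simulated data,
# hypothetical features). Requires numpy and scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Simulated training set: suppose candidates labeled "extroverted" tended
# to record in better-lit rooms. Brightness then correlates with the label
# even though it says nothing about personality.
n = 1000
extroverted = rng.integers(0, 2, size=n)                  # ground-truth label (0 or 1)
brightness = 0.5 + 0.2 * extroverted + rng.normal(0, 0.1, size=n)
noise_feature = rng.normal(0, 1, size=n)                  # genuinely uninformative feature
X = np.column_stack([brightness, noise_feature])

model = LogisticRegression().fit(X, extroverted)

# At evaluation time, re-lighting the *same* candidate shifts the score:
dim = model.predict_proba([[0.45, 0.0]])[0, 1]
bright = model.predict_proba([[0.75, 0.0]])[0, 1]
print(f"P(extroverted) in dim lighting:    {dim:.2f}")
print(f"P(extroverted) in bright lighting: {bright:.2f}")
```

In this simulation, changing nothing but the lighting swings the predicted "extroversion" probability, which is exactly the kind of superficial signal the Personality Machine is designed to expose.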

Drage said that even though companies using such tools are not acting in bad faith, there is "little accountability" for how these products are tested.

"As such, this technology, and the way it is marketed, could end up as dangerous sources of misinformation about how recruitment can be 'de-biased' and made fairer." 


This article is owned by Tech Times

Written by Joaquin Victor Tacla

ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.