A new study by University of Melbourne researchers raises questions about AI's potential discriminatory tendencies towards parents in the workforce. 

The study examines gender bias and its impact on hiring practices, aiming to shed light on the complexities surrounding AI algorithms.

Can Blind Hiring With OpenAI's ChatGPT Remove Bias?

The research indicates that gender bias is deeply ingrained in the hiring process. On average, women in Australia earn 23% less than men and face challenges such as fewer invitations to job interviews and more critical evaluations of their applications.

To combat this bias, blind resume screening and AI-based decision-making have been suggested as potential solutions. The idea is that by hiding applicants' names during the hiring process, AI can remain impartial and free from human stereotypes.
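As a minimal sketch of what that kind of blind screening might look like, the Python snippet below strips identifying fields from a CV before review. The field labels are hypothetical and real CVs are far less uniform; this is an illustration, not the study's tooling.

```python
import re

# Hypothetical field labels; real CVs are far less uniform.
IDENTIFYING_FIELDS = ("Name", "Pronouns", "Photo")

def blind(cv_text: str) -> str:
    """Drop lines that begin with an identifying field label."""
    pattern = re.compile(rf"^({'|'.join(IDENTIFYING_FIELDS)}):", re.IGNORECASE)
    return "\n".join(
        line for line in cv_text.splitlines() if not pattern.match(line)
    )

print(blind("Name: Jane Doe\nExperience: 8 years as a registered nurse"))
# -> "Experience: 8 years as a registered nurse"
```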

However, the study challenges the effectiveness of these strategies, especially when applied to AI algorithms. The research delves into the nuances of gender bias in AI algorithms and their implications for hiring practices. 

The study finds that AI inadvertently ingests and uses gender signals, more subtle than a name, during the decision-making process. This issue becomes more pressing with the emergence of powerful generative AI like ChatGPT.

In the study, ChatGPT was evaluated for gender bias in hiring. The researchers created CVs for various occupations, ensuring they were high quality and competitive. The CVs were then modified to signal whether the applicant was male or female, and a parental leave gap was added to some of the CVs.

Although all applicants had identical qualifications and job experiences, ChatGPT ranked parents lower in every occupation, irrespective of their gender. The presence of a parental leave gap seemed to influence the algorithm's perception of an applicant's qualifications.
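The article does not reproduce the researchers' code, but an audit of this kind can be sketched in a few lines. The Python example below, which assumes the OpenAI SDK (v1+) and an API key in the environment, scores two otherwise identical CVs, one with a parental leave gap; the CV text, role, and prompt are invented for illustration and are not from the study.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

BASE_CV = """\
Experience: 8 years as a registered nurse; team lead since 2019.
Education: Bachelor of Nursing, 2014.
Skills: patient triage, rostering, clinical documentation.
"""

# The only manipulated variable: a parental leave gap in the work history.
GAP_LINE = "Career break: parental leave, 2021-2022.\n"

def score_cv(cv_text: str) -> str:
    """Ask the model to rate a CV for a given role."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "You are screening CVs for a senior nurse role. "
                        "Rate the CV from 1 (weak) to 10 (strong). "
                        "Reply with the number only."},
            {"role": "user", "content": cv_text},
        ],
        temperature=0,  # keep repeated runs comparable
    )
    return response.choices[0].message.content.strip()

print("No gap: ", score_cv(BASE_CV))
print("With gap:", score_cv(BASE_CV + GAP_LINE))
```

A systematic difference between the two scores, repeated across many occupations and CV pairs, is the kind of signal the study describes.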

The findings pointed to a noteworthy concern. Despite explicit efforts to eliminate gender bias, AI algorithms may still inadvertently perpetuate discrimination through other mechanisms, such as considering an applicant's parenthood status.

Given that women are often more likely to have parental leave gaps, this bias can have unintended consequences for female applicants.

Language Liability

Additionally, the study highlights a language liability: subtle differences in language were observed between male and female applicants when describing skills and education in their CVs.

Even after removing names and pronouns, AI representations linked these language nuances to gender, allowing AI to predict an applicant's gender based on language and potentially influence the CV ratings.
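One way to see how word choice alone can act as a gender proxy: if a simple classifier can predict gender from de-identified CV text at better-than-chance accuracy, the text still carries a gender signal that a screening algorithm could absorb. The scikit-learn sketch below uses a handful of invented phrases purely for illustration; it is not the study's method, and a real test would need thousands of labeled CVs.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# De-identified CV snippets (no names or pronouns) with gender labels.
# These examples are invented for illustration.
texts = [
    "led a cross-functional team and drove aggressive growth targets",
    "competitive results-driven engineer who topped quarterly rankings",
    "collaborated closely with colleagues to support patient wellbeing",
    "nurtured long-term client relationships through attentive service",
]
labels = ["m", "m", "f", "f"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())

# Cross-validated accuracy well above chance would mean word choice
# alone leaks gender, even with names and pronouns removed.
scores = cross_val_score(model, texts, labels, cv=2)
print("mean accuracy:", scores.mean())
```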

The study raises important questions about the complexities of addressing bias in AI algorithms, especially in the context of hiring decisions. While blind resume screening may work to some extent for human hirers, the study suggests that it may not suffice for AI.

The researchers stated, "Well, our research shows that while 'blind resume screening' may work for humans, it doesn't for AI. Even if we drop all identifying language (the shes, hers and names), other language is signaling gender."

They added that "careful auditing of biases can remove the most obvious layer of discrimination, but further work needs to be done on proxies that can lead to bias but may not be as obvious."
