Amid the rapid adoption of AI by Human Resources (HR) departments, a collaborative study by NUS Business School, the International Institute for Management Development (IMD), and The Hong Kong Polytechnic University examines how jobseekers perceive the use of AI in recruitment and selection.

Associate Professor Jayanth Narayanan of NUS Business School says his interest stems from a personal incident: a close friend was evaluated for a position through video interviewing software, which returned feedback that the friend had shown a lack of enthusiasm during the interview.


AI in the Hiring Process

Professor Jayanth notes that the outcome might have been different with a human interviewer, who could have noticed signs of illness and asked about the candidate's health.

A human interviewer, he adds, might also have recognized the candidate's determination to engage despite being unwell.

Conducted from 2017 to 2018, the study involved over 1,000 participants of diverse nationalities, most in their mid-30s. Recruited through Amazon's crowdsourcing platform Mechanical Turk, they took part in four scenario-based experiments measuring perceptions of the role of computer algorithms in recruitment.

The first two experiments investigated how algorithms affect job applicants' perceptions of fairness in the hiring process, while the latter two examined the factors behind the lower fairness scores.

The findings reveal skepticism among job seekers about the role of AI in recruitment. They tend to view algorithmic decision-making as less trustworthy and less fair than processes involving human judgment.

Even candidates who receive favorable outcomes from algorithm-driven recruitment perceive the process as fairer when humans are involved in resume screening and hiring decisions.

The researchers attribute this gap in perceived fairness chiefly to AI's limitations in recognizing candidates' distinctive attributes. Unlike human recruiters, who can pick up on the subtle qualities that set candidates apart, AI systems may miss important characteristics and screen out qualified applicants.

This finding challenges the prevailing belief that algorithms guarantee impartial assessments and reduce human bias.


AI and Human Collaboration

To combine AI technology with the human element in recruitment, Professor Jayanth proposes a collaborative approach in which algorithms act as decision-making partners alongside human recruiters.

Professor Jayanth suggests, "For example, algorithms can flag that the recruiter is not shortlisting enough women or people from a minority group. Algorithms can also flag the uniqueness of a candidate compared to other applicants."
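
To make the "partner" idea concrete, the sketch below shows one hypothetical way an algorithm could flag a skewed shortlist. It is an illustration only, not the study's method or any vendor's product; the function name, attribute field, and 10% tolerance threshold are assumptions chosen for the example.

```python
# Hypothetical sketch: flag when a group's share of the shortlist falls
# well below its share of the applicant pool. Names and thresholds are
# illustrative, not taken from the study.
from collections import Counter

def flag_shortlist_imbalance(applicants, shortlist, attribute, tolerance=0.10):
    """Return warnings for groups underrepresented in the shortlist."""
    pool_counts = Counter(a[attribute] for a in applicants)
    short_counts = Counter(s[attribute] for s in shortlist)
    flags = []
    for group, count in pool_counts.items():
        expected = count / len(applicants)
        observed = short_counts.get(group, 0) / max(len(shortlist), 1)
        if observed + tolerance < expected:
            flags.append(
                f"'{group}' makes up {expected:.0%} of applicants but only "
                f"{observed:.0%} of the shortlist -- review for possible bias."
            )
    return flags

# Toy example: women are half the pool but only a fifth of the shortlist.
applicants = [{"gender": "female"}] * 50 + [{"gender": "male"}] * 50
shortlist = [{"gender": "female"}] * 2 + [{"gender": "male"}] * 8
for warning in flag_shortlist_imbalance(applicants, shortlist, "gender"):
    print(warning)
```

A similar comparison against the rest of the applicant pool could, in principle, be used to surface the "uniqueness" of a candidate that the quote mentions.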

As AI's influence grows, Professor Jayanth expects it to become more prevalent and accessible in recruitment, but he stresses that human oversight remains essential.

He underscores that while algorithms have an important role to play, assessing candidates' suitability for a job should remain a human prerogative.

He questions the wisdom of handing such an essential organizational function over to algorithms, advocating a deliberate, conscious approach to automation. He cautions against blindly pursuing cost savings by mechanizing tasks that humans inherently enjoy.

The study offers a nuanced perspective on AI's integration into recruitment, highlighting job seekers' skepticism and the limitations of algorithmic decision-making.

The team's findings were published in the Journal of Business Ethics.


