New research by psychologists Lucía Vicente and Helena Matute from Deusto University in Bilbao, Spain, has unveiled a concerning finding: humans can inherit biases from artificial intelligence (AI) systems, potentially creating a self-perpetuating cycle of skewed decision-making.

AI technology has gained acclaim for its ability to mimic human conversation, creating a perception of high reliability. Its integration into various sectors aims to augment specialist decision-making and reduce errors. However, according to psychologists, this reliance on AI isn't without its risks, particularly regarding biases present in AI outputs.


Historical Human Decisions

The study notes that AI models are trained on historical human decisions, meaning that if these data contain systematic errors, the algorithm will internalize and reproduce them. It cites evidence that AI systems not only inherit human biases but can also magnify them.
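To illustrate that mechanism in the abstract, here is a minimal, hypothetical sketch (not the study's actual task, data, or model): a toy model fitted to systematically skewed "historical" labels reproduces the same skew in its own outputs. All names and numbers below are invented for illustration.

```python
# Illustrative sketch only: a model fitted to systematically biased
# "historical decisions" internalizes and reproduces that bias.
import random

random.seed(0)

def biased_historical_label(marker):
    # Hypothetical systematic error: 30% of "high" marker cases were
    # historically (and wrongly) recorded as "negative".
    if marker == "high":
        return "negative" if random.random() < 0.3 else "positive"
    return "negative"

# "Training data": 10,000 past cases labelled with the biased decisions.
cases = random.choices(["high", "low"], k=10_000)
labels = [biased_historical_label(m) for m in cases]

# A minimal "model": P(positive | marker) estimated directly from the data.
for marker in ("high", "low"):
    subset = [lab for m, lab in zip(cases, labels) if m == marker]
    p_positive = subset.count("positive") / len(subset)
    print(f"learned P(positive | {marker}) = {p_positive:.2f}")

# Ground truth here is P(positive | high) = 1.0, but the model learns
# roughly 0.70: the historical error is carried straight into its
# predictions, and anyone following its advice inherits the same skew.
```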

Perhaps the most striking revelation from Vicente and Matute's research is the bidirectional nature of this phenomenon. It's not just AI inheriting biases from human data; humans can also absorb biases from AI, setting the stage for a dangerous feedback loop.

In three experiments, volunteers undertook a simulated medical diagnosis task. One group received support from a biased AI system that made a consistent, systematic error, while a control group performed the task without AI involvement.

It is worth noting that the AI, the medical diagnosis task, and the specific ailment in question were entirely fabricated to avoid real-world consequences.

Participants aided by the biased AI replicated the same errors the AI exhibited, while the control group made no such mistakes. According to the study, this provides evidence that the AI's recommendations directly influenced participants' decision-making.


AI Biases?

However, the most significant revelation emerged when volunteers who had interacted with the AI system went on to perform the diagnosis task independently. They persisted in replicating the AI's systematic error, even without any AI assistance.

This indicates that participants who initially received support from the biased AI carried the bias forward into a context where no AI guidance was available, demonstrating an inherited bias.

Notably, this effect was absent in the control group, which performed the task without AI support from the outset. The research underscores that biased information from an AI model can negatively influence human decision-making.

Moreover, it notes the necessity of evidence-based regulation to ensure AI's equitable and ethical use. The study argues that such regulation should encompass not only the technical aspects of AI but also the psychological dynamics governing human-AI collaboration.

"Although our experiments simplify a potential real-world setting, we believe that our controlled experimental task can help to analyse which basic psychological processes mediate on human-AI collaboration," the study's authors wrote. The study's findings were published in Scientific Reports.

