A new study sheds light on how humans hold AI-based assistants accountable for outcomes even when these systems only serve as supportive instruments in decision-making processes.

While future AI systems might operate autonomous vehicles without human input, current AI assistants are primarily designed to provide supportive information, such as navigation and driving aids. 

Despite this distinction, the study highlights that people tend to attribute partial responsibility to AI assistants in real-life scenarios, whether things go well or not.


Responsibility Attribution to AI Assistants

The investigation, led by Louis Longin from the Chair of Philosophy of Mind, delves into the nuances of responsibility attribution to AI assistants. 

The team, collaborating with Dr. Bahador Bahrami and Prof. Ophelia Deroy, aimed to understand how individuals assess responsibility in situations involving human drivers using different types of AI assistants.

"We all have smart assistants in our pockets. Yet a lot of the experimental evidence we have on responsibility gaps focuses on robots or autonomous vehicles where AI is literally in the driver's seat, deciding for us. Investigating cases where we are still the ones making the final decision, but use AI more like a sophisticated instrument, is essential," Longin said in a statement.

The study involved 940 participants who evaluated scenarios wherein human drivers interacted with a smart AI-powered verbal assistant, a smart AI-powered tactile assistant, or a non-AI navigation tool.

The participants were also asked to indicate how much responsibility they assigned to the navigation aid and to what extent they viewed it as a tool.

Although participants described these smart assistants as tools, they paradoxically perceived the AI assistants as sharing some responsibility for the successes or failures of their human users. Notably, this divided sense of responsibility did not emerge for the non-AI navigation instrument.


AI More Responsible for Positive Outcomes

Moreover, the study found that smart assistants were deemed more responsible for positive outcomes than for negative ones, a pattern that may reflect the different moral standards people apply to credit and blame.

When a positive outcome is achieved, the study suggests, people may be more lenient in assigning credit to non-human systems. Additionally, the study found no significant difference between smart assistants that communicated through language and those that conveyed information through touch.

While both types provided similar information, such as alerts about potential obstacles, the richer interaction offered by language-based AI systems like ChatGPT appeared to contribute to a greater tendency toward anthropomorphization, or the attribution of human traits to objects.

In essence, the study's results suggest that AI assistants are perceived as more than mere tools, although they still fall short of human standards. These findings are anticipated to influence the design of AI assistants and contribute to the ongoing social discourse surrounding these technologies.

As Longin emphasizes, organizations involved in the development of smart assistants should consider the broader societal and moral implications of their products. The study's findings were recently published in the journal iScience. 
