As artificial intelligence (AI) advances rapidly, questions regarding its ability to foster trust have become increasingly pertinent. 

This is particularly true with the proliferation of AI agents, which has raised concerns about whether they can establish trust the way people do. Can AI agents build trust like humans? A new study says yes.


AI Agents Can Build Trust Similar to Humans

Recent research suggests that AI agents can indeed establish trust comparable to that seen in human interactions.

The study, led by Yan (Diana) Wu from San Jose State University, indicates that AI can develop human-like trust and trustworthy behavior through a pure self-learning process, under conditions similar to those that allow trust to develop among humans.

Wu, along with collaborators Jason Xianghua Wu from the University of New South Wales, Kay Yut Chen from The University of Texas at Arlington, and Lei Hua from The University of Texas at Tyler, emphasized that the finding goes beyond learning game strategies: it marks a significant step toward intelligent systems that can develop social intelligence and build trust autonomously.

"Human-like trust and trustworthy behavior of AI can emerge from a pure trial-and-error learning process, and the conditions for AI to develop trust are similar to those enabling human beings to develop trust," Wu said in a statement.

"Discovering AI's ability to mimic human trust behavior purely through self-learning processes mirrors conditions fostering trust in humans," she added.


The Trust Game

The authors underscored the importance of contrasting AI agents with human decision-makers to gain insights into AI behaviors across diverse social contexts. 

Using interactive learning, AI agents can adapt their social behaviors, offering a novel way to explore cooperation across a range of decision-making scenarios.

The study, conducted through a series of experiments based on the trust game, employed deep neural network-based AI agents trained solely through repeated interactions with one another, with no prior knowledge of or assumptions about human behavior.
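The article does not spell out the game's mechanics, but in the canonical trust game from experimental economics, an investor decides how much of an endowment to send to a trustee; the transfer is multiplied along the way (conventionally tripled), and the trustee then decides how much to return. The minimal Python sketch below uses a 10-unit endowment and a 3x multiplier as assumed standard values from that literature, not parameters confirmed by the study:

def trust_game_round(endowment, sent, return_fraction):
    """One round of the trust game; returns (investor_payoff, trustee_payoff)."""
    assert 0 <= sent <= endowment and 0 <= return_fraction <= 1

    tripled = 3 * sent                    # the transfer is multiplied on arrival
    returned = return_fraction * tripled  # the trustee sends a share back

    investor_payoff = endowment - sent + returned
    trustee_payoff = tripled - returned
    return investor_payoff, trustee_payoff

# Full trust, fully reciprocated, beats no trust for both parties:
print(trust_game_round(endowment=10, sent=10, return_fraction=0.5))  # (15.0, 15.0)
print(trust_game_round(endowment=10, sent=0, return_fraction=0.5))   # (10.0, 0.0)

Trust pays off for both sides only when the trustee reciprocates; a purely self-interested trustee keeps everything, which makes sending nothing the safe strategy and emergent trust the notable result.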

The findings reveal that AI agents, under specific conditions, exhibit behaviors similar to those observed in human subjects participating in the trust game.
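The study's agents are deep neural networks whose architecture the article does not cover; as a loose stand-in for the same trial-and-error idea, the toy loop below has two tabular learners play the game repeatedly and update their action-value estimates from payoffs alone. Every number here (the action grids, learning rate, exploration rate, and episode count) is an illustrative assumption:

import random

ENDOWMENT = 10
SEND_OPTIONS = [0, 5, 10]          # investor: how much of the endowment to send
RETURN_OPTIONS = [0.0, 0.25, 0.5]  # trustee: fraction of the tripled transfer to give back
ALPHA, EPSILON, EPISODES = 0.1, 0.1, 50_000

q_send = {s: 0.0 for s in SEND_OPTIONS}                                 # investor's payoff estimates
q_return = {(s, r): 0.0 for s in SEND_OPTIONS for r in RETURN_OPTIONS}  # trustee's, per amount received

def choose(values):
    """Epsilon-greedy pick from a {action: estimated payoff} dict."""
    if random.random() < EPSILON:
        return random.choice(list(values))  # explore a random action
    return max(values, key=values.get)      # otherwise exploit the best estimate

for _ in range(EPISODES):
    sent = choose(q_send)
    frac = choose({r: q_return[(sent, r)] for r in RETURN_OPTIONS})

    tripled = 3 * sent
    investor_payoff = ENDOWMENT - sent + frac * tripled
    trustee_payoff = tripled - frac * tripled

    # Pure trial and error: each agent nudges its estimate toward
    # the payoff it just observed, with no model of the other side.
    q_send[sent] += ALPHA * (investor_payoff - q_send[sent])
    q_return[(sent, frac)] += ALPHA * (trustee_payoff - q_return[(sent, frac)])

print("Investor's learned value of each transfer:", q_send)

A myopic learner like this one typically drifts toward the no-trust equilibrium, because the trustee learns that returning nothing pays best; the study's contribution is showing that, under specific conditions, self-learning agents sustain trust and cooperation instead, much as humans do.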

Furthermore, the research delves into the factors influencing the emergence and levels of cooperation among AI agents in the trust game, shedding light on the mechanisms underlying trust-building in AI systems. 

"This study offers evidence that AI agents can develop trusting and cooperative behaviors purely from an interactive trial-and-error learning process," the researchers wrote.

"It constitutes a first step to build multiagent-based decision support systems in which interacting artificial agents are capable of leveraging social intelligence to achieve better outcomes collectively," they added. 

The paper, titled "Building Socially Intelligent AI Systems: Evidence from the Trust Game Using Artificial Agents with Deep Learning," was published in the journal Management Science.
