AI-powered assistants are on the rise, capable of performing a wide range of functions: answering questions, providing information, completing tasks, making recommendations, and even engaging in basic conversation.

They leverage machine learning algorithms and natural language processing (NLP) to interpret user queries and then generate appropriate responses or actions.

However, new research suggests that AI writing assistants in particular may influence users' opinions, even though they are ostensibly mere writing tools.

The study, conducted by Maurice Jakesch, a doctoral student in information science at Cornell University, involved more than 1,500 participants who were asked to write a paragraph responding to the question, "Is social media good for society?"

And the results are quite eye-opening. 

AI Bias?

The study's findings showed that individuals using biased AI writing assistants were twice as likely to agree with the assistant's opinion and more inclined to express the same viewpoint compared to those who wrote without AI assistance.

According to the researchers, these findings raise concerns about the biases embedded in AI writing tools, whether intentional or unintentional.

They argue that as AI models are being rapidly integrated into various aspects of life, it becomes crucial to understand the potential consequences. 

Beyond boosting efficiency and creativity, these tools may cause unforeseen shifts in language and opinion with wider societal implications.

While previous studies have examined how large language models can create persuasive content, this research is the first to demonstrate that writing with AI-powered tools can sway individuals' opinions. 

Interaction Between Users and AI

To examine the interaction between individuals and AI writing assistants, Jakesch programmed a large language model to adopt either a positive or a negative stance toward social media.

The participants were then tasked with composing paragraphs, either on their own or with the aid of an opinionated AI tool, using a platform designed to resemble a social media website. 
 
The platform gathered data on participants' actions, including their acceptance of AI suggestions and the time taken to complete their paragraphs.

When impartial judges evaluated the paragraphs, they determined that participants who co-wrote with the pro-social media AI assistant crafted more sentences supporting the notion that social media is beneficial. Conversely, those who co-wrote with the anti-social media assistant produced more sentences opposing it.

Additionally, these participants exhibited a higher inclination to adopt their AI assistant's viewpoint in a subsequent survey.

The study found that even participants who took their time to write their paragraphs still produced heavily influenced statements. Surprisingly, a majority of participants did not recognize the bias in the AI assistant and were unaware of their own susceptibility to influence.

"The process of co-writing doesn't really feel like I'm being persuaded,"  said co-author Mor Naaman. "It feels like I'm doing something very natural and organic-I'm expressing my own thoughts with some aid."
 
The research team replicated the experiment on another topic and found a comparable influence on participants' opinions when using AI assistants. They are now investigating the mechanisms and duration of these effects.

More Public Discourse Needed

The researchers highlight the necessity of public discussions on the potential misuse of these technologies and stress the significance of monitoring and regulating them. 

Jakesch emphasizes the importance of governing the values, priorities, and opinions embedded in these technologies as they become more influential in society.

The team's findings were presented in the Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems.
