The rise of AI tools such as ChatGPT, Google Bard, and others is challenging the Internet's information ecosystem. With the growing volume of AI-generated content, there are concerns about how people can tell credible information from fake content online.

According to Tech Xplore, these concerns are underscored by a new study from Mainz University of Applied Sciences and Johannes Gutenberg University Mainz (JGU), which found that online users rated AI-generated and human-created content as similarly credible.

(Photo: SEBASTIEN BOZON/AFP via Getty Images) This illustration photograph, taken on October 30, 2023, shows the logo of ChatGPT, a language model-based chatbot developed by OpenAI, on a smartphone in Mulhouse, eastern France.

Perceived AI Credibility

The researchers noted that AI-driven systems autonomously generate content based on the data they were trained on. This means they lack the human oversight found on traditional platforms such as Wikipedia.

Hence, the study sought to determine how users perceive the credibility of human-generated and AI-generated content across different user interfaces, surveying more than 600 English-speaking participants.

Martin Huschens, Professor of Information Systems at Mainz University of Applied Sciences and one of the study's authors, expressed surprise at the findings.

"Our study revealed some really surprising findings. It showed that participants in our study rated AI-generated and human-generated content as similarly credible, regardless of the user interface," Huschens noted.

"What is even more fascinating is that participants rated AI-generated content as having higher clarity and appeal, although there were no significant differences in terms of perceived message authority and trustworthiness-even though AI-generated content still has a high risk of error, misunderstanding, and hallucinatory behavior," he added. 


Delicate Balance

The study focuses on how AI-generated content is perceived and on the risks of its wider distribution. The researchers ultimately want users to critically evaluate information from the various online platforms they use.

They advocate a delicate balance between the convenience of AI-powered tools and responsible information use. The study also urges users to be aware of these systems' limitations and inherent biases, a concern that has been a recurring theme among many experts.

Professor Franz Rothlauf, who specializes in Information Systems at Johannes Gutenberg University Mainz, emphasized that the findings show that in the era of ChatGPT, distinguishing human-produced text from machine-produced text has become increasingly difficult.

Because AI operates on statistical guessing rather than genuine understanding, the researchers argue that mandatory labeling of machine-generated knowledge will be essential in the future. Without such labeling, the boundary between truth and fiction may blur, making it hard for people to tell the two apart.

The research team underscores that raising user awareness about the responsible use of AI-generated content is both a responsibility of the scientific community and a social and political challenge. The study's findings were published on the arXiv preprint server.


