The addition of "trust" and "distrust" buttons on social media platforms, alongside the standard "like" button, could play a vital role in reducing the spread of misinformation, according to a study conducted by researchers from University College London (UCL).

The study found that incentivizing accuracy by incorporating trust and distrust options led to a substantial decrease in the reach of false posts.

Incentivizing Trustworthiness

Professor Tali Sharot, one of the co-lead authors, highlighted the escalating problem of misinformation, or "fake news," which has polarized the political landscape and influenced people's beliefs on crucial matters like vaccine safety, climate change, and diversity acceptance.

Traditional approaches like flagging inaccurate content have shown limited effectiveness in combating this pervasive issue.

"Part of why misinformation spreads so readily is that users are rewarded with 'likes' and 'shares' for popular posts, but without much incentive to share only what's true," Sharot said in a press release. "Here, we have designed a simple way to incentivise trustworthiness, which we found led to a large reduction in the amount of misinformation being shared."

In a related study published in Cognition, Sharot and her colleagues demonstrated the power of repetition in shaping sharing behavior on social media.

Participants in that study were more likely to share statements they had encountered multiple times, apparently because they assumed repeated information was more credible, even when it was false. To test a potential fix, the researchers ran six experiments on a simulated social media platform with 951 participants.

The platform allowed users to share news articles, half of which were inaccurate. In addition to the traditional "like" and "dislike" reactions and reposting options, some versions of the experiment introduced "trust" and "distrust" reactions.

Gravitating Towards Trust

The study found that users gravitated towards the trust and distrust buttons, using them more often than the like and dislike buttons. Participants also began posting more accurate information in order to earn "trust" reactions.

Computational modeling revealed that after introducing trust and distrust reactions, users became more discerning about the reliability of news stories before deciding to repost them.
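The logic is easy to see in a toy simulation. The Python sketch below is not the authors' computational model; it simply assumes that trust and distrust feedback raises the weight an agent places on a post's perceived reliability when deciding whether to repost, and shows how that shift alone lowers the share of reposts that are false. All numbers and function names here are illustrative.

```python
import random

random.seed(42)

def share_probability(accuracy, popularity, accuracy_weight):
    # Mix the agent's reliability estimate with the post's popularity signal.
    return accuracy_weight * accuracy + (1 - accuracy_weight) * popularity

def simulate(n_posts=10000, accuracy_weight=0.2):
    shared = misinformation_shared = 0
    for _ in range(n_posts):
        is_true = random.random() < 0.5        # half the posts are inaccurate
        accuracy = 0.8 if is_true else 0.3     # agent's noisy reliability estimate
        popularity = random.random()           # likes/shares signal
        if random.random() < share_probability(accuracy, popularity, accuracy_weight):
            shared += 1
            if not is_true:
                misinformation_shared += 1
    return misinformation_shared / max(shared, 1)

# Baseline (likes only) vs. a trust/distrust condition, modeled as a
# higher weight on perceived reliability when choosing what to repost.
print("false share of reposts, likes only: %.2f" % simulate(accuracy_weight=0.2))
print("false share of reposts, with trust buttons: %.2f" % simulate(accuracy_weight=0.7))
```

Under these assumed numbers, raising the reliability weight cuts the fraction of reposted items that are false, which is the qualitative pattern the study's modeling describes.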

Interestingly, participants who engaged with the platform versions featuring trust and distrust buttons ended up with more accurate beliefs, according to the study. 

Laura Globig, a PhD student and co-lead author, said that buttons indicating the trustworthiness of information could potentially be incorporated into existing social media platforms, though she acknowledged that real-world implementation would be more complex, given the many other factors that shape how people share information online.

Still, she noted that, given the significant risks posed by online misinformation, trust and distrust options could be a valuable addition to ongoing efforts to combat the spread of false information.

The study provides insights into a potential solution to tackle the challenge of misinformation, offering hope for a more reliable and trustworthy social media environment. 

By promoting accuracy and incentivizing trustworthiness, the "trust" and "distrust" buttons seemed to foster a more informed and discerning online community. The study was published in the journal eLife.
