A new study suggests that AI could improve the reliability of Wikipedia, whose accuracy has long been a subject of concern even as many users treat the site as a go-to online reference.

The concern stems from Wikipedia's open editing system: anyone can contribute, which creates the potential for inaccurate or misleading information.

Ensuring that every edit is factually sound can be challenging, as contributors vary in their expertise on specific topics. Additionally, while guidelines are in place, not every article meets the same standard of accuracy.

The dynamic nature of contributions, together with the potential for outdated information, raises concerns about the platform's overall reliability as a source of factual information.

All About Samaya AI

A recent article published in Nature Machine Intelligence sheds light on the efforts of a London-based AI company, Samaya AI, to enhance the reliability of Wikipedia's reference system, TechXplore reported.

The company uses artificial intelligence (AI) to scrutinize sources, distinguish reliable ones from questionable ones, and offer its own suggestions.

Fabio Petroni, co-founder of Samaya AI, emphasized the potential of AI, stating that it can help locate better citations, a task that demands both proficiency in language and skill in online search.

The team's model, known as SIDE, was trained on an extensive dataset of Wikipedia entries and then used to evaluate previously unexamined articles, scrutinizing their sources and suggesting alternative references. Wikipedia users then evaluated the results of this process.

According to the researchers, when SIDE classified a Wikipedia source as unverifiable and offered its own recommendation, users favored SIDE's suggestion approximately 70% of the time. Remarkably, in around half of the instances, SIDE proposed the very source that Wikipedia had originally cited.

"We demonstrate that existing technologies have reached a stage where they can effectively and pragmatically support Wikipedia users in verifying claims," Petroni said.

He further noted that future research will focus on assessing Wikipedia references that go beyond text on the internet, encompassing media such as images, videos, and printed publications.

Significance of Wikipedia Verifiability

The study's abstract underscores verifiability as a foundational content policy for Wikipedia and highlights the need for better tools to help humans maintain the quality of references on the platform.

SIDE identifies Wikipedia citations that may not robustly support their claims and proposes more reliable alternatives from the web. The model was trained on existing Wikipedia citations, leveraging the collective expertise of the many Wikipedia editors who curated them.
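
The paper does not reproduce SIDE's code, but the workflow it describes, scoring how well an existing citation supports a claim and ranking web alternatives against it, can be sketched with off-the-shelf tools. Below is a minimal illustration, not Samaya AI's actual system: a generic cross-encoder relevance model stands in for SIDE's purpose-trained verifier, a pre-fetched candidate list stands in for its web-scale retriever, and the suggest_citation helper is hypothetical.

```python
# Minimal sketch of the "verify, then suggest alternatives" pattern the
# article describes. This is NOT Samaya AI's SIDE model: the cross-encoder
# below is an off-the-shelf stand-in for SIDE's claim-verification component,
# and the candidate list stands in for its web-scale retriever.
from sentence_transformers import CrossEncoder

# Off-the-shelf passage-ranking model (assumption: SIDE's actual verifier
# is a custom model trained on existing Wikipedia citations).
scorer = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")

def suggest_citation(claim: str, current_source: str, candidates: list[str]):
    """Score the existing source against retrieved candidates and
    recommend a replacement if a candidate supports the claim better."""
    passages = [current_source] + candidates
    scores = scorer.predict([(claim, p) for p in passages])
    best = max(range(len(passages)), key=lambda i: scores[i])
    if best == 0:
        return None  # the current citation is already the best match
    return passages[best], float(scores[best])

# Toy usage: one claim, its current reference, and two crawled alternatives.
claim = "The 1995 bout was the first world heavyweight title challenge by a Native American boxer."
current = "He fought several notable bouts during the 1990s."
candidates = [
    "In 1995 he became the first Native American to fight for the WBA heavyweight championship.",
    "The WBA sanctions professional boxing matches worldwide.",
]
print(suggest_citation(claim, current, candidates))
```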

Through crowdsourcing, the study found that for the top 10% of citations the system was most likely to deem unverifiable, human users preferred SIDE's suggested alternatives over the originally cited reference in 70% of cases.
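
For context, "the top 10% of citations most likely to be deemed unverifiable" amounts to taking the lowest-scoring decile of the verifier's outputs. A toy sketch of that selection step, using random placeholder scores rather than the paper's data:

```python
import numpy as np

# Toy illustration of the selection step reported in the study: rank
# citations by a verifier's support score and keep the lowest-scoring
# decile, i.e., the 10% of citations most likely to be unverifiable.
# The scores here are random placeholders, not the paper's data.
rng = np.random.default_rng(0)
support_scores = rng.random(1000)            # one score per citation
cutoff = np.percentile(support_scores, 10)   # 10th-percentile threshold
flagged = np.flatnonzero(support_scores <= cutoff)
print(f"{flagged.size} citations flagged for human review")
```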

The researchers also built a demonstration for the English-speaking Wikipedia community and found that SIDE's first citation suggestion was preferred twice as often as the existing Wikipedia citation for the same top 10% of claims the system flagged as potentially unverifiable. The finding underscores the promise of an AI-powered system working alongside humans to improve Wikipedia's credibility.
