Large language models (LLMs) have garnered significant attention for their ability to generate human-like text and perform a wide range of language-related tasks. However, a new study by Kevin Matthe Caramancion of the University of Wisconsin-Stout has explored whether LLMs can effectively identify fake news.

The study evaluated the performance of prominent LLMs, including OpenAI's ChatGPT-3.5 and ChatGPT-4.0, Google's Bard/LaMDA, and Microsoft's Bing AI, by feeding them fact-checked news stories and assessing their ability to distinguish between true, false, and partially true/false information.

The test suite consisted of 100 fact-checked news items sourced from independent fact-checking agencies, and each model was scored on how accurately its classifications matched the verdicts those agencies had verified.
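For readers curious what this kind of evaluation looks like in practice, the sketch below shows one plausible way to score a model's verdicts against fact-checker labels. It is illustrative only; the `query_llm` function, the label set, and the data format are assumptions for the sake of the example, not the study's actual code or protocol.

```python
# Illustrative sketch only: a minimal harness for scoring an LLM's verdicts
# against fact-checker labels. The query_llm callable and the label set are
# hypothetical stand-ins, not the study's actual implementation.

from typing import Callable, Dict, List

LABELS = {"true", "false", "partially true/false"}

def score_model(
    items: List[Dict[str, str]],         # each item: {"claim": ..., "verdict": ...}
    query_llm: Callable[[str], str],     # returns the model's verdict for a claim
) -> float:
    """Return simple classification accuracy against fact-checker verdicts."""
    correct = 0
    for item in items:
        prediction = query_llm(item["claim"]).strip().lower()
        if prediction not in LABELS:
            # Treat unparseable answers as incorrect rather than guessing.
            continue
        if prediction == item["verdict"].strip().lower():
            correct += 1
    return correct / len(items) if items else 0.0

# Example usage with a stubbed model that always answers "true":
if __name__ == "__main__":
    test_suite = [
        {"claim": "Example claim A", "verdict": "true"},
        {"claim": "Example claim B", "verdict": "false"},
    ]
    accuracy = score_model(test_suite, query_llm=lambda claim: "true")
    print(f"Accuracy: {accuracy:.0%}")
```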

Can LLMs Fact-Check?

The study found that OpenAI's GPT-4.0 performed best among the LLMs tested. Even so, every model still lagged behind human fact-checkers, underscoring the invaluable role of human cognition in detecting misinformation.

Misinformation remains a pressing challenge in the digital age, and developing reliable fact-checking tools and platforms has long been a priority for computer scientists. Despite the progress made, no widely adopted, trustworthy model for combating misinformation has yet emerged.

Caramancion's study sheds light on the potential of LLMs in addressing this issue but also emphasizes the importance of combining AI capabilities with human fact-checkers to achieve optimal results.

The research highlights the need for continued advancements in LLMs and the integration of human cognition in fact-checking processes. Future studies could explore a broader range of fake news scenarios to further evaluate the performance of LLMs.

Caramancion's future research plans focus on studying the evolving capabilities of AI and how to leverage these advancements while recognizing the unique cognitive abilities of humans. 

His key areas of interest include refining testing protocols, exploring new LLMs, and investigating the symbiotic relationship between human cognition and AI technology in news fact-checking.

AI Hallucinations

The issue of AI hallucinations may also be relevant to the study on LLMs and their ability to detect fake news. AI hallucinations refer to instances where AI models generate false or misleading information that appears convincing or accurate but lacks a basis in reality.

The possibility of AI models generating false information has long concerned many experts. Equipping these models with stronger fact-checking capabilities could help address concerns about AI hallucinations.
