The rise of artificial intelligence (AI) chatbots was a game-changer, so much so that within the first few months of ChatGPT's introduction, educational institutions had to draft guidelines to prevent students from misusing the technology to complete their school assignments.

AI's impact was swift and strange, bringing an unexpected surge of machine-generated papers and APIs that could transform drab websites into dynamic ones.

As AI became more widespread, experts grappled with ways to prevent people from exploiting AI text generators for nefarious purposes. One proposed solution was AI detectors, tools designed to flag AI-generated text.

But a recent mathematical proof suggests it may be nearly impossible to know for sure whether ChatGPT or any other AI model produced a given piece of text.

Is it possible to detect AI-generated content?

This revelation poses a significant challenge in the fight against those who use the latest technology to cheat on tests and essays or to spread false information.

To address these concerns, several people have suggested embedding hidden watermarks in AI-generated text or searching for patterns unique to machine output.

In December 2022, TechCrunch reported that an OpenAI visiting researcher was working on a method to "statistically watermark the outputs of a text [AI system]."

Scott Aaronson, a computer science professor, explained that whenever a system such as ChatGPT generates text, the tool would include an "unnoticeable secret signal" revealing where the content originated.
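OpenAI has not published the details of that scheme, but a minimal sketch of one watermarking idea discussed in the research literature, a pseudorandom "green list" of tokens that the generator quietly favors, illustrates how a verifier holding a secret key could check for such a signal. The key, the green-list fraction, and the helper names below are illustrative assumptions, not OpenAI's implementation.

import hashlib
import math

SECRET_KEY = "shared-secret"   # illustrative; known only to the generator and the verifier
GREEN_FRACTION = 0.5           # share of the vocabulary marked "green" at each step

def is_green(prev_token: str, token: str) -> bool:
    # Pseudorandomly assign the token to the green list, seeded by the previous
    # token and the secret key, so the split looks like noise to anyone without the key.
    digest = hashlib.sha256(f"{SECRET_KEY}|{prev_token}|{token}".encode()).digest()
    return digest[0] < 256 * GREEN_FRACTION

def watermark_z_score(tokens: list[str]) -> float:
    # Count green tokens and compare with what chance alone would produce.
    # A watermarking generator biases its sampling toward green tokens, so a
    # large positive z-score suggests the text carries the hidden signal.
    n = len(tokens) - 1
    green = sum(is_green(prev, tok) for prev, tok in zip(tokens, tokens[1:]))
    expected = GREEN_FRACTION * n
    std = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return (green - expected) / std

# A z-score well above ~4 on a long passage would be strong evidence of the watermark.
print(watermark_z_score("the cat sat on the mat and looked around the room".split()))

Because the reader sees only ordinary words, the signal stays invisible; only someone holding the key can recount the green tokens and notice the statistical skew.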

In January 2023, OpenAI unveiled its own AI detector but acknowledged that it is "not fully reliable." According to the company's evaluations, the classifier correctly identifies only 26% of AI-written text as "likely AI-written," while incorrectly labeling 9% of human-written text as AI-written.

However, according to Soheil Feizi and his team at the University of Maryland, these techniques may not be as reliable as we thought. New Scientist reports that Feizi's team used AI-based tools to rewrite AI-generated text, both with and without watermarks, and fed the results to several text detectors. They found that the accuracy of most detectors dropped to around 50%, no better than a coin flip.
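The setup can be pictured as a simple before-and-after measurement. The sketch below is only an outline of that idea, with hypothetical detect_is_ai and paraphrase functions standing in for whichever detector and paraphrasing model are under test; it is not the team's actual code.

from typing import Callable, List, Optional

def detector_accuracy(ai_samples: List[str],
                      detect_is_ai: Callable[[str], bool],
                      paraphrase: Optional[Callable[[str], str]] = None) -> float:
    # Fraction of AI-written samples the detector still flags as AI-written,
    # optionally after each sample is rewritten by a paraphrasing model.
    if paraphrase is not None:
        ai_samples = [paraphrase(text) for text in ai_samples]
    flagged = sum(detect_is_ai(text) for text in ai_samples)
    return flagged / len(ai_samples)

# If accuracy falls from, say, 0.90 on raw outputs to about 0.50 after the
# paraphrasing pass, the detector is doing little better than guessing.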

The Next Best Solution

In a study titled "Can AI-Generated Text Be Reliably Detected?", Feizi and his colleagues used a mathematical argument known as an impossibility result to show that detecting AI-generated text becomes ever harder as models produce word distributions closer to human writing. This implies that detectors will either generate many false positives or miss much of the machine-written text, allowing it to slip through the cracks.
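The flavor of the result can be made concrete with a small calculation. As we read the paper, the best achievable AUROC of any detector (a standard detection score where 0.5 means random guessing) is capped by roughly 1/2 + TV - TV^2/2, where TV is the total variation distance between the AI and human text distributions. The sketch below plugs a few illustrative, made-up TV values into that cap to show it collapsing toward a coin flip as the distributions converge.

def auroc_upper_bound(tv: float) -> float:
    # Best achievable AUROC for any detector, given the total variation distance
    # between the AI text distribution and the human text distribution.
    return 0.5 + tv - tv ** 2 / 2

for tv in (1.0, 0.5, 0.2, 0.05):
    print(f"TV = {tv:.2f} -> best possible AUROC <= {auroc_upper_bound(tv):.3f}")

When the distributions are far apart (TV near 1), perfect detection is possible in principle; when they nearly overlap (TV near 0), even the ideal detector is barely better than chance.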


Feizi believes that even the best possible detector would not be effective enough for practical purposes, making it nearly impossible to reliably distinguish AI-generated from human-written text. As a result, it may be difficult to combat the problems that AI-generated text creates.

Instead of relying on AI detectors, New Scientist reports that Yulan He of King's College London suggests focusing on understanding and weighing the risks and benefits of generative AI models.

By doing so, people can mitigate the potential harm posed by generative AI models while capitalizing on their benefits.

Stay posted here at Tech Times.
