ChatGPT is no stranger to headlines, having made a name for itself over the course of the last several months. The artificial intelligence, built on OpenAI's GPT-3.5 family of large language models and fine-tuned with reinforcement learning from human feedback, can answer practically any question thrown at it in a dialogue format.

To the vast majority of its users and the general public, ChatGPT is a newfound cultural icon. For students, it's a free pass on graded work, as seen in recent news reports of several Stanford students admitting to using the bot on final exams. For Microsoft specifically, it's a $10 billion investment, one that could advance its plans for Bing and create a far more powerful search engine to rival Google.

To Arvind Narayanan, a Princeton computer science professor, ChatGPT is nothing more than a "bullshit generator," one that churns out inconsistent and often incorrect content, which is now being pushed beyond the field of education and into journalism. In a month-old newsletter post titled "AI Snake Oil," Narayanan explains how AI's "inability to discern truth" is one of its major drawbacks, a problem that has seemingly gone over the heads of many of its users.

"ChatGPT is shockingly good at sounding convincing on any conceivable topic. But OpenAI is clear that there is no source of truth during training," explains Narayanan in the introduction. "That means that using ChatGPT in its current form would be a bad idea for applications like education or answering health questions. Even though the bot often gives excellent answers, sometimes it fails badly. And it's always convincing, so it's hard to tell the difference." 

Even OpenAI itself admits that ChatGPT is no god or intellectual guru, explaining on its help page that the bot "is not connected to the internet, and it can occasionally produce incorrect answers." The firm highlights how ChatGPT has limited knowledge of events beyond 2021 and "may also occasionally produce harmful instructions or biased content." 

Narayanan expanded on these ideas in an interview with The Markup, pointing to CNET's use of AI in over 75 news articles, several of which were later found to contain multiple errors. The lack of disclosure from CNET about its use of the tool is arguably harmful to readers, but the more fundamental issue is a growing over-reliance on AI for workflows and tasks that, as of yet, humans are still the most capable of performing.

"This was not a case of malice, but this is the kind of danger that we should be more worried about where people are turning to it because of the practical constraints they face," Narayanan tells The Markup. "When you combine that with the fact that the tool doesn't have a good notion of truth, it's a recipe for disaster." 

Despite the "intelligence" moniker, AI is still artificial in nature. Narayanan's statements go hand in hand with similar ideas put forth by The Atlantic's Ian Bogost, who likens ChatGPT to a toy. In his article, Bogost explains how reliance on ChatGPT is not only hindered by the bot's limited insight and depth but may also "lead to a loss of genuine human connection." To make matters more complicated, Bogost didn't write that line; ChatGPT did, writing as Bogost.

Leveraging ChatGPT for everyday work raises many ethical questions, the most prominent being authenticity. For that very reason, it's hard to see the bot stealing our jobs in the near future, and Narayanan agrees. It will take some time before such a prospect gains real weight; the idea of language models serving as journalists, educators, or even physicians is hard to take seriously at their current level. As such, it's best to follow in Bogost's footsteps and see ChatGPT, as well as similar AI, as nothing more than a plaything.

For now, that is.
