AI Chatbots Like ChatGPT Worry Schools, Anti-Cheating Software Firms

Schools and anti-cheat software companies brace for AI-backed fraud.

ChatGPT is a new artificial intelligence (AI) chatbot made by the Microsoft-funded startup OpenAI. More than just another machine learning product, the system is known for giving detailed, nearly human-like answers to submitted questions.

This large language model developed by OpenAI is intended to help users by providing information on a wide range of subjects and assisting in question-and-answer sessions.

AI Tool Poses Risks of Academic Fraud

The AI tool is now gaining traction with users from all across the world. In fact, the ChatGPT login page regularly informs users that so many people are using the tool at once that it cannot accept any more requests.

The prototype AI regularly experiences high traffic as OpenAI continues to scale its systems.

It is also important to note that the system may occasionally produce inappropriate or biased content and inaccurate or misleading information. It is also not meant to give advice.

But it won't be long before the tool becomes more advanced and reliable, particularly in the hands of middle schoolers with essay homework or college students writing a scientific paper. This is the reality: students will try to get AI to do their schoolwork for them. That poses a challenge for academic institutions as well as plagiarism software. Where do we draw the line with AI-generated text?

When asked how it keeps learning, the AI explains that it depends on OpenAI's updates and improvements, and that its ability to provide information is limited to its pre-existing knowledge base.

Turnitin vice president for artificial intelligence Eric Wang told Bloomberg that what ChatGPT presents is a giant leap forward, and that the people behind the plagiarism detection service were caught off guard by its capabilities.

Wang stated that for the time being, ChatGPT's answers should be easily identifiable by both teachers and Turnitin's software. The chatbot's output contains numerous factual errors, and its language model tends to generate linear sentences and choose broad, prominent words rather than the occasionally narrower vocabulary that a student would choose.

These early aberrations generate signals that Turnitin and other anti-plagiarism tools may be able to detect. But is that enough against an AI-powered chatbot from a Silicon Valley startup that has received $1 billion in funding from Microsoft Corp. alone?
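Wang's point about broad, repeated word choices hints at one crude lexical signal. As a toy illustration only, and not Turnitin's actual method, a type-token ratio measures how varied a text's vocabulary is:

```python
def type_token_ratio(text: str) -> float:
    """Fraction of distinct words in a text: a rough proxy for vocabulary
    breadth. Prose that leans on a small set of broad, common words
    scores lower than prose with a wider, more specific vocabulary."""
    words = text.lower().split()
    if not words:
        return 0.0
    return len(set(words)) / len(words)
```

Real detectors combine many such signals with trained models; a single ratio like this is far too noisy to flag anything on its own, but it illustrates the kind of statistical fingerprint the quoted experts describe.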

Looking at Possible Solutions

With fraud concerns on the rise, the creator of ChatGPT believes it is time to place a watermark on AI-generated text to ensure system safety.

According to TechCrunch, an OpenAI visiting researcher is working on a method to "statistically watermark the outputs of a text [AI system]." Scott Aaronson, a computer science professor, stated in a lecture that whenever a system such as ChatGPT generates text, the tool will include an "unnoticeable secret signal" revealing where the content came from.
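The general idea behind statistical watermarking can be sketched in a few lines. This is a simplified toy, not OpenAI's actual scheme: all names and parameters here are illustrative assumptions. A secret key biases which tokens get emitted, and anyone holding the key can later measure that bias:

```python
import hashlib
import hmac
import random

# Hypothetical key held only by the AI provider (illustrative value).
SECRET_KEY = b"provider-secret"

def keyed_score(prev_token: str, token: str) -> float:
    """Pseudorandom score in [0, 1) derived from a keyed hash of the
    (previous token, candidate token) pair."""
    msg = f"{prev_token}|{token}".encode()
    digest = hmac.new(SECRET_KEY, msg, hashlib.sha256).digest()
    return int.from_bytes(digest[:8], "big") / 2.0**64

def generate_watermarked(vocab, length, rng):
    """Toy 'model': at each step it samples a few candidate tokens and,
    as the watermark bias, emits the candidate with the highest score."""
    out, prev = [], "<s>"
    for _ in range(length):
        candidates = rng.sample(vocab, 4)
        choice = max(candidates, key=lambda t: keyed_score(prev, t))
        out.append(choice)
        prev = choice
    return out

def watermark_strength(tokens):
    """Mean keyed score over a text. Unwatermarked text averages about
    0.5; text generated with the bias above averages noticeably higher."""
    prev, total = "<s>", 0.0
    for t in tokens:
        total += keyed_score(prev, t)
        prev = t
    return total / len(tokens)
```

The signal is invisible to a reader, since any individual word choice looks plausible, yet over a few hundred tokens the key holder can distinguish watermarked output from ordinary writing with high statistical confidence. A real language model would apply the bias to its probability distribution over tokens rather than to a tiny candidate set.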

Another solution would be adding the context of current events to writing assignments or asking students to record themselves explaining what they wrote, said Annie Chechitelli, Turnitin's chief product officer.

Funnily enough, somewhere out there, there is also an AI bot for that.

ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.