The prominence of AI models like ChatGPT has sparked various discussions, from their potential to replace web searches to concerns about job displacement and existential threats. 

Despite these perceptions of artificial intelligence (AI), such models heavily rely on human input, according to John P. Nelson, a postdoctoral research fellow studying the ethics and societal implications of artificial intelligence at the Georgia Institute of Technology.


Abilities of AI 

Nelson argued that AI models, including ChatGPT, lack the ability to learn, evolve, or stay current without human intervention. They require constant human engagement for content generation, interpretation, programming, and even hardware maintenance. 

These AI models, though complex, cannot independently generate new knowledge and are fundamentally intertwined with human knowledge and effort.

The operation of large language models like ChatGPT hinges on predicting sequences of characters, words, and sentences based on extensive training datasets. 

These datasets, such as the one used for ChatGPT, comprise vast amounts of public text from the internet. However, their output can be skewed towards frequent sequences in the training data, potentially leading to inaccuracies.
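This frequency-driven prediction can be illustrated with a deliberately simplified sketch. The toy bigram counter below is an assumption for illustration only, standing in for the neural network a real model uses; it shows how always choosing the most common continuation skews output toward frequent sequences in the training text.

```python
from collections import Counter, defaultdict

# Toy illustration only -- not how ChatGPT works internally.
# A bigram model predicts the next word by picking whichever word
# most often followed the current word in its training text.
def train_bigrams(text):
    words = text.split()
    follows = defaultdict(Counter)
    for cur, nxt in zip(words, words[1:]):
        follows[cur][nxt] += 1
    return follows

def predict_next(follows, word):
    # Always picks the most frequent continuation, so output is
    # skewed toward common sequences in the training data.
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

model = train_bigrams("the cat sat on the mat the cat ran")
print(predict_next(model, "the"))  # "cat" followed "the" twice, "mat" once
```

Because "cat" follows "the" more often than "mat" does in the toy corpus, the model always predicts "cat", however misleading that may be in context, mirroring the inaccuracies Nelson describes.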

"ChatGPT can't learn, improve or even stay up to date without humans giving it new content and telling it how to interpret that content, not to mention programming the model and building, maintaining and powering its hardware," Nelson wrote in an article published in The Conversation.

The process of feedback is essential in shaping the behavior of these models. Users can rate responses as good or bad, influencing the model's learning process. Nelson noted that AI models like ChatGPT lack the ability to compare, analyze, or evaluate information on their own.

They can only produce text sequences similar to those humans have previously used when making comparisons, analyses, or evaluations. 
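A minimal sketch can make the feedback loop concrete. Production systems use reinforcement learning from human feedback, which is far more involved; in this assumed toy version, thumbs-up and thumbs-down ratings simply re-weight candidate responses.

```python
# Toy sketch of human feedback shaping which answer a model prefers.
# Real systems use reinforcement learning from human feedback (RLHF);
# here, ratings simply re-weight stored candidate responses.
scores = {"helpful answer": 0.0, "misleading answer": 0.0}

def rate(response, good):
    # A thumbs-up raises a response's score; a thumbs-down lowers it.
    scores[response] += 1.0 if good else -1.0

def best_response():
    # The model "prefers" whichever response humans rated highest.
    return max(scores, key=scores.get)

rate("helpful answer", good=True)
rate("misleading answer", good=False)
print(best_response())  # "helpful answer"
```

The point of the sketch is that the judgment of quality lives entirely in the human ratings; the system itself never evaluates anything.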


AI's Human Labor

The AI model's seemingly "intelligent" responses are, in reality, the result of extensive human labor that instructs the model on what constitutes a good answer. Many hidden human workers contribute to improving the model and expanding its content coverage. 

Nelson cited a Time magazine investigation that revealed Kenyan workers were employed to label inappropriate content to train ChatGPT, highlighting the reliance on human input.

Nelson highlighted the importance of feedback in addressing ChatGPT's tendency to provide inaccurate answers, a phenomenon known as "hallucination."

The model requires training on specific topics and heavily depends on human-generated feedback. It cannot evaluate news accuracy, assess arguments, or make informed judgments about topics. 

"In short, far from being the harbingers of totally independent AI, large language models illustrate the total dependence of many AI systems, not only on their designers and maintainers but on their users," Nelson noted. 

"So if ChatGPT gives you a good or useful answer about something, remember to thank the thousands or millions of hidden people who wrote the words it crunched and who taught it what were good and bad answers," he added.



ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.