Every dystopian science fiction story starts with the idea that humans will be overrun by robots. Movies and books of the '90s imagined this happening to us in the 2000s. And decades later, we find ourselves with robots in almost every field.

We also have artificial intelligence (AI), sparking fears that it will replace humans and displace them from their jobs. AI is only going to become more advanced from here, especially now that an AI chatbot has garnered millions of users in less than a week.

Lo and behold, ChatGPT - the next big thing, potentially as disruptive and transformative as the Internet. It may well inspire the next wave of dystopian fiction.

ChatGPT is a massive language model developed by OpenAI, built on its GPT ("Generative Pre-trained Transformer") family of models. It is structured on a deep-learning architecture known as a transformer, enabling the tool to provide human-like responses to natural language prompts.
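To give a rough sense of what the transformer architecture does, here is a toy sketch of its core operation, self-attention, using NumPy. The dimensions, random embeddings, and single attention step are illustrative assumptions; real models stack many such layers with learned projection matrices.

```python
import numpy as np

# Toy self-attention: each word's vector becomes a weighted mix of
# every word's vector, so surrounding context shapes each word's meaning.
np.random.seed(0)
words = 4   # sentence length (illustrative)
dim = 8     # embedding size (real models use thousands)

x = np.random.randn(words, dim)      # stand-in word embeddings
scores = x @ x.T / np.sqrt(dim)      # how strongly each word attends to the others
weights = np.exp(scores)
weights /= weights.sum(axis=1, keepdims=True)  # softmax over each row
context = weights @ x                # context-aware representations

print(context.shape)                 # same shape as the input, now context-mixed
```

Each row of `weights` sums to 1, so every output vector is a proper weighted average of the inputs; stacking this operation is what lets a transformer relate words across an entire prompt.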

This tool is like a personal assistant or secretary that will heed almost every request you make - except fetching coffee. It can help you with your research, compose a song or a poem, and even listen to your relationship problems.

These qualities were all made possible by language models developed for natural language processing. Such models help a machine grasp the nuances and quirks of natural language text, enable speech recognition by predicting the next word in a spoken sentence, summarize answers to queries, and gauge the sentiment or tone of a text.

This is why ChatGPT feels so human-like - scary and fascinating at the same time. But to understand how this chatbot came to be, we must trace its roots back to the earliest forms of language models.

Brief History of Language Models

Researchers started creating rule-based systems to evaluate and produce language in the 1950s and 1960s. These programs could produce sentences that followed predetermined rules since they were based on formal grammar. However, they still had trouble dealing with the ambiguity and complexity of natural language.

In the 1980s and 1990s, large amounts of data in the form of text corpora were employed to model natural language more realistically. Statistical language models began to surface at this time, using probability theory to estimate the likelihood of a word or phrase appearing in a particular context.
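The idea behind those statistical models can be sketched in a few lines. The tiny corpus below is made up for illustration; a real model would be trained on millions of sentences, but the counting principle is the same.

```python
from collections import Counter, defaultdict

# Toy corpus (illustrative); real statistical language models are
# estimated from massive text collections.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count how often each word follows each preceding word (bigrams).
follows = defaultdict(Counter)
for prev, word in zip(corpus, corpus[1:]):
    follows[prev][word] += 1

def next_word_probability(prev, word):
    """Estimate P(word | prev) from bigram counts."""
    total = sum(follows[prev].values())
    return follows[prev][word] / total

# "the" is followed by cat/mat/dog/rug once each, so each gets 0.25
print(next_word_probability("the", "cat"))
```

From counts like these, a model can rank candidate next words, which is exactly the "predict the next word" capability that speech recognition and autocomplete systems built on.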

Researchers then started investigating neural network-based models in the early 2000s, which use layers of artificial neurons to learn patterns in linguistic input. As data and processing capacity expanded, these models became more powerful, and they are now the standard method for language modeling.

Language models had come a long way, and all roads may have led to a human-like chatbot such as ChatGPT. Though it may seem like an overnight success, it was years in the making, starting in 2015.


ChatGPT's Timeline

ChatGPT Timeline (Image: Tech Times)

December 11, 2015: The Birth of OpenAI

OpenAI was officially established on December 11, 2015, by several tech luminaries such as Elon Musk, Sam Altman, Greg Brockman, Ilya Sutskever, John Schulman, and Wojciech Zaremba, among others. 

The company's co-founders were worried about the possible dangers of highly developed AI, including the loss of jobs, disruption of the economy, and even existential threats to civilization. Hence, they founded OpenAI.

In short, the company was built on the premise that the future is AI and that, handled responsibly, there is nothing to fear. But business is business; from then on, AI was an attractive market waiting to be tapped.

June 11, 2018: Laying the Groundwork with GPT-1

The first GPT (Generative Pre-trained Transformer) language model, GPT-1, was unveiled by OpenAI on June 11, 2018.

GPT-1 represented a substantial advancement in the field of natural language processing at the time, boasting 117 million parameters and the capacity to produce coherent, contextually relevant responses to a given prompt. It ultimately laid the groundwork for ChatGPT's success.

It is also worth noting that Elon Musk stepped down from the company on February 21, 2018, due to conflicts of interest since he was also the CEO of several companies, such as SpaceX and Tesla.

February 14, 2019: GPT's Second Installment

GPT-2 was introduced on Valentine's Day in 2019. Due to the model's extraordinary size and language-generating skills, it attracted significant attention from the AI community and the media. It was the largest and most powerful language model at the time of its introduction, with 1.5 billion parameters, a sign that such models would only grow larger in the years to come.

July 22, 2019: Microsoft-OpenAI Partnership Announced

Microsoft has been working with OpenAI since the two businesses announced their partnership to create new AI technology in 2019. As part of the collaboration, Microsoft contributed $1 billion to OpenAI and committed to working with the firm to create fresh AI tools and services.

Since then, Microsoft has taken a leading role in assisting OpenAI's research and development initiatives, particularly those connected to GPT and ChatGPT. 

This partnership was crucial in expanding the resources available to OpenAI. Microsoft has a specialized hardware and cloud computing infrastructure and boasts expertise in machine learning. 

With a boost from Big Tech, it was clear that GPT was on its way to disruption.

June 11, 2020: The Largest and Most Powerful GPT

GPT-3 was released by OpenAI on June 11, 2020. With 175 billion parameters, GPT-3 was the largest and most potent language model ever created, marking a significant turning point in the field of natural language processing. 
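A quick back-of-envelope calculation shows why a 175-billion-parameter model was such a leap. The 2-bytes-per-parameter figure assumes 16-bit floating-point weights, a common but not universal choice, so treat the result as a rough estimate.

```python
# Rough storage needed just to hold GPT-3's weights in memory.
# Assumes 16-bit (2-byte) floating-point parameters; real deployments vary.
parameters = 175_000_000_000
bytes_per_parameter = 2  # fp16 assumption

total_bytes = parameters * bytes_per_parameter
total_gb = total_bytes / 1e9
print(f"{total_gb:.0f} GB")  # roughly 350 GB of weights alone
```

That footprint far exceeds any single consumer GPU, which is one reason running models of this scale required the kind of specialized cloud infrastructure Microsoft brought to the partnership.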

November 30, 2022: ChatGPT is Born!

ChatGPT started making waves upon its introduction in November of last year. It employs OpenAI's GPT-3.5 language technology, a model trained with a vast collection of text data from various sources. The bot can respond to follow-up questions and even acknowledge its errors.

But most impressively, it can understand dozens of languages, including various dialects. It can also generate contextually relevant responses thanks to the vast amount of data it was trained on.

However, ChatGPT is not guaranteed to be proficient in all of these languages, since that depends on the quantity and quality of the text data it was fed. This goes to show that ChatGPT's capacities are finite; it is not as omniscient and all-knowing as we might assume.

Responses to this viral chatbot have been diverse so far. Microsoft co-founder Bill Gates said he is excited about the potential of ChatGPT and hailed it as the "most important innovation" right now.

It is also challenging Google's dominance in the search engine field, as Microsoft has integrated AI capabilities into Bing, enabling it to respond to user prompts and requests.

ChatGPT is shaking Google and is causing quite a stir in the tech scene.

ChatGPT's Tech Solutionism Tendencies

While I concur that this tool is the next big thing after the Internet and will only advance from here, a looming danger must not be neglected: this viral chatbot may encourage "tech solutionism."

Tech solutionism is the idea that technology by itself is enough to solve all of our problems. The term was coined by Evgeny Morozov, a Belarusian-American author, researcher, and journalist, in his book "To Save Everything, Click Here: The Folly of Technological Solutionism."

Morozov argues that technological solutions neglect the complex social, political, and economic factors that underlie some of the most pressing issues we face. Although ChatGPT's large language model promises context-relevant responses, it remains limited, hinging on the data it was fed. And that will never be enough.

For one, its training data has a cutoff in 2021. This means it has no access to current, real-time information, and it cannot replicate the work of journalists in disseminating relevant and up-to-date information.

ChatGPT also boasts extensive knowledge of the arts - it can critique artwork, write poems, and compose music. However, it can only replicate and recombine; it will never capture the romantic ardor of Shakespeare's writing or Taylor Swift's deeply personal songwriting, since it revolves around data, not imagination or inspiration.

AI, in general, has tech-solutionist tendencies when humans become overdependent on it for solutions. I still believe a future where robots control humans is pure fiction, but such a scenario becomes more plausible if we overlook AI's flaws and errors and misapply it to our daily lives.

In a philosophical sense, technologies are only as good as the meanings we ascribe to them and how they translate these meanings. 

ChatGPT will only be as good as the attention and clamor that it receives. It will only be as good as the meanings ascribed since it is a language model, not a human brain.

OpenAI Tested Its Technology at a Dota 2 Esports Event

Did you know that OpenAI Five defeated Dota 2 world champions OG at an esports event? The aim of fielding Five at the event was to demonstrate the capabilities of the innovation and showcase the potential of OpenAI's technology in the field of AI. The company retired the program after it crushed the esports champions.



ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.