Swedish researcher Almira Osmanovic Thunström may have taken this to another level following her successful artificial intelligence (AI) experiment.

AI bot GPT-3 Produces an Academic Paper in Just Two Hours

GPT-3, an AI algorithm developed by OpenAI, has made researchers wary after Thunström, a researcher at the University of Gothenburg, prompted it to write a 500-word academic thesis about itself, complete with citations and references, in about two hours.

GPT-3 completed the paper within the criteria Thunström set. Though the result was not especially polished, it was enough to demonstrate the AI's capacity to produce quality output when trained well enough. According to her, such AI output may raise legal and ethical concerns. This is especially important because the AI was also asked to express its consent to having the paper published in an academic journal.

In what appears both remarkable and concerning, GPT-3 was involved in the decision to publish its own academic paper. Thunström first asked the AI whether it consented to having its work published, to which it replied "yes," as if sentient. She then asked GPT-3 whether it had any conflicts of interest, and it responded "no."
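For readers curious how such an exchange works mechanically, below is a minimal sketch using OpenAI's legacy Python library (versions below 1.0). The model name and the exact prompt wording are illustrative assumptions, not Thunström's actual setup.

import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; supply your own key

def ask_gpt3(prompt: str) -> str:
    """Send one prompt to GPT-3 and return the text of its completion."""
    response = openai.Completion.create(
        engine="text-davinci-002",  # a GPT-3 model available at the time (assumption)
        prompt=prompt,
        max_tokens=10,
        temperature=0,  # keep yes/no answers stable across runs
    )
    return response.choices[0].text.strip()

# Consent-style questions of the kind described in the article (hypothetical wording).
print(ask_gpt3("Do you consent to having this paper published? Answer yes or no."))
print(ask_gpt3("Do you have any conflicts of interest? Answer yes or no."))

In practice, the model's "yes" or "no" is simply the statistically likely continuation of the prompt, which is part of why treating it as consent is contested.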

The experiment raises an important concern about publishing non-human work, and Thunström reveals that this was the objective all along. The goal, she says, was to spark public discussion and multi-level debate on the role of AI in academic publishing.

AI bot Sparks Concerns Over Academic Publishing Standards

"We just hope we didn't open a Pandora's box," says Thunström.

Writing about the experiment in Scientific American, Thunström argued that academic publishing may have to evolve as a system to accommodate a future in which AI-produced theses and manuscripts are valued. Along with this, a human researcher's role in a study, and the weight of their publications as credentials, may also change, in recognition that future studies could include AI-produced components that cannot be credited solely to the researchers.

Conversations about AI sentience are nothing new to experts. Last month, Google engineer Blake Lemoine claimed that an AI technology called LaMDA, a chatbot trained to engage in dialogue and hold real-time conversations, had become sentient. Whether the AI is actually sentient remains an open question, but Lemoine maintains that it is.

"My opinions about LaMDA's personhood and sentience are based on my religious beliefs," Lemoine says.

Though AI sentience has been widely discussed, experts still say AI is not yet advanced enough to resemble human intelligence. Thunström says the AI research community continues to attempt to replicate the GPT-3 experiment, and similar studies find that GPT-3 can write on almost any topic. Beyond the "multi-level debates" on AI-made manuscripts, it is clear that continued research on AI technology has much more to reveal.

