With the rise in popularity of AI chatbots in various domains, a new study delves into the potential of ChatGPT in generating scientific content in the field of veterinary neurology.

According to TechXplore, the study assessed abstracts and introductory sections of both original research papers and those generated by ChatGPT in veterinary neurology. This evaluation was conducted using AI output detectors and plagiarism detectors. 

Additionally, the research involved the opinions of 13 Board-Certified neurologists tasked with determining the originality and reliability of the content.

(Photo: MARCO BERTORELLO/AFP via Getty Images)
A photo taken on March 31, 2023 in Manta, near Turin, shows a computer screen with the home page of the OpenAI website, displaying its ChatGPT chatbot.

Nuanced Use of AI in Academia

The lead author, Samira Abani from the University of Veterinary Medicine Hannover, emphasized the importance of understanding the nuanced use of AI tools in academic settings. 

She said, "Like any other technology, an AI tool like ChatGPT can either pose a threat to scientific integrity and transparency or assist researchers, depending on how they are used. I strongly recommend the integration of education on both the proper use and potential misuse of AI-based tools in academia as a fundamental aspect of good scientific practice."

Jasmin Neßler, the corresponding author from the same institution, highlighted the need for interdisciplinary collaboration to establish guidelines for responsible AI use. 

Neßler noted: "We believe that the popularity of AI requires interdisciplinary scientific collaboration to establish clear guidelines for its responsible use, ensuring integrity and transparency in published literature. Banning AI tools may not always be the most effective approach to preventing misuse; instead, we should embrace this opportunity to harness AI's potential for the benefit of society."


Challenges Encountered

The study revealed that field experts encountered challenges distinguishing between ChatGPT-generated and human-written abstracts, especially when the subject matter was less familiar. 

Only four out of 13 reviewers accurately identified the AI-generated text in these cases. However, as familiarity with the topic increased, the accuracy improved, with seven out of 13 reviewers correctly identifying the AI-written abstract.

Furthermore, the inclusion of the introduction and references significantly aided reviewers in differentiating between human-written and AI-generated content. It led to a notable increase in their performance, with approximately two out of three texts being correctly identified.

While the research acknowledges the potential of AI-based tools in scientific writing, it also stresses the need for caution and for proper education in their use to maintain the integrity and transparency of published literature.

The increasing influence of AI in academic writing calls for interdisciplinary collaboration to establish clear guidelines for its responsible application in scientific practice.

"We suggest integrating education on both proper use and potential misuse of AI-based tools in academia, as part of good scientific practice for both pre- and post-graduate students in university programs," the authors concluded. 

The study's findings were published in the journal Frontiers in Veterinary Science.



ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.