In a new study, researchers warn that ChatGPT can forge convincing medical data, making it easier to publish fabricated research and casting doubt on the legitimacy of genuine scientific work.

(Photo : NICOLAS MAETERLINCK/BELGA MAG/AFP via Getty Images)
Illustration picture shows the ChatGPT artificial intelligence software, which generates human-like conversation, Friday 03 February 2023 in Lierde.

ChatGPT's Danger to the Medical Field

Despite the convenience ChatGPT has brought to many users, the technology can inevitably be misused. Researchers now warn that it will make publishing fraudulent research far easier.

According to a report from Cosmos Magazine, ChatGPT's fabricated medical data can pass as convincing, putting the scientific community, which relies heavily on data, at risk. Detecting such fabrications is also difficult because the work is entirely invented rather than copied from an existing source.

In a study published in Patterns, researchers tested the accuracy of free online AI detectors. ChatGPT was asked to generate an abstract for a scientific paper on the effects of two different drugs on rheumatoid arthritis, using data from 2012 to 2020.

Surprisingly, it produced a convincing abstract with realistic-looking figures, claiming that one drug worked better than the other, a dangerous assertion for a chatbot to make. The researchers stated that their work highlights the danger of fabricated research, along with its causes and potential remedies.


An even bigger problem arises when someone conducts a study using non-existent data generated entirely by the chatbot. The researchers stated that such fabricated results may easily bypass human detection and end up being published.

"Within one afternoon, one can find themselves with dozens of abstracts that can be submitted to various conferences for publication," they noted. Some users can also use this technology to write manuscripts with fabricated data and falsified results. Once these works are published, they will quickly pollute legitimate research and affect the industry of legitimate works. 

AI's Benefits to Healthcare

Despite the researchers' warning, Interesting Engineering reported that the technology can also be used in a positive manner. Artificial intelligence can support research through tasks such as grammar-checking and summarizing legitimate findings from studies, reducing the time and effort researchers spend on a given study.

As artificial intelligence emerges in sectors such as manufacturing and engineering, it can also help the healthcare industry by providing faster and better results, round-the-clock monitoring, and, with human supervision, the prevention of medical errors.

For people to use ChatGPT safely and correctly, the researchers concluded that the technology requires further testing and scrutiny to understand fraudulent data and its implications.

As per The Healthcare Technology Report, OpenAI's ChatGPT managed to pass a version of the United States Medical Licensing Examination. It is currently being used to capture patient information and draft insurance denial appeals.


Written by Inno Flores

ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.