
AI woes continue, and the US Food and Drug Administration (FDA) faces the same problem with "Elsa," its latest generative AI tool, which allegedly hallucinates information and misinterprets real studies.
The tool's flaws were exposed by none other than the FDA's own employees, who revealed how unreliable the technology is at reviewing drugs that are going through the approval process.
The employees said that double-checking the facts is a must, especially for anything Elsa has processed, even though the tool was touted as the agency's way to fast-track drug approvals.
FDA's Elsa AI 'Hallucinates' Studies, Employees Say
A new report from CNN reveals that current and former employees of the FDA have come forward to talk about issues with Elsa, the generative AI tool the agency launched last month. Three employees reported that Elsa has hallucinated studies and misinterpreted legitimate research, leading the technology to present fake or untrustworthy information.
One source claimed that Elsa "hallucinates confidently," a serious flaw given that the tool's entire purpose is to help the agency expedite drug reviews and approvals.
The FDA adopted the Elsa AI tool to fast-track its operations and bring more drugs to market for Americans, leveraging generative AI in an approval process that is usually lengthy.
Double-Check Facts, Don't Trust Elsa AI
Engadget reported that, per the CNN investigation, the unnamed employees consider information from the AI "unreliable" unless it has been double-checked, particularly because of its hallucination problems.
However, FDA Commissioner Marty Makary stated that he had not heard "those specific concerns" raised within the agency. Makary also noted that using Elsa and joining its training program remain voluntary at the FDA.
AI and Its Hallucination Issues
Since the dawn of AI, researchers and users have highlighted one underlying problem as among its main flaws: hallucination. Many have observed that, rather than come back empty-handed, AI models will make up news or information to give users.
That problem has not gone away over the years, and while it has lessened thanks to developments from various companies, it remains an issue the digital world faces to this day. Advocates, experts, and news publishers maintain that the public should still rely on human-written news and information when gathering facts or doing research.
Generative AI systems from OpenAI, Google, Apple, Perplexity, and others are still riddled with hallucination problems and could potentially confuse people with fake or made-up information.
These dangerous implications persist even as a growing number of agencies and institutions adopt machine learning to serve the public.