In a lawsuit against Colombian airline Avianca, two lawyers reportedly used ChatGPT to research cases involving aviation mishaps attributed to the company. The lawyers later apologized to an angry judge in Manhattan federal court after citing fictitious cases generated entirely by the AI.

The chatbot fooled both lawyers, who believed the information it produced was real. It turned out they had failed to fact-check their research for the case, which concerned an injury allegedly sustained on a 2019 flight.


Lawyers Cited ChatGPT's Bogus Case Law

Lawyers Steven A. Schwartz and Peter LoDuca of the law firm Levidow, Levidow & Oberman have been making headlines for mistakenly presenting ChatGPT-generated information to the court. According to ABC News, the two now face possible sanctions for including references to past court cases that did not exist, and this alone could end their careers.

As many as six of the cited cases appear to be bogus judicial decisions with fake quotes and false internal citations. The judge noticed the problem because details in the brief did not match any records on file.

The airline's lawyers also wrote to the judge, saying they could not locate some of the cases referenced in the brief. It then emerged that the plaintiff's lawyers had turned to ChatGPT, believing it would surface information they could not find through standard legal research.

Read Also: Texas Federal Judge Implements Measures to Prevent AI-Generated Arguments in Court

Is ChatGPT to Blame for These Lawyers' Mistakes?

The original case involved a man suing the airline over an alleged personal injury. His attorneys submitted a brief citing previous court cases in an attempt to show, through precedent, why the case should move forward. The lawyers have since pleaded with the judge not to penalize them for using ChatGPT.

According to BBC News, Schwartz said that LoDuca had not taken part in the legal research and did not know how it had been carried out. Schwartz added that he regretted relying on the AI chatbot and vowed never to use it for legal research again.

On Thursday, the firm's lawyers were ordered to explain at a June 8 hearing why they should not be disciplined. The legal team maintained that the episode resulted from Schwartz's "carelessness" and was not done in bad faith.

ChatGPT and What It Brings the World

OpenAI's ChatGPT is one of the most iconic AI developments in the world. Since its launch in November 2022, many users have been hooked, amazed at its ability to deliver vast amounts of information in seconds.

The company has already upgraded ChatGPT's underlying large language model with GPT-4, claiming the next-generation model delivers significantly better capabilities than GPT-3. Even so, experts and professionals have repeatedly warned against relying on the chatbot, particularly because it can serve users incorrect data or information.

Even though the chatbot itself warns that it is still under development, some users are so satisfied with what it produces that they skip fact-checking and further research on their end.

ChatGPT is a powerful tool that can generate content for users in seconds. However, it is important to note that most of its information is drawn from the internet, and its output should always be verified.

Related Article: ChatGPT is Not Omnipotent: Things You Should Not Do with OpenAI's Chatbot

Isaiah Richard

ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.