Two lawyers and their law firm have been ordered to pay a combined $5,000 fine by a federal judge after submitting fake legal research in an aviation injury claim and blaming the AI chatbot ChatGPT for the error, according to a report by AP.

Judge P. Kevin Castel characterized their actions as having been taken in bad faith but acknowledged their apologies and efforts to rectify the situation. He stated that while it is acceptable to use reliable artificial intelligence tools for assistance, attorneys remain responsible for ensuring the accuracy of their filings.

The judge criticized the lawyers and their firm, Levidow, Levidow & Oberman, P.C., for submitting non-existent judicial opinions with fabricated quotes and citations generated by ChatGPT. 

Furthermore, he noted that the lawyers continued to stand by the fake opinions even after judicial orders called their authenticity into question.

(Photo: JACK GUEZ/AFP via Getty Images) The logo of US artificial intelligence company OpenAI is pictured during a talk by its co-founders at the campus of Tel Aviv University in Tel Aviv on June 5, 2023.

Lawyers Respond to the Ruling

In response to the ruling, the law firm stated that it would comply with the order but disagreed with the finding of bad faith. 

They maintained that they made a good-faith mistake in failing to recognize that the technology could generate fictitious cases. The firm is considering an appeal.

The lawyers involved, Steven A. Schwartz and Peter LoDuca, drew attention for relying on ChatGPT-generated information in their court filings. The underlying case involved a man suing an airline over a personal injury, and the attorneys cited earlier court decisions as precedent to argue that the case should proceed.

The lawyers pleaded with the judge not to penalize them for using ChatGPT.

Schwartz clarified that LoDuca was not involved in the research and was unaware of how it had been conducted. Schwartz expressed regret for relying on the AI chatbot and vowed never to use it for legal research again.

It is worth noting that OpenAI, the organization behind ChatGPT, displays a disclaimer on the chatbot's interface acknowledging that it may generate incorrect information. Additionally, ChatGPT's training data extends only to September 2021, meaning it lacks knowledge of events and developments beyond that point.


"Hallucinatory Tendencies"

ChatGPT has been reported to have "hallucinatory tendencies," meaning it can generate plausible-sounding but incorrect information. In fact, a radio host has sued OpenAI after the chatbot allegedly provided false and defamatory information about a legal case.

Mark Walters filed his complaint in the Superior Court of Gwinnett County, Georgia.

As per the allegations, on May 4, 2023, Fred Riehl, a journalist and ChatGPT subscriber, queried the chatbot about the lawsuit. Riehl shared a URL to the complaint hosted on the Second Amendment Foundation's website and asked for a summary of the accusations.

However, ChatGPT reportedly responded with a fabricated summary that falsely accused Walters of defrauding and embezzling funds from the foundation.

According to the complaint, the summary also misrepresented Walters' role and actions within the organization.

