Michael Cohen, the renowned former lawyer of former US President Donald Trump, is in the hot seat after it was revealed that he used Google Bard's content generation and passed along three fake court cases made up by the AI. Cohen pleaded guilty in 2018 to tax evasion and other charges and was sentenced to three years in prison, followed by a term of supervised release.

Former Trump Lawyer Michael Cohen Admits Using Bard

Michael Cohen (Photo: Spencer Platt/Getty Images)

A filing last Friday revealed that another legal team has allegedly fallen victim to AI hallucinations, this time involving Trump's infamous former lawyer, Michael Cohen. Cohen and his legal team reportedly presented a federal judge with citations to three cases that were made up entirely by Google's Bard.

US District Judge Jesse Furman wrote in an order that the cases were nonexistent, directing Cohen's lawyer, David Schwartz, to explain how the fake citations ended up in the filing. The document in question was Cohen's motion seeking early termination of his three-year supervised release.

Schwartz is also being asked whether Cohen helped him draft the motion.

Cohen then admitted to preparing his own version of the motion and submitting it for filing, stating that he had used Google's Bard under the impression that it was a "super-charged search engine" rather than an AI chatbot like ChatGPT.


Citing Three Fake Court Cases: Bard Hallucination?

The three fake court cases included in the motion have taken the spotlight in Cohen's criminal case, and his lawyer now faces potential sanctions for submitting misleading information. Cohen also said he did not know Schwartz would use his draft without bothering to check the content's legitimacy.

The filing did not explicitly blame a Bard hallucination for the court fiasco, but the episode resembles a case from earlier this year in which AI-fabricated information made its way into an official court document.

AI Hallucinations and the Lawyers Who Rely on AI

A similar case went before a court this past June, and while the underlying lawsuit was not about AI, it put a significant spotlight on the hallucination issue. Lawyers Steven A. Schwartz and Peter LoDuca of the law firm Levidow, Levidow & Oberman made headlines not for their skills, but for relying on ChatGPT to cite cases supporting their claims against Avianca, a Colombian airline.

That infamous case remains one of the most prominent examples of what industry experts call AI hallucination.

AI tools generally promise factual, careful, and truthful output, but there have been many cases where they served users fake or misleading information. For some users the errors were harmless; others who relied on the tools in high-stakes situations have been burned.

Initially, ChatGPT bore the brunt of the criticism over hallucinations, and many believe the problem has yet to see an industry-wide fix. Michael Cohen's case is among the most recent examples of AI misinformation, with the disbarred lawyer claiming that he mistook Bard for a search engine and that it offered him three fake cases.


Isaiah Richard

ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.