Court documents released on Friday show that Michael Cohen, former President Donald J. Trump's onetime fixer, unwittingly passed along fake legal citations fabricated by artificial intelligence in a motion seeking to end his court supervision early (via The New York Times).

The revelation calls Cohen's credibility into further question; he pleaded guilty to campaign finance violations in 2018 and later served time in prison.

(Photo: Lionel Bonaventure/AFP via Getty Images) A screen displaying the logo of Bard AI, the conversational artificial intelligence application developed by Google, photographed in Toulouse, southwestern France, on July 18, 2023.

Cohen Enlisted Google Bard to Make Bogus Legal Citations

In a sworn declaration, Cohen admitted that he had used Google Bard, an artificial intelligence program, to generate legal citations that turned out to be fictitious.

These citations were then unknowingly incorporated into a motion filed by his lawyer, David M. Schwartz, before a federal judge in Manhattan. Cohen, who has completed his prison term but remains subject to release conditions, had asked to be relieved of ongoing court supervision early.

According to the filing, the central issue is Cohen's misunderstanding of what Google Bard is. He admitted to failing to keep up with "emerging trends (and related risks) in legal technology" and failing to recognize that Google Bard, like ChatGPT, could generate citations that appeared authentic but were in fact fictitious.

The Times notes that the potential fallout from this revelation is substantial, especially given Cohen's anticipated role as a key witness in the Manhattan criminal case against Donald Trump.

Trump's legal team has consistently portrayed Cohen as unreliable, and this incident offers fresh ammunition to further discredit him.


Lawyers Caught Tapping AI in Courts

Judge Jesse M. Furman, who is presiding over the case, issued an order on December 12 saying he could not locate any of the three decisions cited in Schwartz's motion.

In response, Schwartz is now required to furnish the court with copies of the decisions or offer a thorough explanation of how the motion came to cite cases that do not exist. The judge also warned that Schwartz could face sanctions for citing false cases to the court.

This is not an isolated incident: it marks the second time this year that lawyers in Manhattan federal court have cited non-existent legal decisions produced by artificial intelligence programs.

The use of AI, in this case Google Bard, echoes an episode earlier in the year in which attorney Steven A. Schwartz (no relation to David Schwartz) relied on ChatGPT-fabricated cases in a civil suit.

The episode raises broader concerns about lawyers relying on AI-generated content in legal proceedings and the potential for the technology's misuse.

While AI can aid legal research, the Cohen incident underscores the importance of vigilance and verification in the legal profession.

Meanwhile, in September, a senior UK Court of Appeal judge made history by using ChatGPT to summarize an area of law.

Lord Justice Birss, an expert in intellectual property law, described the AI-powered chatbot as "jolly useful" after it provided him with a good summary of a legal area he already knew well.

"I think what is of most interest is that you can ask these large language models to summarize information. It is useful and it will be used and I can tell you, I have used it," he told the press.

Stay posted here at Tech Times.


