Texas Federal Judge Brantley Starr has implemented measures to prevent the use of AI-generated arguments in court, according to a report by TechCrunch.

In response to a recent incident in which an attorney used a language model to supplement his legal research, resulting in fabricated cases and precedents being presented to the court, Judge Starr has introduced a new requirement for attorneys appearing before him.

Under the requirement, attorneys must attest that no part of their filing was drafted by generative artificial intelligence (AI), or, if AI was involved, that its output was thoroughly reviewed and verified by a human.

The move is a precautionary measure meant to prevent a repeat of such incidents and to maintain the integrity of courtroom proceedings.


No AI Chatbots in Court

The decision was made at the federal court for the Northern District of Texas, where judges have the authority to establish specific rules for their own courtrooms.

The newly added rule, called the "Mandatory Certification Regarding Generative Artificial Intelligence," aims to ensure transparency and accountability in legal filings.

According to the certification requirement, attorneys are obligated to file a certificate stating that their submission does not contain any content generated by AI tools such as ChatGPT, Harvey.AI, or Google Bard.

Alternatively, if AI assistance was employed, the attorney must affirm that the AI-generated language was checked for accuracy by a human using trusted sources such as print reporters or traditional legal databases.

The appended form for attorneys to sign explicitly covers various aspects, including "quotations, citations, paraphrased assertions, and legal analysis."

Although AI excels at summarization and retrieving precedents, large language models can make mistakes and produce inaccurate information, a limitation that OpenAI explicitly acknowledges for ChatGPT as well.


AI's Hallucinations and Biases

The memorandum accompanying Judge Starr's decision provides a comprehensive explanation for its necessity. It highlights the immense power and versatility of AI platforms in numerous legal applications, such as generating forms, assisting in discovery requests, identifying potential errors in documents, and predicting questions during oral arguments.

However, the memorandum emphasizes that legal briefing is not a suitable use for current AI systems due to inherent limitations. 

The memo raises concerns about AI's propensity for hallucinations and biases. It points out that AI models can fabricate information, including quotes and citations.

Additionally, while attorneys are bound by an oath to uphold the law and faithfully represent their clients, AI systems are developed by people who are not subject to any such obligation.

Judge Starr's certification requirement is a significant step toward ensuring the accountability and accuracy of legal filings. While the measure is limited to a single judge in a single court, it may encourage other judges to follow suit and adopt similar rules.

