
Mike Lindell’s Attorneys Used AI for Brief; Judge Finds Nearly 30 Errors

Photo credit: arstechnica.com

Legal Missteps in MyPillow CEO’s Defamation Case

A recent development in the ongoing defamation case involving MyPillow CEO Mike Lindell has raised significant concerns about the use of artificial intelligence in legal documents. A lawyer for Lindell acknowledged that AI was used to draft a brief containing nearly 30 flawed citations, including misquotes and references to non-existent cases, according to a federal judge.

US District Judge Nina Wang pointed out in her order to show cause that the brief contained numerous defects, including misquoted cases, misstatements of legal principles, and citations to cases that do not exist in the courts they purportedly come from. “The Court identified nearly thirty defective citations in the Opposition,” Judge Wang stated.

In light of these findings, Judge Wang has ordered attorneys Christopher Kachouroff and Jennifer DeMaster to clarify why they should not face sanctions. The court is also considering whether these attorneys should be referred for disciplinary measures due to potential violations of professional conduct standards.

Kachouroff and DeMaster are representing Lindell against a lawsuit initiated by Eric Coomer, a former employee of Dominion Voting Systems. Both attorneys signed a February 25 brief that included the contested citations. In an April 21 hearing, Kachouroff, who serves as lead counsel, confirmed that generative AI was indeed employed for the brief, as reported by Judge Wang.

The judge noted ongoing difficulties in obtaining explanations from Kachouroff regarding the inaccuracies in the citations. “Time and time again, when Mr. Kachouroff was asked for an explanation of why citations to legal authorities were inaccurate, he declined to offer any explanation,” Wang detailed. It was not until she specifically inquired about the use of AI in the brief that Kachouroff acknowledged its involvement, underscoring concerns about the reliability of texts produced with such technology.

Source
arstechnica.com
