Lawyers Become Poster Children for Failure to Verify ChatGPT Information
When Everyone in the Legal World Knows Your Name
We are sure that New York lawyers Steven Schwartz and Peter LoDuca are not especially happy to have become famous by failing to vet the accuracy of ChatGPT, which made up cases and citations that became part of the brief they submitted to New York Federal Judge P. Kevin Castel.
The lawyers’ client, Roberto Mata, sued the airline Avianca, claiming he was injured when a metal serving cart struck his knee on a flight to Kennedy International Airport in 2019.
When Avianca requested that Judge Castel toss out the case, Mr. Mata’s lawyers objected, submitting a 10-page brief that cited more than half a dozen relevant court decisions, including Martinez v. Delta Air Lines, Zicherman v. Korean Air Lines and Varghese v. China Southern Airlines.
The airline’s lawyers and the judge were unable to find the referenced decisions or the quotations cited and summarized in the brief.
We know you can guess what happened. ChatGPT made it all up.
The “Schwartz Defense”
Schwartz, a lawyer with Levidow, Levidow & Oberman, threw himself on the mercy of the court, explaining in an affidavit that he had used the artificial intelligence program to do his legal research — “a source that has revealed itself to be unreliable.”
Mr. Schwartz told Judge Castel that he had no intent to deceive the court or the airline. He said that he had never used ChatGPT before and "therefore was unaware of the possibility that its content could be false." So much for the ethical duty of competence with technology.
Schwartz told Judge Castel that he had asked ChatGPT to verify that the cases were real, and it replied that they were.
We wondered how he could know that he needed verification yet not understand that the verification could not come from ChatGPT itself.
Ask a liar whether it’s telling the truth? Sounds like a bad idea to us . . .
Judge Castel said in an order that he had been presented with “an unprecedented circumstance,” a legal submission full of “bogus judicial decisions, with bogus quotes and bogus internal citations.” The judge set a hearing for June 8 to discuss potential sanctions.
Reports of the hearing made it sound excruciatingly painful for the lawyers. In fairness, lawyer Peter LoDuca, though his name appeared on the brief, did not conduct any of the research in question. Mr. LoDuca said in an affidavit that he had “no reason to doubt the sincerity” of Mr. Schwartz’s work or the authenticity of the opinions.
Mr. Schwartz said that he had used ChatGPT “to supplement” his own work and that, “in consultation” with it, found and cited the half-dozen nonexistent cases. He said ChatGPT had assured him that Varghese was a real case. He submitted a copy of the exchange with ChatGPT to the court.
He asked for a source and ChatGPT gave him a legal citation.
He asked the AI if other cases the chatbot had provided were fake.
ChatGPT replied, “No, the other cases I provided are real and can be found in reputable legal databases.”
Moral of the story: Never ask a liar if he’s telling the truth.
The Judge Sanctioned the Two Lawyers on June 22
The judge ordered the two attorneys and their law firm to pay a $5,000 fine in total. The attorneys and their law firm were ordered to notify each judge falsely identified as the author of the fake case rulings about the sanction. At least ChatGPT referenced real judges. The judge said he might have been more lenient if the lawyers hadn’t “continued to stand by the fake opinions after judicial orders called their existence into question.”
In his order, the judge wrote, “Technological advances are commonplace and there is nothing inherently improper about using a reliable artificial intelligence tool for assistance. But existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings.”
What ChatGPT Itself Advises
ChatGPT simply made things up. It is not alone in hallucinating; every generative AI we are aware of suffers from similar hallucinations. The authors spend considerable time warning lawyers attending our artificial intelligence webinars to validate everything that ChatGPT and other AI products produce. This story will now become a permanent and prominent part of the popular CLE we dubbed "The Rise of AI in the Legal Profession: Lawyers Brace for Impact."
Ironically, ChatGPT suggested that title.
With all the hoo-ha and so many articles written about this incident, we decided to have a chat with the AI itself. Curiously, Mr. Schwartz never thought to do something so logical.
We asked ChatGPT, “If a lawyer uses ChatGPT to write a brief, including citations, what resources should the lawyer use to validate that the cases are real and the citations are correct?”
The AI suggested using reliable legal databases such as Westlaw, LexisNexis or Bloomberg Law. The full answer is too long to include here, but there was more helpful advice as well.
We asked why Google Scholar was not included as a free alternative to paid legal databases.
We were impressed by its reply, which included an apology as well as practical steps for using Google Scholar to verify cases and citations.
We particularly liked the first part of ChatGPT’s extensive answer.
“Apologies for not including Google Scholar in my previous response. Google Scholar is indeed a valuable resource that lawyers can use to verify cases and citations, especially when they don’t have access to paid legal databases. Google Scholar provides access to a vast collection of legal opinions, court decisions, law journals, and scholarly articles. While it may not be as comprehensive or specialized as paid legal databases, it can still serve as a useful tool for lawyers on a limited budget.”
Nicely nuanced – paid legal databases are better, but for a solo or very small firm, it acknowledged that Google Scholar might be a reasonable resource. Well done. It even added, “Thank you for pointing out the importance of free resources, and I apologize for any oversight.”
Final Words
Other attorneys, including author Nelson, have had experience with ChatGPT citing non-existent cases, articles, books and invalid hyperlinks. As one leading law firm has advised its attorneys sternly, when using AI, “You must validate everything coming out of the system. You have to check everything.”
This highly publicized misadventure will certainly be a poster child for that advice.