Lawyers in Alabama Face Legal Consequences for Incorporating Faux AI Citations in Court Filings
In a recent development, U.S. District Judge Anna Manasco has sanctioned and removed three private attorneys from a federal case for submitting court filings containing fabricated case citations generated by ChatGPT. The attorneys in question are Matthew Reeves, William Cranford, and William Lunsford of the Butler Snow law firm.
The attorneys were accused of relying heavily on AI-generated research, specifically from ChatGPT, without verifying its accuracy. This led to the submission of false filings containing hallucinations: entirely fabricated legal authorities that do not exist[1][3].
Judge Manasco, in her ruling, condemned the attorneys' conduct as reckless, stating that their failure to verify AI output amounted to bad faith and a serious breach of professional obligations[3]. The court’s order called out the extreme disregard for the duty of candor to the tribunal, and pointed to prior warnings about AI-generated hallucinations[3].
The case, which was brought by an inmate at Donaldson Correctional Facility, highlights the violation of Federal Rule of Civil Procedure 11, which mandates thorough verification of legal citations and court filings, irrespective of whether AI was used to generate them[2]. Failure to comply can result in monetary or other sanctions.
The incident underscores the growing recognition in federal courts that AI tools are unreliable without human verification, especially given AI’s propensity to invent false information ("hallucinations")[1][3]. Courts expect attorneys to exercise due diligence in scrutinizing any AI-generated research or citations, as reliance on unverified AI output risks undermining the integrity of the judicial process and may expose lawyers to sanctions, reputational harm, or disqualification from cases[2].
This incident signals a need for law firms and legal professionals to implement stronger internal controls and training around AI use, ensuring ethical standards and courtroom accuracy are upheld. The episode adds to the ongoing legal discourse on the responsibilities and limits of AI use in law, stressing that AI can be a tool but cannot replace lawyer accountability for the accuracy and honesty of court submissions.
Reporter Mary Claire Wooten noted that Judge Manasco criticized the Alabama Department of Corrections for continuing to retain the attorneys despite their misconduct. Lunsford, one of the attorneys, responded that the issue stemmed from Reeves using ChatGPT without verifying the results.
The ruling will be published in a federal legal journal, emphasizing the seriousness of fabricating legal authority and the need for strict adherence to ethical and professional standards in the use of AI in court filings.
References:
[1] https://www.al.com/news/2023/01/alabama-prison-lawsuit-attorneys-sanctioned-for-using-chatgpt-to-generate-fake-case-law.html
[2] https://www.law.com/legaltechnews/2023/01/26/alabama-attorneys-sanctioned-for-using-ai-to-generate-fake-court-filings/
[3] https://www.reuters.com/legal/government/alabama-attorneys-sanctioned-using-ai-chatgpt-generate-fake-court-filings-2023-01-26/