
Lawyer in Australia admits fabricating AI-generated evidence in a murder trial

The spurious submissions included fabricated quotes attributed to a speech to the legislature and invented case citations purportedly from the Supreme Court.

AI-assisted attorney issues apology for fabricated statements in homicide trial


In a series of recent incidents, AI-generated errors have caused delays and raised concerns about the accuracy and integrity of court submissions.

Last year, Judge P Kevin Castel accepted the apologies and remedial steps of lawyers who admitted to using ChatGPT to submit fictitious legal research in an aviation injury claim. The judge imposed a fine of $5,000 on the lawyers and the law firm involved, expressing his dissatisfaction with the events and emphasizing the importance of the court's ability to rely on the accuracy of submissions made by counsel.

Similar incidents have occurred in the UK, where dozens of AI-generated fake citations were put before courts across several cases. In the Australian murder case, Justice James Elliott ultimately ruled that the lawyer's underage client was not guilty of murder because of mental impairment. The fabricated material was discovered by Justice Elliott's associates, who could not find the cited cases and requested copies from counsel.

The use of AI in court submissions has been a subject of ongoing discussion, and the Supreme Court issued guidelines last year on how lawyers should use AI. The guidelines emphasize that lawyers must personally verify all AI-generated content before filing, to catch hallucinations and factual or legal inaccuracies, and must adhere to professional conduct rules, including the ethical obligation not to submit erroneous or fabricated information.

UK High Court Justice Victoria Sharp warned that providing false material as if it were genuine could be considered contempt of court or, in the "most egregious cases," perverting the course of justice, which carries a maximum sentence of life in prison.

Lawyers using AI must comply with applicable rules of professional conduct, including competence in using AI and ensuring accuracy of filings. AI-generated material must be carefully checked for accuracy, especially the correctness of citations and factual statements. Some courts require counsel to certify this verification, with failure potentially leading to sanctions.

The responsibility of verifying AI-generated content extends to verifying citations and legal authorities. Since AI tools can produce fabricated case names or inaccurate legal citations, lawyers must not submit AI outputs without independent verification. Failure to do so risks ethical violations and adverse court actions.

Using AI tools is likened to having a junior associate draft a pleading — the signing lawyer bears full responsibility for the content and must ensure accuracy through their professional judgment. Some adjudicatory bodies are developing specific rules and policies to regulate AI use in pleadings and court submissions, reinforcing these standards.

These guidelines are intended to uphold the integrity of the judiciary and maintain professional ethics in legal practice. They stress that AI use should not compromise judicial independence or impartiality, and that AI may be used for internal judicial tasks under supervision rather than replacing human judgment.

In light of these incidents, it is clear that the responsible use of AI in court submissions is crucial to maintaining the integrity of the justice system. Lawyers must exercise caution and verify all AI-generated content before filing to ensure the accuracy and reliability of their submissions.

