A senior Australian barrister, Rishi Nathwani KC, apologised after defence submissions in a Supreme Court of Victoria murder trial were found to contain AI-generated fabricated quotes and nonexistent case citations. The errors, discovered by court staff, caused a roughly 24-hour delay; the defendant, a minor, was later found not guilty by reason of mental impairment. Justice James Elliott emphasised that any AI output must be independently and thoroughly verified and pointed to existing court guidance on lawyers' use of AI. The case mirrors previous international incidents in which AI hallucinations produced fake legal authorities.
Senior Australian Lawyer Apologises After AI-Generated Fabrications Filed In Murder Trial

A senior Australian barrister has apologised to a judge after court submissions in a murder trial were found to contain fabricated quotations and invented case citations produced by generative artificial intelligence.
The error occurred in the Supreme Court of Victoria and underscores growing concerns about the risks of unverified AI-generated material in legal practice worldwide.
Who Was Involved
Defence counsel Rishi Nathwani KC (King's Counsel) told Justice James Elliott that he accepted "full responsibility" for the inaccurate information filed in the trial of a teenager charged with murder, according to court documents seen by The Associated Press. Nathwani said on behalf of the defence team:
"We are deeply sorry and embarrassed for what occurred."
What Happened
The submissions contained fabricated quotes attributed to a speech to the state legislature and citations to Supreme Court decisions that do not exist. Elliott's associates discovered the problem when they could not locate the cited authorities and asked the defence to provide copies. Defence lawyers later admitted the citations "do not exist" and that the filing contained "fictitious quotes." The documents do not identify which generative AI system produced the false material.
The error caused a roughly 24-hour delay in proceedings that Justice Elliott had hoped to finish on Wednesday. On Thursday, Elliott found the defendant — who cannot be identified because he is a minor — not guilty of murder by reason of mental impairment.
Judicial Reaction and Guidance
Justice Elliott criticised the events as "unsatisfactory" and stressed the fundamental importance of the court's ability to rely on the accuracy of counsel's submissions. He reiterated Supreme Court guidance issued last year that states:
"It is not acceptable for artificial intelligence to be used unless the product of that use is independently and thoroughly verified."
The defence said it had checked some initial citations and then wrongly assumed the remaining references were accurate. The same submissions had been shared with prosecutor Daniel Porceddu, who did not independently verify them.
Context: Similar International Incidents
The Victoria incident echoes earlier cases overseas where AI hallucinations produced fictitious legal authorities. In the United States in 2023, a federal judge fined two lawyers and their firm $5,000 after ChatGPT was blamed for producing fake legal research in an aviation-injury claim; Judge P. Kevin Castel said the lawyers "acted in bad faith" but accepted their apologies and remedial steps. Later that year, legal filings in another U.S. matter included invented rulings attributed to AI-assisted research tools.
In the U.K., High Court judge Victoria Sharp warned that presenting false material as genuine could amount to contempt of court or, in the most serious cases, perverting the course of justice — an offence that can carry a maximum sentence of life imprisonment.
Other courts have also seen AI used in novel ways: in 2025, a New York defendant submitted a video that used an AI-generated avatar to make an argument, and families have used AI-generated videos to present victim impact statements. These developments have intensified calls for strict verification and clear professional rules governing lawyers' use of generative AI.
Implications
The Victoria case highlights the need for rigorous verification procedures whenever legal professionals rely on AI tools. Courts and legal regulators are increasingly emphasising that lawyers must independently confirm any authorities, quotes or factual material generated by AI before placing them before a judge.
The incident is likely to prompt renewed attention to professional standards, training and the implementation of practical checks to prevent AI-related errors from undermining confidence in the justice system.