
Judge rips lawyers for submitting AI-generated blunders in murder trial

Lawyers representing a boy accused of murder have come under fire from a Melbourne judge for submitting AI-generated documents riddled with errors and misleading information.

August 19, 2025 | By Grace Robbie

A Melbourne Supreme Court judge has slammed lawyers representing a boy accused of murder for filing misleading documents that were generated by artificial intelligence – and left unchecked.

Justice James Elliott expressed his frustration, telling the court that the use of AI is unacceptable when lawyers fail to thoroughly verify the information they submit.


“It is not acceptable for AI to be used unless the product of that use is independently and thoroughly verified,” Justice Elliott told the Supreme Court in Melbourne, as reported by 9News.

The problematic AI-generated documents were submitted during proceedings involving a 16-year-old boy who was found not guilty, because of mental impairment, of the murder of a 41-year-old woman in Abbotsford in April 2023.

According to 9News, the court heard that both the prosecution and defence, as well as two psychiatrists, agreed the boy was experiencing severe schizophrenic delusions at the time of the killing. The boy’s identity cannot be legally revealed.

The boy’s lawyers, including senior barrister Rishi Nathwani KC and junior barrister Amelia Beech, failed to properly check the submissions before filing them.

The documents were riddled with errors – including references to non-existent case citations and inaccurate parliamentary quotes – and were also unsigned by both barristers and solicitors.

The flawed documents were also shared with prosecutors, who did not fully verify the contents before creating their own submissions based on the errors.

Even after apologising and re-filing corrected documents, the defence’s revised submissions still contained further inaccuracies, including references to non-existent legislation and appeals.

“Revised submissions were not reviewed by either side ... and referred to legislation that did not exist, an act that was appealed that never occurred,” the judge said as reported by 9News.

Justice Elliott stressed that “the manner in which these events have unfolded is unsatisfactory”, emphasising that the court’s ability to rely on submissions is “fundamental to the administration of justice”.

Appearing before the court, Nathwani accepted full responsibility for filing submissions that contained fabricated quotes and non-existent case judgments generated by AI.

“We are deeply sorry and embarrassed for what occurred,” he told Justice Elliott on behalf of the defence team, as reported by ABC News.

This incident adds to a growing list of cases in Australia and abroad in which flawed AI-generated material has been submitted to courts.

Just last month, Melbourne law firm Massar Briggs Law was ordered to personally pay costs after filing documents containing citations that were incorrect or non-existent.

Such incidents have prompted calls within the legal profession for stricter safeguards.

NSW Chief Justice Andrew Bell has highlighted the need for judicial vigilance, stressing that practitioners who use generative AI must “verify that all references to legal and academic authority, case law and legislation are only to such material that exists, and that the references are accurate and relevant”.
