
‘Hard rule’: AI cannot ethically exist without qualified lawyers

There is a growing list of cases, in Australia and beyond, in which AI-generated material has been placed before a court, pressing home the urgent need for legal practitioners to use the technology only if they have hard and fast rules in place.

August 13, 2025 By Naomi Neilson

Just last month, Melbourne law firm Massar Briggs Law was ordered to personally pay costs for submitting material that contained citations that were either incorrect or did not exist. Following an extensive review of the fabrications, a junior lawyer admitted to using generative artificial intelligence (GenAI) during the research stage.

In the matter of May v Costaras, before NSW Supreme Court’s Chief Justice Andrew Bell, it was quickly clear the respondent had used GenAI to prepare oral submissions, not least because many were “not intelligible and did not engage with the matters raised on appeal”.


Justice Bell said that, in addition to incorrectly referencing citations, there was a list of authorities that did not exist. The court was satisfied that the respondent had no understanding of what she was relying on.

“This has been and remains a serious issue: how does generative AI produce facially credible citations to non-existent cases, still less provide paragraph references to such cases? And if it does, what reliance can be placed on other legal references or propositions so produced?” Justice Bell said in the judgment’s written reasons.

Dame Victoria Sharp, president of the King’s Bench Division of the High Court of Justice, recently observed in Ayinde v The London Borough of Haringey that those who use artificial intelligence to conduct legal research “have a professional duty to check the accuracy of such research by reference to authoritative sources”.

The duty on lawyers to conduct their own research is no different from the responsibility of practitioners who rely on the work of a trainee solicitor to obtain information, Dame Victoria said.

“There are serious implications for the administration of justice and public confidence in the justice system if artificial intelligence is misused. In those circumstances, practical and effective measures must be taken by those within the legal profession with individual leadership responsibilities … and by those with the responsibility for regulating the provision of legal services,” Dame Victoria said.

In endorsing Dame Sharp’s observations, Chief Justice Bell said there was a need for judicial vigilance and the absolute necessity that practitioners who do use GenAI “verify that all references to legal and academic authority, case law and legislation are only to such material that exists, and that the references are accurate and relevant”.

Ryan Zahrai, founder and principal lawyer of Zed Law, said his firm does not shy away from GenAI – and has actually built the practice, and the services offered to clients, around the technology. While big on innovation and evolution, Zahrai said he is just as concerned with ensuring there are appropriate checks and balances in place.

“Generative tools like ChatGPT are powerful, but they’re far from infallible. When used responsibly, AI can streamline tasks, enhance research, reduce admin burden, and free up lawyers to focus on the high-value strategic work clients actually care about.

“But that requires robust internal processes, supervision and a hard rule: nothing goes to court, to a regulator, or to a client without being checked by a qualified lawyer,” Zahrai said.

Speaking to Lawyers Weekly, Zahrai said the risks are at their highest when lawyers use “off-the-shelf” tools and copy-paste their output, leaving them vulnerable to putting something before a client or a court that is incorrect or contains hallucinations.

“These models are great, but they don’t think or put words together in a stream that makes the most sense. If you don’t have a human verification layer, which is basically the expertise of a lawyer, then that’s when the risks arise. That’s when you have fake citations and all sorts of ethical and professional responsibility breaches,” he said.

Last December, when introducing the Supreme Court’s practice note on the use of GenAI, Chief Justice Bell made it abundantly clear that inaccuracy – or “laziness” – could not enter the profession: “It is not satisfactory, and it won’t be tolerated by the Supreme Court.”

“Judges are entitled to expect counsel and solicitor advocates to present cases that are relevant and in fact exist, so we’re making no apologies for that, and I don’t think anyone should think we should,” Chief Justice Bell said during a briefing to the profession.

Zahrai said he “very much” agreed with the Chief Justice.

“I think there is a lot of research that has gone out and there’s evidence that shows we rely on AI tools to formulate anything that requires prefrontal cortex thinking, and that indicates there is a significant reduction in brain activity when it comes to verifying that information or upgrading with it,” Zahrai said.

“Lawyers who might have a tendency of laziness – or who are otherwise not traditionally lazy, but are so time-burdened or time-constrained that they opt to rely on the tools – may rely on those without secondary checks and balances for the purpose of getting something into court or getting a defence out.”

Provided law firms and practitioners are taking care to review and confirm material produced by GenAI – for both their clients and their own professional and ethical responsibilities – then Australia could step into the lead and set a global standard, Zahrai said.

Naomi Neilson

Naomi Neilson is a senior journalist with a focus on court reporting for Lawyers Weekly. 

