
AI in legal practice: Weighing benefits and risks

Artificial intelligence (AI) is here to stay. The question is how it should be used, writes Sergio Zanotti Stagliorio.

September 26, 2025 By Sergio Zanotti Stagliorio

An interesting approach was adopted in a US case (Mid Central Operating Engineers Health and Welfare Fund v Hoosiervac LLC (2025) U.S. Dist. LEXIS 31073, 8):

“[M]uch like a chainsaw or other useful but potentially dangerous tools, one must understand the tools they are using and use those tools with caution … [T]he use of artificial intelligence must be accompanied by the application of actual intelligence in its execution.”


In other words, AI can be “good” or “bad” depending on how, and for what purpose, it is used.

Examples of potential benefits of AI

Without being exhaustive, AI can benefit legal practice in the following ways:

  • Drafting emails.
  • Responding to inquiries via a chatbot on a website.
  • Drafting a chronology of facts/events from scattered, unstructured information.
  • Reviewing and summarising legal documents, such as contracts and wills.
  • Comparing images (including facial images) for identity and other kinds of matching.
  • Drafting contracts and other legal documents.
  • Drafting indexes of documents.
  • Summarising facts/events.
  • Summarising hearing/interview transcripts.
  • Finding relevant case law and legislation.
  • Writing advice letters.
  • Writing submissions.

Examples of potential risks

However, to borrow from Hoosiervac, that powerful “chainsaw” must be used “with caution”. Otherwise it becomes a dangerous tool, and problems such as the following might arise:

  • The so-called “hallucinations”, where AI generates what appears to be legitimate content (including case law citations and legislation), but which does not exist.
  • The AI platform training itself on confidential client materials fed into it, and then sharing such materials with subsequent users.
  • The AI platform providing answers/content in breach of copyright.

These problems might amount to breaches of ethical duties, such as the duties:

  • To deliver legal services competently, diligently and honestly: r 4 of the Solicitors’ Conduct Rules (Uniform Law); for instance, giving a court a “hallucinated” citation might amount to a lack of diligence, in that the citation was not checked by a human.
  • Not to deceive or knowingly or recklessly mislead the court: r 19.1; for instance, a hallucinated citation might be seen as misleading the court, and having AI draft legal documents such as affidavits might likewise be seen as misleading the court, as they are unlikely to reflect the facts as observed by the deponent.
  • To the court as the paramount duty: r 3.1; for instance, providing the court with non-existent legislation may be seen as not honouring that paramount duty.
  • To act in the client’s best interests: r 4.1.1; for instance, breaching a third party’s copyright on behalf of a client may well be seen as not in that client’s best interests.
  • Of confidentiality: r 9; for instance, allowing AI to train itself on clients’ confidential material without their permission is arguably a breach of confidentiality.

Similar rules apply to barristers.

Further, unethical (including reckless) use of AI may also breach court rules. For instance, section 37M(2)(b) of the Federal Court of Australia Act 1976 (Cth) (FCA Act) provides that the overarching purpose of the civil practice and procedure provisions includes “the efficient use of the judicial and administrative resources available for the purposes of the court”. Arguably, giving the Federal Court non-existent case law citations is likely, at the very least, to waste judicial and administrative resources in fruitless attempts to locate those cases.

And section 37N of the FCA Act allows the Federal Court to order a lawyer to bear costs personally for a failure to comply with those overarching duties, for instance the costs thrown away by opposing parties as a result of the citation of non-existent authorities.

Benefits versus risks of using AI

None of the above is to say that lawyers should discard any case law or legislation suggested by AI platforms as relevant to their cases. That would be the equivalent of throwing away the chainsaw. Rather, lawyers should carefully verify not only that the citations are real, but also that they are relevant to the case and the arguments being made. If so, why not use them? The same applies to other uses of AI, such as drafting contracts. The key is human verification.

Are new rules necessary to regulate the use of AI?

Arguably, the existing ethical and court rules already suffice to address the use of AI in legal practice. In other words, the potential problems arising from the use of AI seem to be adequately covered by applying existing rules to this new trend.

It is true that, if the existing rules entirely dissuaded unethical use of AI, there would be no cases of courts referring lawyers to the legal services commissioner (or equivalent bodies) for such conduct, as discussed below. But is that not equally true of ethical issues unrelated to AI? The problem, then, does not appear to be one of insufficient legislation and rules.

Rather, the problem appears to be connected to a lack of familiarity with how AI works and its inherent perils, combined with a willingness to trust AI platforms without checking their work. If so, the solution is not more legislation, but more education and awareness.

Court and tribunal guidelines on the use of AI

Some courts have taken steps to minimise the risk of lawyers blindly trusting AI platforms without doing their own due diligence. Whether or not the current legislation is sufficient to discourage unethical use of AI, those steps address the education and awareness referred to above. What follows is a discussion of the positions adopted by some Australian courts.

It appears that the High Court of Australia has not yet published any guidelines (or practice directions) on the use of AI.

The Federal Court of Australia has not published any guidelines so far either. However, on 29 April 2025, Chief Justice Mortimer published a notice to the profession, indicating that a consultation process would be undertaken on the use of AI, and inviting submissions by 13 June 2025. It appears that the outcome of that consultation has not been made public yet.

The Federal Circuit and Family Court of Australia has not published any guidelines so far either.

Turning now to a tribunal, the Administrative Review Tribunal has also not yet published any guidelines. However, it has made available the following documents/passages:

  • AI transparency statement, last updated on 26 February 2025, which includes: “The tribunal does not, and has no intention of, utilising AI services for the purposes of undertaking its review decision-making function exercised under the Administrative Review Tribunal Act 2024.”
  • ART Code of Conduct for Non-Judicial Members, of 14 October 2024, which includes:

“9.1 A Member must not use generative AI to obtain guidance on the outcome of a proceeding, to produce any part of the member’s reasons for a decision or to obtain any form of feedback or assistance on any part of the member’s reasons for a decision which the member has already prepared.

9.2 A member must not enter any tribunal information, data or records, including case or party data, emails, reports, chat logs, code and system errors, into any generative AI application.

9.3 Where a member uses generative AI as a general research tool without breaching the obligations in [9.1] and [9.2], the member must check any research generated by generative AI and verify its accuracy before relying on that research.”

Unlike federal courts, some state courts have already issued guidelines on the use of AI.

For instance, the Supreme Court of NSW issued Practice Note SC Gen 23 on 28 January 2025, which includes the following passages (modified emphasis):

“10. Gen AI must not be used in generating the content of affidavits, witness statements, character references or other material that is intended to reflect the deponent or witness’s evidence and/or opinion, or other material tendered in evidence or used in cross examination ...

13. An affidavit, witness statement or character reference must contain a disclosure that gen AI was not used in generating:

(a) its content (including by way of altering, embellishing, strengthening or diluting or rephrasing a witness’s evidence); …

16. Where gen AI has been used in the preparation of written submissions or summaries or skeletons of argument, the author must verify in the body of the submissions, summaries or skeleton, that all citations, legal and academic authority, and case law and legislative references:

(a) exist,

(b) are accurate, and

(c) are relevant to the proceedings,

and make similar verification in relation to references to evidence in written submissions … ”

In May 2024, the Supreme Court of Victoria issued the Guidelines for Litigants: Responsible Use of Artificial Intelligence in Litigation, which includes these passages (emphasis added):

“3. The use of AI programs by a party must not indirectly mislead another participant in the litigation process (including the court) as to the nature of any work undertaken or the content produced by that program. Ordinarily, parties and their practitioners should disclose to each other the assistance provided by AI programs to the legal task undertaken. Where appropriate (for example, where it is necessary to enable a proper understanding of the provenance of a document or the weight that can be placed upon its contents), the use of AI should be disclosed to other parties and the court.

4. The use of AI programs to assist in the completion of legal tasks must be subject to the obligations of legal practitioners in the conduct of litigation, including the obligation of candour to the court and, where applicable, to obligations imposed by the Civil Procedure Act 2010, by which practitioners and litigants represent that documents prepared and submissions made have a proper basis.

10. Particular caution needs to be exercised if generative AI tools are used to assist in the preparation of affidavit materials, witness statements or other documents created to represent the evidence or opinion of a witness. The relevant witness should ensure that documents are sworn/affirmed or finalised in a manner that reflects that person’s own knowledge and words …

12. AI is not presently used for decision-making nor used to develop or prepare reasons for decision because it does not engage in a reasoning process nor a process specific to the circumstance before the court.”

Case law on the use of AI by lawyers

There are hundreds of judgments worldwide dealing with unethical use of AI by lawyers. In Australia, there have been only a few cases, some of which are discussed below.

Dayal

In Dayal [2024] FedCFamC2F 1166, Judge A Humphreys summarised the case as follows (emphasis added):

1. ... The solicitor in question tendered to the court a list and summary of legal authorities that do not exist. The solicitor has informed the court the list and summary were prepared using an AI tool incorporated in the legal practice management software he subscribes to. The solicitor acknowledges he did not verify the accuracy of the information generated by the research tool before submitting it to the court.

At [15], the court said (emphasis added):

15. Importantly in the context of this matter, the guidelines issued by the Supreme Court and County Court of Victoria explain that generative AI and large language models create output that is not the product of reasoning and nor are they a legal research tool. Generative AI does not relieve the responsible legal practitioner of the need to exercise judgement and professional skill in reviewing the final product to be provided to the court.

At [17], the court discussed the relevant duties (emphasis added):

17. … the duties of Victorian solicitors include:

(a) The paramount duty to the court and to the administration of justice, which includes a specific duty not to deceive or knowingly or recklessly mislead the court;

(b) Other fundamental ethical duties, including to deliver legal services competently and diligently; and

(c) To not engage in conduct which is likely to diminish public confidence in the administration of justice or bring the legal profession into disrepute.

At [11], the court cited the following passage from the US District Court case of Mata v. Avianca Inc, 678 F.Supp.3d 443 (S.D.N.Y. 2023), with apparent approval (emphasis added):

Many harms flow from the submission of fake opinions. The opposing party wastes time and money in exposing the deception. The court’s time is taken from other important endeavours. The client may be deprived of arguments based on authentic judicial precedents. There is potential harm to the reputation of judges and courts whose names are falsely invoked as authors of the bogus opinions and to the reputation of a party attributed with fictional conduct. It promotes cynicism about the legal profession and the American judicial system. And a future litigant may be tempted to defy a judicial ruling by disingenuously claiming doubt about its authenticity.

The court referred the lawyer to the Victorian Legal Services Board and Commissioner.

Valu

In Valu v Minister for Immigration and Multicultural Affairs (No 2) [2025] FedCFamC2G 95, Judge Skaros anonymised the applicant’s legal representative as “ALR”.

The ALR had provided the court with written submissions containing citations to Federal Court and tribunal decisions which did not exist.

At [22], the court described how those cases found their way into the submissions:

22. … [The ALR] accessed the site known as ChatGPT, inserted some words and the site prepared a summary of cases for him. He said the summary read well, so he incorporated the authorities and references into his submissions without checking the details.

The court held (emphasis added):

18. The conduct of the ALR, in filing an application and submissions which contained citations to Federal Court of Australia cases which do not exist and alleged quotes from the tribunal’s decision which do not exist, falls short of the standard of competence and diligence that the applicant in the substantive proceedings was entitled to expect from his legal representative. The conduct also falls short of a legal practitioner’s duty to the court, including the duty to ensure that the court is not deceived or mislead, even if unintentionally: r 19.1 of the [Legal Profession Uniform Australian Solicitors’ Conduct Rules 2015 (NSW)].

37. There is a strong public interest in referring this conduct to the regulatory authority in NSW given the increased use of generative AI tools by legal practitioners. The use of generative AI in legal proceedings is a live and evolving issue. While the Supreme Court of NSW has issued guidelines around the use of generative AI, other courts, including this court, are yet to develop their guidelines. The court agrees with the minister that the misuse of generative AI is likely to be of increasing concern and that there is a public interest in the OLSC being made aware of such conduct as it arises.

As such, the court referred the ALR to the Office of the NSW Legal Services Commissioner.

JNE24

In JNE24 v Minister for Immigration and Citizenship [2025] FedCFamC2G 1314, Judge Gerrard anonymised the lawyer involved.

The applicant’s lawyer had given the court written submissions that cited some cases that did not exist or did not correlate to the relevant principle relied upon by the submissions.

The court said at [20] (emphasis added):

20. AI is increasingly being used as a research tool in litigation. Ultimately, it is likely to prove to be an invaluable tool for both lawyers and self-represented litigants. There is nothing inherently impermissible about using generative AI programs to assist in research. However, not only is it not an appropriate substitute for legal research, it comes with considerable risks which, if not mitigated, have the capacity to lead to actions which could be construed as a contempt of court (as considered in the UK case of Ayinde v London Borough of Haringey [2025] EWHC 1383 (Admin) at [66]-[69] (Ayinde)).

The court said at [24]-[25] (emphasis added):

24. What is common among the recent cases is the propensity of what has been referred to as AI hallucination ... There are now a concerning number of reported matters where reliance upon AI has directly led to the citation of fictitious cases in support of a legal principle. The dangers of such an approach are reasonably apparent but are worth stating. First, if discovered, there is the potential for a good case to be undermined by rank incompetence. Second, if undiscovered, there is the potential that the court may be embarrassed and the administration of justice risks being compromised. Relatedly, the repetition of such cases in reported cases in turn feeds the cycle, and the possibility of a tranche of cases relying upon a falsehood ensues. Further, the prevalence of this practice significantly wastes the time and resources of opposing parties and the court. Finally, there is damage to the reputation of the profession when the clients of practitioners can genuinely feel aggrieved that they have paid for professional legal representation but received only the benefit of an amateurish and perfunctory online search.

25. To be clear, it is not the initial reliance on AI that constitutes the vice in such matters. It is the placing before the court of false authorities or evidence that constitutes improper conduct and a breach of a legal practitioner’s duty to the court.

The court referred the lawyer to the Legal Practice Board of Western Australia.

It also ordered the lawyer to personally pay the first respondent’s costs ($8,371.30).

Luck

In Luck v Secretary, Services Australia [2025] FCAFC 26, the self-represented appellant cited a purported judgment that did not exist.

Justices Rofe, Hespe and Kennett held at [14] (emphasis added):

14. The case referred to in the first paragraph of this extract does not exist. The judgment with the medium neutral citation referred to is a completely different matter which did not involve Rofe J. We apprehend that the reference may be a product of hallucination by a large language model. We have therefore redacted the case name and citation so that the false information is not propagated further by artificial intelligence systems having access to these reasons.

There is a trend in Australian cases of courts declining to set out the false citations given to them. As Luck indicates, the intention is to avoid those false citations being propagated further by AI systems with access to the published reasons.

Conclusion

The use of AI in legal practice is neither inherently good nor bad. Its worth comes down to the purpose for which it is used, and how it is used. Used properly, it can be a formidable, fit-for-purpose chainsaw. Used without caution and human verification, it can cause harm.

Sergio Zanotti Stagliorio is a barrister and lecturer, and holds a bachelor’s degree in computer engineering.