Contrary to all the press, proclamations and fanfare, there is a very good argument that we do not yet have true AI – at least not in the sense many people imagine, writes David Heasley.
What we currently have is known as narrow AI or weak AI: systems that are very good at performing specific tasks (like language translation, image recognition, or playing chess), often with superhuman speed or scale. But these systems lack general intelligence, self-awareness, understanding, or consciousness.
What people often mean by “true AI” is artificial general intelligence (AGI) – a hypothetical form of AI that can understand, learn, and apply knowledge across a wide range of tasks at a human level (or beyond). AGI would be able to reason, adapt to new situations without retraining, and perhaps even exhibit self-awareness or creativity in a way that’s not just mimicking patterns in data.
We are still some distance from that. While today’s AI can produce impressive results, it fundamentally remains a tool – not a mind in its own right.
Having said that, it is getting harder to detect AI-generated material. Since the mid-2020s, several large language models such as ChatGPT have passed modern, rigorous variants of the “Turing test”.
The Turing test, first referred to as the Imitation Game by Alan Turing in 1950, is designed to evaluate whether a machine can demonstrate behaviour that matches human intelligence. In the test, a human evaluator judges a text transcript of a natural language conversation between a human and a machine. The evaluator tries to identify the machine, and the machine passes if the evaluator cannot reliably tell them apart. If you want to see this “in action”, it is worth watching the movie of the same name.
Privacy considerations: AI is not a confidential environment
A primary concern in the integration of AI into legal practice is the presumption of privacy when using publicly accessible AI platforms. Unlike legal practitioners, these platforms are not subject to the Legal Profession Uniform Law or the Australian Privacy Principles. Entering client names, case details, or strategic information into generative AI tools – even for drafting assistance – may compromise the confidentiality of that data and place the practitioner in breach of those obligations.
Most AI providers retain the right, under their terms of service, to store and use input data for model improvement. Even where anonymisation is claimed, there is no assurance that such data will be securely isolated or deleted, or that it will not be leaked. For legal professionals, this presents significant ethical and reputational risks and may, in certain circumstances, constitute a breach of client confidentiality or data protection obligations.
Some AI tools, however, operate within “closed” ecosystems, meaning the data is not made publicly available and is hosted within Australia. These are almost always the “professional” or “paid” tools and are the safest to use.
Hallucinations and the risk of fabricated legal content
Generative AI models pose a unique risk in legal contexts due to their propensity to produce “hallucinations” – fabricated content presented with unwarranted confidence. This includes fictitious case law, statutes, and legal principles.
In 2023, a New York attorney submitted a court filing containing multiple non-existent citations generated by ChatGPT. Sanctions were imposed. There have been several Australian cases as well. Just recently, a Federal Court judge (Justice Bernard Murphy) directed a law firm to pay indemnity costs after a junior solicitor used AI to prepare incorrect court documents (his Honour was not “best pleased”).
As a result of this risk, Australian courts have responded with caution; some now require practitioners to certify that AI tools have not been used in the preparation of affidavits or legal submissions without appropriate human oversight. Disclosure and enforcement requirements vary from state to state.
These hallucinations arise because generative models operate by predicting language patterns based on statistical probability rather than legal accuracy. They do not possess an inherent understanding of legal truth. In a profession where precision is imperative, reliance on such tools without verification may result in serious professional consequences.
Victoria encourages disclosure of AI-assisted drafting where relevant. Courts stress the importance of provenance and transparency, especially where document credibility is key.
Queensland courts caution against using AI to prepare critical documents, especially for self-represented parties. Accuracy and transparency are essential, and ethical obligations apply regardless of formal disclosure rules.
The Federal Court has not yet mandated AI disclosure but has issued a Notice to the Profession stating that lawyers must meet existing obligations of honesty and diligence. If a judge or registrar specifically requires disclosure, it must be provided. Formal guidance is under development.
In NSW, affidavits, witness statements, character references, and similar materials must not be drafted using generative AI, nor may AI be used to alter, embellish, or rephrase a witness’s evidence. These documents must include a statement confirming that AI was not used in their preparation. Expert reports cannot be prepared with AI without prior leave of the court. If leave is granted, the expert must disclose which parts are AI-generated, identify the specific tool and version used, provide the prompts, and comply with expert witness codes.
The Supreme Court of Victoria has published Guidelines for Litigants: Responsible Use of Artificial Intelligence in Litigation, which provides clear directives on the appropriate use of AI. A particularly salient excerpt states:
“Generative AI and Large Language Models create output that is not the product of reasoning. Nor are they a legal research tool. They use probability to predict a given sequence of words. Output is determined by the information provided to it and is not presumed to be correct.”
These guidelines are highly recommended for all legal practitioners considering the use of AI in litigation.
IP implications: Ownership and infringement risks
The use of AI-generated content introduces complex questions regarding copyright and ownership. Under Australian law, copyright generally applies to original works authored by humans. Text generated by AI is likely to fall outside this framework, creating uncertainty regarding its protectability and enforceability.
Additionally, practitioners must remain vigilant against inadvertent infringement. AI models trained on large-scale internet data may reproduce segments of third-party content, including proprietary legal materials. If such outputs are incorporated into client-facing documents or commercial publications, firms may expose themselves or their clients to intellectual property liability.
Risk mitigation strategies for legal practitioners
To ensure responsible and compliant use of AI within legal practice, firms are advised to: avoid entering client names, case details or other confidential information into publicly accessible AI platforms; prefer closed, professionally hosted tools where data remains within Australia; verify all AI-generated citations, authorities and legal propositions against primary sources; comply with the disclosure and certification requirements of the relevant court; and maintain human oversight and review of all AI-assisted work.
Conclusion: A tool, not a substitute
AI is not inherently detrimental to legal practice. I use it as required, but carefully. When employed judiciously, it can enhance efficiency and support legal professionals in drafting and reviewing documents. However, it must remain subordinate to human judgement and skill.
Legal practitioners must engage with AI critically and ethically, ensuring that its use aligns with the profession’s enduring obligations of confidentiality, accuracy, and care. The future of legal practice will be shaped not by the presence of AI, but by the integrity with which it is applied.
As a footnote, I am still waiting for AI to tell me: “I’m sorry Dave, I’m afraid I can’t do that.” Like the astronaut in 2001: A Space Odyssey, I will then pay it great attention.
David Heasley is a special counsel at Salt Legal.