AI: The beginning of the end for lawyers?
Irrespective of whether ChatGPT and other artificial intelligence language models have usefulness for businesses in Australia, it would be a bold, and potentially negligent, move to rely exclusively on AI-generated research or information for important and strategic company decisions, writes Lisa Fitzgerald.
The next generation of artificial intelligence (AI) has arrived, and it is making waves.
Eclipsing Apple’s Siri, Amazon’s Alexa and Google’s Google Assistant, the latest artificial intelligence (AI) models have demonstrated the ability not only to source helpful information but also to generate unique works.
Among them is ChatGPT, a language-model AI tool created by OpenAI, with interest centred on its ability to ostensibly out-perform humans when it comes to reviewing vast amounts of data at high speed and producing fluent content in response to very limited requests or inputs.
It is being hailed as a game changer in language models that some believe could threaten the billion-dollar legal profession.
In fact, an AI tool called DoNotPay has even been proposed for use as a legal assistant in court proceedings, prompting self-represented litigants during their cases. The ultimate objective of DoNotPay is to capture a share of legal fees from lawyers by putting the power of knowledge directly in the hands of individuals.
Recent developments in AI have prompted the question: does AI signal the end of the legal profession? Has a lawyer’s value been usurped?
A recent law-firm study tested ChatGPT’s accuracy in response to 50 legal questions. Ranking answers from one (poor) to five (good), independent assessors, on average, scored ChatGPT’s responses at 2.3 out of five, considering responses to range from “OK but not necessarily good” to “generally poor” in most cases. The overall assessment from this study was that the answers were “not impressive for a human lawyer” but impressive for generalised AI.
The transformative potential of new technology can be mesmerising. Look no further than our blind enthusiasm for cryptocurrency, which came crashing down in 2022–23. So, what, if anything, should we be aware of in the context of applying language-model AI to legal problems?
There are at least three considerations for businesses from a legal perspective:
Accuracy is accuracy, not relativity
Artificial intelligence learns from data. If that data is erroneous, incomplete, or out of date, accuracy is compromised.
While ChatGPT has numerous applications that can save users time, it (currently) has limited ability to account for inaccuracies arising from flawed inputs, incorrect information, or questions that call for a more nuanced interpretation. In a practical sense, if information that is “not quite right” could trigger a large penalty, damage a reputation, or land a director in prison, then the utility of AI must be weighed against its potential to expose users to, or fail to protect them from, liability.
This margin for error highlights the importance of weighing the risks of early adoption of new technologies to solve longstanding business problems. Whether seeking “advice” about retaining data, reporting a data breach, self-classifying an organisation as critical infrastructure, paying a cyber security ransom, or using or modifying someone else’s intellectual property, current limitations — including geographical and jurisdictional ones — mean the use of AI in some contexts is likely to present unacceptable risks for most businesses.
In fact, the type and quality of advice obtained is a key consideration for regulators and judges when assessing whether there has been a breach of directors’ duties. This was the case with RI Advice, which was found to have failed to obtain adequate specialist cyber forensic advice and assistance in the wake of a cyber security breach. By analogy, a regulator or judge is unlikely to be forgiving if a chatbot was the sole or dominant source of advice behind serious company decisions.
It is also critical to note that the current public beta version of ChatGPT is based on data up until 2021, which means it excludes important changes to Australian privacy law enacted in 2022.
Efficiency is not reliability
In business especially, there is a tendency to conflate efficiency with reliability, or even elevate efficiency above all else. However, when it comes to legal advice, lawyers must ensure reliability.
While legal advice can be presented in various ways, the required analysis is the same. An under-analysed or negligent piece of advice cannot be relied upon. Unlike outputs provided by AI, when lawyers give advice, a client benefits from the diligence, care and skill required of lawyers and is afforded protections under the lawyer’s professional indemnity insurance.
While obtaining legal advice may not be as efficient as a dialogue with a chatbot, both in terms of time and cost, legal advice from a lawyer does come with an insurance policy of reliability and accountability.
Knowledge is power, but (legal) privilege is more powerful
It is widely accepted that knowledge, based on accurate information, is empowering. Democracy and freedom of speech principles, for example, support the free flow of information with few, but important, exceptions, such as information about national security and an individual’s personal information.
In common law jurisdictions like Australia, there is a further reason for limiting access to certain information. Under the doctrine of legal professional privilege, communications with a lawyer made for the dominant purpose of seeking legal advice or preparing for litigation are protected and remain confidential.
A disadvantage of obtaining information or substitute legal advice from a chatbot is that the matter is not protected by legal professional privilege.
Irrespective of whether ChatGPT and other artificial intelligence language models have usefulness for businesses in Australia, it would be a bold, and potentially negligent, move to rely exclusively on AI-generated research or information for important and strategic company decisions.
Despite the exciting and valuable development of generative AI, the benefits and relevance of reliability, professional indemnity insurance and privilege — hallmarks of the legal profession — are likely to endure for some time to come.
Lisa Fitzgerald is a Melbourne-based partner at Lander & Rogers.