Insurance risks for law firms utilising ChatGPT and AI chatbots
Two partners spoke to Lawyers Weekly about the insurance risks for law firms and in-house legal teams that come with utilising ChatGPT and other artificial intelligence chatbots to provide legal services.
Lander & Rogers partner Melissa Tan outlined some of the biggest insurance risks for law firms and in-house legal teams that arise from the unregulated use of text-based generative AI apps such as ChatGPT and other chatbots in the provision of legal services.
The potential for an increase in professional negligence claims
Ms Tan explained that there is potential for increases in professional negligence claims, with possible implications for renewals and the cost of insurance.
“ChatGPT is generally limited in its ability to answer legal queries as it has only been trained on data up until June 2021, and it is known to be vulnerable to ‘hallucinations’ where facts and sources are made up due to insufficient data,” she outlined.
“The ‘hallucination’ rate of ChatGPT has been estimated to be between 15 and 20 per cent, which is a relatively significant margin of error.”
“If the use of ChatGPT by lawyers is unregulated, the instances of inaccuracies and mistakes in the provision of legal services may increase, with a corresponding potential increase in negligence claims from clients,” stated Ms Tan.
“A high incidence of professional negligence claims may impact professional indemnity insurance renewals and the cost of such insurance.”
Ms Tan discussed the implications for in-house lawyers, commenting that they are in a unique position: their client is the company that employs them, and they may be considered an “officer” under the Corporations Act 2001 (Cth).
“Therefore, they often have the benefit of directors and officers (D&O) insurance and professional indemnity insurance (if they take that out), or an indemnity from the company, to manage the risk of exposure to personal liability,” she highlighted.
“A high incidence of third-party claims arising from the inaccuracies and misinformation that can arise from the unregulated use of ChatGPT may also impact a company’s willingness to indemnify, or have similar insurance risks and implications under the D&O and professional indemnity insurance policies.”
Rise in disciplinary matters
“Complaints of negligence against lawyers sometimes give rise to disciplinary matters when clients lodge complaints with the law society alleging a lawyer has breached solicitor conduct rules,” Ms Tan noted.
“Lawyers in Australia (both in-house and in private practice) are required to act in the best interests of clients, deliver legal services competently and diligently, and avoid any compromise to their integrity and professional independence.”
Ms Tan continued: “There is a risk that the unregulated use of ChatGPT may infringe on these obligations, which could result in complex coverage issues, including a potential gap in cover, particularly in relation to disciplinary matters and if a lawyer fails to disclose to the client where AI is used to assist in the provision of legal services.”
Potential for data and privacy breaches
Alec Christie, a partner in Clyde & Co’s privacy practice, weighed in on the privacy risks posed by the use of AI in law firms.
“These types of AI products ingest a lot of information, including what questions you ask (and all the details/information in those questions and the answers they give you), how you refine the question and any personal information and client confidential information you give them,” outlined Mr Christie.
“This all then becomes part of their ‘resources’ and ‘learning’, which means others will also have the benefit of this information (i.e. personal and/or client confidential information you were not entitled to disclose and should not have disclosed in the first place).”
Mr Christie highlighted that even the smallest disclosure of information is a risk due to the vast resources available to AI products and their function of continuous “learning” (collecting and joining the dots between information).
This learning means that, with the right questions, anyone else can surface the information you have disclosed and “join the dots” with other information to identify the client or individual in question, Mr Christie outlined, putting the firm in breach of privacy law and/or client confidentiality obligations.
“In addition to the scenario noted above where you incidentally disclose client personal or confidential information to use these products, your question(s) may also result in your firm collecting personal information it has no legal right to have,” he explained.
“It is more than likely that this ‘collection’ by your firm of personal or sensitive information breaches a number of Australian Privacy Principles (APPs) and possibly, in certain cases, court suppression orders.”
How to mitigate risks
Ms Tan commented on how firms can avoid such risks: “Law firms can stave off those insurance risks inherent in AI by having a robust AI use policy to regulate lawyers’ use of AI in their provision of legal services.”
“This also gives greater confidence to insurers and clients that there are measures in place to manage risk, and ensure that if ChatGPT or other chatbots are used in the process of providing legal services, it is disclosed to the client.”
“A robust policy regulating the use of AI by lawyers should ensure that the lawyer double-checks the accuracy of the output generated before using it to inform any work product and place strict limitations on the types of data or information that can be input into ChatGPT,” highlighted Ms Tan.
“Lawyers are generally trained to be able to identify confidential, privileged or sensitive information.
“If the lawyer knows that the information is or may be confidential, privileged or sensitive, there should be a strict policy that the lawyer should not input that information into ChatGPT.”