Conversations with AI have no legal privilege – but what about lawyers using legal-specific AI? With the concept of “AI privilege” still in its infancy, practitioners have been warned that using public AI tools could risk breaching client confidentiality, and that firms need to implement key security policies around the use of AI.
As the use of AI and generative AI (GenAI) becomes more common in legal workplaces, concerns have been raised that AI systems continue to go largely unregulated.
In the absence of government regulation around AI, conversations with ChatGPT carry no legal privilege – meaning any information entered into the platform could be used as evidence in potential court proceedings.
Speaking recently on an episode of This Past Weekend with Theo Von, OpenAI CEO Sam Altman confirmed that public GenAI chatbots such as ChatGPT offer no legal confidentiality.
“People talk about the most personal sh-t in their lives to ChatGPT,” he said.
“… Right now, if you talk to a therapist or a lawyer or a doctor about those problems, there’s legal privilege for it. There’s doctor-patient confidentiality, there’s legal confidentiality, whatever. And we haven’t figured that out yet for when you talk to ChatGPT.”
As such, the concept of “AI privilege” has emerged: the idea that conversations with AI should be protected in the same way as those with a lawyer or doctor.
The founder of legal tech company JurisTechne, Mona Chiha, said that foreign-built or US-based systems introduce “cross-jurisdictional complexity” and that firms must adopt “explainable, privacy-compliant, and jurisdictionally sound systems” in order for AI privilege to become “a defensible, ethical extension of traditional legal privilege”.
“As law firms increasingly deploy AI to support legal analysis, the concept of AI privilege is fast becoming critical. Under Australia’s legal framework, privilege protects confidential communications between lawyer and client, but when those interactions involve AI systems, this protection depends on governance, jurisdiction, and data handling,” she said.
“AI tools should be assessed against the NSW AI Assurance Framework, which mandates governance, transparency, risk assessment, and continuous oversight for any AI influencing human or institutional decision making.”
The Law Society of NSW also offers numerous resources to help lawyers approach AI in legal practice – and, in a joint Statement on the Use of Artificial Intelligence in Australian Legal Practice, emphasised the importance of practitioners understanding AI and its limitations, “not only because solicitors may use AI themselves but also because their clients may be using AI, seeking advice on how to lawfully use AI, or adversely affected by a third party’s use of AI”.
“Whatever form of AI lawyers use in legal practice, maintaining client confidentiality, providing independent advice, and being honest and delivering legal services competently and diligently are necessary. All are required under the Legal Profession Uniform Law Australian Solicitors’ Conduct Rules 2015,” a spokesperson for the Law Society of NSW told Lawyers Weekly.
A recent practice note from the Supreme Court of NSW also introduces detailed limitations on the use of AI in proceedings before the court, stating that “legal practitioners and unrepresented parties should be aware of limits, risks and shortcomings of any particular GenAI program which they use”.
The use of GenAI in affidavits, witness statements, and other evidentiary documents has been banned in the court from the start of this year’s first law term.
Speaking about the new practice note, introduced late last year, Chief Justice Andrew Bell said the ban would apply to both public and closed large language models – even where firms have invested in building their own – as the risks and limitations of the emerging technology remain too great at this stage.
“People can get excited by the technology … but it is not a substitute for people who are given the title of lawyer and are admitted to practice, bringing their own independent mind to what they do and their own moral commitment to abide by the undertakings they make when they are admitted to practice,” Chief Justice Bell said at the time.
Practitioners are, however, permitted to upload some material to a closed GenAI program, provided specified conditions are met.
Last month, Chief Justice Bell said some of his concerns around the use of AI in courts had since increased, following news that a Victorian lawyer had become the first in Australia to face professional sanctions after failing to verify citations generated by AI-assisted legal software.
The Law Council of Australia (LCA) has also recently made a submission to the Federal Court of Australia supporting the development of a practice note on the use of GenAI.
The practice note, LCA president Juliana Warner said, should alert clients and lawyers to the “risk of breach of client confidentiality, and risk of inadvertently waiving client legal privilege, if documents or case facts are uploaded to – or shared with – GenAI tools, particularly open-source tools”.
“Many law practices have integrated AI (including generative AI) into their operations at varying levels. This ranges from ‘off-the-shelf’ AI tools, through to sophisticated bespoke platforms,” she said.
“We are aware some of the large commercial law firms have developed, or are developing, internal GenAI programs. These programs can operate as an internal ChatGPT or chatbot that employees can interact with, instead of using a search function in an intranet page. Other models may be operating as sophisticated knowledge managers that are able to craft and tailor precedents.
“Whether using a third-party platform or a closed research database, any reliance on AI tools must be managed carefully and in line with a lawyer’s ethical and professional obligations. One limitation is that obligations concerning privilege and confidentiality must not be undercut.”
Internal policies key as AI privilege ‘unlikely’
While AI privilege would protect communications with AI systems, this concept is very much in its “infancy”, according to Hicksons | Hunt & Hunt partner David Fischl and associate and digital lead Elias Dehsabzi.
“Lawyers’ key concern when using AI systems such as ChatGPT is ensuring that client information is kept confidential and privilege is not lost. Uploading client information to a public or general-use platform through a standard login is not secure and can risk waiving privilege. These systems may store the prompts that lawyers ask, retain sensitive data for model training, or expose information to third-party access beyond the firm’s control,” the pair said.
“Since privilege can be lost through voluntary disclosure or even inadvertent disclosure to an external system – lawyers must treat all external AI interfaces as a potential newspaper headline unless the provider gives explicit, enforceable confidentiality assurances.”
Hicksons has a “dual” approach to AI systems, with part of the firm’s AI infrastructure operating within a private environment and the remainder running “under agreements with leading hyperscalers ensuring all data remains confidential, onshore, and inaccessible” to anyone outside the firm.
“Firms should continue adopting AI because of its enormous benefits, but do so with the appropriate safety measures. The safest approach is to deploy AI models that operate within a private environment or a contract-controlled cloud environment,” Fischl and Dehsabzi said.
In adopting AI, firms should also stick to approved, private deployments with “zero-retention controls”, Automatise (Cicero AI) CEO Joseph Rayment agreed.
He said legal-specific systems can fall within privilege when operated in a firm-controlled environment – with encryption, strict access control, detailed audit trails and no vendor training on client data – and when lawyers are directing work for an advisory or litigation purpose.
“Public chatbots that retain prompts, allow vendor access, or export data offshore can jeopardise privilege. Prompts, outputs, logs and telemetry are all potentially disclosable,” Rayment said.
“‘AI privilege’ as a new doctrine is unlikely – courts will apply existing rules and scrutinise governance: who accessed what, where data lived, and whether segregation and retention were appropriate. Policies are necessary but not sufficient; firms also need vendor due diligence, per-matter segregation, privilege marking, client consent where appropriate, and a hard rule: don’t paste client secrets into public tools.”