Responsible use of specialist AI tools needs to be a proactive effort by law firms and legal teams to mitigate risk.
In conversation with Lawyers Weekly, Legora’s head in Asia-Pacific and Japan, Heather Paterson, reflected on the concerns that law firms can and should have about how their lawyers are using AI without their knowledge, including inputting sensitive data into unauthorised tools.
“Lawyers will use AI whether it’s available on a company device or on a personal device – and responsible use should be a proactive effort to safeguard [against] the risks,” she said.
“By taking a proactive approach to using specialist tools, you are able to mitigate against concerns around how sensitive information is being handled, the reliability of the work and which sources were used to produce the work.”
“Unsanctioned use of AI”, Paterson warned, opens firms up to governance issues and risks which could jeopardise their clients.
“As soon as a lawyer feeds sensitive material into an external tool, the firm may lose control of that information,” she said.
“In some cases, that data may be stored or used to train models, increasing risk for all parties involved.”
Paterson’s remarks follow Legora’s establishment of a new presence in Sydney in late November and its appointment of Graeme Grovum, formerly head of legal technology and client services at Allens; Kosta Hountalas, a former senior associate in the technology, media, and telecommunications practice at Herbert Smith Freehills Kramer; and Murray Edstein, a former engagement manager at McKinsey & Company and senior associate at HSF Kramer.
Edstein recently appeared on an episode of LawTech Talks, discussing “the AI lawyer of the future”.
The appointments followed the adoption of Legora by some of the biggest law firms in Australia, including MinterEllison and Allens.
Safeguarding against these risks in 2026 and beyond, Paterson outlined, means ensuring data stays secure, encrypted, and compliant.
“Firms should use an AI tool that ensures secure storage of sensitive data, doesn’t use inputs for foundation model training or retain data beyond what’s required, and gives visibility into who accessed what and when,” she said.
“Best practice would be to use a tool governed by robust standards, including certified frameworks like ISO 42001 for AI governance, ISO 27001 for information security management and SOC 2 Type II for compliance and data protection.”
With this in mind, Paterson went on, transparency is fundamental.
“Firms should clearly mandate use of approved, secure AI tools. We’ve built our AI with transparency at the core by allowing users to know how every output was generated by tracing it back to source data and the prompt that produced it,” she said.
“That way, any work can be reviewed for reasoning and the source to verify conclusions.”
AI, Paterson concluded, is an incredible tool that offers lawyers significant productivity gains and the opportunity to differentiate their service delivery.
The real opportunity, she said, “is when you can leverage AI effectively as one of your tools in the kit, and layer in your expertise, knowledge, and experience as part of the human in the loop”.
“A clear understanding of the possibilities, as well as the limitations, will ensure lawyers are able to be open and transparent about how they are using AI on legal work,” Paterson said.
Jerome Doraisamy is the managing editor of Lawyers Weekly and HR Leader. He is also the author of The Wellness Doctrines book series, an admitted solicitor in New South Wales, and a board director of the Minds Count Foundation.