Responsible adoption of artificial intelligence is “not optional”. Here, a senior AI adviser discusses how lawyers can maintain trust and credibility while staying competitive.
Late last year, Thomson Reuters lead AI client adviser Fiona McLay appeared at AI Innovate, a live-streamed session at which she – alongside colleagues Ziggy Cheng (legal AI specialist) and Jen Lee (AI strategy lead, CoCounsel) – discussed the “top AI fails” of 2025 and how lawyers could avoid them this year.
In that session, McLay reflected on the costliest AI-related mistakes made by Australian lawyers last year and the lessons emerging from those instances.
Speaking recently to Lawyers Weekly, McLay expanded on the limitations of AI in legal work, why professional-grade solutions (those that meet lawyers’ legal, ethical, and operational standards) should be non-negotiable, and how lawyers can implement best practices for responsible AI adoption.
Accuracy and trustworthiness, she said, together with upholding professional duties and responsibilities, have always been fundamental for lawyers – but such diligence takes time.
“Today, the speed of business is faster than ever, and the pressure on legal teams is growing. Increasingly, lawyers are turning to AI to keep pace and make time for the work that delivers real value,” she said.
But, McLay added, “not all AI is created equal”.
“And, in professional domains like law, tax, and compliance, ‘almost right’ isn’t good enough. Professionals need systems that are transparent, defensible, and designed with humans in the loop. Lawyers must always carefully check AI-generated content for unverified sources, misapplied authorities, or inaccurate summaries,” she said.
“It is the awareness of these risks that will help guide professionals on how to utilise AI tools effectively in their work,” McLay said.
“Just as we learned to detect and guard against phishing emails, best practices for recognising the signs of AI-generated content and verifying its outputs will continue to evolve.”
Responsible AI adoption, she went on, is “not optional”.
“It is the key to maintaining trust, credibility and staying competitive in an increasingly AI-enabled world.”
To this end, McLay suggested that lawyers understand the difference between general-purpose and professional-grade AI, and foster responsible AI adoption with a combination of clear AI usage policies, training, and appropriate technology selection.
“Treat your AI tool like a junior lawyer,” she said.
“Verify the accuracy of its output against trusted sources and invest time working with the tool to improve the quality of the output.”
Lawyers should also embed verification protocols into workflows, “so that cross-checking sources is as routine as conflict checks”.
“Using a professional-grade legal AI tool, such as Thomson Reuters CoCounsel, that grounds its output in authoritative local content and provides transparent source identification makes the verification process more efficient,” she said.
Finally, McLay suggested that lawyers be alert to the risk that team members may already be using free, general-purpose AI tools that aren’t designed to meet the accuracy standards of legal work.
“Furthermore, when delegating work, discuss when AI usage is appropriate and what verification of AI-generated content is required,” she said.
In conversation with Lawyers Weekly following last year’s AI Innovate, McLay, Cheng, and Lee discussed the best and worst AI use cases in law.
Jerome Doraisamy is the managing editor of professional services (including Lawyers Weekly, HR Leader, Accountants Daily, and Accounting Times). He is also the author of The Wellness Doctrines book series, an admitted solicitor in New South Wales, and a board director of the Minds Count Foundation.