The best and worst AI use cases in law

Legal tech specialists behind Thomson Reuters’ AI tool, CoCounsel, have shared insights into the best and worst use cases for AI they’ve seen in the past year across the legal profession.

December 23, 2025 | By Emma Partis

In a recent conversation with Lawyers Weekly, Thomson Reuters’ AI specialists Fiona McLay, Jen Lee, and Ziggy Cheng shared insights into where AI could be best used in the legal profession and which tasks were best left to the humans.

According to the trio, the best use cases for AI were labour-intensive tasks whose output could be easily verified. Summarising documents, brainstorming, adjusting tone for different audiences, and finding the “needle in the haystack” in vast legal documents were all strong examples.

On the other hand, relying on AI tools for definitive answers that were difficult to verify could lead to pitfalls, the specialists noted.

“You need to make sure that the tool is one that enables you to verify it easily. And so you want to be able to ground that tool in specific information that you know can be trusted,” Lee told Lawyers Weekly.

“If you just have an AI answer and there is no way to check where any of that is coming from, that is a huge red flag.”

According to GenAI, Fake Law & Fallout, a report by UNSW’s Centre for the Future of the Legal Profession, a majority (78 per cent) of AI misuse cases in Australian courts involved self-represented litigants.

“The cases that are involving the profession are very much in the minority, and they are mostly cases of fake or incorrect case citations in documents that are being handed up to the court,” McLay said.

The report found that the most common AI-related issues included fake and incorrect case citations; flawed legal reasoning based on unverified AI output; procedurally incorrect or defective documentation; and prolix documentation.

While AI use could introduce risks, it could also save firms substantial time on routine tasks such as preparing chronologies and help lawyers quickly get to the crux of long documents.

“Junior lawyers have previously and probably still do spend hours, days, weeks creating … chronologies. Using AI for that high volume of information really speeds up the process,” Lee said.

McLay added: “Summarising documents is very popular. Also being able to run different versions of a document so that you can tailor and make your argument more persuasive or to communicate the key implications for the intended audience.”

AI was also helpful for finding the so-called “needle in the haystack” in legal documents, acting as a more advanced “Ctrl+F” search, Cheng said.

“We talk a lot about finding the needle in the haystack, and that is exactly what [AI] is very, very good at,” he said.

“If you’re doing a contract review … and you need to find an example of where someone accepted an offer, maybe it’s an invoice that doesn’t actually say ‘this person accepted something’, it’s just implied, the AI will find that through thousands and thousands of pages.

“It’ll do it very quickly, and it will give you the pinpoint reference so you can double-check that it’s actually true.”

Firms that lag on their AI strategy could face additional risks, the specialists warned. When practitioners are left to experiment on their own with unsanctioned AI tools, they can introduce security issues or heighten the risk of incorrect outputs.

“Transparency and visibility, it’s so important. I think our Tech and the Law report said more than a third of private practice lawyers are using non-sanctioned AI in their work, but obviously not telling their organisations. So there’s a huge risk there,” Lee said.

Cheng echoed this sentiment, adding that workplaces would benefit from fostering a culture where mistakes could be admitted, especially as AI usage evolved.

“You need to have a safe space for people to admit if they’ve done something wrong, like using an unsanctioned tool,” he said.

“But if you’re a large law firm or any law firm and you provide your staff with the correct AI tools that have been approved, they’ve been vetted through cyber security, confidentiality, whatnot, then they have a safe space to start learning and practising.”