A string of high-profile cases involving lawyers misusing artificial intelligence has sent ripples of concern through the profession. But according to one legal expert, the takeaway isn’t to steer clear of the technology – it’s to learn how to use it correctly.
As artificial intelligence becomes embedded in lawyers’ daily work, a surge in high-profile misuse cases – and the courts’ close scrutiny of them – has drawn the profession into an urgent debate over the technology’s risks and lawyers’ responsibilities.
However, speaking at the inaugural LEAP Family Law Forum, Michael Kearney SC pushed back on the prevailing narrative, arguing that the lesson of these cases has been widely misunderstood and that the real takeaway is not to shy away from the technology, but to learn how to use it properly and effectively.
Reflecting on the mounting cases of practitioners facing professional conduct action over AI-generated material, Kearney warned that while these have been framed as cautionary tales about the dangers of the technology, they in fact expose something far more fundamental: its improper use.
“We’ve got a series of cases which have gained notoriety in the press, certainly amongst the profession, for people being referred to the professional conduct committees for the way they’ve applied AI and that’s wrong,” he said.
“Each and every one of those cases is a case where someone hasn’t used AI properly.”
At the heart of these decisions, Kearney identified a recurring issue where lawyers rely on AI-generated lists of authorities and citations without first verifying their accuracy.
“There are two themes to really emerge from those cases. I can take you through the detail, but we all know them. It’s where people have generated largely lists of authorities and citations using AI,” he said.
However, Kearney emphasised that beyond the sensational headlines, the courts are not rejecting AI outright, but are instead reinforcing a fundamental principle that lawyers remain responsible for how they use the tools at their disposal.
“But what the theme that emerges from the decisions of the court, if you read it beyond the sensationalist headlines, is that you are responsible for the use of AI,” he said.
“We might be getting to the point where they’re self-driving, but at the moment we’re responsible for the vehicles and how we use AI.”
He pointed to a range of judicial guidance making clear that lawyers must use AI responsibly, in line with their professional obligations, ensuring its outputs are properly verified and not misleading.
“The courts don’t say don’t use it; none of the case directions, including Chief Justice Bell’s in the New South Wales Supreme Court, say don’t use AI,” he said.
“They say use AI cognisant of your professional responsibilities as a lawyer, not to mislead the court. So that’s pretty simple. Use the great power of the search engines, the research tools, and check it. Be responsible as a lawyer.”
For Kearney, AI has proven to be a valuable tool, but he stressed that the key distinction lies in how its output is used, with problems arising when lawyers treat AI-generated content as a finished product.
“What it does for me is it cuts out sometimes four or five hours of researching to track down the authorities that I then read and use to generate my arguments,” he said.
“Note how I use it. I don’t get the AI product and staple it to the submissions under my name, and that’s when people get into trouble. So that’s what it’s really important to take away from the authorities.”
Ultimately, Kearney said the message is simple: AI should be used as a powerful tool to support and enhance legal work, not replace a lawyer’s own judgement and responsibility.
“The theme is use AI, but use it consistently with your professional obligations. Use it as a tool, a very powerful tool to assist you in being more productive. Assist you frankly in being a better lawyer,” he said.