Lawyers Weekly - legal news for Australian lawyers


People prefer AI for legal advice over lawyers, study finds

A recent study has indicated that individuals are increasingly inclined to trust legal advice produced by ChatGPT and other LLMs over advice from qualified human lawyers. So, what does this mean for the legal profession?

May 05, 2025 By Grace Robbie

Large language models (LLMs) are making waves across various fields, including the legal profession, where they provide quick answers to legal questions. However, there is a growing concern that people may be more inclined to trust this technology than human experts in the field.

A recent study conducted by a team in the UK explored people's willingness to act on legal advice generated by either LLMs or legal professionals, involving 288 participants across three experiments.

In the first two experiments, featured in the article titled “Objection Overruled! Lay People can Distinguish Large Language Models from Lawyers, but Still Favour Advice from an LLM”, participants were presented with legal advice sourced from both LLMs and lawyers and were asked which one they would be more likely to act on.

The research indicated that participants were “more willing to rely on the AI-generated advice” than on the advice provided by real legal professionals – especially when they were unaware of the source of the advice.

Even more striking, when the researchers informed participants about the source of each piece of advice, their trust in the AI didn’t lessen. They found that participants were just as inclined to follow the AI’s suggestions as they were to follow a lawyer’s.

In conversation with Lawyers Weekly, Joshua Krook, one of the authors and a research fellow specialising in responsible AI at the University of Southampton, shared that people might favour such advice because LLMs “use more complicated language with fewer words to get their points across”.

“Real lawyers in our study tended to use simpler language with a greater number of words. People may prefer the conciseness of the LLM and the perceived sophistication of its language,” he said.

The study's third experiment examined whether participants could distinguish between lawyer-generated and AI-generated legal content when the source was concealed.

With random guessing producing a score of 0.5 and perfect accuracy scoring 1.0, participants, on average, achieved a score of 0.59, indicating that “performance was slightly better than random guessing, but still relatively weak”.
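The discrimination measure described above amounts to mean accuracy across trials: the fraction of advice samples whose source a participant identifies correctly, where 0.5 is chance and 1.0 is perfect. A minimal sketch (the data below is made up for illustration, not taken from the study):

```python
# Illustrative only: a discrimination score is the fraction of correct
# source identifications across trials (0.5 = chance, 1.0 = perfect).
true_sources = ["llm", "lawyer", "llm", "lawyer", "llm",
                "lawyer", "llm", "lawyer", "llm", "lawyer"]
guesses = ["llm", "llm", "llm", "lawyer", "llm",
           "llm", "lawyer", "lawyer", "llm", "lawyer"]

correct = sum(t == g for t, g in zip(true_sources, guesses))
score = correct / len(true_sources)
print(score)  # 0.7 for this made-up sample
```

A score of 0.59, as reported, therefore means participants identified the source correctly in roughly 59 per cent of trials, only modestly above the 50 per cent expected from guessing.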

Krook stressed that the study’s implications are “profound”, explaining that if people start turning to LLMs and AI tools for legal advice, they could begin to “rely on AI to self-represent in courts or start new legal proceedings”.

Such a reality, Krook shared, could “disrupt traditional legal services” and “lead to vexatious litigation, as these bots provide fake and misleading legal advice”.

To address these risks, Krook argues that the government must “immediately improve AI literacy education” across the board, starting in primary and high schools and extending to “bachelor’s degrees and apprenticeships”.

He additionally stated that new AI regulations need to be “urgently pass[ed]” and an AI Safety and Security Institute needs to be created “to bring us in line with almost every other major Western nation”.

“Without urgently needed reform, we are looking at significant disruptions to the legal industry,” he said.
