Could AI one day replace a human judge?

The legal profession has long debated the potential roles of artificial intelligence, and at SXSW, Lander & Rogers took the debate a step further, staging a mock trial to explore whether AI could one day assume a judge’s role on the bench.

October 24, 2025 | By Grace Robbie

As AI continues to reshape the legal profession and take over an ever-growing range of tasks, the question of what – or who – comes next is becoming impossible to ignore.

At this year’s SXSW, Lander & Rogers tackled that provocative and timely question head-on: could AI one day replace human judges?

To explore the idea, the firm staged a futuristic mock trial in which an AI “judge” presided over a case involving a software engineer accused of embezzling $500,000 from her employer.

When the AI delivered its sentencing decision, a panel of experts dissected the outcome – sparking a lively debate about the ethical, legal, and practical implications of letting machines shape justice.

One key question addressed during this discussion was whether it is ethically acceptable to delegate decisions that affect human lives to machines.

Dr Peter Collins, an Oxford-trained ethicist, highlighted the growing role of AI across various fields, noting that while it can significantly enhance processes, the responsibility for final decisions must remain with humans.

“I would use AI, if I were a doctor, to improve clinical care. But I would always leave the decision making to the doctor and the nurses. We do it [use AI] in education, we do it in all sorts of fields. It’s not exceptional. I think the thing that AI is brilliant at is sweeping up data,” Collins said.

Collins went further, arguing that in certain respects, AI could even act more ethically than humans, precisely because it is free from emotion and ingrained bias.

“Ultimately, AI is actually more ethical than human beings because it’s not necessarily swayed by bias. The limitation of our decision making is the limitation of our cognitive function. We’re hardwired for stereotypes,” Collins said.

Reflecting on the mock trial, Mat McMillan, a technology lawyer and partner at Lander & Rogers, offered a pragmatic counterpoint, raising concerns about what happens when AI gets it wrong.

Unlike minor errors in everyday software, McMillan explained, mistakes made by AI in a courtroom could have far more serious consequences – potentially impacting a person’s freedom.

“We’ve all used ChatGPT at some stage, got a response, and thought that’s not quite right. But when you translate that to a courtroom setting and the AI hallucinates, or it makes a mistake, it misreads a witness statement, or it makes a recommendation for too harsh a sentence in the circumstances, then we’re starting to play with someone’s liberty,” McMillan said.

“And that raises the questions: who should be responsible for that? Is that the coder? Is it the judge [who] relied on it? Is it the agency that’s rolled out the AI solution?”

In today’s legal system, accountability is clear – when a mistake occurs, there is always a human name attached. But McMillan posed a sobering question: what happens when that chain of accountability breaks?

“When you look at our traditional liability frameworks, they often assume that there’s a human that you can point to. There are concepts of duties and standards of care and intent that you can trace back to,” McMillan said.

“But AI muddies that because what if it is bias in the data that the system’s been trained on? What if it is a glitch with the algorithm? What if it is an edge case that hasn’t actually been thought of by the parties?”

When considering what system might strike the right balance, McMillan proposed a middle path – one where AI is used in courtrooms as a “high-risk” tool, subject to stringent certification and continuous oversight.

“I think a middle path might be to look at AI tools in a courtroom setting as high risk, and to put them through rigorous certifications and ongoing audits so that if a mistake does arise, you can pinpoint the error and trace back through to where the failure has occurred in the process,” McMillan said.