
AI is creeping into courts. How will Australia respond?

Artificial intelligence (AI) is increasingly being implemented in court processes around the world, including in Estonia, which will soon see small contract disputes settled by “AI judges”. In light of the benefits and failures seen around the world, one expert weighs in on how Australia should approach AI in courts.

Jess Feyder | 30 September 2022 | Big Law

In Estonia, the automation of small contract disputes is being actively pursued, with the Estonian Ministry of Justice announcing it will seek to clear a backlog of cases by using 100 so-called “AI judges”.

Reportedly, the project could adjudicate small claims disputes of under €7,000.

The two parties will upload documents and other relevant information, and the AI system will issue a decision that can be appealed to a human judge. 


The intention behind implementing AI judges is to give human judges more time to deal with more complex disputes.

AI is increasingly seeping into court processes around the world, often in quite mundane tasks.

A joint research project by the Australian Institute for Judicial Administration (AIJA), UNSW Law & Justice, the UNSW Allens Hub for Technology, Law and Innovation, and the Law Society of NSW’s Future of Law and Innovation in the Profession research stream (FLIP Stream) has identified some of the key issues arising from the increasing presence of AI in court systems around the globe.

The project’s report, AI Decision-Making and the Courts: A Guide for Judges, Tribunal Members and Court Administrators, identified examples of the use of AI in Australia and overseas, from computer-based dispute resolution software to the use of computer code based directly on rules-driven logic, or “AI judges”.
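The report’s phrase “computer code based directly on rules-driven logic” describes legal rules encoded as explicit program logic rather than learned from data. A minimal, purely illustrative sketch follows; the function, its parameters and the consent condition are invented for illustration, with only the €7,000 threshold taken from the reported Estonian figure:

```python
# Purely illustrative sketch of "rules-driven logic": a legal rule
# hard-coded directly as program logic. The rule and parameters are
# hypothetical; only the EUR 7,000 threshold comes from reporting on
# the Estonian project. No real system is represented here.

SMALL_CLAIM_LIMIT_EUR = 7_000  # reported Estonian small-claims threshold


def eligible_for_automated_resolution(claim_amount_eur: float,
                                      both_parties_consent: bool) -> bool:
    """Apply a hard-coded eligibility rule for automated adjudication."""
    return claim_amount_eur < SMALL_CLAIM_LIMIT_EUR and both_parties_consent


print(eligible_for_automated_resolution(5_000, True))    # True
print(eligible_for_automated_resolution(12_000, True))   # False
```

The appeal of this approach is transparency: the rule can be read, audited and tested directly, unlike a model trained on historical data.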

Despite hesitancy, AI is a growing part of court processes, said Professor Lyria Bennett Moses, director of the UNSW Allens Hub and associate dean of research at UNSW Law & Justice.

“Artificial intelligence, as a concept and as practice, is becoming increasingly popular in courts and tribunals internationally. 

“There can be both immense benefits as well as concerns about compatibility with fundamental values,” she said.  

“AI in courts extends from administrative matters, such as automated e-filing, to the use of data-driven inferences about particular defendants in the context of sentencing.  

“Judges, tribunal members and court administrators need to understand the technologies sufficiently well to be in a position to ask the right questions about the use of AI systems.”

The pitfalls of AI

Some of the concerns around the ethics of AI have been identified in the United States following the use of the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) tool.

The tool is meant to aid judicial processes by conducting a risk assessment on the likelihood that an offender will break the law again. 

COMPAS integrates 137 responses to a questionnaire, with questions ranging from clearly relevant considerations like “how many times has this person been arrested before as an adult or juvenile?” to the more opaque “do you feel discouraged at times?”

The code and processes underlying COMPAS are secret and are not known to the prosecution, defence, or judge.

The findings of the COMPAS tool have very real consequences: they inform the judge’s decision on whether an alleged offender can be granted bail, or whether a convicted offender should be eligible for parole.

In a 2013 case, Paul Zilly was convicted of stealing a lawnmower. The prosecution and Mr Zilly’s lawyers agreed to a plea deal of one year in a county jail and a subsequent supervision order.

However, on the basis of a COMPAS score indicating a high risk of reoffending, the judge rejected the plea deal and sentenced Mr Zilly to two years in prison.

In 2016, the non-profit investigative journalism site ProPublica analysed the COMPAS scores of around 10,000 criminal defendants in Florida and found clear racial bias in the software.

It found that African-American defendants were more likely to be incorrectly flagged as high risk, while white defendants were more likely to be incorrectly rated as low risk despite going on to reoffend.
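The disparity ProPublica measured can be expressed as a difference in false-positive rates between groups: among people who did not reoffend, what share were flagged high risk? A toy calculation follows; the records, group labels and numbers are entirely invented for illustration and are not real COMPAS data:

```python
# Toy sketch of a false-positive-rate (FPR) comparison between groups,
# the kind of fairness metric underlying ProPublica's analysis.
# All records below are invented; this is not real COMPAS data.

def false_positive_rate(records, group):
    """Share of non-reoffenders in `group` who were flagged high risk."""
    negatives = [r for r in records
                 if r["group"] == group and not r["reoffended"]]
    if not negatives:
        return 0.0
    flagged = sum(1 for r in negatives if r["high_risk"])
    return flagged / len(negatives)


records = [  # hypothetical defendants: group, outcome, tool's flag
    {"group": "A", "reoffended": False, "high_risk": True},
    {"group": "A", "reoffended": False, "high_risk": True},
    {"group": "A", "reoffended": False, "high_risk": False},
    {"group": "A", "reoffended": True,  "high_risk": True},
    {"group": "B", "reoffended": False, "high_risk": False},
    {"group": "B", "reoffended": False, "high_risk": True},
    {"group": "B", "reoffended": False, "high_risk": False},
    {"group": "B", "reoffended": True,  "high_risk": False},
]

fpr_a = false_positive_rate(records, "A")  # 2 of 3 non-reoffenders flagged
fpr_b = false_positive_rate(records, "B")  # 1 of 3 non-reoffenders flagged
print(f"FPR group A: {fpr_a:.2f}, group B: {fpr_b:.2f}")
```

A gap between the two rates means the tool's errors fall more heavily on one group, even if its overall accuracy is the same for both, which is one reason "fairness" depends on which metric is chosen.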

Professor Bennett Moses has questioned whether similar tools should ever be acceptable in an Australian context. 

“Everyone has a right to be treated impartially,” she said. “The use of some tools is in conflict with important legal values.

“[T]here are tools, frequently deployed in the United States, that ‘score’ defendants on how likely they are going to reoffend. 

“This is not based on an individual psychological profile, but rather on analysis of data. If people ‘like’ you have reoffended in the past, then you are going to be rated as likely to reoffend.

“The variables used in this analysis include matters such as whether parents are separated (and, if so, one’s age when that occurred) — the kinds of things that might statistically correlate with offending behaviour but are outside one’s own control. 

“The tool is also biased (on some fairness metrics) against certain racial groups.

“It is important to ask whether the use of such tools would be appropriate in an Australian court.”

Where AI helps

While the US experience of AI in the courtroom has raised questions domestically and internationally, the report also identified positive experiences where AI has aided access to justice. 

Professor Bennett Moses said that AI could be of significant value in addressing language barriers.

She noted a practical and non-controversial example of a benefit: the use of natural language processing to convert the courtroom speech of judges, witnesses and counsel into text.

This can make access to court transcripts faster and easier, particularly for those with hearing impairments. In China, some trials are captured “in real time” in Mandarin and English text.

“I’ve always believed that interesting legal questions lie on the technological frontier, whether that relates to AI or other new contexts to which the law is called to respond,” she said. 
