
Chief Justice warns of AI fabrication, delegation in international arbitration

The international arbitration community should be alive to the risks of generative artificial intelligence, Chief Justice Andrew Bell has said.

November 03, 2025 By Naomi Neilson

Chief Justice Andrew Bell. Source: Australian Centre for International Commercial Arbitration (ACICA)


In a speech delivered to the International Arbitration Conference 2025, Chief Justice Andrew Bell of the NSW Supreme Court acknowledged the opportunities provided by generative artificial intelligence (GenAI) but warned against the risks of “misuse and overuse”.

In the international arbitration space specifically, the dual themes of fabrication and delegation have the potential to “threaten the legitimacy and international currency of arbitral awards”, he warned.


Turning first to fabrication, Chief Justice Bell told the conference that the risks posed by deepfake evidence are real and have been exacerbated by the “increased democratisation of the technology”.

Investor-state dispute settlements may be at particular risk, “since they often concern or involve high-profile political figures whose likeness is more vulnerable to manipulation by deepfake technology”.

For example, former US president Barack Obama, Tom Cruise, and former Gabon president Ali Bongo have been victims of deepfake technology. Closer to home, Chief Justice Bell said there was a deepfake of two Supreme Court judges who had words “literally put into their mouths”.

This deepfake technology may also open up the “liar’s dividend”, by which the veracity of genuine evidence could be challenged.

“Fabricated evidence and even judgments and arbitral awards may be facilitated by GenAI’s particular penchant for verisimilitude and remarkable skill of imitation,” Chief Justice Bell said.

“These will no longer be the province of the skilled forger. I regard this phenomenon as a particularly significant risk in litigation and arbitration.”

On delegation, Chief Justice Bell pointed to an example of an arbitrator who admitted to using ChatGPT to write articles and had expressed a need to rush through a decision before his trip started.

There has also been “much academic attention” on the consensual use of GenAI, with theorists suggesting this would occur in three stages: AI playing an “advisory function” that is double-checked by humans, a joint tribunal of human and AI arbitrators, and a sole AI arbitrator.

“What are the implications for the use of GenAI in the enforcement of an arbitral award issued (with the consent of the parties) by a fully autonomous AI judge, or a human judge assisted by AI? On its face, given the prominence given to party autonomy in international arbitration, the answer should be high,” Chief Justice Bell said.

While the natural reply is that human judges also exhibit bias, Chief Justice Bell said human bias is “easier to correct” because human arbitrators “can be trained to recognise and mitigate” it.

Chief Justice Bell acknowledged that international arbitral awards are rarely published and, when they are, are heavily redacted. With this in mind, arbitral decision making is not “sufficiently high volume” to make it the “ideal candidate for automation with AI”.


Naomi Neilson is a senior journalist with a focus on court reporting for Lawyers Weekly. 
