
Barrister risks disciplinary action over citing ‘entirely fictitious’ AI-generated cases

A judge has found that an immigration barrister relied on artificial intelligence tools to prepare for a tribunal hearing, resulting in the citation of fictitious and irrelevant cases.

October 20, 2025 By Grace Robbie

An immigration barrister has been found by a UK judge to have relied heavily on AI tools, including ChatGPT, to conduct legal research for a tribunal hearing, resulting in the use of “entirely fictitious” and “wholly irrelevant” case citations.

In light of the findings and the unnecessary drain on the tribunal’s time, Chowdhury Rahman could face a disciplinary investigation, with Upper Tribunal Judge Mark Blundell considering a referral to the Bar Standards Board.

The case came to light during an Upper Tribunal hearing in which Rahman represented two Honduran sisters seeking asylum in the UK, who claimed they were being targeted by a criminal gang in their home country.

During the hearing, Rahman argued that the lower court judge had failed to properly assess credibility, made an error of law in evaluating documentary evidence, and overlooked the impact of internal relocation.

However, Judge Blundell rejected these arguments, dismissing the appeal and ruling that “nothing said by Mr Rahman orally or in writing establishes an error of law on the part of the judge and the appeal must be dismissed”.

In a rare postscript to his judgment, Blundell highlighted “significant problems” with the grounds of the appeal, particularly pointing to shortcomings in Rahman’s legal research.

Of the 12 authorities cited in the appeal, the presiding judge found that several did not exist and that others “did not support the propositions of law for which they were cited in the grounds”.

When Judge Blundell asked Rahman to walk him through the authorities in order and identify the passages supporting the legal propositions in the grounds, Rahman was unable to do so.

Even when Rahman could locate a passage, the judge said, it was “not one which offered any support for the propositions of law which were set out in the grounds”.

“Mr Rahman said that he had made a mistake and that he had intended, instead of citing one authority, to cite a completely different one. Often, however, the authority he said that he had intended to cite was also irrelevant to the proposition of law set out in the grounds,” Judge Blundell said.

Judge Blundell noted that Rahman had relied on “various websites” for his research and, given that he “appeared to know nothing” about the authorities he referenced, all of his submissions were consequently “misleading”.

While he outlined four possible explanations for the situation, the judge concluded that the only plausible explanation was that Rahman relied on AI to formulate the grounds of his appeal.

“In my judgment, the only realistic possibility is that Mr Rahman relied significantly on GenAI to formulate the grounds and sought to disguise that fact when the difficulties were explored with him at the hearing,” Judge Blundell said.