The ‘life-changing’ risks of AI replacing human reasoning in refugee and asylum decisions

AI is increasingly being used in legal systems around the world, including in life-changing decisions. New research calls for greater accountability and caution when AI algorithms are used in refugee and asylum decisions.

October 20, 2025 By Carlos Tse

In her paper, “The human in the feedback loop: Predictive analytics in refugee status determination”, University of Wollongong Associate Professor Niamh Kinchin explored how AI is starting to influence decision making in refugee and asylum determinations.

The ‘deeply human nature’ of refugee cases

To determine whether an asylum seeker qualifies as a refugee under international law, governments use a process called a “refugee status determination”. This process requires a careful judgement about whether an individual has a well-founded fear of persecution if they were to return to their home country.

Countries such as Canada and the Netherlands have started to use automated systems to sort visa applications. However, the use of similar automation in refugee determinations may carry the risk of “biased or overly rigid outcomes”, Kinchin’s research found.

Kinchin said: “Algorithms learn from historical data, but refugee cases are forward-looking and shaped by emotion and uncertainty.”

“A machine can detect patterns, but it cannot understand the difference between fear that is genuine and fear that is reasonable. That distinction lies at the heart of refugee law.”

She highlighted that even where models are effective, insufficient data remains a problem, leading to “inaccuracies or uncertainties”. Systems must be designed to protect fairness and dignity and to serve justice sensitively, “especially when they affect the most vulnerable”, she emphasised.

Ways forward

Without strong human oversight, Kinchin warned, errors arising from AI automation can be particularly damaging to asylum seekers, given the “complex and deeply human nature of refugee cases”.

Kinchin explained that “predictive systems promise faster and more consistent decisions, but the law is not just about logic and data – it’s about fairness, compassion and the human story behind every case”.

“Technology can support decision making, but it cannot replace the moral and emotional reasoning that people bring to justice. Refugee decisions are about fear, hope and protection. They must never become statistical exercises,” she added.

To promote the ethical handling of these cases, Kinchin urged governments and courts to keep “the human in the feedback loop”, making sure that humans remain responsible for the oversight, interpretation and questioning of AI outputs.

“AI can be a useful aid, but a human decision-maker must always retain final control,” she said.

Carlos Tse

Carlos Tse is a graduate journalist writing for Accountants Daily, HR Leader and Lawyers Weekly.