Why AI won’t ‘kill’ the law — but could quietly undermine it if left unchecked

Claims that AI will “kill” the legal profession are everywhere, but not everyone is buying the hype. Instead, Jean Gan warns that the real danger isn’t AI itself – it’s what happens if lawyers ignore the technology that could quietly reshape the profession from within.

February 16, 2026 By Grace Robbie

As artificial intelligence tools become increasingly embedded in legal work, a growing chorus has warned that the technology will spell the end of the legal profession as we know it.

But Jean Gan, head of legal and compliance, argued that this narrative misrepresents the nature of law – and ignores where the real risks actually lie.


While she acknowledged AI’s ability to accelerate legal analysis and streamline processes, Gan argued that the idea that it could replace lawyers falls apart once you consider the judgement, responsibility, and accountability that define the profession.

“I fundamentally disagree because it misunderstands what law actually is. Law is not just about producing answers efficiently. It is about judgement, responsibility, and accountability when decisions are challenged,” she said.

“AI can support lawyers by accelerating analysis and surfacing patterns, but it cannot carry professional duty. That responsibility still sits with human lawyers, whether in private practice or in-house.”

Gan argued that the real danger of AI isn’t replacement, but the subtle – and far more insidious – risk that comes from unexamined delegation of legal judgement to AI tools.

“From my perspective as a career in-house lawyer, the real risk is not replacement, but unexamined delegation,” she said.

“AI outputs can appear authoritative, and there is a temptation to rely on them without enough scrutiny. When legal judgement is gradually deferred to tools rather than exercised deliberately, risk accumulates quietly.”

While AI is already woven into daily legal practice, Gan cautioned that the profession’s greatest vulnerability lies in downplaying its impact – deferring questions of responsibility, governance, and scrutiny in the hope that regulators or courts will provide all the answers.

“What I do see across organisations and institutions, however, is a tendency to downplay AI’s impact while it is already being used day to day,” she said.

“AI is often treated as a technology or efficiency issue rather than a question of professional responsibility and governance. There is also a lot of waiting for regulators or courts to provide certainty, even though practice has already moved ahead.”

However, Gan emphasised that ignoring or minimising AI’s impact is far more dangerous than the technology itself, because mistakes quietly multiply when leadership turns a blind eye, leaving limits undefined, guidance absent, and accountability unclear.

“AI will be used regardless of whether leadership openly addresses it. When there is no shared understanding of limits, no guidance, and no clear accountability, mistakes multiply quietly,” she said.

“Most serious issues I have seen stem from governance gaps rather than the technology itself.”

Looking ahead, Gan urged the profession to focus on clarity, literacy, and accountability, rather than fear or avoidance, calling on legal leaders to be explicit about where AI can assist and where it must never make decisions.

“Be explicit about where AI can assist and where it must never decide. Invest in AI literacy that focuses on limitations, uncertainty, and risk, not just productivity,” she said.

“Bring AI into professional judgement and ethics discussions, not just internal policies. And ensure a human lawyer always remains clearly accountable for outcomes.

“That is how trust in the profession is maintained as it evolves.”