
Can workers refuse to use AI on religious grounds?

“I don’t use AI on moral grounds.” It sounds like a line from a science fiction film, but some employers are telling me it is a sentiment being voiced with increasing regularity in workplaces, writes Paul O’Halloran.

May 13, 2026 By Paul O'Halloran

Whether driven by ethics, faith, or something in between, some employees are pushing back against the AI tools their employers are rolling out. And in Australia, where the law protects religious belief in the workplace, that objection may carry more legal force than most employers realise.

Back in 2024, Pope Francis devoted his World Day of Peace message entirely to the theme of AI and human dignity, a notable intervention from the head of a church with more than 1.3 billion followers. He warned of the dangers of “technological paradigms” that reduce human beings to objects of algorithmic manipulation and called for AI to remain in the service of humanity rather than the other way around. Those words were spoken two years ago, and the workplaces the Pope was warning about have since arrived. It is not difficult to see how an employee of faith, reflecting on that guidance, might regard a mandatory AI tool as a genuine affront to their beliefs, and how a court might take that seriously.


In the United States, workers have already tested whether religious belief can exempt them from using workplace technology, and courts have taken those claims seriously. In 2015, a West Virginia jury awarded US$150,000 to a coal miner who refused to use a biometric hand scanner to track his attendance. The scanner was not AI in the modern sense; it was an automated identification tool. But the legal principle the case established translates directly. The worker objected on the grounds that the scanner violated his faith as an evangelical Christian, citing the Book of Revelation and his belief that the scan would leave him with the “mark of the beast”. Fanciful? Perhaps. Legally actionable? Apparently so.

Australian law has not yet grappled with this question directly, but the framework for doing so exists. The Fair Work Act 2009 (Cth) prohibits adverse action against employees on the basis of religion, and state anti-discrimination legislation in NSW, Victoria, and Queensland, among others, extends those protections to religious belief and activity in the workplace. The practical scenarios are not hard to imagine: an accountant who objects to AI-generated work on the grounds that it dehumanises their profession, a healthcare worker who refuses AI diagnostics out of a belief in the sanctity of human judgement, or an HR manager whose faith community has spoken out against algorithmic selection of candidates in recruitment. None of these is far-fetched.

Australian case law does, however, draw a meaningful distinction between a personal moral objection and a genuinely protected religious belief. The High Court’s leading authority on the definition of “religion”, Church of the New Faith v Commissioner of Pay-roll Tax (Vic) (1983) 154 CLR 120, a payroll tax exemption case involving the Church of Scientology, held that a religion must involve belief in a supernatural element, accepted canons of conduct flowing from that belief, and recognition by its adherents as religious in nature. A personal conviction that AI is ethically wrong, however sincerely held, is unlikely to clear that bar unless it is firmly anchored in the doctrines of a recognised religion. That said, an employee whose objection draws directly on formal religious guidance, such as the Vatican’s published statements on AI and human dignity, stands on considerably stronger ground.

In the workplace, religious conviction does not automatically relieve an employee of the obligation to follow a lawful and reasonable direction. Reasonableness is assessed objectively. The stronger the operational connection between the AI tool and the core function of the role, the more defensible the employer’s refusal to accommodate becomes. A data analyst whose role is predominantly AI-driven stands in a very different position from a worker who uses AI tools only incidentally. But an employer who dismisses a faith-based objection without genuine inquiry risks undermining the lawfulness of the direction itself. That failure also carries broader legal exposure: general protections claims under the Fair Work Act, human rights complaints, and breaches of consultation obligations under modern awards and work health and safety legislation.

The risks will grow as AI becomes more central to more roles. According to a 2024 Boston Consulting Group report, roughly half of all jobs globally will be reshaped by AI within the next few years. As that happens, the question of what accommodation is reasonable for a religious objection will become increasingly difficult to answer. An employee who uses AI tools only tangentially may be easier to accommodate than one whose entire function is built around AI output.

No Australian court has yet been asked to rule on a faith-based objection to AI in the workplace, but the absence of a decided case is not the same as the absence of legal risk. The law does not need to be written specifically for a scenario before it applies to one. Employers who treat religious objections to AI as a curiosity rather than a genuine legal exposure may find themselves in a courtroom before the law has even caught up.

Paul O’Halloran is a partner and accredited specialist in workplace relations at law firm Dentons.
