Lawyers Weekly

‘AI is not intelligent’, new research warns

While the rapid rise of artificial intelligence is transforming not only the legal profession but also the world at large, new research led by Charles Darwin University has warned that it is also placing fundamental human rights and dignity at serious risk.

July 22, 2025 By Grace Robbie

In a study released this week, Charles Darwin University legal academic Dr Maria Randazzo delved into the rapid rise of artificial intelligence, revealing how its “unprecedented” influence is not only reshaping Western legal and ethical frameworks but also “undermining democratic values and deepening systemic biases”.

Randazzo warned that current regulatory frameworks have “failed” to safeguard fundamental rights and freedoms such as “privacy, anti-discrimination, user autonomy, and intellectual property rights”.

This failure stems largely from the opaque nature of many algorithmic models, which operate in ways that are nearly impossible to decipher or question, a challenge Randazzo described as the “black box problem”.

Due to this lack of transparency, Randazzo explained, decisions made by AI models are often “impossible” for humans to trace, leaving individuals in the dark about whether their rights have been violated and with limited means to challenge such outcomes.

“This is a very significant issue that is only going to get worse without adequate regulation,” she said.

“AI is not intelligent in any human sense at all. It is a triumph in engineering, not in cognitive behaviour.

“It has no clue what it’s doing or why – there’s no thought process as a human would understand it, just pattern recognition stripped of embodiment, memory, empathy, or wisdom.”

The study spotlighted the starkly different paths the world’s major powers are taking in regulating AI – with the United States adopting a market-led approach, China embracing a state-driven model, and the European Union striving to lead with a human-centric framework.

Randazzo said the EU’s model provides a more ethical blueprint by placing human dignity at the forefront of AI development. But even this, she argued, will be insufficient without broader global alignment.

“Globally, if we don’t anchor AI development to what makes us human – our capacity to choose, to feel, to reason with care, empathy and compassion – we risk creating systems that devalue and flatten humanity into data points, rather than improve the human condition,” she said.

“Humankind must not be treated as a means to an end.”
