
AI-inspired cyber breaches now ‘a day-to-day part of our jobs’, says partner

The work of cyber lawyers is continually evolving, particularly given the advent of cyber security breaches involving artificial intelligence, says a BigLaw partner.

May 30, 2025 By Jerome Doraisamy

Speaking recently on The Lawyers Weekly Show, Norton Rose Fulbright partner and Australian head of cyber security Annie Haggar discussed how the complexity of cyber security implications for businesses, and therefore their lawyers, “has exploded” in the last few years, and how the proliferation of artificial intelligence (AI) in such criminal activity cannot be overstated.

In that episode, Haggar reflected on how different the work of lawyers in this space will look in 2030, relative to what it is in the present day.


Even just a few years ago, she said, cyber lawyers weren’t having to advise on breaches involving AI, which have already become a “day-to-day part” of the job. Just recently, she mused, she advised a client on a global data investigation that was required after an employee tested out DeepSeek with sensitive data.

This happened, she said, three days after DeepSeek was launched.

“The role of a cyber lawyer today is going to look completely different in 2030 and I just can’t tell you what that’s going to look like because we don’t know what the technology is going to be like,” she warned.

“There may be more widespread availability of quantum computing to start with, let alone where AI is going to be by then. But I think even if you look at the job of the cyber security lawyer today versus a couple of years ago, we are talked about much more as trusted advisers to the organisation and to the risk team.”

“It’s not that we are just focused on a cyber breach and the privacy response – we are there helping them to build their risk management framework and to advise on compliance with laws,” Haggar noted.

Such laws include the recently amended Security of Critical Infrastructure Act, new APRA regulations, and the new Internet of Things security requirements (which cover any organisation manufacturing or supplying devices that connect to the internet) – this, she said, is “pretty much everything”.

“The complexity of cyber security implications for businesses, and therefore their lawyers, has exploded in the last few years. It’s not just breach response work anymore.”

AI, Haggar continued, will be “revolutionary” for the legal profession and broader community – “if used correctly” – but there will always be bad users, she said.

“In the good space, AI is being used to help provide cyber security defences. It’s built into the tools that scan the billions of communications flashing between the millions of people every day to see what’s malicious and to catch malicious traffic and to stop it before humans would have been able to detect it. So there’s huge uses of AI in cyber security defence and the big cyber security product companies and services companies are using that to their advantage and to our advantage right now,” she outlined.

“But when you get onto how is it being used for nefarious purposes, it starts to get really interesting and it feels a bit like we’re in The Terminator, to be honest. There’s already been botnet attacks, [whereby] any device that connects to the internet, if it’s not secured, can be used and manipulated by threat actors to conduct cyber attacks. What threat actors are doing is using unsecured Internet of Things devices, which these days include your fridges, your microwaves, your washing machines, your baby monitors, your video, your home security cameras, etc, and they are gathering all of those using AI into bot swarms that then all focus their internet connection on particular systems.”

“So, they are using these bot swarms to conduct cyber attacks on businesses, to take them down. They don’t need to have huge computing power, which is expensive and very visible. They can harness thousands of anonymous Internet of Things devices to conduct these attacks,” she detailed.

This is one of the reasons for the minimum security requirements being introduced in the Cyber Security Act, Haggar explained.

Agentic AI is also a cause for concern, she went on, as it acts as an agent facilitating communication between tools.

It is being suggested, she said, that agentic AI tools can be put together to conduct cyber attacks and be made available for hire. So, you could hire your own AI threat actor to conduct cyber attacks on whichever target you’d like to point it at.

This is not just in our future, Haggar proclaimed – it is starting to happen now.

“People are starting to experiment with using AI tools in that way, and as the AI agentic networks get better, [I’m] absolutely sure that cyber criminals will be offering that as a service. They already offer hacking as a service. You can pay to have something hacked, [among] other cyber crime services that are offered, and agentic AI conducting threats will be added to that list.”

To listen to the full conversation with Annie Haggar, click here.

Lawyers Weekly will host the Partner Summit on Thursday, 12 June 2025 at The Star, Sydney, at which speakers will address the range of opportunities and challenges for partners and partners-equivalent, provide tips on how they can better approach their practice and team management, and propel their businesses towards success. Click here to book your tickets – don’t miss out! For more information, including agenda and speakers, click here.
