From deepfakes targeting individuals and businesses to increasingly sophisticated AI-driven scams, emerging technologies present a growing threat to the legal profession, prompting urgent concerns across the sector.
Artificial intelligence and related technologies have swept through the legal profession at pace, promising greater efficiency, productivity, and enhanced business capabilities. However, the misuse of these fast-evolving tools is a growing concern.
Legal professionals are now facing “targeted corporate attacks”, with multiple cases revealing that law firms are increasingly falling victim to AI-driven scams and deepfake manipulation.
As both technology and cyber crime evolve at a rapid pace, lawyers and legal professionals must stay informed about the intersection of AI and fraud if they are to navigate the significant challenges posed by these emerging digital threats.
The deepfake dilemma
One of the most alarming applications of AI in cyber crime – and a real and immediate threat to the legal profession – is the rise in attacks facilitated by deepfake technology.
Dragan Gasic, special counsel at BlackBay Lawyers, warned that AI-generated deepfakes “have taken realism to the next level”, resulting in professionals across the legal and broader professional services sectors falling victim to sophisticated deception.
Gasic pointed to a striking example from early 2024, when British engineering firm Arup was scammed out of HK$200 million (approximately AU$39 million) through a deepfake scheme targeting its Hong Kong office.
“An employee was deceived into transferring funds after participating in a video conference where fraudsters used AI-generated visuals and voices to impersonate senior company executives,” he said.
While Arup confirmed that its “financial stability and internal systems remain unaffected”, Gasic explained that the incident highlighted the “increasing prevalence and sophistication of cyber threats, including deepfakes”.
With such attacks occurring globally, Selwyn Black, partner at Carroll & O’Dea Lawyers, noted that the “ever-accelerating capabilities” of this technology have “raise[d] new and multifaceted concerns across the entire legal landscape”.
He emphasised that deepfake-related attacks “introduce serious risks around intellectual property infringement, brand misuse, passing off, and reputational harm”.
Gasic added that the dangers posed by AI and deepfake technology are “no longer theoretical”, warning that these tools are “already distorting reality and eroding the foundational trust upon which Australian businesses operate”.
“When used maliciously, AI technologies don’t merely deceive; they undermine confidence in communications, challenge the integrity of identity verification, and cast doubt on the reliability of digital evidence,” he said.
Cameron Whittfield, partner at Herbert Smith Freehills Kramer, echoed these concerns, noting that AI’s role in cyber security threats is far from static and “is something that is still evolving”.
“We know that many threat actors are using AI tools to develop deepfake impersonations, create realistic phishing campaigns, assist with ransomware and develop malware,” he said.
Whittfield warned that the widespread adoption of AI, both within the legal profession and beyond, has not only “lowered the barriers to entry for threat actors” but is also “exposing security vulnerabilities at a speed that makes it very challenging to respond”.
“One particular area of vulnerability for professional services is in relation to business email compromises or financial redirection fraud. We thought that we were getting a handle on these threat vectors, but they’re making a comeback,” he said.
“AI tools are making the ‘bait’ more convincing and easier to develop – for example, we know that a large portion of phishing emails are now developed using AI tools.”
The threat at home
With the global reach of digital ecosystems and increasingly sophisticated cyber crime networks, AI-driven and deepfake attacks targeting the legal profession are no longer a distant threat – they are already emerging in Australia.
According to Gasic, the Australian Competition and Consumer Commission (ACCC) reported that in 2023, its Scamwatch service “recorded 109,000 reports of phishing activity in Australia, resulting in losses of AU$26.1 million”.
Even more concerning, Gasic noted that Australia has become a significant source of phishing activity, entering the global top 10 for countries hosting phishing attacks, “with a 479.3 per cent surge in the volume of phishing content hosted in the country”.
He explained that the rise of AI-driven phishing campaigns has “further propelled” the success of these attacks, allowing criminals to exploit vulnerabilities among “unsuspecting Australians” – including legal professionals – on an unprecedented scale.
Despite the escalating threat, Australia’s current legal frameworks are struggling to keep pace with the rapid evolution of AI and deepfake technologies.
While legislation such as the Privacy Act 1988 (Cth) – recently strengthened in response to the “distressing” data breaches affecting millions of Australians – offers some level of protection, Gasic noted that these measures provide only limited remedies.
“Australian law offers partial remedies through existing causes of action such as fraud, defamation, misleading and deceptive conduct, but lacks a dedicated deepfake framework. The Privacy Act and recent AI regulation proposals hint at reform, but enforcement gaps remain,” he said.
The impact, however, is not confined to Australia; it is being felt worldwide.
“Deloitte’s Centre for Financial Services predicts that generative AI could enable fraud losses to reach US$40 billion in the United States by 2027, from US$12.3 billion in 2023, a compound annual growth rate of 32 per cent,” Gasic said.
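As a back-of-the-envelope check on those figures (an editorial illustration using the standard compound-growth formula, not part of the Deloitte analysis):

\[
12.3 \times 1.32^{4} \approx 37.3
\qquad\text{and}\qquad
\left(\frac{40}{12.3}\right)^{1/4} - 1 \approx 0.34
\]

That is, compounding the 2023 base at 32 per cent lands a little under US$40 billion, and the implied annual rate for the quoted endpoints is closer to 34 per cent – a gap that simply reflects rounding in the published figures.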
How the legal profession stays on the front foot
In response to the rising risks associated with AI-driven fraud and deepfakes, lawyers are increasingly being called upon to serve as the first line of defence for businesses.
According to Gasic, effectively managing the legal risks posed by AI demands “innovative approaches to contract drafting and review”, “advising clients on the manipulative potential of algorithmic technologies”, and reinforcing “safeguards such as multifactor authentication, biometric verification, and internal protocols for validating executive communications”.
He also stressed the need for lawyers to develop a “clear understanding of both the capabilities and limitations of AI, enabling them to effectively challenge or defend digital evidence by examining metadata, verifying timestamps, and maintaining a reliable chain of custody”.
Gasic further explained that this evolving digital landscape is forcing lawyers to expand their roles beyond traditional practice by taking on responsibilities as “educators, strategic advisors, and digital guardians”.
Whittfield explained how AI is “revolutionising how [the legal profession] detects, responds to, and prevents cyber incidents and fraud”.
“From real-time threat detection to predictive analytics that flag suspicious behaviour before damage is done, AI empowers defenders with speed and precision,” he added.
While the pace of technological advancement poses challenges, Whittfield urged legal professionals to remain alert and adaptable.
“To keep pace, we really need to stay across the developments in the threat landscape, including the new tools and techniques being used by our adversaries.
“We should also maintain an open mind about new tools developed to assist with our defences, be extra vigilant about the security of our third-party supply chains, and take particular care with the protection of our data holdings,” he said.
Black echoed these concerns, noting that the increasing use of AI-generated content in cyber attacks makes it imperative for “law firms and internal legal departments to remain alert as to how AI can be weaponised in commercial environments”.
“Today’s technological developments highlight a need for legal professionals to adopt coordinated and forward-thinking approaches,” Black said.
“From criminal and civil liability to IP protection, privacy compliance, and evidentiary integrity, the legal management of GenAI and deepfake technology [is] ultimately becoming part of business as usual.”
Despite the complexity of the challenges, Whittfield remains hopeful, stating: “In the long run, I’m optimistic that we will be able to effectively use AI tools to combat the changing threats.”