In the current landscape, law firms are becoming prime targets for increasingly sophisticated cyber attacks, particularly as AI adoption introduces new vulnerabilities. Here, this CEO outlines how legal practices can strengthen their defences, mitigate AI-related risks, and build client trust through proactive cyber security strategies.
The rate of cyber crime in Australia – and globally – has surged in recent years, with 83 per cent of organisations reporting that they have been hit more than once.
In 2025, law firms and legal teams are increasingly attractive targets for cyber criminals, with 58 per cent of leaders from in-house teams admitting it would take a cyber incident to improve their processes, and one in two law firms not ready to handle such an incident.
The profession has already seen firms fall victim to attacks, including BigLaw practices HWL Ebsworth and IPH, and attacks on legal practices are reportedly not set to decline, with client communications a particular cyber security concern.
In conversation with Lawyers Weekly, Cybertify CEO Ramtin Diznab said that “too many” law firms treat cyber security as a “helpdesk function rather than a risk discipline”, resulting in backups and response plans going untested and recoverable incidents turning into prolonged outages.
“Managed service providers are valuable, but without explicit security architecture, threat modelling and control verification, claims of maturity rarely match reality,” he said.
“Identity hygiene is often poor. Legacy protocols persist, conditional access is permissive, device compliance is not enforced, and standing admin rights remain widespread instead of being elevated just in time. Vendor sprawl, especially across AI and niche SaaS, proceeds without rigorous due diligence on data residency, logging, retention and subcontractors.”
Most attack paths, according to Diznab, start with a “passive, open-source reconnaissance phase” to map out an organisation, its people and its tech. This is generally followed by a more active phase, probing systems with different scans to reveal potential vulnerabilities.
“The key difference in an active reconnaissance phase as opposed to a passive one is that the firm has a chance to detect our activities, depending on its security maturity level,” he added.
During each of these phases, hackers can gain a good understanding of an organisation’s current affairs, its people and roles, its public technology as well as an idea of its internal technology through job ads or LinkedIn profiles.
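The active probing Diznab describes can be as simple as attempting TCP connections to a target's exposed ports – the detectable step he contrasts with passive, open-source gathering. A minimal sketch in Python (the host and ports are illustrative; only probe systems you are authorised to test):

```python
import socket

def port_is_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Attempt a TCP connection; success suggests a reachable service.

    This is the kind of 'active' probe a firm's monitoring has a chance
    to detect, unlike passive OSINT, which never touches the target.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Illustrative only: check a handful of common service ports on a
    # host you are authorised to test.
    for port in (22, 80, 443, 3389):
        print(port, port_is_open("127.0.0.1", port))
```

A firm with mature logging would see these connection attempts in its firewall or endpoint telemetry, which is exactly the detection opportunity Diznab refers to.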
Acting as a hacker, Diznab said he would “target the weakest or the most likely path” to get a foothold in the organisation. This could be anything from password attacks against an external service, to a phishing campaign, to physically walking into a building and connecting a laptop to the company’s network.
“With just a low-privilege foothold, I might be able to extract the information from whatever database I require; however, oftentimes, privilege escalation is required. This phase typically involves stealthily looking around the network for misconfigurations, vulnerabilities, or poor security practices, such as a domain administrator’s password sitting in a deployment script on a publicly accessible share,” he said.
“Regardless, once I’m in, I should have everything I need to be targeting whatever the objective was, which for a typical legal firm might be compromising the confidentiality of a critical database such as LEAP or iManage.”
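The misconfiguration Diznab cites – an administrator’s password sitting in a deployment script on an open share – is exactly what a simple pattern scan can surface during an internal audit. A hypothetical sketch (the regex patterns are illustrative; dedicated secret-scanning tools use far broader rule sets):

```python
import re
from pathlib import Path

# Illustrative patterns for hard-coded credentials; real secret scanners
# ship hundreds of rules covering keys, tokens and connection strings.
CREDENTIAL_PATTERNS = [
    re.compile(r"password\s*[=:]\s*\S+", re.IGNORECASE),
    re.compile(r"api[_-]?key\s*[=:]\s*\S+", re.IGNORECASE),
]

def find_exposed_credentials(root: Path) -> list[tuple[Path, int, str]]:
    """Walk a directory (e.g. a network share) and flag lines that look
    like hard-coded credentials in scripts or config files."""
    hits = []
    for path in root.rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for lineno, line in enumerate(text.splitlines(), 1):
            if any(p.search(line) for p in CREDENTIAL_PATTERNS):
                hits.append((path, lineno, line.strip()))
    return hits
```

Running such a scan over shared drives before an attacker does is a cheap way to close off the escalation path described above.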
AI tools driving increasing cyber risks
As emerging technology continues to be adopted by the profession, AI-related cyber breaches are becoming more common, particularly with the rise of shadow AI: the use of AI tools by employees without proper approval or oversight.
“AI changes the data flow and expands the implicit trust surface. Prompt injection and indirect prompt injection allow an adversary to hide instructions inside emails, PDFs, websites or knowledge bases that an assistant later ingests,” Diznab said.
“If the assistant has tools, those planted instructions can drive exfiltration or internal calls that were never intended. Over-scoped connectors and service principals are common. A single leaked token with tenant-wide read of files or mail becomes a mass disclosure event.”
To mitigate these risks, firms should establish a robust AI usage policy that clearly defines approved tools, outlines permissible use cases, and prohibits unvetted platforms for processing sensitive client data. Regular audits and network monitoring can help detect unauthorised AI applications early, while enforcing multifactor authentication (MFA) across all AI and cloud systems strengthens access security.
Ongoing staff training is also essential to raise awareness of threats such as AI-driven phishing and deepfakes, while promoting secure prompt engineering and compliance with firm policies. Additionally, deploying AI-specific security controls, such as firewalls and endpoint detection systems that monitor prompt activity, can help prevent data leaks and manipulation through techniques like prompt injection.
How firms approach AI-related cyber security should also depend on their size, with differing budgets and operational complexity calling for different approaches.
“Large firms, with access to substantial financial and human resources, should focus on building enterprise-grade AI governance frameworks. These frameworks include custom zero-trust architectures that enforce strict access controls across global offices and high-volume data flows. Dedicated cyber security teams can deploy advanced tools, such as AI-driven threat detection systems, to monitor and secure complex AI integrations in real time,” Diznab said.
“[These] firms should implement extensive, ongoing cyber security training programs tailored to AI-specific risks, such as prompt injection or data leakage through unsecure APIs, using in-house trainers or external consultants. Robust incident response plans, supported by security operations centres, should include regular simulations to test resilience and ensure rapid recovery from breaches.”
In contrast, smaller and mid-sized firms should prioritise cost-effective, cloud-based security solutions that are more accessible to firms with limited resources.
“Automated threat detection and response platforms provide robust protection without requiring extensive infrastructure, while partnerships with specialised cyber security providers can bridge resource gaps,” Diznab said.
“These firms benefit from streamlined, user-friendly AI governance policies that approve a limited set of secure AI tools and prohibit shadow AI use through clear guidelines. Training should be concise and practical, focusing on common vulnerabilities like phishing amplified by AI-generated deepfakes, delivered through online modules or brief workshops to suit limited staff availability.”
Firms of all sizes should also remain vigilant for indicators that their AI tools or systems may be susceptible to cyber attacks, with a key warning sign being unusual activity in AI interfaces, which could suggest unauthorised access or data exfiltration attempts.
“Multiple failed login attempts on AI tool dashboards can indicate brute-force attacks targeting weak credentials. The presence of unauthorised applications, often linked to shadow AI, is another red flag, detectable through network monitoring that reveals unapproved software installations,” Diznab said.
“Outdated software integrations within AI workflows also pose risks, as unpatched systems are prime targets for exploits. Regular monitoring for these signs, combined with proactive audits, enables firms to identify and address vulnerabilities before they escalate into breaches, safeguarding data and maintaining client trust in a competitive legal market.”
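The network monitoring Diznab mentions for surfacing shadow AI can be sketched as an allowlist check over observed outbound domains. The domain names and keywords below are placeholders, not recommendations:

```python
import re

# Illustrative allowlist of approved AI vendors; the domain is a
# placeholder, not a recommendation.
APPROVED_AI_DOMAINS = {"approved-assistant.example.com"}

# Hypothetical tokens that suggest a domain is AI-related.
AI_KEYWORDS = {"ai", "gpt", "llm", "copilot"}

def unapproved_ai_traffic(observed_domains,
                          approved=APPROVED_AI_DOMAINS,
                          keywords=AI_KEYWORDS):
    """Flag observed outbound domains that look AI-related but are not
    on the firm's approved list (a crude shadow-AI signal)."""
    flagged = []
    for domain in observed_domains:
        d = domain.lower()
        if d in approved:
            continue
        tokens = set(re.split(r"[.\-]", d))  # split labels and hyphens
        if tokens & keywords:
            flagged.append(domain)
    return flagged
```

Feeding DNS or proxy logs through a check like this each day gives the early detection of unapproved tools that the audits above are meant to provide.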
Key tips for firms to protect themselves
In the 2025 landscape, implementing a “robust cyber security solution” is critical, Diznab said: a comprehensive cyber security framework strengthens data integrity, and automated incident response tools mean firms can “minimise downtime, preserving billable hours and maintaining operational continuity”.
“This resilience is vital as many law firms face annual cyber attacks, which can erode client trust if not addressed effectively. A strong security posture also fosters client confidence, a key differentiator in a market where clients prioritise data security when selecting legal providers,” Diznab said.
“Engaging professional cyber security consultancies ensures access to expertise in areas like penetration testing and AI-specific threat mitigation, addressing gaps that managed service providers (MSPs) cannot. This strategic investment not only mitigates risks like ransomware and data exfiltration but also supports scalable AI adoption, reduces insurance premium increases, and positions firms as innovative leaders in a competitive market, driving long-term growth and client retention.”
Finally, Diznab recommended five simple, effective security measures firms should be implementing in the AI era: regular employee training, enforcing multifactor authentication (MFA) across all AI platforms and cloud-based systems, implementing data encryption, developing a clear AI governance policy, and applying timely software patches and updates to AI tools, preventing exploitation by attackers targeting outdated systems.
“MFA adds a critical layer of protection by requiring multiple verification steps, significantly reducing the risk of unauthorised access from compromised credentials. Developing a clear AI governance policy is crucial to mandate the use of approved tools, prohibit unauthorised shadow AI, and require periodic security audits to identify potential weaknesses,” he added.
“These measures are accessible yet highly effective, enabling firms to mitigate risks, ensure regulatory compliance, and build client trust. To maximise protection, firms should engage professional cyber security consultancies rather than relying solely on MSPs, who often lack the specialised expertise needed for AI-specific threats.”
Lauren is the commercial content writer within Momentum Media’s professional services suite, including Lawyers Weekly, Accountants Daily and HR Leader, focusing primarily on commercial and client content, features and ebooks. Prior to joining Lawyers Weekly, she worked as a trade journalist for media and travel industry publications. Born in England, Lauren enjoys trying new bars and restaurants, attending music festivals and travelling.