With demand for AI skills growing and firms racing to embed AI into workflows, the profession now sits at a pivotal moment: moving beyond the hype cycle to a stage where real capability building, governance, and ethical integration are vital.
The rapid rise of generative and agentic AI has pushed the legal profession into a new era of promise, pressure, and profound scrutiny.
Agentic AI, or AI “agents”, can make decisions and act autonomously, often without human oversight, amplifying risks around data security and system reliability.
McKinsey & Company has also predicted that agentic AI systems could “help unlock $2.6 trillion to $4.4 trillion annually in value across more than 60 GenAI use cases”, despite the heightened risk involved.
However, data released last year revealed that only 5 per cent of mid-size firms had a “comprehensive” level of AI maturity, despite almost 100 per cent of Australian legal professionals using AI in some capacity.
As uptake increases, core legal skills will continue to evolve and “AI skills” will grow in demand, notwithstanding ongoing concerns that AI systems remain largely unregulated.
Bridging ethical and governance gaps in AI
As AI tools become increasingly integrated into legal practice, the pressing challenge for law firms is not just technical adoption, but ensuring lawyers have the knowledge, confidence, and governance skills to use AI ethically and responsibly.
Lander & Rogers has observed this through its LawTech Hub, with chief innovation officer and transformation lead Michelle Bey saying that any gaps the firm has identified tend to relate more to how AI, including agentic AI, should be used in legal practice.
“Data from using enterprise-deployed AI at Lander & Rogers over the last two years has identified that the gaps aren’t technical; they’re about broad AI fluency and the confidence to use and validate tools effectively,” she said.
“The real capability gaps are around AI governance: managing confidentiality and privilege, ensuring data security, and keeping human judgment central to every process. These are non-negotiable in legal practice.”
The use of GenAI in affidavits, witness statements, and other evidentiary documents has also been banned in the Supreme Court of NSW since the start of this year’s first law term.
In September, Chief Justice Andrew Bell said some of his concerns around the use of AI in courts had since increased, after a Victorian lawyer became the first in Australia to face professional sanctions for failing to verify citations generated using AI-assisted legal software.
Having previously emphasised the importance of practitioners understanding AI and its limitations, Law Society of NSW president Jennifer Ball said the society welcomed the “ongoing leadership” of Chief Justice Bell in reviewing and refining the Supreme Court’s approach to using AI tools.
“The continuing and rapid development of generative and agentic artificial intelligence (AI) presents valuable opportunities, but also creates serious risks for the legal profession, its clients, and indeed self-represented litigants,” she said.
“AI offers much potential, but solicitors continue to be responsible for their work, whether it is prepared for court or clients. This helps to ensure the integrity of, and public confidence in, the legal system.”
Will the hype around AI die down?
Following the Supreme Court ban, areas such as litigation, criminal, and family law will need strict controls around AI, prompting a more cautious approach to its use.
Legora APJ lead Heather Paterson said court restrictions target irresponsible AI use, rather than AI itself.
“Sensitive areas like family law may see more caution around client-facing applications, but the real divide is between transparent and reckless AI deployment, not between practice areas,” she said.
However, Major Lindsey & Africa managing director Ricardo Paredes said that other practice groups, such as transactional and advisory practices like corporate/M&A, banking and finance, real estate, and employment, will demand more, not less, in terms of AI skills and implementation, particularly as more organisations embrace agentic AI.
“AI tools will help these teams analyse, summarise, and draft at speed – and they don’t run into the same evidentiary risks that worry the courts,” he said.
“We don’t think the hype will die down – we think it will become more practical. As agentic AI becomes stronger, firms will move beyond experimentation and start using AI as part of normal legal infrastructure, just like email.”
Similarly, Paterson added that while there is “hype” around AI, it’s unlikely to slow down as the profession progressively embraces emerging technology.
“There’s hype around legal AI because there’s an expectation now, especially from the clients of firms driving adoption, that AI integration should be standard practice,” she said.
“Repetitive tasks like document review and data extraction are already being automated quietly in the background. Some firms may build proprietary tools for competitive advantage, but potentially more interesting is the appetite for collaborative AI that enables firms to work directly with clients in a secure AI-powered workspace.”
Autonomous agentic AI tools have also become more common, and more hyped, in litigation practices, particularly within e-discovery and chronology preparation.
According to LegalScout co-founder Shamik Ghosh, this is likely to quickly spread to other practice areas, with advanced AI platforms to “become ubiquitous for all practice groups to leverage”.
“In the case of agentic AI in legal, we are probably at the peak of the hype cycle with promises of workflow optimisation, analysis, drafting and reviews done by AI. This will start to normalise and become embedded into processes throughout 2026, and those firms [that] invest the time and resources in shifting the mindset of their business practices will lead the transformation that’s coming for the entire legal industry,” he said.
“To drive this transformation, law firms will need to rethink their entire operating model and desired capabilities, including future talent requirements.”
For example, law graduates of the future will be required to be “AI natives” and manage, validate and check outputs of AI agents in a secure environment, Ghosh predicted.
“This will lead to a shift in paradigm in the organisational model where, instead of the typical pyramid management structure, there will be AI-led teams with strong AI-capable leaders at the top, a broad middle layer and individuals who are supervising one or many AI agents to work smarter and faster,” he said.
Practical steps for firms moving forward
To ensure AI is used safely, ethically, and effectively, law firms must implement practical, hands-on training and upskilling programs that combine governance, workflow integration, and continuous learning for all lawyers and staff.
This should look like “mandatory AI basics” training for all lawyers, according to Ghosh, including how models work, where they fail, and why confidentiality and privilege can be at risk.
“There should be clear AI governance training tied to firm policies, court rules and regulator guidance, so people know what is allowed in real matters. Hands-on workshops with firm-approved tools are essential, so lawyers actually practice prompts, checking outputs and documenting how AI was used,” he said.
“This will lead to building trust in the AI systems, which is critical to adoption. Partners and innovation leads then need upskilling on vendor selection, supervision duties, and how AI choices affect pricing, quality, and liability.”
Lander & Rogers has implemented mandatory AI governance training, while practice-specific programs are tailored to the workflows and needs of each team. Hands-on AI platform development sessions also run in collaboration with founders from the firm’s LawTech Hub, to give lawyers a better understanding of how new technology is built and an opportunity to shape its design.
“We’re also exploring the skills lawyers will need next through the co-creation of an interactive educational platform with Monash University, designed to help grads and clerks test and grow their AI and tech literacy from day one,” Bey said.
Additionally, the in-house AI Fluency Program provides all staff with a shared foundation in AI awareness across a series of modules, ensuring everyone understands how AI fits into legal and operational contexts, the associated risks and ethical considerations, and the firm’s approach to responsible AI use. Training that is “practical and both governance- and ethics-led” is vital for modern law firms, added Bey.
“AI has already fundamentally changed the legal profession, and it continues to reshape it, almost daily,” she said.
“Firms that want to stay competitive and relevant will need to embed AI into culture and workflows, supported by governance, ethics, and continuous learning.”