Why law schools should introduce compulsory AI training

Just as law firms are rapidly adopting AI to deliver faster, better client outcomes, the first law school to offer a foundational AI-in-law training component will set the new standard and be better for it, write Oliver Hammond and Rowan Gray.

23 June 2025, by Oliver Hammond and Rowan Gray

On 28 January, the Supreme Court of NSW issued Practice Note SC GEN 23 on the “Use of Generative Artificial Intelligence”. Shortly after, on 29 April, the Federal Court of Australia released a Notice to the Profession addressing “Artificial Intelligence Use in the Federal Court of Australia”. In parallel, major law firms have begun adopting AI tools such as the OpenAI-backed “Harvey” (a nod to everyone’s favourite Suits lawyer). As the legal profession embraces AI, it is only natural that law schools will need to follow suit.

While AI might be an unprecedented aid for lawyers, it comes with significant caveats for law students. Firstly, generative AI is prone to “hallucinations”, producing text that looks authoritative but may be false. For example, a recent study by academics from the University of Wollongong and the University of NSW (UNSW) tested the ability of an AI chatbot to assist in teaching a criminal law course. They concluded that even the most recently released generative AI models are “unpredictable” and raise “serious concerns about the reliability and trustworthiness” of AI, especially in respect of their suitability for legal education.

Law students consequently risk submitting seriously flawed and inaccurate work if they rely on AI summaries or answers without cross-checking sources (not to speak of the serious ethical problem of passing off AI output as one’s own “original” work or research). Secondly, large language models may be trained on data that is outdated or not jurisdiction-specific.

Despite these pitfalls, studies suggest that with careful use, AI can be a powerful aid rather than a crutch. Early experiments have tested AI’s performance on law assessments. For example, researchers at the University of Wollongong pitted ChatGPT against real law students in a final exam for a core subject. The initial results were humbling for the AI: without special coaching, several of ChatGPT’s exam answers “barely passed” or failed, with the best-performing AI script only scoring at about the 15th percentile of the class. However, when the lecturer applied prompt engineering techniques to help ChatGPT generate better answers, the AI’s performance improved markedly: two AI-generated answers scored a creditable 73 per cent and 78 per cent (roughly a distinction range). Tellingly, the markers in that study did not suspect those high-scoring answers were written by a chatbot; the tutor admitted they “wouldn’t have caught it” even if it had been an online submission.

One seemingly innocuous and common use of generative AI is the summarisation of class readings. This usage appears to have particularly spiked with OpenAI’s introduction of a file upload function to its free ChatGPT offering. Law students are under extremely high levels of stress and have many competing demands on their limited time. It is not surprising, then, that a tool that offers to summarise 90 pages of readings on civil procedure in perhaps a couple of pages would be a very attractive prospect.

On the one hand, it would be disingenuous for law schools, obviously conscious of the high expectations they have for their students, to feign shock that students would do what they need to stay afloat. Yet, on the other hand, the ability to effectively and efficiently digest large chunks of information is part and parcel of legal practice. Indeed, judgment-reading is a skill – and one that often, for example, sets the best appellate barristers apart from the rest. It is especially important that early-stage law students do not come to rely on AI as a crutch for keeping up with their workload in lieu of developing the skills that will be necessary for a successful career.

Faced with these challenges, universities across Australia have taken various approaches. In early 2023, the University of Sydney announced it would return to more handwritten, invigilated exams to thwart students from using AI tools surreptitiously. At the Australian National University (ANU), assessment designs were overhauled to reduce opportunities for AI-assisted cheating – with a greater emphasis on lab-based exercises, oral presentations, and timed in-person exams.

Yet, even in these early responses, one could sense an unease about whether such analogue solutions were sustainable. It is clear AI is here to stay. Some commentators have also noted that technical countermeasures (like AI-output detectors or watermarks) would likely fail or be quickly circumvented, concluding “it’s an arms race that’s never going to finish, and you’re never going to win”.

Other law schools have embraced AI, attempting (with varying degrees of success) to integrate it into their educational approach. However, in our opinion, the main problem with an AI-integrated approach to legal education is that law school should train a student’s professional judgment about what the law is or should be, and how it would or should be applied in particular cases. That is precisely the task that AI is so ill-suited for. AI might be a suitable assistant for these tasks, but it cannot and should not do the job for a student. The line between the two is fine. Nevertheless, in most use cases, it is not viable for a university to monitor, delineate and enforce this boundary – it is largely a matter for the individual student. For example, what is to stop a student from typing a lecturer’s or tutor’s in-class question into ChatGPT and regurgitating the output as their own answer? The student who does so may not act with nefarious intent – they might simply be anxious about getting it right and worried that a wrong answer would cause them to lose face. Of course, the student loses out in the long run, but possibly not by much.

Law schools find themselves in an unenviable position, with no easy way forward and little desire to backtrack. The goal is to stop the current challenges from dragging on into a prolonged stalemate.

Law schools have at least made a good start by making it unfailingly clear that substituting a student’s own written work with unattributed AI-generated text is unacceptable. Alas, who is to enforce the rules on AI use? Generative AI produces outputs that, most of the time, are indistinguishable from those of a human. Ultimately, AI use in universities should be limited; in-person, invigilated exams completed on a computer (rather than handwritten) appear to be the most sensible option. Law schools should ensure that the majority of assessments take place in the exam hall, where a student’s individual reasoning, not their prompting skill, is what is tested. There, under the pressure of timed conditions, students must still identify the legal issue, apply the rule, analyse the facts, and form a coherent conclusion. This would restrict AI use to more appropriate tasks: summarising readings, consolidating notes, or helping with practice questions. Most importantly, it ensures that the most valuable part of a law student – their mind – continues to develop.

So, is this the end of the midterm research essay? At least in its AI-restricted form, yes. A minority of assessments should be completed outside the exam hall, and for these, law schools should permit the use of AI (many students are already using it regardless). Naturally, some students will rely on AI to generate entire essays. But if outputs from generative AI tools are being copied into submissions unreferenced, unchecked, and unedited, then something has gone seriously wrong in how students are being taught – or not taught – to use such a tool.

And here’s a bold prediction: just as law firms are rapidly adopting AI to deliver faster, better client outcomes, the first law school to offer a foundational AI-in-law training component will set the new standard and be better for it.

Oliver Hammond is a fourth-year undergraduate law student at UNSW and co-founder of The Australian Law Student. Rowan Gray is a fourth-year undergraduate law student at UNSW and former editor of the UNSW Law Journal.
