‘Authentic’ law school assessments to combat use of ChatGPT to cheat
With artificial intelligence technology on the cusp of passing the US bar exam, universities in Australia are switching up their assessment tasks to stop students from using AI platforms to cheat.
Last November, AI research and deployment company OpenAI launched ChatGPT, a model that “interacts in a conversational way” and generates text in response to different prompts. Now, Australian law schools are concerned students are using ChatGPT — or similar — to write assessments.
As reported recently by The Guardian, ChatGPT is able to come up with comprehensive and coherent responses to university assignments, something that academics across the world have expressed concern over.
Deputy chief executive of the Group of Eight, representing Australia’s leading research-intensive universities, Dr Matthew Brown, said the universities would be “proactively tackling” programs like ChatGPT through increased staff training and more targeted tech detection strategies.
“Our universities have revised how they will run assessments in 2023, including supervised exams … greater use of pen and paper exams and tests … and tests only for units with low integrity risks,” he told The Guardian.
“Assessment redesign is critical, and this work is ongoing for our universities as we seek to get ahead of AI developments.”
A number of universities and law schools have already taken serious action, with the University of Sydney now citing “generating content using artificial intelligence” as a means of cheating within its academic policy.
Flinders University has also implemented a targeted policy against using AI or other computer software to cheat. However, dean of law Professor Tania Leiman and law lecturer Dr James Scheibner told Lawyers Weekly that many law assessments feature “authentic” elements that are hard to cheat on.
“Where undergraduate students are assessed by examination, they cannot bring their own computer to the examination centre. Rather than generic essay questions, many written assessment tasks involve solving authentic problem questions. Increasingly, in our undergraduate Law degree at Flinders University, assessments also comprise clinical skills such as reflective practice, oral advocacy, interviewing, negotiation, in-class exercises, presentation to clients and performance during clinical placement.
“In our core topics, Legal Innovation and Innovating Social Justice, students work in teams to develop concept proposals [that] are presented to industry experts. In our core topic, Law in a Digital Age, undergraduate students work with a not-for-profit client to develop simple open-source software applications, interacting regularly with those clients throughout the semester. In our core topic, Law in Action, undergraduate students experience the whole cycle of obtaining a legal job — applying, undergoing an interview, induction, professional development, performance appraisal and placement,” Ms Leiman and Dr Scheibner explained.
“All of these authentic tasks focus on developing and demonstrating professional human-centred skills and are not the sort of tasks that can be completed by chatbots based on large language models (LLMs). Even if students can use LLMs to help draft written components of these assessments, they will usually still need to demonstrate at least some of the sort of clinical skills described above to meet the standard required for a passing grade or above.”
This sentiment was echoed by an ANU spokesperson, who said that the university also has an academic integrity rule all students are expected to abide by.
“Academic misconduct is an issue of great concern faced by all universities. At ANU, we have robust measures in place to prevent academic misconduct and catch potential incidents. These measures include sophisticated assessment design and verifiable assessments such as nested assessments, assessments based on laboratory activities, practicums and fieldwork, timed assessments, oral presentations and invigilated examinations,” the spokesperson said.
“We continue to invest in these measures and systems. We are concerned about recent advancements in technology-assisted cheating, and we continuously monitor and update our policies and practices accordingly.”
While ChatGPT (currently built on the GPT-3.5 model) has been shown to make mistakes in legal work, such as summarising court cases, research has suggested that its yet-to-be-released successor, GPT-4, is likely to be able to pass the US bar exam.
However, Ms Leiman and Dr Scheibner said students using LLMs risk receiving incorrect and inaccurate outputs, while lacking the skills needed to actually identify those errors.
“As [Michigan State College of Law Professor Michael James Bommarito and Illinois Tech – Chicago Kent College of Law Professor Daniel Martin Katz] have demonstrated, a GPT-3.5 model was able to receive a passing grade on two out of seven topics in the US Multistate Bar Exam multiple choice section. However, there are significant variations in law across different Australian states, particularly in areas such as criminal law.
“If, for example, a student answering a question on facts set in one jurisdiction referred to legislation or case law from another, this would raise suspicions for the marker and may well be incorrect in any event. If a student used an LLM to generate an initial answer, and then amended that answer to focus on the correct jurisdiction or piece of legislation, this raises a question as to whether this is plagiarism,” the pair explained.
“Answers that refer to material that might be sourced by the LLM but not directly covered in class are also likely to raise suspicions for the marker, especially if it is an obscure source and more than one student references it. This might suggest academic integrity has been breached by way of collusion — leading to an interview with the student seeking an explanation. The consequences of breaches of academic integrity are serious and can be very significant for law students, even on occasions preventing them from being admitted.”
Penalties for breaches of academic integrity could also be detrimental to students’ future prospects, something the University of NSW attested to.
“The university is aware of the use of AI programs to assist and write papers for students who then submit the work as their own. Using AI in this way undermines academic integrity and is a significant issue facing all education and training institutions, nationally and internationally,” a spokesperson said.
“UNSW already requires students to submit their own work in assessments. Where plagiarism or academic misconduct are found, the penalties include suspension and permanent exclusion. Academics here and across the globe are constantly updating their approaches to setting assessment tasks to ensure the results reflect the achievements of our students.”
While Ms Leiman and Dr Scheibner said they were not currently aware of Flinders University students using ChatGPT or similar AI tools to cheat on their assessments, the technology is unlikely to fade and will only become more advanced.
“Law schools need to understand both the opportunities and the risks of new technologies — and equip their students to do the same. Law schools already use many strategies to reduce the risks of plagiarism — and effective assessment and pedagogical design is crucial here. Many of these strategies will also reduce the rewards for students seeking to use LLMs or other technological tools and could disrupt the use of essay-type and multiple-choice assessments in the near future,” they added.
“Strategies include setting new problem questions for each iteration, with authentic facts drawn from current unresolved events. We suspect that as LLMs improve, so will plagiarism-detection software.”