As AI continues to reshape nearly every aspect of university life, from student assessment to administrative operations, in-house teams must strike a delicate balance between harnessing its potential and safeguarding academic integrity while meeting regulatory requirements.
Speaking on a recent episode of The Corporate Counsel Show, Michael Adams, a former law school dean and general counsel, emphasised the growing influence of AI technology in the university setting.
He also underscored the critical importance of involving universities’ in-house legal teams in the conversation about establishing the right balance for student use of these platforms.
Drawing on his background as a legal educator, Adams explained how AI tools are increasingly being misused by students for dishonest purposes, particularly in the form of plagiarism and academic misconduct.
“I used to teach and develop technology and the law as a key subject, as a core capstone unit at the University of New England. One of the things I quickly caught on to was that there is an inherent tension between the use of technology for cheating purposes, i.e., plagiarism, misconduct, etc,” he said.
However, he reflected that AI has evolved from being a source of concern around misconduct to a powerful tool that is now deeply embedded in professional life, and it is no longer something institutions can afford to ignore.
“But at the same time, since pretty well ChatGPT 3 and 4 and other versions, what we’ve noticed is that it’s become such a new normal part of the working life, whether it be in the law firm or whether it be just in life, that it’s actually artificial to say we’re going to ignore it,” he said.
“What sort of happened, in my view, is that first of all, TEQSA has done a review of all university policies, and most universities have changed the wording to make sure they capture generative AI. But in the work environment, I think it’s different, and I think there’s a lot of discussion.”
Adams also reflected on how several leading universities around the world are taking proactive steps to safely integrate AI into their academic environments by developing controlled platforms.
“I want to give credit to Sydney University – they have developed a closed network called ‘Cogniti’, and I know UNE and other universities are starting to use that particular system because it can be controlled and we can track the prompts that students are using, and so, we can train them to use AI,” he said.
Given the complexity of university governance, Adams emphasised that in-house legal teams play a vital role in shaping these frameworks to navigate the many layers of policy and compliance.
“One of the reasons I think it’s important is in the university environment, there are literally hundreds of policies and procedures and internal checks and balances, from decision making to course approvals to finance, and on it goes,” he said.
“Often for academics across the levels, up to the dean and then up to the executive level, so your pro-vice-chancellors, deputy vice-chancellors, sometimes, it’s quite murky to get through that dense level of compliance with those policies.”
For in-house teams working to address the challenges AI presents, Adams stressed the importance of taking measured steps and not being discouraged by inevitable hurdles.
“Taking steps, and you can’t run until you can walk. I think the AI journey has been such a huge take-up of this technology ... but the reality is you do struggle to keep up with the latest changes,” he said.