
Ethical use of AI ‘just a slogan’

An academic has urged in-house counsel to exercise caution when using generative artificial intelligence (AI), musing that he is unsure of what the ethical adoption of AI currently looks like.

Malavika Santhebennur | 05 March 2024 | Corporate Counsel

Ahead of his session at the Corporate Counsel Summit 2024, Sebastian Sequoiah-Grayson – Unisearch expert and senior lecturer in epistemics at the School of Computer Science and Engineering, University of NSW – said in-house legal teams should be cautious for now when using generative AI tools like ChatGPT.

ChatGPT and similar tools have been making global headlines since 2023, inciting debate around their limitations, opportunities, and ethical considerations, and prompting the legal profession to consider how these tools could change the way lawyers carry out their functions.

However, Sequoiah-Grayson told Lawyers Weekly that in-house counsel using the tools must cross-check the outputs of generative AI against verifiable sources.


“The promised payoff is that the direction of the initial search for legal facts could be optimised very quickly from the outset,” he reflected.

“However, it does not absolve lawyers of verifying those facts in any way. Generative AI makes suggestions, and given the consequences of decisions within the legal domain, you’d want to exercise a huge amount of caution before you treat those suggestions as actual answers.

“You’d still need to check the official records, in the same way that I tell students: if they’re using generative AI to find out anything about any topic, they still need to verify those claims in peer-reviewed academic sources. That’s how you do research.”

Law Quarter partner Jacqui Jubb wrote in Lawyers Weekly last year that while she understands concerns about relying on AI as an accuracy tool due to its potential for errors, humans make errors, too.

Moreover, generative AI has the ability to analyse large amounts of data in a fraction of the time it would take a human to undertake the same tasks. It can sift quickly through legal databases, previous case studies, and statutes to provide lawyers with an analysis or summary, she said.

At the summit, Sequoiah-Grayson and a panel of speakers will decipher how in-house legal teams could harness the power of generative AI tools and outline the legal intricacies associated with their use.

‘Never believe a sales pitch or slogan’

As for ethical considerations that accompany the integration of AI into organisations, Sequoiah-Grayson said it is currently “just a slogan” and warned lawyers to “never believe a sales pitch or a slogan”.

“I’m not sure what ethical adoption of AI looks like yet,” he said.

“The onus is on those who use that slogan to legitimate its use by saying something substantive about the systematic use of generative AI in the legal world in such a way that its use is ethical.

“By ethical, I mean something more robust than just coherence with some guidelines. Rather, it should be the types of parameters that you and I would consider robust insofar as self-attributions of moral behaviour or concern go.”

Lawyers must consider the ethical use of AI from a rich moral and ethical perspective rather than as an exercise in compliance, Sequoiah-Grayson insisted.

He made a distinction between corporate compliance with guidelines and genuine moral scrutiny, stating that in-house lawyers are well placed to apply the latter, given that the Australian legal profession is grounded in moral argument and reasoning.

The use of generative AI requires oversight and transparency, along with public discussions that help decision-makers determine what effective oversight should look like to ensure responsible AI usage.

FAL Lawyers senior associate Julian Ryan echoed this view in Lawyers Weekly last year, where he said ethical guidelines are necessary to regulate the use of AI in morally challenging scenarios.

He added that it is critical for multiple stakeholders (including government, industry experts, and the community) to become involved in developing these controls to ensure that they align with the values of Australian society.

Ryan said that as AI technologies rapidly advance and evolve, they will present new privacy challenges and, as such, warrant considering whether existing regulations adequately address these concerns.

Similarly, Sequoiah-Grayson said ethical considerations around AI are a constant negotiation that will engender ongoing moral and ethical discourse.

“It’s not a list of moral facts that remains unchanged,” he said.

Instead, he said, all stakeholders must continue to discuss how to implement oversight and guardrails for the use of AI, while revising and updating guidelines and regulations to keep pace with changes in the technology.

Sequoiah-Grayson concluded: “Never stop asking questions, and never be afraid to be wrong about your predictions because fear inhibits risk-taking. Risk-taking is necessary for progress, both morally and technologically.”

To hear more from Sebastian Sequoiah-Grayson about how in-house counsel could use generative AI ethically and increase efficiencies in their business, come along to the Corporate Counsel Summit 2024.

It will be held on Thursday, 2 May, at The Star, Sydney.

Click here to book tickets and don’t miss out!

For more information, including speakers and agenda, click here.
