
Legal regulatory systems haven’t caught up to AI

As the digital economy and the real economy merge, many businesses are beginning to implement AI, giving rise to novel ethical dilemmas while regulatory frameworks lag behind.

Jess Feyder | 14 September 2022 | Big Law

The Modern Philosophies conference, hosted by the Governance Institute of Australia, showcased the timely discussion, “AI: Ethical Dilemmas”, with guest speakers Stela Solar, director at the National Artificial Intelligence Centre (CSIRO), and Sue Keay, robotic technologies lead at Oz Minerals and chair of the Robotics Australia Group. 

Greg Dickason, discussion chair and managing director, Pacific, at LexisNexis, opened the discussion by noting that the digital economy and the real economy are merging as businesses become more data-driven.

“Becoming data-driven businesses means that we’re measuring so much more. We become more efficient because of these measurements. We understand our businesses better, our customers better, our supply chains better — and we start to apply AI models on top of that,” he said. 

“We’re starting to realise that there are ethical dilemmas because of AI, and using AI models, especially if they’re not properly tested, if we’re not understanding their biases.”

Ethics always comes up in discussions about AI, noted Ms Solar, “because ethics guides anything that’s new”.

“When things are new, by default, our structures, our legal regulatory systems haven’t caught up,” she said. 

“There are already some legislative acts and instruments which are applicable to AI. The Privacy Act, Anti-harassment and Discrimination Act, corporate governance acts — all of these are applicable to AI.

“It’s critical for us to not only acknowledge the ethical principles of AI, but to examine how our legal frameworks are applicable to AI. AI technology is taking us into a new area which is not fully defined, or able to be governed, or specified in our regulatory instruments.”

Ms Keay added that “Australia has got an ethical framework, but for the people who are developing algorithms, there isn’t something they can easily do to ensure they are ethical”.

“For example, there was a company that was contracted to develop AI for a legal firm to help them with the task of identifying which cases they should take on, on the basis that they would prefer to take on cases that are quick to get through the courts and make the most money,” explained Ms Keay. 

“When they did that, the AI contractors discovered that if the company were to implement the AI models they developed, then they would exclusively take on cases for men. That was because, in our judicial system, cases for men tend to go through more quickly and the payouts are higher.

“Now that is not the fault of the AI developers, or the company that wants to maximise its profits, but what do they do with this information? Broadly, it’s becoming known that the judicial system has these inequities, but the development of this AI amplified that issue to the point where it was impossible to ignore.”
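The failure Ms Keay describes is the kind that a routine fairness audit can surface before a model is deployed. The sketch below is purely illustrative: the numbers, group labels and recommendation data are invented, not drawn from the case she mentioned. It compares a model's selection rates across groups and flags a disparate-impact ratio below 0.8, the "four-fifths" rule of thumb borrowed from US employment-selection guidance.

```python
# Minimal, illustrative fairness audit. All data and figures below are
# hypothetical; they only mirror the pattern described in the article.
from collections import defaultdict

def selection_rates(recommendations):
    """recommendations: list of (group, selected) pairs, selected is bool.
    Returns the fraction of each group the model recommends taking on."""
    selected = defaultdict(int)
    total = defaultdict(int)
    for group, chosen in recommendations:
        total[group] += 1
        selected[group] += int(chosen)
    return {g: selected[g] / total[g] for g in total}

def disparate_impact(rates):
    """Ratio of the lowest group selection rate to the highest.
    Values below 0.8 fail the common 'four-fifths' rule of thumb."""
    return min(rates.values()) / max(rates.values())

# Invented example: a case-intake model that overwhelmingly selects
# cases brought by men.
recs = ([("men", True)] * 46 + [("men", False)] * 4
        + [("women", True)] * 5 + [("women", False)] * 45)

rates = selection_rates(recs)
ratio = disparate_impact(rates)
print(rates)                             # {'men': 0.92, 'women': 0.1}
print(f"disparate impact: {ratio:.2f}")  # 0.11, far below 0.8
if ratio < 0.8:
    print("Audit flag: recommendations show disparate impact by gender.")
```

An audit of this kind does not fix the underlying inequity in the data; it simply makes the disparity visible and measurable, which is exactly what forced the choice the contractors faced in Ms Keay's example.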

In terms of the action that needed to be taken around these issues, Ms Keay said that “fortunately, the company took the ethical decision that although they could apply the algorithm and make more money, that wasn’t the way they wanted to run their business”.

“They wanted to ensure they were representing women as commonly as men,” she added. 

And according to Ms Solar, AI technology is built on the patterns, the data and the behaviour of who we are as humanity. 

“Some of that is legacy data that has biases in it, and when you put that data together into a system, of course, it might produce bias itself,” she said. 

“A lot of the ethical conversations around AI happen because it can achieve some of these human biases, or human errors, at scale. It’s an amplifying mirror that’s being held in front of us. You might have had one out of five people that might have been biased around a table, but now imagine that data being forever kept in a model that propagates that bias across scale.”

Ms Solar also noted that Australians have a higher benchmark of trust, perhaps because we have a natural leaning towards being more sceptical.

“That has two effects. It could slow down our innovation; we’ve actually seen slower AI adoption in Australia than in other countries,” she said. 

“But on the flip side, if we know that Australians have a higher benchmark of trust, we could use it to lift our standards and lead the global stage with trusted innovation from the start.”
