AI and informed consent: a balancing act
Every business is dealing with artificial intelligence in one way or another, and anyone wanting to incorporate such technology into their business practices needs to look carefully at informed consent, a lawyer has emphasised.
KPMG Law’s Kate Marshall has explained that while we don’t yet understand the full potential of artificial intelligence, “putting in place good frameworks and oversight and principles will certainly help” in any approach to regulation.
It is interesting, according to Ms Marshall, that some technology companies are coming out with their own principles to guide their development of artificial intelligence where legislation is lagging.
On this note, she highlighted informed consent as “one of the core privacy principles that exists within our legal framework, and human rights, and privacy laws around the world”.
As a privacy lawyer, she has always felt that people should be providing their informed consent for “everything”, but she now considers that “maybe we don’t need to”.
“Maybe there are some things that as a society we accept, that I don’t need the organisation to spell out in a lot of detail and really explain to me what’s involved here,” she said.
While sometimes it may be acceptable or appropriate for consent “to just happen”, Ms Marshall said that “in other circumstances we need to be really careful to make sure people understand”.
“And I don’t mean long, legal terms and conditions that everybody scrolls through and accepts, I mean, really understand,” she emphasised.
“Really informed consent [is] having the courage to have those conversations around ‘this is what I’m going to be using your data for, this is how technology is going to help you get a better outcome’ – are you ok with that?”
By stepping away from legal terms and conditions and being more transparent about what the use of information means, Ms Marshall said “maybe we’ll move to that [consent] in the more important side of things and a bit less of it in the less important things”.
“I’ll give you an example – you need to ring up the airline and change your ticket details, and you happen to be speaking to a bot – do you need to be told that? Do you need to be told that that’s some sort of piece of technology rather than a human?”
“But of course, the situation where you’ve got a machine operating on you or a machine making a diagnosis, that if they get it wrong could have a huge impact on your life – they’re very different scenarios,” she offered.
“We need to have a bit of a balanced approach depending on what it is that we’re talking about,” she considered. “What’s the impact?”
“You might feel very differently about the use of AI,” Ms Marshall stated.
What she hopes doesn’t happen is for Australia to immediately go into regulation mode – that is, trying to create a single regulatory regime deemed appropriate for everything.
“That’s very difficult to put in place effectively without stifling all of the opportunities that could be created out of the use of technology and the use of AI.”
Lawyers Weekly recently reported on the necessity of principles as the cornerstone of artificial intelligence regulation.