Uber fatality highlights AI accountability issues
Last week’s determination by prosecutors that Uber would face no criminal liability in the death of a pedestrian hit by one of its self-driving vehicles in Arizona made headlines around the world, prompting numerous questions about technology liability.
Lawyers Weekly recently spoke with KPMG’s technology partner Kate Marshall about the regulation of motor vehicles, machine learning and artificial intelligence, and the accountability questions currently being raised by the industry and society more generally.
According to a letter written by Yavapai County Attorney Sheila Polk regarding the March 2018 incident in Tempe, which saw a woman pushing a bicycle hit by an autonomous vehicle with a back-up driver, “there is no basis for criminal liability for the Uber corporation arising from this matter”.
Referring the matter to Arizona’s Tempe Police Department, the letter said further investigation was needed to consider what “the person sitting in the driver’s seat of the vehicle would or should have seen that night given the vehicle’s speed, lighting conditions, and other relevant factors”.
The attorney’s findings echo the principle Ms Marshall considered generally accepted, that “there always has to be that human element”.
She said it was “interesting that we have a higher expectation around technology and AI than we do around humans”.
“I know that when I drive a car, I’m not a brilliant driver, and we all get distracted,” she conceded.
“So we accept a level of risk around those things, yet when it comes to AI or technology, we expect a higher standard.”
In the Arizona fatality case, as reported by the New York Times, the Tempe Police Department had previously released a report that said the safety driver was streaming the television show “The Voice” on her phone in the minutes leading up to the crash.
Ms Marshall said that ultimately, having somebody there who can step in is about accountability, as well as not letting artificial intelligence “go out and make decisions without humans having any control over the outcome”.
To provide a different example, Ms Marshall queried where accountability lies when a robot performs surgery, compared with a human surgeon.
“We know we can sue a surgeon, who do we sue with a robot?”
“Is it the coder? Is it the producer of that application or that system? Is it the one that controls the organisation behind it that’s actually putting it out there in the market? Who is it?” she questioned.
Ms Marshall considered it interesting, “the fact that we accept a higher standard [for technology], that we’re more nervous about the risks associated with the robot doing it than we are about the surgeon who we can have a conversation with and that human interaction”.
Ms Marshall has called discussion around artificial intelligence and machine learning “a really important conversation” for Australia to have.
She noted that it feeds into conversations (like the Uber scenario) going on at a global level around what “introducing AI at scale really mean[s] for us as businesses, for us as communities, [and] what does that mean for the future of our children, and should it be regulated?”
Ms Marshall acknowledged that in years past, regulation only caught up once a piece of technology became mainstream, citing the example of the first motor cars.
“The hope of today is that we don’t do that,” Ms Marshall said, and that instead there are broader conversations around how artificial intelligence should be dealt with.
She considered the current rules surrounding the testing of driverless cars and autonomous vehicles to be too reactionary themselves, “with no overall framework or agreement around the approach to artificial intelligence being used”.
From a legal perspective, Ms Marshall said “the accountability piece is not so simple as ‘you’ve done something and it’s impacted on me, therefore I’ve got a right to sue you and recover damages’”.
She said there is a need for debate around who will be accountable for artificial intelligence, because “for there to be trust in AI, there needs to be a degree of accountability”.
Lawyers Weekly has previously reported on the need for principles to be used in artificial intelligence regulation.
Uber's associate general counsel and head of legal in the Asia Pacific, Katrina Johnson, will be speaking at Lawyers Weekly's inaugural Corporate Counsel Summit later this month about how to manage a team operating in different jurisdictions.