Can robots commit crimes?

AI may soon reach a level at which a robot could form criminal intent, act on that intent, and do what, if done by a human being, would be a crime. This eventuality raises serious legal questions, argues one partner.

Jerome Doraisamy | 25 May 2023 | SME Law

At present, robots are not capable of committing crimes on their own. Because criminal legislation pertains to human acts, a robot cannot be a criminal.

However, a robot can be used to carry out criminal activity, in the same way a vehicle can be used to break the speed limit, suggests Peter Francis, partner at Melbourne and Canberra-based boutique FAL Lawyers.

“We would not consider the vehicle to be breaking the law, rather the driver. The same analogy could be applied to robots involved in a crime,” he noted.

“Artificial intelligence could one day reach a level whereby a robot would have the capability to form a criminal intent, act on that intent and do what, if done by a human being, would be a crime.”

“That’s getting very close to the ability to form intent, which, if the intent is criminal, is one half of a crime,” he said.

In conversation with Lawyers Weekly, Mr Francis detailed that, as robots become more autonomous, questions should and inevitably will arise regarding criminal liability for their actions.

“Who should be held responsible if a robot commits a crime? Should it be the robot’s owner, manufacturer, programmer, or the robot itself?” Mr Francis said.

“If we reach a point where we determine a robot can act on intent, the question would then be how we view the robot from an ethical standpoint. Would we simply destroy the machine in the same way we would destroy a faulty vehicle, or do we then view the robot as more than a machine, rather as a being with emotions?”

This would be, Mr Francis mused, the film I, Robot coming to life.

“A further question would be, would a robot ever have a motive to commit a crime? Though the law only considers intent, we should acknowledge that most crimes are committed due to a motive,” he noted.

In light of the continued rapid advancements in AI, Mr Francis added, it is critically important that laws and regulations keep pace to ensure that robots are used in ways that are safe and ethical.

“I believe the risks posed by robots are such that consideration should be given to the adoption of a compulsory insurance scheme to provide no-fault cover for all parties injured by them, similar to that operating for motor vehicles,” he opined.

While the profession — and broader society — may think that the current challenge is a new one, there are “numerous examples”, Mr Francis identified, of machines performing an agency-like function.

“Vicarious liability does already cover the actions of a robot, holding the owner accountable in the same way as an employer is liable for the misdeeds of their employees. We can look back on existing practice to draw appropriate regulations in order to protect us in the use of new technology. For example, it could be prescribed that all robots must be manufactured with an inbuilt black box, as we do with aeroplanes,” he detailed.

“More broadly, we should also consider the rise in artificial intelligence used to carry out professional work, including in the legal sector. There is a need to establish clear guidelines, regulations, and ethical frameworks for the use of AI across all professions.”

This is particularly so, Mr Francis concluded, “when we consider the need for human expertise and judgement, as is the case with the law”.
