Government considers laws for ‘high-risk’ AI

The Albanese government is considering AI-specific laws to act as a “mandatory guardrail” against the use of the high-risk technology.

Naomi Neilson | 18 January 2024 | Big Law

To curb community concern around artificial intelligence (AI) as the technology develops, the federal government proposed either updating existing laws or creating AI-specific laws to act as a protection against future hazards posed in high-risk settings.

Minister for Industry and Science Ed Husic said his biggest concern was generative AI, which can now create content “that you think is real and organically developed but it’s come out of a generative model”.

“The big thing I’m concerned about is not that the robots take over, but that disinformation does,” Mr Husic told reporters.

In a statement accompanying his comments, Mr Husic said the Labor government “wants safe and responsible thinking” solidified within laws as AI continues to be “designed, developed and deployed”.

The creation of these new laws was proposed in the government’s interim response to the Safe and Responsible AI in Australia consultation, which acknowledged that while there is “immense potential” in AI, Australians want stronger protections in place.

“Australians understand the value of artificial intelligence, but they want to see the risks identified and tackled,” Mr Husic said.

The government also proposed an advisory body to support the development of mandatory guidelines.

This move was welcomed by RMIT University’s Professor Lisa Given, who added it would “complement other initiatives that the Australian government has taken recently to manage the risks of AI”.

Law lecturer at RMIT, Dr Nicole Shackleton, also welcomed the “promising steps” taken by the federal government but noted there are still several gaps that would need to be filled by legislation.

This included how AI is used in sex and intimate technologies, “which is a growing market internationally and in Australia”.

Dr Shackleton said other than the government’s focus on AI-generated pornography – or “deepfake pornography” – the interim report has shown “little interest” in issues of sexual privacy, the safe use of AI in sexual health education, or the use of AI in sex technologies such as personal and intimate robots.

“It is vital that any future AI advisory body be capable of tackling such issues, and that the risk-based framework employed by the government does not result in unintended consequences which hinder potential benefits of the use of AI in sex and intimate technologies,” Dr Shackleton said.

Speaking to Lawyers Weekly late last year, UNSW Law & Justice’s Professor Michael Legg said generative AI can be trained to read legislation, judgments and draft documents, but it will never master human-centric and soft skills utilised by actual lawyers.

“There may be an AI-enhanced lawyer who is more efficient and effective at their job, but it’s still the human skills that distinguish them from the machine and to continue to add value,” he said.

“It’s the ability to address the novel and the uncertain problem through practical wisdom and judgement, but also to listen and provide empathy.”
