
‘With great power comes great responsibility’: The emergence of AI washing

Incorporating artificial intelligence (AI) into business operations offers numerous advantages, giving organisations a competitive edge in a rapidly evolving market. Promoting the use of AI can also depict a company as high-tech and cutting-edge. However, some organisations may be promoting their use of AI without actually implementing it – resulting in “AI washing”.

Lauren Croft 22 May 2024 Big Law

Over the course of last year, AI tech and platforms like ChatGPT made global headlines and prompted various debates across the Australian legal profession.

You can read Lawyers Weekly’s full coverage of ChatGPT and other AI platforms and what lawyers need to know here.


While there is still some fear in the profession that the rise of such tech could mean the beginning of the end of lawyers, clients are demanding efficiency more than ever. But amid concerns that BigLaw firms may not be utilising AI to its full potential, a new term has emerged: AI washing.

Comparable to greenwashing – the act of making misleading statements about the environmental benefits and sustainability of products or services – AI washing occurs when organisations inflate and exaggerate their capabilities around AI to attract customers and stakeholders.

In the US recently, the Securities and Exchange Commission (SEC) put the term on the regulatory map when it brought and settled charges against two investment advisers – Delphia and Global Predictions – for making false and misleading statements about their use of AI.

Delphia allegedly made misleading statements across various documents, including in SEC filings, claiming it had more AI and machine-learning capabilities than it actually did. According to the SEC, Global Predictions made similar claims on its website and social media, allegedly claiming to be the “first regulated AI financial advisor”.

Neither investment adviser admitted or denied the SEC’s findings, but both agreed to pay civil penalties of US$225,000 (Delphia) and US$175,000 (Global Predictions) and consented to the entry of orders finding that they each violated US federal law and ordering them to cease and desist from “AI washing”.

In a statement at the time, SEC chair Gary Gensler said the SEC found that “Delphia and Global Predictions marketed to their clients and prospective clients that they were using AI in certain ways when, in fact, they were not”.

“We’ve seen time and again that when new technologies come along, they can create buzz from investors as well as false claims by those purporting to use those new technologies. Investment advisers should not mislead the public by saying they are using an AI model when they are not. Such AI washing hurts investors,” he said.

This underscores the need for greater regulation around AI, Angel Zhong, associate professor of finance at RMIT University, wrote in a recent article on The Conversation.

“[AI washing] involves dressing up ordinary tech with fancy AI buzzwords such as ‘machine learning’, ‘neural networks’, ‘deep learning’ and ‘natural language processing’ to seem more innovative than they actually are,” she said.

“If investors are skeptical [sic] about AI, they’re less likely to invest in legitimate AI-powered solutions. This can slow down the development of truly groundbreaking technologies. It is crucial to deal with AI washing, echoing the cautionary tale of the dot-com bubble.”

Regulatory risks and other impacts

While AI is still relatively new to the market, its regulation is currently left largely to existing privacy, intellectual property and consumer protection laws – regulations that are struggling to keep up with the rapid advancement of the technology.

However, that simply means regulation is on its way, according to Hicksons partner David Fischl.

“AI in law is our generation’s Iron Man suit. With great power comes great responsibility. This means regulation is on its way. In the past, the ACCC and ASIC took a proactive approach towards combating greenwashing. Given the commercial benefits of using buzzwords in marketing – we expect similar initiatives against AI washing,” he said.

“Lawyers must be prepared for the increased regulation by ensuring they are well versed in how AI technology works and staying informed on policy and regulatory changes. Only these lawyers will be properly equipped for this brave new world.”

As the ESG sector has expanded in recent years, investor and consumer appetite for environmentally friendly purchases and investments has grown – and with it, the level of (and incentive for) greenwashing.

As such, regulatory bodies have launched various investigations, with a number of cases lodged since the Australian Securities and Investments Commission’s (ASIC) first court action in February last year, and regulatory crackdowns on greenwashing are expected to continue.

Bird & Bird partner Hamish Fraser said that, like greenwashing, AI washing falls under “misleading or deceptive conduct” and that lawyers across the board should keep an eye out for it.

“AI washing will fall foul of the same laws as greenwashing and any other misleading conduct. Omitting information about the AI product and making statements about the future of the AI product without reasonable grounds could also amount to misleading and deceptive conduct. The consequences for breaching these prohibitions include pecuniary penalties and, in some cases, criminal liability,” he said.

“In much the same way as lawyers assist their clients with advertising clearance work, they should now keep an eye out for statements about AI and how it is used and ensure there is a proper basis to make the representation and that the representation does not otherwise contain any language or expressions that are, or are likely to be, misleading or deceptive.”

But while some organisations use AI “buzzwords”, Fraser said it isn’t “as prevalent nor as potentially reputationally damaging as greenwashing” currently.

“AI is a leading-edge technology and evolving fast. Unlike environmental claims, there is arguably a wider scope of ambiguity in determining whether AI-related claims are indeed misleading or deceptive. There isn’t a single clear definition of AI in Australia, despite there being various ‘buzzwords’ used internationally. Businesses want their service providers up to date and using the best technology, so where the so-called buzzwords have uncertain meanings, using such words may not increase the value or perceived value of the product. A decision to buy a product is likely to still be based around the product itself,” he said.

“That’s not to say there won’t be misleading statements around AI, but greenwashing is different. With greenwashing, a representation around environmental or sustainability credentials, that may not itself be part of the service, will often remain untested and has the potential to give a business a greater and potentially an unfair advantage.

“A potentially more contentious area around AI could be AI hushing, or not disclosing when AI is being used. There is already a significant nervousness around the use of AI to make a decision about a person, for example. Therefore, some businesses may prefer to not discuss how they use AI, and that could, of course, be misleading – through silence or the extent to which AI is being used.”

Best practice moving forward

Law societies and bar associations across Australia have issued guidance on the responsible use of AI in the legal profession.

In July last year, the NSW Bar Association released guidelines around barristers’ use of AI, warning that large language models were still evolving and remained under intense legal scrutiny.

And in January this year, the Law Society of NSW announced that it would establish an expert taskforce to guide lawyers in the state through the challenges of AI, including the issue of AI washing in the profession.

At the time, Law Society of NSW president Brett McGrath told Lawyers Weekly that both he and the Law Society understand that those in the community “want a trusted source of expert advice and assistance to help guide them through, so that [lawyers] can then, in turn, guide and advise their clients, and also guide the court, on new and emerging areas of jurisprudence which will come through with AI, self-driving cars, for example, and also how to deal with things such as deepfakes in evidence”.

But while there will undoubtedly be instances of misleading conduct around AI washing, Fraser said that “AI hushing” may, in time, attract more attention.

“There is already a concept known as ‘greenhushing’. Greenhushing refers to businesses that purposely keep quiet about their sustainability goals, either for fear of being found liable for greenwashing or to undersell their sustainability commitment to appease certain members of the public. In the same vein, AI hushing may be defined as businesses that purposely keep quiet about their use of AI, to deliberately undersell their use of AI due to fear of losing customers (or their trust). Of course, ironically, the risks to their brand, should they be found out to be using AI without disclosing it, are potentially far worse,” he said.

“AI hushing is a concern not just because it means omitting information that may constitute another form of misleading and deceptive conduct but also because it is a failure to be transparent in accordance with the various (and growing number of) guidelines.”

For best practice around these emerging issues, it will be important for lawyers using AI to disclose how AI will be used, the benefits and risks associated with the use of AI, and how those risks are being managed. As such, transparency is key.

“Lawyers need to be careful when promoting the use of AI in the delivery of legal services. AI washing has the potential to erode the sacred trust that we have with our clients. It is crucial that we, as lawyers, do not lose sight of this. Transparency is key in client communication. When it comes to promoting the use of AI in legal services, legal professionals should be honest and clear. In the future, it will be common for lawyers to deliver products and services using the help of AI. Our team operates this way today,” Fischl said.

“Lawyers need to be well versed in how AI technology works, including risks and mitigation strategies. Only these lawyers will be properly equipped to guide clients on purchasing AI-powered products and services. By staying informed, lawyers will arm themselves with the relevant knowledge and tools. They’ll also then better understand the coming laws and impact on their clients. This will help lawyers to guide their clients on asking the right questions in procurement processes.”

Fraser echoed a similar sentiment – and concluded that particularly for client relationships, openness around the use of AI is crucial moving forward.

“Open and transparent communication is, of course, at the heart of any client relationship and, to some extent, is the basis for the important principle of legal professional privilege. In terms of businesses and consumers, the use of AI when dealing with personal information (particularly automated decision making) can be a cause for nervousness amongst customers of a business,” he said.

“Transparency is not only an important integer of the use (and promotion) of AI – Australia’s AI Ethics Principles and Safety by Design make it clear that transparency is essential.”

Lauren Croft

Lauren is a journalist at Lawyers Weekly and graduated with a Bachelor of Journalism from Macleay College. Prior to joining Lawyers Weekly, she worked as a trade journalist for media and travel industry publications and Travel Weekly. Originally born in England, Lauren enjoys trying new bars and restaurants, attending music festivals and travelling. She is also a keen snowboarder and pre-pandemic, spent a season living in a French ski resort.