Governance key challenge for businesses implementing AI
Over a third of Australian organisations are not confident their AI deployment complies with current laws, a new report from DLA Piper has revealed.
According to the report – AI Governance: Balancing Policy, Compliance, and Commercial Value – 36 per cent of companies are not confident their artificial intelligence (AI) deployment complies with current laws, despite 96 per cent rolling out AI in their organisation.
The report examines the intersection of artificial intelligence, governance, and risk and explores how organisations are rolling out AI programs in practice and the challenges they are facing.
DLA Piper intellectual property and technology partner Nicholas Boyle said that AI presents a number of risks and, therefore, needs good governance.
“While the transformative potential of AI seems boundless, it also presents risks. This report emphasises the pivotal role of good governance in unlocking AI’s transformative potential in a fragmented regulatory landscape,” he said.
“Concerns surrounding responsible AI use have surged alongside its adoption, prompting global policymakers and regulators to formalise rules to address societal, technical, and commercial challenges.”
The majority of respondents said they were rolling out AI, with 72 per cent using tools and solutions from third-party vendors. AI is currently most commonly being deployed in customer service (59 per cent) and R&D/product development (57 per cent). Further, 45 per cent of respondents said they see AI as critical to how their organisation generates value, and 41 per cent foresee their core business being made redundant by AI unless they embrace it.
“Whatever their approach, most companies we consulted are still exploring the benefits of AI for two main purposes: efficiency and transformation. As they undertake pilots and roll out AI solutions and projects, 47 per cent are focused on making efficiency gains – optimising existing processes and tackling known problems with AI,” the report stated.
“Fifty-three per cent are thinking even bigger by applying AI to a range of complex organisational issues, including transforming operations, building new services and generating revenue. This is where the greatest value can be realised.”
In terms of challenges and risks, establishing and implementing good governance was the leading challenge in deploying AI, with 99 per cent of respondents citing it in their top five challenges. Ensuring AI initiatives operate within regulatory guidelines was also cited as a key challenge by 96 per cent of respondents. In addition, 43 per cent of respondents had seen AI projects interrupted, paused or rolled back, citing data privacy issues (48 per cent) and a lack of governance framework (37 per cent) as common reasons.
In the report, partner and USA chief data scientist Bennett B. Borden JD-MSc noted that “organisations suspect AI use among employees is pervasive”.
“But leaders may not have a clear line of sight on all of these applications or potential infringements. As there is significant share price and reputational risk at stake, this exacerbates the fear of breaches and misuse originating from inside the company,” he said.
“Companies should be wary of ‘knee-jerk’ reactions or blanket bans on AI use, which have the potential to derail legitimate and strategic AI work. Instead, use governance as a guardrail on activity and take simple steps like securing an enterprise license for AI tools, which are much more protective than personal ones. Typically, terms indicate that the information remains confidential, the outputs are owned, and data isn’t used in downstream model training.”
However, despite 36 per cent of respondents confirming they are not confident that they comply with current AI law, and 39 per cent being unclear on how regulation is evolving, over half of respondents are excluding legal and compliance teams from their AI decision making – “believing they’re AI nay-sayers rather than enablers”, according to the report.
“Companies believe they have strong compliance frameworks, yet regulatory investigations and fines are common. Our research also finds legal teams sidelined in AI decision making, and knowledge gaps on current and future regulation. This raises red flags for compliance and mirrors overconfidence we’ve seen in other areas of new regulation over the last decades, including data privacy, anti-money laundering and health and safety,” the report noted.
“There is currently no objective universal standard against which to measure the performance of AI governance. For many, doing anything at all means doing well. Often, this view isn’t challenged until gaps and inconsistencies come to light, or there’s a serious issue.”
DLA Piper Australian AI lead and senior associate Alex Horder added that the report “unveils the critical truths, challenges and opportunities shaping the AI landscape”.
“AI has infiltrated every sector, promising almost limitless competitive advantage. The report explores the commercial risk and escalating concerns about responsible and compliant AI use,” he said.
“To unlock AI’s potential, organisations must discern real concerns from ‘phantom’ risks. Our report underscores the pivotal role of good governance in navigating the AI landscape, reconciling risk and reward, compliance and commerce, and ultimately unlocking value in line with values.”