Why AI governance isn’t optional for in-house teams

Whether you’re just beginning to explore AI tools or haven’t considered them at all, LexisNexis’s APAC head of legal stressed that in-house teams must adopt a structured approach to AI governance to ensure this technology is used responsibly and efficiently.

October 21, 2025 | By Grace Robbie

Speaking on a recent episode of The Corporate Counsel Show, Ali Dibbenhall, LexisNexis’s Asia-Pacific head of legal, explained the critical importance of in-house legal teams adopting a structured approach to AI governance to ensure the technology is harnessed safely and effectively.

Even if in-house teams haven’t experimented with or shown interest in AI, Dibbenhall warned that structured governance is essential, as employees are already exploring the technology, often without anyone realising it.

“At the very least, regardless of how enthusiastic the adoption of AI is across your organisation, there is someone in your organisation who is using AI. Sure, we might not be talking about it, it might not be through official channels, but there’s someone who is using AI,” she said.

“Whether it’s to make their presentations a bit more exciting or a bit more professional looking or to help them develop a little piece of content that they need to put in an ad or in an internal document or in their performance review.”

While some in-house teams may assume a structured approach to AI governance isn’t necessary, Dibbenhall stressed that it remains important – you just need to adapt it to your team’s current level of engagement.

“Even if you think you’re not using it, that’s OK. That might mean that you need far less structure and far less detail in the approach that you take to it, but you probably still need something,” she said.

Dibbenhall shared that when considering the best approach to AI governance for legal teams, one of the first steps is creating a “safe place for people to play” with these tools.

“What we’re trying to do is create a safe place for people to play. So we really want to focus on making sure that we try as best [as] we can to understand the types of tools that people are engaging with,” she said.

While companies more deeply invested in technology or AI development may require more robust governance structures, Dibbenhall emphasised that the minimum baseline for every organisation is to provide a safe environment for experimentation.

“If your business is more heavily into the tech side of things and really looking at developing their own tools or repurposing or working with other AI partners, there might be a whole bunch of other stuff that you need, but that’s kind of the minimum,” she said.

“You’ve just got to sit and think about it and decide, yeah, what’s the baseline that is appropriate for your business and try and create that safe place for people to really experiment and start using it.”

Beyond governance, Dibbenhall highlighted the value of experimenting with AI tools, explaining that hands-on experience not only helps teams identify the best tools for their needs but also sparks motivation and curiosity.

“Experimentation is the only way that you can really figure out which is the best tool for the job. They all have different abilities and different areas of focus, and experimenting with them is not just about finding the best tool, actually, it’s part of the fun,” she said.

“I mean, it might not be for everyone, but from my perspective, seeing what these things can do is actually, it’s fun, it’s exciting, it gets people motivated and interested in exploring and engaging with the technology.”

She added: “It’s through those experimentations that you can actually find the use cases that work best for your business and for your industry and for your people.”