
6 legal risks around ChatGPT

New research has revealed that legal and compliance leaders should address their organisation’s exposure to six key ChatGPT risks.

Lauren Croft | 23 May 2023 | Corporate Counsel

According to Gartner, organisations should establish specific guardrails to ensure responsible enterprise use of generative artificial intelligence (AI) tools, including ChatGPT.

After making global headlines over the last few months, AI platforms like ChatGPT are changing, and will continue to change, the day-to-day operations of legal practice. You can read Lawyers Weekly’s full coverage of ChatGPT and other AI platforms and what lawyers need to know here.

Ron Friedmann, senior director analyst in the Gartner legal and compliance practice, said there were six specific risks around ChatGPT organisations needed to be aware of.


“The output generated by ChatGPT and other large language model (LLM) tools is prone to several risks,” he said.

“Legal and compliance leaders should assess if these issues present a material risk to their enterprise and what controls are needed, both within the enterprise and its extended enterprise of third and nth parties. Failure to do so could expose enterprises to legal, reputational and financial consequences.”

The six ChatGPT risks that legal and compliance leaders should evaluate include:

Fabricated and inaccurate answers

One of the more common issues with ChatGPT is its tendency to provide incorrect information, and many legal professionals do not fully trust the program as a result.

“ChatGPT is also prone to ‘hallucinations,’ including fabricated answers that are wrong, and nonexistent legal or scientific citations,” Mr Friedmann said.

“Legal and compliance leaders should issue guidance that requires employees to review any output generated by ChatGPT for accuracy, appropriateness and actual usefulness before being accepted.”
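Part of such a review requirement can be automated. The Python sketch below flags any case citations in a model’s draft that do not appear on an internally verified list, so a human must check them before the output is accepted. Everything here is illustrative: the citation pattern, the VERIFIED_CITATIONS set and the case names are assumptions, not a real verification service.

```python
import re

# Hypothetical list of citations already verified by a human researcher;
# in practice this would come from a firm's own knowledge base.
VERIFIED_CITATIONS = {
    "Smith v Jones [2019] HCA 12",  # illustrative entry only
}

# Rough, illustrative pattern for Australian-style case citations.
CITATION_PATTERN = re.compile(r"[A-Z][\w']+ v [A-Z][\w']+ \[\d{4}\] [A-Z]+ \d+")

def flag_unverified_citations(llm_output: str) -> list[str]:
    """Return citations in the output that are not on the verified list.

    Anything returned here must be checked by a human before the
    draft is accepted, per the guidance above.
    """
    return [c for c in CITATION_PATTERN.findall(llm_output)
            if c not in VERIFIED_CITATIONS]

draft = "As held in Smith v Jones [2019] HCA 12 and Doe v Roe [2021] FCA 99, ..."
for citation in flag_unverified_citations(draft):
    print(f"UNVERIFIED: {citation} (requires human review)")
```

A check like this cannot confirm a citation is real, only that it has not yet been verified, which is why the guidance above still requires human review of every output.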

Data privacy and confidentiality

If chat history is not disabled when using the bot, previous information may become part of its training dataset — something Mr Friedmann said legal and compliance leaders needed to be aware of.

“Sensitive, proprietary or confidential information used in prompts may be incorporated into responses for users outside the enterprise,” he said.

“Legal and compliance need to establish a compliance framework for ChatGPT use, and clearly prohibit entering sensitive organisational or personal data into public LLM tools.”
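One way a compliance team might enforce such a prohibition technically is a simple gateway that inspects prompts before they leave the organisation. The Python sketch below is an assumption-laden illustration: the blocked patterns, the MAT- matter-number format and the submit_to_public_llm function are invented for the example, and a real deployment would rely on a dedicated data-loss-prevention tool.

```python
import re

# Illustrative patterns only; a real deployment would use a dedicated
# data-loss-prevention (DLP) tool with far richer detection.
BLOCKED_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "9-digit identifier": re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{3}\b"),
    "client matter number": re.compile(r"\bMAT-\d{6}\b"),  # hypothetical format
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of any blocked patterns found in the prompt."""
    return [name for name, pattern in BLOCKED_PATTERNS.items()
            if pattern.search(prompt)]

def submit_to_public_llm(prompt: str) -> None:
    violations = check_prompt(prompt)
    if violations:
        # Refuse to forward the prompt and explain why.
        raise ValueError(f"Prompt blocked, contains: {', '.join(violations)}")
    print("Prompt would be forwarded to the public LLM here.")  # placeholder

try:
    submit_to_public_llm("Summarise the facts of matter MAT-123456 for me.")
except ValueError as err:
    print(err)
```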

Model and output bias

Despite OpenAI’s efforts to minimise bias and discrimination in ChatGPT, known cases of both have already occurred and are likely to persist, even with ongoing, active mitigation efforts by OpenAI and others.

“Complete elimination of bias is likely impossible, but legal and compliance need to stay on top of laws governing AI bias, and make sure their guidance is compliant,” said Mr Friedmann. “This may involve working with subject matter experts to ensure output is reliable and with audit and technology functions to set data quality controls.”

Intellectual property (IP) and copyright risks

ChatGPT is trained on a large amount of internet data that likely includes copyrighted material. Therefore, Mr Friedmann said, the program’s outputs have the potential to violate copyright or IP protections.

“ChatGPT does not offer source references or explanations as to how its output is generated,” he said.

“Legal and compliance leaders should keep a keen eye on any changes to copyright law that apply to ChatGPT output and require users to scrutinise any output they generate to ensure it doesn’t infringe on copyright or IP rights.”

Cyber fraud risks

ChatGPT is already reportedly being used to generate false information at scale, and applications built on LLMs, including ChatGPT, are also susceptible to hacking.

“Legal and compliance leaders should coordinate with owners of cyber risks to explore whether or when to issue memos to company cyber security personnel on this issue,” Mr Friedmann added.

“They should also conduct an audit of due diligence sources to verify the quality of their information.”

Consumer protection risks

Finally, businesses that fail to disclose ChatGPT use to consumers, such as in a customer support chatbot, risk losing their customers’ trust and facing charges of unfair practices under various laws.

According to Mr Friedmann, the California chatbot law mandates that in certain consumer interactions, organisations must disclose “clearly and conspicuously” that a customer is communicating with a bot.

“Legal and compliance leaders need to ensure their organisation’s ChatGPT use complies with all relevant regulations and laws, and appropriate disclosures have been made to customers,” he added.
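As a purely illustrative sketch of what a “clear and conspicuous” disclosure might look like in practice, the Python snippet below shows a support chatbot announcing its bot status before any exchange takes place. The wording and the escalation command are assumptions, not legal advice, and any actual disclosure text should be settled with counsel.

```python
# Minimal sketch of a support chat session that discloses bot status up
# front, in the spirit of disclosure rules such as the California
# chatbot law discussed above. Wording is illustrative, not legal advice.

BOT_DISCLOSURE = (
    "You are chatting with an automated assistant, not a human. "
    "Type 'agent' at any time to be connected with a person."
)

def start_support_session() -> None:
    # Disclose clearly and conspicuously before the first exchange.
    print(BOT_DISCLOSURE)
    while True:
        message = input("> ")
        if message.strip().lower() == "agent":
            print("Transferring you to a human agent...")
            break
        # A real bot would generate a reply here; this is a placeholder.
        print("Bot: Thanks, let me look into that for you.")

if __name__ == "__main__":
    start_support_session()
```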
