
Fabricated allegations by ChatGPT raise defamation questions

ChatGPT recently invented a sexual harassment allegation against a prominent American law professor. Now, with an Australian mayor suing OpenAI, the maker of the chatbot, over false allegations levelled against him, questions must be asked about the capacity of automated text services to defame individuals and the potential dangers of such confected text.

Jerome Doraisamy | 06 April 2023 | Big Law

Artificial intelligence platforms like ChatGPT will, as is now widely accepted, change the day-to-day operations of legal practice to some extent. You can read Lawyers Weekly’s full coverage of ChatGPT and what lawyers need to know here.

Such evolution is exciting in many regards, but, as some have discovered, the advent of such technologies is also fraught with issues and raises serious legal questions.

Invented sexual harassment allegation against American law professor


Last week, Jonathan Turley, the Shapiro Professor of Public Interest Law at George Washington University in the United States, learnt that ChatGPT had reported he had been accused of sexual harassment in a 2018 Washington Post article, after supposedly groping law students on a trip to Alaska.

Professor Turley was alerted to the erroneous text from the emerging chatbot by UCLA Professor Eugene Volokh, who was conducting a research project.

“It was quite chilling,” Professor Turley told The Washington Post.

“An allegation of this kind is incredibly harmful.”

Professor Volokh had asked ChatGPT whether sexual harassment by professors had been a problem at American law schools and requested that it include examples and quotes from newspaper articles.

In response, the chatbot produced the following: “4. Georgetown University Law Center (2018) Prof. Jonathan Turley was accused of sexual harassment by a former student who claimed he made inappropriate comments during a class trip. Quote: ‘The complaint alleges that Turley made “sexually suggestive comments” and “attempted to touch her in a sexual manner” during a law school-sponsored trip to Alaska.’ (Washington Post, March 21, 2018).”

There were a “number of glaring indicators” that such a response was false, Professor Turley noted, including that he has never taught at Georgetown. ChatGPT also, he said, “appears to have manufactured baseless accusations against two other law professors”.

In an op-ed published in USA Today, Professor Turley wrote that it “was a surprise to me, since I have never gone to Alaska with students, the Post never published such an article, and I have never been accused of sexual harassment or assault by anyone”.

“When first contacted, I found the accusation comical. After some reflection, however, it took on a more menacing meaning,” he penned.

“What is most striking is that this false accusation was not just generated by AI, but ostensibly based on a Post article that never existed.”

‘Landmark’ defamation case brought by Australian mayor

The fabricated allegation against Professor Turley comes as Hepburn Shire Council mayor, councillor Brian Hood, launches a “ground-breaking” defamation action against OpenAI, the owner of ChatGPT, alleging that the chatbot incorrectly identified him as an individual who faced charges related to a foreign bribery scandal.

Mr Hood worked at Reserve Bank subsidiary Note Printing Australia in the early 2000s and alerted authorities to officers of NPA and of another subsidiary, Securency, who were paying bribes to overseas agencies to win contracts to print banknotes.

He was not charged, and instead was the whistleblower who alerted the authorities to the wrongdoing and was praised for his bravery in coming forward, Gordon Legal said in a statement.

The firm — which filed a concerns notice to OpenAI on 21 March — said that ChatGPT made several false statements when asked about Mr Hood’s involvement in that foreign bribery case, including that he was accused of bribing officials in Malaysia, Indonesia, and Vietnam, that he was sentenced to 30 months in prison after pleading guilty to two counts of false accounting under the Corporations Act, and that he authorised payments to a Malaysian arms dealer acting as a middleman to secure a contract with the Malaysian government.

“All of these statements are false,” the firm noted.

Gordon Legal partner James Naughton said that Mr Hood’s reputation “as a morally upstanding whistleblower” had been defamed.

“This critical error is an eye-opening example of the reputational harm that can be caused by AI systems such as ChatGPT, which has been shown in this case to give inaccurate and unreliable answers disguised as fact,” he said.

Why chatbots produce such text

According to Professor Geoff Webb of the Department of Data Science and AI in the Faculty of Information Technology at Monash University, large language models such as ChatGPT “echo back the form and style of massive amounts of text” on which they are trained.

“They will repeat falsehoods that appear in the examples they have been given and will also invent new information that is convenient for the text they are generating,” he explained.

“It is not clear which of these scenarios is at play in this case.”

The case being brought by Gordon Legal and Mr Hood, Professor Webb went on, provides a timely illustration of some of the dangers of the emerging new technology.

“It is important to be aware that what chatbots say may not be true,” he warned.

“It is also important for people to be aware they will reflect back the biases and inaccuracies inherent in the texts on which they are trained.”
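
To illustrate, in the roughest of terms, the mechanism Professor Webb describes, consider the toy sketch below. It is a deliberately simplified illustration, not how ChatGPT itself is built, and every word of its “training text” is invented for the example. The program learns only which word tends to follow which, then strings words together; at no point does it check whether the sentence it assembles is true.

import random
from collections import defaultdict

# Tiny, invented "training corpus" used purely for illustration.
training_text = (
    "the professor was accused of misconduct . "
    "the professor was praised for his research . "
    "the newspaper reported the allegation . "
    "the newspaper reported the award ."
)

# Record, for each word, the words observed to follow it.
follows = defaultdict(list)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word].append(next_word)

def generate(start_word, length=10):
    # Repeatedly pick a statistically plausible next word;
    # nothing here checks whether the resulting sentence is true.
    output = [start_word]
    for _ in range(length):
        candidates = follows.get(output[-1])
        if not candidates:
            break
        output.append(random.choice(candidates))
    return " ".join(output)

print(generate("the"))

Run a handful of times, the sketch can produce a fluent-looking sentence such as “the newspaper reported the professor was accused of misconduct”, which appears nowhere in its training text and which the program has no means of verifying. Real systems are vastly more sophisticated, but the absence of a built-in truth check is the point Professor Webb is making.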

Mr Naughton added: “As artificial intelligence becomes increasingly integrated into our society, the accuracy of the information provided by these services will come under close legal scrutiny.”

“The claim brought [by Gordon Legal against OpenAI] will aim to remedy the harm caused to Mr Hood and ensure the accuracy of this software in his case.”

Reputations are ‘made and broken online’

Monash University PhD candidate Neerav Srivastava pointed out that defamation law is ultimately about protecting reputations — which are “made and broken online”.

“What we’re really starting to understand in the past few years is how damaging online content can be,” he posited.

Company (Giles) principal Patrick George — who was a senior partner at Kennedys before recently joining the boutique reputation risk firm — reflected that publication on digital media “seems to be the gift that keeps on giving”.

“The legal position in Australia for publishing an accusation that a law professor had sexually harassed a student is clearly defamatory. If the ordinary reasonable person identifies a real person as guilty of sexual harassment, it does not matter whether it was created by a chatbot or was published in a work of fiction,” he outlined.

“The creator or owner of the chatbot is liable for publishing the defamatory material unless they can prove it is true. Where AI invents a story, there is no defence and disclaimers that chatbot material might not be accurate will not prevent liability. Fake sources or fake references are likely to give rise to aggravated damages.”

The position in the United States may be different, Mr George mused, because of the s230 immunity under the Communications Decency Act, which protects digital platforms from liability for defamation.

“Fortunately, Australia has not gone down that path,” he said.

“It was one of the first jurisdictions in the world to challenge the unregulated, open slather free speech contaminating social media.”

Mr Srivastava added: “We need to look at defamation law in the context of online communications and whether the law itself needs to be changed. Identifying sufficient control and responsibility for a defamatory publication in a time of social media and generative AI is a novel and challenging issue.”

Implications in Australia of defamatory imputations by chatbots

At present, Bartier Perry partner Adam Cutri suggested, it is unclear how Australian courts will approach ChatGPT publications in defamation claims.

“Generally, publishers will be strictly liable to aggrieved persons for defamation. The strict publication rule has recently been clarified by High Court decisions in Voller and Defteros. Claimants must show that ChatGPT enticed audience engagement with defamatory content rather than simply providing results to an organic user-generated search enquiry, in order to establish ChatGPT’s liability as publisher,” he detailed.

With defamation actions, Thomson Geer partner Marlia Saunders said, a plaintiff doesn’t need to prove that a publisher intended to defame them: “all they need to show is that there was a publication to a third party which identifies the plaintiff and conveys defamatory meanings about them, which caused them serious harm”.

“Anyone who takes part in the publication can be sued for defamation — even if they were not directly involved in the publication and are not aware of the defamatory imputations,” she advised.

In an online context, Ms Saunders continued, “Google has been held to be liable by Australian courts for autocomplete suggestions in its search function”. 

On the point of serious harm, Mr George advised, it will depend “on a number of factors and whether readers have reacted in a negative way to the accusation”.

“If only one or two persons have read it, the harm might be small. However, if it is available for public distribution, then the barrier is likely to be overcome as the accusation is likely to cause serious harm in the future if and when distributed.”

Mr Cutri supported this, noting that “serious harm may be a difficulty for claimants where there is only a small audience of ChatGPT’s answer to a user’s question”. 

The single publication rule, he said, “also limits options for claimants in respect of republications, which would mainly be relevant only to serious harm and aggravated damage”.

In the case of Professor Turley, Mr George said, “ChatGPT removed the article after a complaint was made, but another chatbot picked it up and published it”.

“That bespeaks the likelihood of serious harm,” he surmised.

Moving forward, it will be interesting to see, Ms Saunders said, “how a court applies defamation law to AI and whether the Defamation Working Group considering Stage 2 defamation reforms in relation to defamation on digital platforms opens up consultation again to consider this emerging technology”.

“Uncertainty surrounds ChatGPT’s ability to quickly respond to complaints and remove content to qualify for the proposed defences and protections in the upcoming 2024 Model Defamation Provisions reforms,” Mr Cutri added.

OpenAI’s safety approach

OpenAI’s website detailed the following: “Today’s large language models predict the next series of words based on patterns they have previously seen, including the text input the user provides. In some cases, the next most likely words may not be factually accurate.”

There is “much more work to do”, the site added, “to further reduce the likelihood of hallucinations and to educate the public on the current limitations of these AI tools”.

As reported by The Washington Post, OpenAI spokesperson Niko Felix noted in a statement that “when users sign up for ChatGPT, we strive to be as transparent as possible that it may not always generate accurate answers”.

“Improving factual accuracy is a significant focus for us, and we are making progress,” he said.
