ChatGPT’s current iteration is almost twice as persuasive as the average human being

While people are getting better at telling human content apart from that generated by artificial intelligence, AI is also becoming more convincing, according to a new study.

Daniel Croft · 08 April 2024 · Big Law

OpenAI’s latest iteration of ChatGPT, GPT-4, was almost twice as persuasive as actual humans when given access to personal information, a new study has found.

Researchers at the Swiss Federal Institute of Technology Lausanne (EPFL) tested 820 people, putting them through a series of debates on a range of topics against either other participants or AI chatbots, with some of the AI large language models (LLMs) given personal information about their opponents and others given none.

The result was that, when given personal information to work with, GPT-4 proved 81.7 per cent more persuasive than its human counterparts.

“Our results show that, on average, LLMs significantly outperform human participants across every topic and demographic, exhibiting a high level of persuasiveness,” the study said.

“In particular, debating with GPT-4 with personalisation results in an 81.7 per cent increase.”

The study added that when not given any personal data to work with, GPT-4 still outperformed humans in persuasiveness, “but to a lower extent” of 21.3 per cent, though in that case “the effect is not statistically significant”.

Additionally, when “personalisation was enabled” for human participants, they actually became less persuasive, but by a statistically negligible margin.
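
For context on the headline figure: the “81.7 per cent increase” is, in the researchers’ framing, an increase in the odds that a participant agrees more with their opponent’s position after the debate, which corresponds to an odds ratio of roughly 1.817. A brief worked example, where the 50 per cent baseline is an illustrative assumption rather than a figure from the study:

$$
o_{\text{GPT-4 + personalisation}} = 1.817 \times o_{\text{human}}
$$

If a human debater shifts a participant’s view with probability $p = 0.5$ (odds $o = 1$), the same participant facing a personalised GPT-4 has odds of $1.817$, which converts back to a probability of

$$
p = \frac{1.817}{1 + 1.817} \approx 0.645,
$$

that is, roughly a 64.5 per cent chance of increased agreement instead of an even one.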

The findings are alarming for a number of reasons, most significantly the danger of AI being used for malicious purposes.

It is already well documented that AI can effectively assist scammers and cyber criminals, being used to write malicious code, craft phishing emails and messages, and much more.

Now, however, these tools are exceeding the capabilities of humans when it comes to persuasion, making the use of AI to create phishing scams and lure in victims much more dangerous and effective.

Additionally, threat actors could use these LLMs for mass disinformation during elections or other serious events to sway public opinion.

“Malicious actors interested in deploying chatbots for large-scale disinformation campaigns could obtain even stronger effects by exploiting fine-grained digital traces and behavioural data, leveraging prompt engineering or fine-tuning language models for their specific scopes,” said the researchers.

“We argue that online platforms and social media should seriously consider such threats and extend their efforts to implement measures countering the spread of LLM-driven persuasion.”

Despite the LLMs’ increased persuasiveness, the study also found that participants correctly identified when they were speaking to an AI roughly 75 per cent of the time, meaning AI still exhibits traits that distinguish it from humans, limiting the efficacy of these tools for malicious purposes, for now.

That being said, AI continues to advance rapidly, and significant investment means the technology is only becoming more capable, while AI-generated content is becoming harder to identify.
