
Are you talking with a human professional online? Perhaps not

In addition to the ethical and prohibitive issues with AI tools for commenting and engagement, it’s just not smart marketing, writes Sue Parker.

Sue Parker · 07 December 2023 · SME Law

LinkedIn’s core value proposition is “to engage in genuine conversations, communities and learnings to develop economic opportunities”.

This is a noble trajectory and focus for all lawyers and their clients as the platform delivers superlative professional opportunity. However, this value proposition faces great risk and compromise for the 14 million Australian and 1 billion global members.

The explosion of generative artificial intelligence (AI) and ChatGPT in 2023 has created a maelstrom of laziness, deceptive conduct, and banality. AI is already talking to AI, with members often being none the wiser.


Trust and ethics

Trust is defined as a firm belief in the reliability, truth, or ability of someone or something. Ethics and trust are the currency of success and sustainability, as I wrote in “Legal leaders must have an executive presence on LinkedIn”.

While global surveys rate LinkedIn as the most trusted of all social media platforms, the clamour for influence, visibility and revenue brings out the unscrupulous alongside the ethical. And competition and market challenges fuel the race and hustle.

Paying for followers and likes, automated messages and connections, and manual and automated engagement pods have been rife for many years on LinkedIn, eroding personal and business reputations.

Spurred on by the quest for visibility, new clients and networks, many members can and do take a quick fix, head in the sand or unscrupulous approach. Others will employ an honest, strategic, and long-term focus. And therein sits the issues at play on LinkedIn and AI, which lawyers and their clients need to appreciate.

AI deception and manipulation

Quality content, meaningful engagement and commenting with value are key pillars of visibility and traction on LinkedIn. But these are being compromised with generative AI and inane duplication from ChatGPT.

I want to focus here on the pernicious issue of generative AI automated commenting tools and apps that are flooding the platform at a breathtaking scale. It begs the question: are you talking with a human or an AI bot?

The deluge of AI automated commenting app services is rather brazen, to say the least. And disturbingly, LinkedIn seemingly turns a blind eye to the organisations and members selling them directly on the platform.

There is a raft of tools and Chrome third-party commenting apps that deceive flagrantly and automate comments at a mass scale. It’s very appealing to the lazy and time-poor.

Automated engagement pods used to rely on simple, banal preprogrammed comments such as “great post”, “thanks for sharing”, or “love your work”, making it easy to identify who was gaming the system. But the new tranche of AI-automated comment apps goes to the next level, with far more than a few words.

Note that the key sellers of apps are often integrated with other prohibited manipulative LinkedIn engagement and connection tools, which can and will damage trust and profiles.

The apps also have drop-down options for the tone and style of commenting, ranging from friendly and serious to abrupt and argumentative. When combined with personal preprogramming, it becomes dangerous and deceptive.

And if a human hasn’t actually seen the comment (as it can all be automatically programmed), bingo – you may end up in hot water.

Outsourcing risks

Firstly, LinkedIn’s User Agreement expressly prohibits the use of third-party software, extensions, bots, and browser plug-ins – a category into which generative AI comment apps and tools squarely fall. Further, the Professional Community Policies state that members must try to create original content, respond authentically to others’ content and not falsify information about themselves.

Like all professions, there are the good, the bad, the ugly, and the brilliant. And let’s call out the elephant in the room – anything touching marketing and social media has a fairly huge chunk of dodgy players. When market demand is robust, every Tom, Dick and Mary will try to grab a slice of the financial rewards.

The issue, of course, is the dodgy players will sell golden goose promises of success and high rewards. And for those who are time-poor and just want results, knowing how that is achieved is of no real consequence.

Outsourcing engagement on personal profiles is risky and against the T&Cs. Marketing and advertising via company and sponsored pages are fine, but always undertake due diligence.

Legal issues and potential Consumer Law breaches

I spoke with Dr Fabian Horton, chair of the Australasian Cyber Law Institute, who said that lawyers and their clients who engaged in the purchase and/or use of fake AI comments should consider the long-term repercussions on customer trust.

Authenticity, once compromised, is a challenge to regain, which is particularly relevant for lawyers who are already subjected to the scrutinising public eye.

In addition to ethical concerns, using fake comments could be seen as engaging in deceptive practices. This may not only damage reputations but also expose those who purchase and use fake comments to potential breaches of the Australian Consumer Law, such as section 18.

Dr Horton raised the issue of fake comments being reflective of the practice known as astroturfing.

Astroturfing is a deceptive practice, usually in politics, where the true sponsor of the message is hidden to give the appearance of a grassroots campaign. On social media, the use of AI-driven fake comments raises the same ethical and legal concerns.

In summing up, Dr Horton advised there are many matters to consider, including data protection and privacy, intellectual property, issues stemming from discriminatory or biased content and particularly, misleading or false information.

Users should familiarise themselves with LinkedIn’s terms of service and the relevant laws in their jurisdiction to ensure they are not breaching contractual obligations or any civil or criminal laws.

Final word

In addition to the ethical and prohibitive issues with AI tools for commenting and engagement, it’s just not smart marketing.

Trust is all we have to rely on in a world running wild with AI. And while I understand the lure of quick and time-saving services and golden goose visibility promises, it is not sustainable.

Take the high road of personal connection and creativity. Your real voice and perspective will last long after the robots have left the room.

Sue Parker is a career, communications and LinkedIn specialist at DARE Group Australia.