The increasing use of artificial intelligence tools by employees is transforming the landscape of workplace complaints, with a Maddocks partner cautioning that it presents new and complex challenges for HR departments and legal teams alike.
The quiet infiltration of artificial intelligence into the workplace has moved beyond productivity and automation tools – it is now reshaping how employees voice concerns and how HR departments and legal teams respond.
Speaking with Lawyers Weekly, Meaghan Bare, a partner at national firm Maddocks, emphasised that employees’ use of AI is transforming the nature, tone, and complexity of internal complaints, presenting fresh challenges for both employers and legal advisers.
While many employees may have previously felt hesitant or ill-equipped to raise formal grievances, Bare explained that AI is now empowering staff to lodge complaints more readily, leading to a noticeable rise in both the frequency and legal complexity of internal matters.
“There has been a clear increase in both the frequency and complexity of internal complaints, many of which appear to have been generated or enhanced through AI tools,” she said.
“Employees who may have previously found it difficult to articulate formal grievances are now able to produce highly nuanced and often legalistic correspondence, without recourse to union representation or incurring the cost of legal advice.”
Reflecting on this trend, Bare noted that HR professionals are increasingly telling legal teams they are receiving complaints that appear professionally drafted, sometimes in a tone or style inconsistent with the employee’s usual communication.
“HR professionals are encountering more documentation that cites internal policies or legislative provisions. They have commented to us that the written complaint often does not resemble the employee’s usual writing style or approach,” she said.
“We have also seen a significant volume of AI-generated correspondence in the area of employees claiming their position is covered by a particular award.”
While AI tools may help some employees express their concerns more clearly, they can also create confusion and heighten legal risk for employers.
One of the most significant challenges Bare has identified is the risk that employers misinterpret the tone or intent of AI-generated complaints from their staff.
“There is a risk of misinterpreting the intent of staff. Correspondence generated with the assistance of AI can appear adversarial when, in fact, the substance of the complaint may be relatively straightforward,” she said.
Additionally, Bare noted that the efficiency of AI in producing written material, combined with the rapid pace at which complaints are being filed, can “place a strain on HR team resourcing”.
However, the risks are not limited to employee communications. Bare pointed out that when HR teams or managers use AI to draft responses without proper oversight, they may also expose the organisation to legal liability.
“If the employer’s response is also generated using AI without sufficient verification, there is potential for inaccurate information to be provided, which could create legal risk,” she said.
Citing a recent case, Bare highlighted how an employer’s use of ChatGPT to draft a letter to an absent employee backfired when the Fair Work Commission interpreted it as a formal termination, leading to serious consequences.
“One of the most prominent examples so far is the Fair Work Commission case of Daniel O’Hurley v Cornerstone Legal WA Pty Ltd [2024] FWC 1776. In that matter, the employer used ChatGPT to draft a letter intended to confirm an employee’s abandonment of employment,” she said.
“However, the AI-generated letter was interpreted by the commission as a termination letter. As a result, the employer could not argue that the employee had abandoned their role, and the general protections claim was allowed to proceed.”
From a legal standpoint, Bare said employment lawyers should help clients navigate this emerging space by managing their expectations around AI use – and its implications.
“It is important to warn clients that, with the advent of AI, legalistic language does not necessarily indicate that an employee has sought legal representation,” she said.
“It is also unlikely to be fair or reasonable to impose a blanket prohibition on employees using AI to draft internal complaints. AI can, in fact, help employees express legitimate concerns more clearly.”
Rather than banning AI outright, Bare encourages organisations to update their workplace policies to reflect its growing use on both sides of the employment relationship.
“It is advisable to review workplace policies to reflect the increasing use of AI on both sides of the employment relationship,” she said.
“HR teams may also require training to understand how AI might be used, and to understand its benefits and limitations.”