Robot lawyer triggers ‘noisy’ legal debate
After the world’s first robot lawyer was blocked from appearing in a US court, many have questioned whether artificial intelligence (AI) technologies can aid legal proceedings or whether they are “worse than nothing”.
DoNotPay is a legal services chatbot described as “the world’s first robot lawyer”. Potential clients can download the DoNotPay app for assistance with a range of minor legal issues, such as parking tickets, legal fees, airline compensation, contract breaches, tax rebates and victim compensation payouts, among many others.
With an aim “to level the playing field and make legal information and self-help accessible to everyone”, DoNotPay was set to represent its first client in the US in February this year in a traffic ticket proceeding.
However, after being threatened with jail time by State Bar prosecutors, DoNotPay chief executive Joshua Browder postponed the case in January.
“It seems likely they will put me in jail for six months if I follow through with bringing a robot lawyer into a physical courtroom. DoNotPay is postponing our court case,” he tweeted at the time.
Despite the robot lawyer trial being scrapped for the time being, LegalVision product manager Blythe Dingwall and senior lawyer Robert Chen said this is “definitely something the legal profession should take note of”.
“While now the robot lawyer is only really able to handle simple matters like refunds and disputing parking tickets, the potential is there for AI to be handling more complex legal tasks a few years from now. This also frees up lawyers to work on more complex work,” the pair told Lawyers Weekly.
“AI tools could help lawyers quickly prepare the first drafts of simple advice or contracts, which a senior would check over. While more experienced lawyers probably won’t be affected, widespread AI tools could reduce the number of positions available for entry-level roles like law graduates and paralegals. We could also see probability-based AI tools developed to help practitioners become more efficient.”
In conversation with Lawyers Weekly, PwC NewLaw director Peter Dombkins drew parallels between DoNotPay’s robot lawyer and OpenAI’s chatbot ChatGPT, which is able to come up with comprehensive and coherent responses to different prompts — and could potentially be used in a number of industries for admin tasks or document drafting.
“Moravec’s Paradox observes that it’s much easier for AI to mimic human reasoning (which we as humans find more difficult) than to mimic human sensorimotor [skills] and perception (which we as humans find relatively easy and natural). This paradox was on full display with the recently attempted ‘robot lawyer’ in the US — which sought to combine the logical reasoning provided by ChatGPT, with the physical and perception attributes of a self-represented human defendant.
“In terms of the longer-term impacts of AI upon the profession, we need to look at the types of activities that AI can best do alone, those best done by lawyers alone, and the hybrid space where lawyers can achieve their best outcomes when enabled by AI,” he explained.
“For example — many commentators have observed that AI is a skilled liar! Platforms such as ChatGPT are able to imitate a human’s conversational style and potentially pass the ‘Turing test’, but they are less able to imitate the substance and veracity of a subject matter expert.”
While these technologies can be fantastic tools for people in a number of different roles, it does not appear likely that ChatGPT will wholly replace lawyers — at least not yet, given the concerns raised about the program’s accuracy.
But new AI technologies are only growing in popularity. Tech giant Microsoft, which originally invested $1 billion into OpenAI in 2019, also recently announced a further multiyear, multibillion-dollar investment in the tech start-up, rumoured to be worth up to $10 billion.
Meanwhile, Mr Browder founded DoNotPay when he was a Stanford University student and reportedly used the app to successfully dispute 190,000 parking tickets. DoNotPay has since grown to cover over 100 areas of law and processes over 1,000 claims every day.
In 2021, following increased funding injections from Andreessen Horowitz, Lux Capital, Tribe Capital, Day One Ventures, and Felicis Ventures, the start-up was valued at $210 million.
In theory, the app listens to court arguments and records proceedings through smart glasses in order to formulate legal advice, based on legislation and legal precedent, before feeding real-time audio to the defendant through headphones for them to relay to the court.
However, most courts require all parties to consent to being recorded, ruling the AI bot out in many circumstances.
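The listen, advise and relay loop described above can be sketched roughly as follows. This is a purely hypothetical illustration, not DoNotPay’s actual implementation: the function names are placeholders, and a real version would call a streaming speech-to-text service and a language model grounded in legislation and precedent rather than the stub logic shown here.

```python
# Hypothetical sketch of the courtroom-assistance loop: transcribe what is
# said in court, draft a suggested response, and relay it to the defendant.
# All function bodies are illustrative stand-ins, not real services.

def transcribe(audio_chunk: bytes) -> str:
    """Placeholder speech-to-text step (a real system would stream audio
    from the smart glasses to an ASR service)."""
    return audio_chunk.decode("utf-8")  # stand-in: treat the bytes as text

def suggest_response(transcript: str) -> str:
    """Placeholder for a language-model call that would draft a reply
    based on legislation and precedent; here it simply echoes a label."""
    return f"Suggested reply to: {transcript!r}"

def assist(audio_chunk: bytes) -> str:
    """One iteration of the listen -> advise -> relay loop."""
    transcript = transcribe(audio_chunk)
    return suggest_response(transcript)

# Example: a single question heard in the courtroom.
advice = assist(b"Why were you travelling at 90 in a 60 zone?")
print(advice)
```

In practice, each step would run continuously and in real time, which is precisely what makes the recording-consent rules above a blocker.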
In fact, DoNotPay considered 300 cases for a trial of its robot lawyer — but Mr Browder told CBS last month that only two were feasible.
“It’s within the letter of the law, but I don’t think anyone could ever imagine this would happen,” he told CBS.
“It’s not in the spirit of law, but we’re trying to push things forward, and a lot of people can’t afford legal help. If these cases are successful, it will encourage more courts to change their rules.”
AI tools like DoNotPay and ChatGPT are also likely to continue to trigger “noisy” debates, according to Ashurst Advance APAC head of client solutions Tae Royle.
“Lawyers are highly trained professionals who provide carefully tailored advice, often at great expense, leaving the market for consumer legal services grossly underserved. It is uneconomic to ask a lawyer to document a loan agreement between two acquaintances for the sum of $500. But ChatGPT can do that today in a few seconds with proper prompts,” he said.
“Such a loan agreement may be full of holes and may not stand up in court, but a loan for $500 is unlikely to make it to court. A lender who cannot afford a lawyer may well think such a loan agreement is better than nothing. A ‘better than nothing’ service addresses a huge gap in the consumer market. However, a tension arises when the service is ‘worse than nothing’ in that the AI-generated agreement has the opposite to the intended effect or is otherwise plain wrong, compounded by the inability of an unsophisticated consumer to tell the difference.
“Moving away from simple agreements which are easy to generate, legal advice given by ChatGPT arguably still ranges from mostly wrong to positively misleading, meaning in AI legal advice, the scales are generally still tipped towards ‘worse than nothing’.”
The technology may also lead to lawyers becoming proficient in new skills that enable them to better interface with and leverage AI platforms.
“These types of AI platforms are able to help commoditise ‘lower quality’ content generation (for example, standard types of email responses), whilst also augmenting the value of ‘human made’ content and providing tools to help humans be more perceptive and creative. A follow-on impact of commoditising content is that it also becomes more accessible — for example, drafting a well-composed and grammatical[ly] correct letter is no longer dependent upon someone’s level of education,” Mr Dombkins noted.
“For lawyers — this means less time on processing and administration, and more time on understanding and solving for their clients’ needs. However, in order to become more accurate and relevant to the legal sector, this will require further training of these AI models using more and higher-quality legal-specific content (and noting, of course, the confidentiality and privilege-related issues in doing so).”
Despite an AI program being able to draw upon legislation and legal precedent from across the globe, bots are unlikely to be trustworthy advisers, Ms Dingwall and Mr Chen insisted.
“If an AI tool produces the ‘generic’ or ‘average’ answer to a legal question based on the available data, will it ever be able to use the law to push the boundaries? Will a robot lawyer ever be able to use an obscure case to overturn entrenched precedent? Will we be stuck with old and entrenched laws even when social norms have changed?” the pair asked.
“We should also consider confidentiality, conflicts of interest, bias, discrimination and ethics (probability-based AI is premised on historical data, and we as a society should constantly evaluate our reliance on it), and how our duty to the court may at times conflict with the use of AI.”
Moving forward, Mr Royle added, some legal professionals will adopt AI as a “competitive advantage”, whereas others will fight against its use — but likely to no avail.
“We can expect a battle in the coming months and years between those who believe that AI legal services are good enough to be better than nothing, and those who believe that consumers need to be protected from shonky AI legal advice. The battle will be fought at the margin of legal services where few lawyers actually practise, and so will have little day-to-day impact on most lawyers,” he said.
“However, AI legal services will inevitably become more sophisticated and will start to climb the value chain, infiltrating commoditised legal services first. In the long term, the extensive use of AI in legal practice is inevitable, although it remains unclear whether users of legal services will be granted direct access or whether access to AI legal services will require the engagement of legal professionals as a quality assurance layer.”