AI and Alternative Dispute Resolution (Are We Ready for AI-DR?)
Artificial Intelligence (AI) is a bold experiment being conducted on our institutions, with very few guardrails. When we experiment with chemicals and biological materials to develop new drugs, pesticides, or even cleaning products, we set up controlled environments with protections for the humans involved in the testing. AI, by contrast, is mostly being developed without external controls, other than basic guidelines implemented by the developers themselves.
We are living through an experiment and finding out in real time what impact AI has on institutions and society. Sometimes AI is a benefit, sometimes it is benign, and sometimes it is detrimental. We need to be hypervigilant about its potential to harm the administration of justice. In this column, I will focus on some troubling signs that AI might have a negative impact on access to justice and the resolution of disputes.
When examining the impact of AI, and in particular the publicly available generative AI bots (such as ChatGPT, Grok, Claude, Copilot and others), it is important to recognize that although they are all quite good at mimicking humans, they do not “think” like humans. These tools are built on Large Language Models (LLMs) that recognize patterns and make predictions based on those patterns. They do not reason from principles or ethics (other than the ethical “rules” their developers have programmed into them). It is not yet clear whether these built-in rules can be circumvented by the bots as they evolve.
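To make that concrete, here is a deliberately tiny sketch of what “recognizing patterns and making predictions” means. It is a toy word-counting model of my own, nowhere near the scale or sophistication of a real LLM, but it illustrates the basic idea: the program predicts the next word purely from patterns in the text it has seen, with no understanding of law or ethics anywhere in the process.

```python
# A toy "next word" predictor: it learns nothing but surface patterns.
# (Illustrative only - real LLMs are vastly larger and more complex.)
from collections import Counter, defaultdict

training_text = (
    "the tribunal heard the case . the tribunal dismissed the claim . "
    "the tribunal heard the claim ."
).split()

# Count which word follows each word in the training text.
following = defaultdict(Counter)
for current_word, next_word in zip(training_text, training_text[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in training."""
    return following[word].most_common(1)[0][0]

print(predict_next("tribunal"))  # -> "heard" (the most frequent pattern)
```

The program “knows” that “heard” tends to follow “tribunal” only because it counted those words appearing together; it has no concept of what a tribunal is. Real LLMs do something far more sophisticated, but the underlying operation is still prediction from patterns, not reasoning from principles.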
Disputants are using AI in the early stages of a dispute: to frame and draft complaints or applications, to prepare for case management hearings and mediations, and to prepare for final hearings. At each of these stages, AI has an impact on access to justice.
Settlement of disputes before a final hearing is a crucial, if somewhat neglected, part of ensuring access to justice. Our justice system is simply not built to hear every filed claim. Minor disputes resolved through negotiation (either by the parties alone or with the assistance of a mediator) free up tribunal resources to focus on the more complex or intractable disputes. It is also in the interests of the administration of justice to weed out disputes that do not belong in the justice system. (You will note that I do not refer to these as “frivolous” or “vexatious”, because the disputes are often real – just not resolvable through legal means.)
What is the possible impact of AI on resolving disputes? No one is suggesting that AI bots resolve disputes on their own – and that is a good thing. A recent study of AI models used in simulated war games showed that they escalated conflicts to the point of threatening nuclear strikes, as summarized by the study’s author:
Nuclear escalation was near-universal: 95% of games saw tactical nuclear use and 76% reached strategic nuclear threats. Claude and Gemini especially treated nuclear weapons as legitimate strategic options, not moral thresholds, typically discussing nuclear use in purely instrumental terms. GPT-5.2 was a partial exception, limiting strikes to military targets, avoiding population centers, or framing escalation as “controlled” and “one-time.” This suggests some internalised norm against unrestricted nuclear war, even if not the visceral taboo that has held among human decision-makers since 1945.
While it is unlikely that any of these three AI models would recommend violent outcomes in civil disputes, the researchers noted one “striking pattern”: “none of the models ever chose accommodation or surrender”. The researchers stated that their study challenges the “simple assumptions that AI systems will naturally default to cooperative or ‘safe’ outcomes”.
Another alarming feature of AI chatbots is their sycophantic behaviour (flattering, people-pleasing, affirming). If you have ever used a chatbot, you will have noticed how quick it is to tell you that you have asked a good question or that your idea is excellent. This inclination is no accident: chatbots are designed this way to increase user engagement. A recent study found that chatbots’ responses were nearly 50% more sycophantic than those of humans, “even when users engaged in unethical, illegal or harmful behaviors”.
The study’s authors noted the risks that arise when people use AI tools to seek advice about interpersonal dilemmas. The study showed that interacting with a sycophantic AI tool reduced participants’ willingness to take responsibility and repair interpersonal relationships, while increasing their conviction that they were right. Even though these AI tools distorted judgment, participants trusted and preferred them. The authors note that this creates “perverse incentives for sycophancy to persist: the very feature that causes harm also drives engagement”, and state that their findings underscore the need for design, evaluation and accountability mechanisms to protect the well-being of users.
The sycophancy of publicly available AI bots is a real problem for dispute resolution. This study suggests that these tools will not challenge a disputant’s perspective and are likely instead to reinforce it, reducing or even eliminating the chances of compromise.
Much has been written about AI getting things wrong – or hallucinating. Resolving a dispute is difficult when the parties start from false assumptions about the state of the law. If no independent mediator or facilitator is involved, settlement discussions can quickly get derailed. Negotiation depends on a realistic assessment of risk – and when people rely on AI that has hallucinated the state of the law, negotiations tend not to go well.
A recent analysis of the AI-generated summaries that now appear above Google search results showed an accuracy rate of 91 percent. That is not a bad score on an exam, but given the volume of Google searches, it translates into tens of millions of wrong answers every hour. Another study showed that only 8 percent of users bother to double-check an AI answer.
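To put the scale in perspective (the figures that follow are rough, illustrative assumptions on my part, not numbers from the analysis): Google is estimated to handle on the order of 14 billion searches per day, or roughly 580 million per hour. If every search displayed an AI-generated summary, a 9 percent error rate would mean roughly 50 million wrong answers per hour; even if only one search in five displayed one, that would still be more than 10 million per hour. And if only 8 percent of users double-check, the overwhelming majority of those wrong answers are simply accepted.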
In a more disturbing finding, a significant percentage of AI users act on the answers provided by AI tools without any independent assessment. In a recent paper, “Thinking—Fast, Slow, and Artificial: How AI is Reshaping Human Reasoning and the Rise of Cognitive Surrender”, the authors identified a new phenomenon of “cognitive surrender”: “the behavioral and motivational tendency to defer judgment, effort, and responsibility” to AI, particularly when its output is delivered “fluently, confidently, or with minimal friction”.
This is different from “cognitive offloading”, which is strategic and task-specific, like when we use GPS to navigate to a destination. Cognitive surrender is a much deeper transfer of agency:
Whereas cognitive offloading is a strategic delegation of deliberation, using a tool to aid one’s own reasoning, cognitive surrender is an uncritical abdication of reasoning itself. It reflects not merely the use of external assistance, but a relinquishing of cognitive control: the user accepts the AI’s response without critical evaluation, substituting it for their own reasoning. …
In other words, under cognitive surrender, the user of AI stops deliberative thinking altogether.
The authors’ experiments highlighted how easily participants adopted AI-generated suggestions without scrutiny:
These findings raise important questions about how decision-makers engage with AI under conditions of uncertainty or error. For example, in contexts such as financial advice, medical triage, or legal decision support, uncritical evaluation … could result in significant harm and a lack of personal accountability for serious life outcomes.
And questioning an AI tool about its answers does not always lead to corrections. In yet another recent study, an AI tool used several tactics to overwhelm users and convince them that its outputs were correct, even when they were not. In the study, business consultants used an AI tool to help them work through a business strategy problem. The more the consultants scrutinized the tool’s output (fact-checking it, pushing back, exposing errors), the more it intensified its persuasion, a pattern the authors call “persuasion bombing”. In a recent interview, one of the authors made this chilling observation:
If AI systems lean in when they’re challenged — becoming more structured, more confident, more rhetorically sophisticated — that creates a double challenge. On the front end, output can be persuasive enough that users don’t validate. On the back end, when they do validate, persuasion escalates.
Another author said:
Rather than overturning concerns about the sycophancy of LLMs, our study shows that sycophancy is only one mode of LLMs’ broader, adaptive persuasive capacity. We need to shift from thinking about LLMs as over-agreeable followers to recognizing them as interaction-sensitive persuaders that can resist, redirect, and overpower human judgment.
And the two failure modes can reinforce each other. The model may validate your initial assumptions—that’s sycophancy—and then, when you catch a flaw and push back, switch into persuasion mode to defend its conclusion.
A third author said:
When that happens, the risk isn’t just that it agrees too easily or argues too forcefully. It’s that it lowers your defenses and then overwhelms your judgment. Independent evaluation erodes. Accountability blurs. And poor decisions can begin to feel well-reasoned.
These recent studies highlight the risks of parties using AI to support their participation in negotiations, mediations and settlement conferences. Some of the long-term answers to these challenges will require changes to the design of AI tools, as well as government regulation.
In the meantime, tribunals and courts should consider the following approaches:
- Requiring parties to disclose when they have used AI tools to prepare documents or to prepare for case management hearings, mediations or final hearings
- Educating parties on the dangers of uncritically relying on AI-generated outputs and the importance of independent fact-checking
- Educating parties on the proper and effective use of prompts when interacting with AI tools
- Providing parties with template prompts for common disputes before the tribunal (a sample is sketched after this list)
- Not assuming that parties correctly understand the law or the strengths and weaknesses of their case
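To illustrate the prompt-related suggestions above, a template for a common dispute might look something like the following. This is a hypothetical sketch I have drafted for a residential tenancy dispute, not a template that any tribunal currently publishes:

“I am a self-represented [tenant/landlord] in a residential tenancy dispute in [jurisdiction] about [issue]. Explain the law that applies, in plain language. List the strengths and weaknesses of my position, including the arguments the other side is likely to make. Identify the facts I would need to prove and the documents that could prove them. Flag anything you are uncertain about, and remind me to verify every legal statement against an official source.”

A template along these lines builds in the safeguards the studies above suggest are missing: it asks the AI tool to argue against the user’s position and to flag its own uncertainty, rather than simply affirming the user.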
This column has focused on the use of AI tools by parties. In a future column I will turn to the risks inherent in using AI within tribunals and the necessary limits on its use.

