Could Artificial Intelligence in Decision-Writing Improve Access to Justice?
While many in the justice sector are confused and ambivalent about artificial intelligence, the Canadian Human Rights Tribunal has taken a clear stand. CHRT “tribunal members,” as announced in a Practice Direction last fall, “do not use AI to write decisions or analyze evidence.”
To understand why the CHRT (and some other Canadian tribunals and courts) has explicitly banned AI in adjudication, read the work of Prof. Amy Salyzyn. The “corrosive risks” she identifies include hallucinated cases, biased language, and subtle (and not-so-subtle) legal errors. Then there’s the issue of litigants’ confidential data being provided to AI companies through queries and prompts.
Tribunals and courts can’t be faulted for avoiding this technology for the time being.
But we can’t afford to ignore the potential for significant access to justice benefits that careful deployment of AI by civil trial courts and tribunals could eventually yield. The AI-free status quo is failing to deliver reasonably accessible justice for Canadians with many of the most common legal needs. The time may come when a blanket refusal to use these tools by our courts and tribunals would be irresponsible.
A2J at the Canadian Human Rights Tribunal and Beyond
The CHRT (which shut the door on AI so firmly) does great work, but it doesn’t do it very quickly. The Tribunal’s 2023 Annual Report reveals that parties waited an average of 200 days to have their matters assigned to adjudicators for hearings. Thereafter, parties had to wait again to actually be heard, and then a third time for decisions in their matters to be released.
I wanted to see what the CHRT has been up to, so I checked its recent decisions page on CanLII. At the very top was Todd v. City of Ottawa, released April 15, 2025. The case involves the allegedly discriminatory termination of a bus driver in 2014 — over a decade ago. (Part of that time was taken up by a judicial review.) The saga isn’t over; April’s ruling was on a procedural motion rather than a final decision. Since 2019, the CHRT has issued on average fewer than 10 substantive decisions per year finding that an applicant was, or was not, discriminated against.
There’s no need to reiterate here the reasons why it’s problematic to spend a decade figuring out whether or not it was fair to fire someone. (Not to mention sums of money that would probably exceed several years of a bus driver’s salary.)
Nor is it fair to pick on the CHRT, because the situation is similar at many of this country’s trial courts and tribunals. Ages of time and oceans of money are routinely required to adjudicate relatively minor civil disputes in Canada.
A Speedy Scribe
Significant chunks of the time and money consumed by litigation pay for writing — by litigants and their lawyers but also by adjudicators writing reasons for decision. As Oliver Burkeman points out in Meditations for Mortals, much of the writing process involves:
(i) knowing what you want to say,
(ii) getting a few words onto the page in an effort to say that thing,
(iii) looking at the words, deciding they’re not good enough, deleting them, and then going back to step (i).
Generative AI, arguably, can offer a shortcut at stage (ii) of that process. It can float words about which the human author makes a decision. It’s not immediately clear that we must resist this at all costs, regardless of the A2J benefits at stake.
In Todd v. Ottawa, the last oral hearing was held on January 18, 2024, and the decision was released 15 months later. In Canada’s trial courts, the average time between the conclusion of a trial and the release of the judgment is over three months in Ontario, and over five months in the Federal Court.
Meanwhile, generative AI — notwithstanding all of its risks and problems — writes very quickly and convincingly. If you don’t believe me, ask any teacher at any level who has graded student writing within the last year or two. Output from AI tools often matches the work of the very best students. Any honest teacher will admit that we’ve almost certainly been tricked, and given “A” grades to AI work that was passed off as human work.
Third vs Fourth Party
“Adjudication is a human responsibility. Tribunal members hear cases and make decisions based on the evidence and submissions.”
This is the reason given for the CHRT’s AI ban, and it’s difficult to disagree with it. But the line is blurry between:
(i) using technology in a way that betrays our human responsibility to adjudicate, and
(ii) using technology in a way that makes our human judgments more accessible and useful to the world.
One effort to draw the line is the “third party vs fourth party” distinction. The first two parties are the disputants, and the adjudicator or mediator is the third party. A fourth party would assist the third party, without supplanting any part of their essential role. The word processing software in which an adjudicator drafts their reasons for decision would seem to be an unproblematic fourth party.
Might the same eventually be said of an AI that helps draft reasons for a decision that was reached by humans?
What if any AI-generated text in a decision had been fully reviewed and vetted by a human adjudicator who is responsible and accountable for it?
Writing vs Analyzing
The CHRT specifically banned the use of AI for analyzing evidence, as well as for writing decisions. Analyzing evidence seems like clearly problematic third-party work, rather than unproblematic fourth-party work.
However, the “third party/fourth party” conceptual distinction can itself be questioned, and some of the potential problems identified by Salyzyn (e.g. the use of gender-biased language) don’t lose their bite even if the AI is being an obedient fourth party.
The Problem with Prohibition
What happens if tribunals and courts prohibit AI use by adjudicators, while leaving intact incentives for them to produce convincing decisions quickly? Some adjudicators will use AI anyway, especially as it’s getting harder and harder for its use to be detected.
The free AI tools that adjudicators might use illicitly (e.g. ChatGPT) may be a lot more dangerous than paywalled, law-specialized AIs that their employers could provide.
AI might be a bit like alcohol or marijuana. Perhaps the world would be a better place, on balance, if these things didn’t exist. But they do exist, and all efforts to eradicate them have failed, causing a lot of problems in the process (e.g. the lethal bathtub moonshine and violent bootlegger gangs that proliferated during the age of alcohol prohibition). A world where they are permitted and regulated may be better than a world where they’re in the shadows.
The Nirvana Fallacy
Nor should we compare real-world AI, with all its warts, to ideal judges or tribunal members; they are human beings with warts of their own. For example, human adjudicators at Canada’s Immigration and Refugee Board were found 15 years ago to grant refugee status at wildly varying rates. There was, at least at the time, serious reason to doubt that whether an applicant was allowed to stay in Canada was being determined by the law and the facts rather than by the applicant’s “luck of the draw” among adjudicators.
When it comes to writing decisions, human adjudicators sometimes produce reasons that are extremely brief, or mostly copied verbatim from one party’s submissions. The bar that AI must clear before we open the door to it should be based on what human actors are doing in the system today.
ChatGPT may use gender-biased language, but so do many human writers. Maybe a properly supervised and programmed AI could make human language less biased than it would otherwise be, just as Word’s spell checker improved this blog by catching typos.
The justice system needs to be fully aware of the rapidly evolving risks posed by AI. But it must also remain aware of how far the non-AI system falls short in delivering accessible justice to Canadians. Any decision to ban a potentially access-improving technology must fully account for the victims of the inaccessibility of justice in Canada today.
Comments
Short answer: yes.
Longer answer: Most definitely, yes.
By most accounts, the use of AI by the first and second parties is likely to compound (if it isn’t already compounding) the workload of the third party, as more, and more complex, matters are filed. We do not have the luxury of time to focus the fourth-party discussion on “whether” or “if”; rather, we need to move quickly to “how.”
Unfortunately, our justice system isn’t there yet, nor does it seem to recognize the urgency. Tribunals Ontario, for example, has an April 2025 Practice Direction governing the 13 tribunals and bodies under its umbrella that is nearly identical to the CHRT’s. This suggests most policy makers are finding greater comfort in lockstep prohibition than in future-focused experimentation and in creating the conditions for the safe and beneficial incorporation of AI into the many aspects of the adjudication process.
While generative AI technologies are imperfect, and unlikely to reach a level that obviates the need for humans at the adjudicative wheel, it’s even less likely that the current bans can stand as a meaningful policy response for much longer.
Thanks Colin. Do you know of any courts or tribunals that are models in this regard?
Hi Noel. None that I’d declare a model of the sort I’d like to see, but WSIAT in Ontario has done an excellent job of moving itself forward notwithstanding the constraints under which it operates. From their 2025-2027 Strategic Plan:
“AI Strategy

The WSIAT is in the final stages of developing its AI Strategy and Governance Framework. The WSIAT’s approach to AI implementation will align with the Ministry of Public and Business Service Delivery and Procurement and Ontario’s Principles for Responsible Use of AI, which include AI systems that are reliable and valid; that AI is used in a safe, secure, and privacy-protecting way; that AI use benefits the people of Ontario; that AI use is human rights affirming and non-discriminatory; and that AI use is transparent, accountable and responsible.

An AI Governance Committee has been established to oversee all AI and Machine Learning initiatives. The WSIAT undertook an external Artificial Intelligence (AI) Opportunities and Readiness assessment as a preliminary step to establish the organization’s preparedness to utilize artificial intelligence effectively and prepare for AI-based initiatives in 2025. Findings are under review and consideration. An organization-wide rollout of Microsoft CoPilot Web is anticipated by the end of the year, accompanied by a set of guidelines for its use that include responsible principles, document, and citation of use.”
While not sprinting down the AI path, WSIAT has its running shoes on, has stretched, and is very well positioned to break into a steady jog when the time is right. https://wsiat.on.ca/en/publications/2025_Business_Plan.pdf