Towards Transparency: Why Not a Court AI Register?
Canadian courts and judges are using AI in their work. Not all of them, but some of them. A small number of courts have publicly announced formal pilots or adoption of AI tools (see, e.g., here and here); other courts have authorized judges to use certain AI tools but haven’t (to my knowledge) made any public announcements about these authorizations; and, finally, individual judges are experimenting with AI tools in their work on an ad hoc and generally undisclosed (at least to the public) basis. I am not aware of any judicial decision in Canada in which a judge has acknowledged AI use, although there is one highly publicized case of suspected misuse. While AI adoption is underway within Canada’s judiciary, it is unfolding with remarkably little public transparency.
This is, to my mind, a serious problem. Public confidence in the courts rests on transparency about how justice is being administered. The Supreme Court of Canada has identified openness as “a principal component of the legitimacy of the judicial process” and admonished that “as a result of their significance, the courts must be open to public scrutiny and to public criticism of their operation by the public.” These words arise in the open court jurisprudence, which is, to be sure, a different context. Also, it is true that Canadian judges have long used technological tools without disclosing the specifics of what’s on their computers. The underlying premise of this column is, however, that our current AI era demands more disclosure. We need to borrow from the open court jurisprudence (and Jeremy Bentham) to embrace the idea that “publicity is the very soul of AI-empowered justice”.
What makes the AI era different? In short, the relatively easy access to powerful tools that can perform types of work that previously only humans could do. While no Canadian courts are directly delegating decision-making to AI, even uses characterized as “assistive” – take, for example, producing legal analysis, drafting reasons (or parts thereof), and summarizing evidence – implicate core judicial tasks and can shape both the form and substance of the justice delivered. To the extent that AI is influencing how Canadian judges administer justice, the public has a right to know.
Transparency provides the basis for informed public opinion – whether laudatory or critical – about how AI is impacting the administration of justice. The one case of suspected judicial AI misuse has prompted public discourse about AI use and trial fairness. But we shouldn’t have to wait for things to go wrong to have a conversation. A lack of transparency also risks leading the public to overestimate how much AI is being used in Canadian courts, potentially fuelling unfounded worries. Too much secrecy risks sowing seeds of misunderstanding and distrust.
Concerns about judicial independence also underlie a call for greater transparency. As I have written about before, the outputs of generative AI tools are shaped both by the data the tools have access to and the priorities encoded in the tools by their developers. For example:
An AI legal research tool that is designed to give users legal answers as opposed to just a list of cases is going to have a lot of internal – most likely hidden – instructions (i.e., system prompts) about how it should get to an answer. It may, for example, contain built-in instructions about what precedents and secondary sources to favour, what follow-up questions (if any) to ask the user or to allow the user to ask, how much detail to provide in the output, which details should be emphasized in an output, and what language or tone to use.
All these instructions reflect choices embedded into the tool by private developers – choices which may or may not be aligned with judicial values or the public interest. And all these choices have the potential to shape our understanding of what the law is on a given topic and how that law is best described.
Commercial interests are also at play here. The AI industry is “big business”. To give one somewhat jaw-dropping point of reference: the legal AI platform Harvey was recently valued at $11 billion. As with all businesses, legal AI companies depend on customers, and courts are potential buyers or, in the case of free licences, legitimacy providers. My point is not that the AI industry has nefarious intentions toward courts; it is that AI tools are shaped by private-sector choices and commercial incentives, and that, therefore, their introduction into the justice system warrants scrutiny.
What might greater transparency about AI tools in Canadian courts look like? An interesting model is the federal government’s recently released AI Register. Launched in November 2025, the AI Register “provides Canadians with information about where and how AI is being used within the federal government.” As of this writing, the Register has 409 entries, disclosing a fascinating diversity of uses — from identifying insect species on “sticky traps”, to financial forecasting, to streamlining passport processing. The AI Register includes both active uses and those in development, and each entry includes, among other things, information about how the AI system is being used, the system’s vendor, data sources, and results.
The AI Register was explicitly launched as a “minimum viable product”, with “only basic features” and with the goal of “engag[ing] partners and the public on the content and functions they would like to see in future versions” of the Register. Earlier this year, the federal government launched a consultation seeking this type of feedback. While the future form of the federal government’s AI Register is still up in the air, it already represents, in the words of my colleague Teresa Scassa, “an important commitment” and is aligned with international trends toward greater transparency in public sector AI use.
So, why not have a Canadian Court AI Register? There would, of course, be practical issues to sort out — where such a register would be housed, how to define AI, what level of disclosure to require, and how to capture both institutional and individual use. None of these complications, in my view, should foreclose a move to greater transparency. We don’t need a precisely defined, strictly enforced, and universally scoped regulatory regime to make headway here. One option might be to have an independent non-partisan entity, such as a university or well-respected legal non-profit, host the Court AI Register, with voluntary submissions from courts across the country and with chief justices gathering information about individual judges’ use. While a model like this might not guarantee full information, it would be a start and might encourage a broader culture shift towards greater transparency.
One possible argument against a Court AI Register is that moving to a culture of disclosure may have a “chilling effect”. If courts and judges must disclose AI use, will they simply opt out of using the technology rather than open themselves up to scrutiny? It is impossible to know whether this worry would be empirically borne out. Indeed, it seems equally possible that a Court AI Register could encourage innovation rather than chill it. As Teresa Scassa has observed in relation to the federal government’s AI Register, “[b]y making its uses of AI systems more transparent internally, the government can avoid duplicative efforts, allow better collaboration across departments and agencies, and perhaps also share ideas for helpful uses of AI tools to streamline different processes.” Sharing examples of effective and responsible judicial AI use could lead to more uptake of the kind we want. And, to the extent that any judges might be using AI in riskier and possibly problematic ways, the dissuasive effect of greater scrutiny would hardly be a bad thing.
It does bear emphasis, though, that the point of a Court AI Register would not be “naming and shaming”. Canada has a serious access to justice problem that includes the overloading of our courts and judges. It is reasonable and responsible for the judiciary to look at new technologies and consider whether and how they might contribute to ameliorating access to justice issues by making court work more efficient. At the same time, it is important that we have informed and robust public discussion about the best ways to deploy technology in our justice system. A Court AI Register would facilitate this kind of discussion.
We are at a consequential moment in the evolving relationship between AI and the justice system. Establishing a norm of transparency now, before habits harden and norms calcify, seems both urgent and entirely achievable. The status quo is troubling and, in my view, unsustainable. A Court AI Register would be a helpful step forward.

