CJC AI Guidelines for Canadian Courts Leave Room for Improvement
The Canadian Judicial Council (CJC) has released its “Guidelines for the Use of Artificial Intelligence in Canadian Courts” (CJC Guidelines), which represent a significant step towards integrating artificial intelligence (AI) into the Canadian justice system. This article evaluates the CJC Guidelines, analyzing their strengths, weaknesses, and potential implications. Given my experience drafting similar guidelines, I offer constructive recommendations for improvement, focusing on practicality, comprehensiveness, and responsiveness to the unique challenges of AI adoption in Canadian courts.
Practicality and Usefulness of the CJC Guidelines
The CJC Guidelines are undoubtedly useful in laying a conceptual groundwork for AI adoption in courts. They emphasize key principles such as judicial independence, accountability, transparency, and ethical considerations. However, the Guidelines are high-level and aspirational; they lack concrete guidance on implementation, which could lead to inconsistent application and unintended, avoidable risks.
For instance, the Guidelines recommend that courts “develop a program of education and provide user support” and “regularly track the impact of AI deployments,” but they do not explain what these programs should entail or how they should be implemented. This lack of specificity makes it difficult for courts, especially those with varying resources and technical expertise, to operationalize the Guidelines effectively. This disconnect represents a significant barrier to the adoption of technology, and AI specifically, in Canadian courts. Without a fundamental shift in priorities and funding, the unfortunate reality is that most courts cannot explore and implement AI responsibly in the short term. While the Guidelines are useful as a discussion starter, they are premature in their attempt to provide a framework for implementation across Canada. The CJC would have been wiser to wait until courts had the opportunity to explore AI and develop best practices.
Evaluation
The CJC Guidelines represent a commendable effort to address the complex landscape of AI integration in Canadian courts. They successfully underscore the importance of preserving judicial independence, which is crucial for maintaining public trust in the judiciary. Additionally, the Guidelines acknowledge AI’s potential benefits in improving the efficiency of judicial decision-making, encouraging courts to explore its use to responsibly enhance the administration of justice. The emphasis on adopting and using AI consistently with ethical principles and legal obligations helps ensure that AI is used fairly, transparently, and without bias.
However, the Guidelines also exhibit deficiencies. A primary concern is their lack of specificity regarding different types of AI. The Guidelines fail to adequately distinguish among applications such as generative AI, automated decision-making systems, and other non-generative AI tools. This distinction is crucial, as each type presents unique ethical and legal considerations. For instance, the use of generative AI in court settings raises concerns about transparency, accountability, and potential bias in the algorithms used. Similarly, automated decision-making systems require careful scrutiny to ensure they do not perpetuate or exacerbate existing societal biases. The Guidelines’ failure to address these nuances could lead to AI implementation that inadvertently compromises fairness and justice.
Further, the Guidelines lack a detailed discussion of human rights and their potential vulnerability to AI’s influence. The right to a fair trial, the right to privacy, and the right to procedural fairness are all potentially impacted by AI use in courts. For example, AI could be used to predict the likelihood of recidivism (see, for example, this discussion by Molly Callahan at Boston University), and those predictions could then be used to deny bail or impose harsher sentences, raising concerns about due process and discrimination. The Guidelines’ lack of depth in this area is a key shortcoming to address.
Another critical operational issue is insufficient guidance on conducting algorithmic impact assessments. These assessments are vital for ensuring that AI systems are used responsibly and ethically. They help identify potential biases, ensure transparency in decision-making processes, and mitigate the risk of unintended consequences. The Guidelines’ lack of instructions on conducting these assessments leaves courts ill-equipped to navigate the complexities of AI implementation, especially when many courts lack the technical expertise necessary to responsibly explore AI on their own.
The Guidelines also lack detailed guidance on training and capacity-building, which is essential for ensuring court staff can use AI systems effectively and responsibly. The absence of a robust “human-in-the-loop” principle raises concerns about over-reliance on AI and the erosion of judicial autonomy. The Guidelines’ current approach may inadvertently create loopholes for the inappropriate application of AI in court administration and decision-making processes.
Finally, while the Guidelines’ stated motivation is a broad concern over judicial independence, many of their concerns rest on hypotheticals. The question of how the use of AI by courts or the judiciary can impact the unwritten constitutional principle of judicial independence has not been explored. Without a fundamental understanding of this interaction, there is no way to truly validate these concerns or to design AI implementation strategies that account for them.
Change Management and AI Implementation
The CJC Guidelines primarily focus on the responsible use of AI. However, they lack guidance on the crucial steps of exploring and implementing AI. This omission leaves courts with little direction on navigating the complexities of selecting appropriate AI tools, assessing their suitability, and integrating them into existing workflows.
My “Exploring AI at High-Risk Legal Institutions” report emphasizes the importance of a strategic, well-structured change management approach to integrating AI tools in high-risk legal institutions like courts and tribunals. Without clear guidance on change management, courts will face significant challenges in adopting AI. The CJC Guidelines could be substantially enhanced by walking courts through best practices on change management, with a focus on implementing new technologies. This would give courts practical advice for navigating the complexities and risks of AI implementation, ensuring a smoother transition, minimizing disruption, and maximizing the chances of successful AI adoption.
Judicial Independence
The CJC Guidelines state that their motivation is a broad concern that the use of AI could impact judicial independence. However, they do not meaningfully explore how AI can affect this fundamental principle. AI has the potential to both support and undermine judicial independence, yet the Guidelines neither examine nor illustrate this interplay.
The reality is that the constitutional principle of judicial independence was never adapted for the 21st century. Judicial independence has three characteristics: security of tenure, financial security, and administrative independence (Ref re Remuneration of Judges of the Prov. Court of P.E.I.; Ref re Independence and Impartiality of Judges of the Prov. Court of P.E.I., 1997 CanLII 317 (SCC) at para 115). The use of AI in courts would not jeopardize judges’ security of tenure or financial security, so it could only ever threaten administrative independence. While this principle remains of central importance to our constitutional democracy, the administrative independence of the judiciary is a concept elucidated in the days when “Zoom” was a sound made by fast cars and 16-bit computers were considered the “next generation” of high-end home computers. In Valente v The Queen, 1985 CanLII 25 (SCC) [Valente], the Supreme Court of Canada defined what is protected by administrative independence in narrow terms:
- Assignment of judges;
- Sittings of the court;
- Court lists; and,
- The related matters of allocation of courtrooms and direction of the administrative staff engaged in carrying out these functions. (Valente at 709)
In 1985, these tasks were performed by humans using paper. When courts began adopting technology, little thought was given to the possibility that the technology itself could jeopardize judicial independence. What bears significant study and exploration is how the use of AI by judges or court staff could impact administrative independence: whether it falls within the broader logic and reasoning of Valente’s terms, or whether the question needs to be substantively revisited by the Supreme Court of Canada. This is not easily answered, but it is my next research focus with the Artificial Intelligence Risk and Regulation Lab. By addressing these questions, the CJC Guidelines can help ensure that AI is used to strengthen, rather than undermine, judicial independence in Canada.
Recommendations for Improving Guidelines
The CJC Guidelines or future guidelines on the use of AI by courts and tribunals would benefit from the following recommendations:
- Provide Specific Guidance on Different Types and Use Cases of AI: The Guidelines should offer specific guidance on using different types of AI, such as generative AI and automated decision-making systems, and their various applications within the court system. Automated decision-making systems, in particular, warrant dedicated treatment (see, for example, the Government of Canada’s Directive on Automated Decision-Making).
- Expand the Discussion of Human Rights: Include a more detailed discussion of how AI can impact specific human rights, such as the rights to a fair trial, privacy, and procedural fairness. There are helpful precedents in circulation, such as the UNESCO Draft Guidelines for the Use of AI Systems in Courts and Tribunals or the Law Commission of Ontario’s Submission on Bill 194 – Strengthening Cyber Security and Building Trust in the Public Sector Act, 2024.
- Provide More Detailed Guidance on How to Conduct Algorithmic Impact Assessments: This guidance should include recommendations for the scope and format of these assessments. A standard approach may be desirable, such as the use of UNESCO’s Ethical Impact Assessment or the Government of Canada’s Algorithmic Impact Assessment.
- Provide More Detailed Guidance on Training and Capacity-Building: This should include guidance on developing and implementing training programs and providing ongoing education and support.
- Incorporate a Robust “Human in the Loop” Principle: This principle should unequivocally state that AI should not replace human judgment in judicial decision-making (see, for example, the Government of Canada’s Directive on Automated Decision-Making; see also the Alberta Courts’ Notice to the Profession & Public – Ensuring the Integrity of Court Submissions When Using Large Language Models; see also the Federal Court’s Interim Principles and Guidelines on the Court’s Use of AI).
- Incorporate Guidance on Change Management: This would involve guidance on assessing organizational and justice system users’ needs, selecting appropriate AI tools, managing risks, and engaging stakeholders.
- Deepen the Analysis and Understanding of Judicial Independence: The Guidelines should dedicate a section to explaining the complex interplay between AI and judicial independence. In the alternative, such policies should defer to some established understanding of this interaction. Regulation cannot be effective when those being regulated do not understand why they are being regulated.
Conclusion & Next Steps
The CJC Guidelines provide a valuable starting point for AI’s responsible and ethical adoption in Canadian courts. However, these Guidelines and similar policies need to be further developed to address the deficiencies identified in this assessment. By incorporating these recommendations, the CJC and individual courts and tribunals can create a more comprehensive and informative set of guidelines that will help ensure that AI promotes fairness, access to justice, and the efficient administration of justice.
As a next step in this field, I am embarking on an effort to provide a more precise answer to whether, when, and how specific use cases of artificial intelligence may impact judicial independence. The aim of this project is to inform ongoing discussions on how courts can responsibly and safely integrate technologies like AI without jeopardizing judicial independence or the public’s perception of it. As an overriding constitutional risk, it is paramount that any plan to explore, implement, and use AI in courts is contextualized and framed by the enduring necessity to preserve judicial independence.
You can read a more in-depth evaluation of the CJC Guidelines in “Towards Responsible AI Integration: Evaluating the CJC Guidelines for Canadian Courts”, published by the Artificial Intelligence Risk and Regulation Lab.
About the Artificial Intelligence Risk and Regulation Lab (AIRRL): Founded in 2023 under the Access to Justice Centre for Excellence, the AIRRL is dedicated to exploring how AI can transform the justice system while safeguarding foundational principles. Our research seeks to develop frameworks and recommendations that ensure AI’s integration enhances access to justice rather than detracts from it. We believe that responsible, evidence-based AI policies are essential to protect legal institutions’ integrity and broaden access to justice for all in the 21st century. By fostering interdisciplinary collaboration, we aim to support a balanced approach to AI regulation that aligns technological progress with public trust and ethical accountability.