Taming the Ghost in the Machine: Canada’s Journey to AI Regulation, Part 2
In part 1 of this article, we explored two different areas of the regulation of artificial intelligence (AI) in Canada. These included existing laws of general application that currently apply to AI, as well as proposed legislation that would directly regulate the commercial use of AI in Canada, known as the Artificial Intelligence and Data Act (AIDA). In part 2 of this article, I will introduce a number of international developments in the regulation of AI that have an impact on Canada and introduce the primary international norms that are developing in this area. The article will conclude with a brief discussion on how organizations can use these sources to inform their development of policies and procedures around the use of AI in their workplaces.
International AI Regulatory and Framework Developments
As the regulatory environment for AI rapidly develops around the world, it is important for Canadian lawyers, regulators and policy makers to have an informed view of this landscape. Given Canada's deep integration into global trade networks and technological ecosystems, harmonizing AI regulations with international standards is essential. Such harmonization helps to prevent fragmented approaches that could stifle innovation, create trade barriers and lead to regulatory arbitrage that could harm Canadian interests in this important developing field.
Indeed, we have already seen a recognition of this importance in the evolution of the definition of “artificial intelligence system” in the proposed AIDA. The definition of an AI system has evolved from its original text to now align more closely with the definition used in the Organization for Economic Co-operation and Development’s (OECD) AI Principles as well as the European Union’s (EU) AI Act.
There is of course a significant amount of regulatory activity occurring around the world with regard to AI at the moment, and a summary of all significant developments is outside the scope of this article. Stay tuned to the developing website of The AI Risk and Regulation Lab at the University of Victoria (www.AIRRLAB.com), where a more comprehensive overview of international regulatory developments will be posted in the coming months. A brief summary of the most significant international regulatory and framework developments relevant to Canada to date includes the following:
EU AI Act
The EU AI Act is one of the first comprehensive regulatory and legal frameworks for AI in the world. It came into force in August of 2024, with provisions gradually being phased in over a 36-month period. The legislation classifies AI systems based on the level of risk that they pose, with categories ranging from minimal risk through to unacceptable risk. Higher risk systems are subject to stringent requirements including risk assessments, transparency obligations and human oversight.
G7 Hiroshima AI Process
The G7 Hiroshima AI Process, launched in 2023, is the latest collaborative effort by the Group of Seven nations to establish common principles and policies for the governance of AI, with a particular focus on generative AI technologies. This initiative builds on earlier efforts, emphasizing transparency, accountability, safety, and international cooperation to ensure AI technologies respect human rights, democratic values, and the rule of law.
UNESCO Recommendations on the Ethics of AI
The United Nations Educational, Scientific and Cultural Organization's (UNESCO) Recommendation on the Ethics of Artificial Intelligence is a framework adopted in November 2021, aimed at guiding the ethical development and use of AI on a global scale. This document outlines principles and values that should underpin AI systems, such as respect for human dignity, privacy, non-discrimination, and environmental sustainability. The Recommendation emphasizes the importance of ensuring that AI technologies are inclusive, transparent, and accountable. The Recommendation also provides member states with guidance on implementing these ethical principles in national policies and regulations.
OECD’s AI Principles
The OECD’s AI Principles, adopted in May 2019, are one of the first comprehensive international frameworks designed to promote trustworthy artificial intelligence. These principles, endorsed by over 40 countries including Canada, emphasize the responsible development and deployment of AI systems. The key tenets include ensuring that AI is transparent, fair, and accountable, and that it benefits people and the planet. The OECD also stresses the importance of robust safety and security measures, as well as the need for ongoing research and innovation that respects human rights and democratic values.
Introduction to Developing Norms in AI Regulation
In this article, when I use the word “norms” I am referring to widely accepted principles and best practices that are used to guide the responsible development and use of AI technologies. These norms play a vital role in shaping behavior and expectations across the AI community. The sources of developing norms in this area include regulation, international frameworks, ethical guidelines and technical standards that help ensure AI aligns with societal values. The significance of norms lies in their ability to provide a consistent approach to AI governance. As AI evolves, these norms offer a foundation for developing regulations and policy.
The impact of these norms can be seen in the development of the AIDA in Canada, which is guided by a set of six principles that are intended to align with international norms. These are human oversight and monitoring; transparency; fairness and equity; safety; accountability; and validity and robustness. Some version of these principles is found in all of the international regulatory and framework documents referenced above, and they should form the foundation of any attempt to develop internal policy and procedure documents for the implementation of AI in Canadian workplaces.
Creating AI Policies and Procedures
As organizations attempt to navigate the complex and rapidly evolving landscape of AI implementation, there are several basic practical steps that they can take to develop robust internal policies and procedures. These steps not only ensure compliance with emerging laws such as the AIDA but also align with broader norms and ethical guidelines that are increasingly influencing AI governance around the world. These basic steps include the following:
Consider Domestic Legislation and International Norms
A useful first step is to identify applicable domestic legislation and document any relevant principles to incorporate into your policy. For example, almost all organizations in Canada will be subject to some form of privacy and information-use law. In this circumstance, you would want to identify any uses of AI in your organization that could engage the requirements under the relevant law, such as the Personal Information Protection and Electronic Documents Act (PIPEDA) or provincial equivalents. This might include AI applications that handle personal data, make automated decisions about individuals, or involve data analytics.
Once all domestic legislation, including the AIDA, is considered, further guidance can be taken from the norms reflected in the framework documents referenced above. For instance, ensuring transparency, accountability, and human oversight in AI systems is a core requirement across these frameworks. By embedding these principles into your organization’s AI policy framework, you align with global standards, reducing the risk of future compliance issues and enhancing the trustworthiness of your AI systems.
Conduct a Risk Assessment
A core tenet of the requirements under the proposed AIDA is the need for risk management when implementing AI in a commercial setting. Although not every implementation of AI in a Canadian workplace will be subject to AIDA, it is nonetheless good practice to consider risk when developing policy in this area. Higher risk AI systems, such as those implemented in a health care or legal setting, should be subject to more stringent oversight and controls. This includes implementing rigorous risk assessments, conducting regular audits, and establishing clear protocols for human intervention when needed.
Consider Ethical Guidelines
Using the ethical frameworks provided by UNESCO and the OECD as a foundation, Canadian organizations should develop internal policies that ensure AI systems are used in ways that respect human rights, promote fairness, and avoid discrimination. This includes creating policies that address issues such as bias in AI algorithms, data privacy, and the environmental impact of AI technologies.
Establish Transparent Reporting
Transparency is a recurring theme in all major international AI frameworks and is likely to be a cornerstone of Canadian AI regulation under AIDA. If appropriate, organizations should consider developing clear and transparent reporting mechanisms that document how AI systems are being used, and the steps taken to mitigate any identified risks.
Engage in Continuous Monitoring and Adaptation
The rapid pace of AI development means that regulatory and normative standards are constantly evolving. Organizations should consider committing to continuous monitoring of both technological advancements and regulatory changes. This involves regularly updating policies and procedures to reflect the latest international standards and best practices, as well as ensuring ongoing compliance with Canadian law.
The practice of AI policy development is evolving swiftly in Canada, and while there are no blueprints for this relatively new field of practice, guidance can be taken from the sources discussed above and in part 1 of this article. I look forward to hearing perspectives on this topic from anyone who is working in this exciting and developing field.
Disclosure: Generative AI was used in the development of this post.