Taming the Ghost in the Machine: Canada’s Journey to AI Regulation, Part 1
Throughout my career, I have been drawn to, and fortunate enough to work on, initiatives that push the envelope in challenging and fast-moving areas. This work can sometimes induce anxiety given its unpredictable nature and pace, but it is never boring and is often highly rewarding. That said, nothing I have done in the past comes close, in pace of change and potential for impact, to working in the management of risk for artificial intelligence (AI) implementation.
The combination of rapidly advancing AI technology and the slow pace of regulatory and policy responses is creating a uniquely challenging but fascinating work environment. In the academic space, at the AI Risk and Regulation Lab at the University of Victoria, we are engaged in a number of projects, but perhaps none currently more important than a mapping project in which we endeavour to keep up to date with, and map out, existing regulatory responses to AI across the world. On the professional practice side, the demand I am seeing from clients in this space is an urgent need for assistance with the development of policies and procedures for AI use and implementation. Against this backdrop, I was recently asked to deliver a presentation providing a snapshot of the existing and developing AI regulatory environment in Canada. In parts 1 and 2 of this article, I aim to summarize the key points of that presentation for those interested in this field who may not have the time to gather all these details themselves.
In part 1 of this article series, I will address two areas of AI regulation: existing laws of general application and proposed laws to directly regulate AI in Canada. In part 2, I will address the rapidly developing international norms in this area, as well as how organizations in Canada can use the existing laws, proposed laws and developing international norms to inform the development of policies and procedures in this important area.
Existing Laws of General Application
A wide variety of existing laws in Canada currently apply to AI. The list is substantial and includes privacy laws, human rights laws, intellectual property laws, consumer protection laws and many other industry-specific laws, such as those in the fields of health care consent and telecommunications. To highlight the challenges of applying existing laws to AI, I would like to focus on two specific examples: privacy law and intellectual property law.
Privacy Law
Privacy laws in Canada, whether federal or provincial, require that consent for collecting personal information be informed, specific, and voluntary. The current challenge is that while certain AI companies state that they are striving to adhere to these privacy law principles, the practical realities of AI technology make compliance difficult. For instance, the opaque nature of many AI algorithms and data processing methods can make compliance with Canadian privacy laws almost impossible at this point in time. The result is that there are significant gaps in current compliance and, from my perspective, no clear roadmap for dealing with them. As the technological aspects of this problem are unlikely to be solved in the short term, a useful first step will be the development of clear and robust consent processes that take into consideration the complexities of AI data usage.
Intellectual Property Law
The primary intersection of AI and intellectual property law in Canada is the field of copyright. Copyright law, governed by the Copyright Act, provides protection for original works of authorship, including literary, artistic, musical, and dramatic works. As AI technologies evolve, they raise significant copyright issues concerning both the inputs (data and works used to train AI systems) and the outputs (works generated by AI systems). It is no secret that AI companies frequently use copyrighted works without permission for training purposes. This practice raises questions about copyright infringement, and numerous cases are currently moving through the courts on this subject. On the other side of the equation, the outputs generated by AI systems, such as text, images and music, also raise significant copyright questions. The primary issue with outputs is that traditional copyright law recognizes human authorship, and under the Copyright Act an author is usually a natural person. This leaves us in an uncertain position on the question of who, if anyone, holds the copyright to AI-generated works.
Proposed Laws to Directly Regulate AI
The rapid development of AI technologies prompted the Federal Government to propose new legislation in 2022 specifically aimed at addressing the unique challenges and opportunities presented by AI. The centerpiece of this legislative effort is the Artificial Intelligence and Data Act (AIDA) which seeks to establish a comprehensive framework for the regulation of AI in Canada.
The AIDA is scoped to regulate AI systems that have a significant impact on individuals’ rights, health, and economic interests. Its stated purpose is to promote responsible AI innovation and ensure that AI technologies are developed and used in a manner that respects human rights, fosters transparency, and enhances public trust. The AIDA adopts a risk-based framework, categorizing AI systems based on their potential harm. High-impact AI systems, which pose greater risks to individuals and society, will be subject to more stringent requirements, including mandatory risk assessments and oversight. The Act also proposes the creation of a centralized AI and Data Commissioner. This oversight body will be responsible for monitoring compliance, enforcing regulations, and promoting best practices in AI governance. The Commissioner will have the authority to conduct audits and investigations, and to impose penalties for non-compliance.
The AIDA emphasizes the need for transparency in AI operations. It requires organizations to disclose information about their AI systems, including their functionality, data sources, and decision-making processes. This transparency is intended to ensure that individuals understand how AI decisions affecting them are made. The AIDA also integrates data privacy and security considerations, requiring organizations to implement robust data protection measures. This includes securing the personal information used in AI training and deployment, and ensuring compliance with existing privacy laws.
At the time of writing, the AIDA is in the committee stage and has garnered significant feedback and criticism from stakeholder groups throughout the legislative process. Key concerns include that the scope of application is either too broad or too narrow, that the transparency requirements are not feasible given the nature of AI technology, that the regulatory burden will discourage industry development in Canada, that the provisions on bias and discrimination are insufficiently detailed, and that it is impossible to regulate such fast-moving technology. In response to these concerns, several amendments have been made to the proposed language, including a new definition of “artificial intelligence” that accords with developing international norms in the field.
In part 2 of this article, I will discuss these developing international norms in detail, including the work of the OECD, EU, G7 and UNESCO. I will also discuss the important area of internal policy development for organizations in Canada, and specifically how organizations can use the existing laws, proposed laws and developing international norms to inform the development of policies and procedures in this important area.
Disclosure: Generative AI was used in the development of this post.