In the Absence of Federal AI Laws, Privacy Regulators Lead the Way: Lessons From the Clearview Case
On December 18, 2024, the Supreme Court of British Columbia issued a decision in Clearview AI Inc. v. Information and Privacy Commissioner for British Columbia, 2024 BCSC 2311. At its core, the case involved a challenge by a U.S.-based artificial intelligence (AI) company against a binding order from British Columbia’s privacy regulator. The company, Clearview AI, had amassed a large facial recognition database by scraping billions of publicly accessible images from the internet, many of which depicted individuals located in British Columbia, without obtaining their consent.
The decision is significant not only for its factual context, but for what it represents: one of the first concrete instances of a Canadian court upholding regulatory limits on the conduct of an AI company. In doing so, the court confirmed that British Columbia privacy law can and does apply to foreign-based AI companies where there is a real and substantial connection to the province. More importantly, it affirmed that the indiscriminate scraping of personal images, even those posted publicly online, does not exempt an AI company from the foundational principles of consent, transparency, and reasonable purpose.
This case arrives at a critical juncture in Canada’s evolving regulatory landscape. With no federal AI legislation currently in force and the Artificial Intelligence and Data Act (AIDA) having stalled following the dissolution of Parliament for a federal election, provincial privacy enforcement has emerged as one of the few effective regulatory mechanisms available to hold AI developers accountable in Canada. The Clearview decision not only underscores the role of privacy commissioners as AI regulators, but also provides practical guidance to AI companies operating in or affecting Canadian jurisdictions.
The Background
Clearview AI is a U.S.-based technology company best known for its controversial facial recognition software. The company’s business model is straightforward but troubling: it collects images of human faces by scraping the internet, without consent, then uses biometric algorithms to create searchable profiles, offering these capabilities to law enforcement, government agencies, and other clients.
By 2020, Clearview’s activities had drawn criticism, prompting a joint investigation by four Canadian privacy regulators: the federal Privacy Commissioner, and their counterparts in British Columbia, Alberta, and Quebec. These regulators concluded that Clearview’s practices violated Canadian privacy laws, including BC’s Personal Information Protection Act (PIPA), by collecting and using highly sensitive biometric information without consent and for purposes a reasonable person would not consider appropriate.
Although Clearview subsequently suspended its services in Canada, it did not delete the data it had already collected, nor did it cease the underlying practice of scraping images from online sources likely to include Canadians. In response, the BC Information and Privacy Commissioner issued a binding order in 2021 requiring Clearview to stop offering its services in the province, to delete all data collected from BC residents without consent, and to make best efforts to prevent further collection and use of such information.
The Decision
Clearview challenged the order in the Supreme Court of British Columbia, arguing that BC’s privacy law should not apply to a U.S.-based company, that the images it used were “publicly available” and therefore exempt from consent requirements, and that its purpose for collecting the data was both lawful and reasonable. The company further contended that the Commissioner’s order was overbroad and unenforceable.
The Court rejected all of these arguments. It found that there was a real and substantial connection between Clearview’s activities and British Columbia, noting that the company’s database included images of residents and that its services had been used by local law enforcement. The Court also affirmed the Commissioner’s interpretation that online availability does not equate to legal availability for biometric processing under PIPA. Most significantly, it found that Clearview’s purpose, the mass identification of individuals without notice or consent, was not one a reasonable person would find appropriate in the circumstances.
The Court upheld the Commissioner’s order in its entirety, thereby sending a strong message that AI companies operating in Canada, even indirectly, cannot bypass privacy law simply by sourcing data from public websites. The decision stands as a repudiation of the notion that technological capability creates legal entitlement, a principle with far-reaching implications for the broader AI industry.
The Implications
The Clearview decision is more than a privacy enforcement action against a single company; it is an important moment in the regulation of artificial intelligence in Canada. In the absence of a federal AI regulatory framework, the decision illustrates how privacy law is being leveraged as a practical, enforceable mechanism for AI governance. It signals to developers, regulators, and policymakers alike that AI systems will not be permitted to operate outside the boundaries of established legal principles, even if those systems originate outside Canada.
Clearview’s core assumption, that images posted publicly online could be freely collected and used to train a facial recognition system, was also firmly rejected. The Court upheld the Commissioner’s finding that this practice breached PIPA’s requirements for meaningful consent and purpose limitation. Notably, the Court accepted that even publicly accessible data retains its status as “personal information” when processed in invasive and transformative ways by AI technologies.
This reasoning cuts to the heart of current debates about AI model training, especially in areas like facial recognition, language modelling, and biometric surveillance. The decision establishes that AI companies must consider the nature of the data they collect and the context in which it was made available, rather than relying on the flawed assumption that public equals permissible. In effect, the Court confirmed that technological capacity does not displace legal obligation.
The decision also confirms that foreign AI companies are not insulated from Canadian privacy law. By affirming that a “real and substantial connection” existed between Clearview’s activities and British Columbia, the Court opened the door for extraterritorial enforcement of privacy obligations in appropriate cases. For global AI developers and vendors, especially those offering biometric identification, predictive analytics, or automated decision-making tools, this case sends a clear warning: Canada’s privacy regime has teeth, and it can bite even across borders.
Most importantly, the Clearview decision highlights the regulatory gap that continues to exist at the federal level. With the Artificial Intelligence and Data Act (AIDA) now effectively dead following the call of a federal election, there is no standalone statutory framework governing the design, deployment, or auditing of AI systems in Canada.
In the meantime, provincial privacy statutes like PIPA are stepping into the regulatory void, offering a flexible yet enforceable mechanism to respond to the most urgent risks posed by AI technologies. While these laws were not drafted with AI in mind, they are increasingly being interpreted in ways that constrain opaque, high-impact, and non-consensual uses of personal data in automated systems. The Clearview case stands as the clearest example to date of how this approach can work in practice.
The Future
The Clearview decision stands as an important moment in Canada’s ongoing effort to define meaningful regulatory boundaries for artificial intelligence. In the absence of comprehensive federal legislation, privacy commissioners appear to be shaping the practical contours of AI governance in Canada. The case confirms that AI companies cannot rely on technical capability or cross-border detachment to escape compliance with Canadian privacy law. It also affirms that scraping publicly accessible data to train AI systems does not exempt developers from the foundational obligations of consent, transparency, and fairness.
As AI systems become increasingly embedded in both public and private sector decision-making, Canadian regulators are clearly signalling that the legal status of data cannot be divorced from the context of its use. The implications of this reasoning are likely to extend well beyond facial recognition. Indeed, we are already seeing further developments. In April 2023, the Office of the Privacy Commissioner of Canada launched an investigation into the use of ChatGPT by OpenAI, examining whether the collection and use of personal information to train large language models complies with federal privacy law. This was expanded into a joint investigation in partnership with the privacy commissioners for British Columbia, Quebec, and Alberta in May 2023. The scope of the investigation includes whether valid and meaningful consent was obtained for the collection, use, and disclosure of personal information; whether obligations regarding openness and transparency, access, accuracy, and accountability have been respected; and whether OpenAI has collected, used, or disclosed personal information for purposes that a reasonable person would consider appropriate.
Until federal legislation such as the AIDA is passed and comes into force, privacy law appears to be the primary tool available to address the risks posed by unregulated AI. The Clearview decision confirms that these tools can be both effective and enforceable. For AI developers, this ruling provides clear guidance. Those who seek to operate in Canada must treat privacy compliance not as a secondary concern, but as a core design principle. For regulators, Clearview affirms the legitimacy and necessity of using privacy frameworks to shape AI accountability. And for the public, it offers reassurance that legal protections do, in fact, apply, even in the face of novel and rapidly evolving technologies.
Note: Generative AI was used in the preparation of this article.