Algorithms Without Anchors: The High Stakes of North America’s AI Regulatory Void
In previous columns, I have examined the evolving trajectory of AI regulation and warned of the precarious path ahead. Regrettably, I must now report that the regulation of artificial intelligence in North America has become a project stalled by political circumstance. In both Canada and the United States, efforts to establish comprehensive governance frameworks for AI have encountered untimely political disruption: legislative dissolution in Canada and executive reversals in the United States.
This confluence of events has left two of the world’s most influential jurisdictions without durable regulatory mechanisms to manage the profound legal, ethical, and societal risks posed by increasingly pervasive AI technologies. As the deployment of AI systems accelerates across sectors such as healthcare, legal services, and public administration, the absence of binding regulatory safeguards is a vulnerability with far-reaching implications.
Canada’s Tentative Foray into AI Regulation: The Rise and Stall of AIDA
As described in previous columns, in Canada, the federal government made a significant attempt to regulate AI through the proposed Artificial Intelligence and Data Act (AIDA), introduced as part of Bill C-27, the Digital Charter Implementation Act, 2022. AIDA sought to establish one of the world’s first national regulatory frameworks for AI, emphasizing a risk-based approach grounded in principles such as transparency, human oversight, accountability, and fairness. It would have required developers and deployers of “high-impact systems” to adopt internal governance policies and proactive risk-assessment protocols.
However, AIDA faced procedural and political obstacles. The bill’s complexity and the evolving nature of AI regulation led to prolonged consultations and amendments, and its path to enactment was halted entirely when Parliament dissolved for the federal election. AIDA’s demise has left Canada with a barren regulatory environment in which businesses face uncertainty and consumers lack clear protections against the potential harms of high-risk AI systems.
The U.S. Executive Orders: From Guardrails to Deregulation
In contrast, the United States approached AI regulation primarily through executive action. President Joe Biden’s October 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence mandated interagency efforts to develop AI safety standards, promote transparency, and uphold civil liberties. However, the fragility of this framework became apparent with the change in administration.
Upon assuming office in January 2025, President Donald Trump issued Executive Order 14148, rescinding the Biden-era AI directives. Shortly thereafter, he issued Executive Order 14179, titled “Removing Barriers to American Leadership in Artificial Intelligence,” which established a deregulatory framework aimed at promoting U.S. dominance in AI innovation. It directed federal agencies to suspend or revise existing AI-related actions deemed obstructive to innovation and omitted enforceable provisions concerning safety, transparency, or liability.
More recently, in May 2025, the United States House of Representatives passed one of the most consequential legislative proposals concerning artificial intelligence to date: the One Big Beautiful Bill Act (OBBBA). The Act contains a provision that would impose a ten-year federal moratorium on state and local regulation of AI. The bill has not yet been adopted by the Senate and remains the subject of considerable political and legal scrutiny. If enacted, OBBBA would establish broad federal pre-emption in the field of AI governance. Proponents argue that a consistent national approach would prevent regulatory fragmentation and enhance the country’s competitiveness in AI development. Critics contend that it would erode local democratic governance, hinder policy innovation, and raise constitutional concerns related to federalism and the limits of congressional power. Its enactment would have far-reaching consequences for the evolution of AI regulation in the United States and beyond.
The Risk of Inaction: Consequences of a Regulatory Void
Operating in a legal vacuum carries several risks. First, the absence of binding AI-specific legislation increases the likelihood of inconsistent and reactive enforcement, creating uncertainty for developers, deployers, and affected individuals. This is illustrated by the recent case against Clearview AI, in which Canada’s federal and provincial privacy commissioners jointly found that the company’s facial recognition technology violated national and provincial privacy laws. Despite the severity of the findings, the fragmented legal framework and absence of AI-specific statutory obligations complicated enforcement efforts and highlighted the limitations of existing privacy regimes in addressing novel AI-related harms. The case underscores the challenges of regulating powerful AI technologies without a coherent legal structure that ensures accountability, transparency, and meaningful redress.
Second, without clear regulatory mandates, organizations may underinvest in risk management. This is particularly problematic in high-stakes sectors such as healthcare and law, where AI errors can result in irreparable harm. As noted in prior commentary, even well-meaning professionals may deploy AI systems without fully understanding their limitations, exacerbating the risk of harm through misapplication or overreliance.
Third, and perhaps most troublingly, the absence of regulation allows market forces alone to dictate the development trajectory of AI technologies. This risks entrenching systems that are opaque, biased, or insufficiently accountable.
Conclusion: The Cost of Delay
The development of AI technologies in North America is proceeding at an unprecedented pace. Yet neither Canada nor the United States has established a durable, enforceable framework for managing the associated risks. The failure of AIDA and the volatility of U.S. executive action illustrate the fragility of current regulatory efforts. Worse, recent legislative proposals that seek to prohibit state-level regulation threaten to stymie more nimble local governance initiatives without offering a coherent national alternative.
The risks posed by unregulated AI systems are not hypothetical; they are real, present, and growing. Legal, professional, and ethical standards must evolve to meet the challenge. Absent proactive and principled regulation, society risks being governed not by the rule of law, but by the opaque logic of algorithms.
Note: Generative AI was used in the preparation of this article.