Canada’s online legal magazine.

Archive for the ‘Legal Technology’ Columns

Beyond Regulatory Silos: Announcing the Canadian Centre for Responsible AI Governance

Over the past two years, much of my writing in this space has focused on the accelerating risks associated with artificial intelligence and the uneven state of AI regulation in Canada. I have written about stalled federal legislation, the growing role of privacy regulators, the increased risks of AI use for regulated professionals, and the early signs of AI related litigation beginning to surface in Canadian courts. Taken together, these developments point to a growing tension. Artificial intelligence is being deployed at speed, while the institutions tasked with managing risk remain fragmented, reactive, and unevenly equipped.

This column steps back . . . [more]

Posted in: Legal Technology

From Anecdote to Evidence: Why Students’ Experiences With Generative AI Matter

Generative AI is nearly impossible to avoid as a law student. Over the past few years, it has been embedded in many of the products commonly used for legal work (see, e.g., proprietary research platforms, Google, and Microsoft products). Whether welcomed or resisted, generative AI is now part of the legal information environment.

Many questions remain about how to prepare students, during their legal education, to use generative AI in their future practice. While technological competence is not the same thing as generative AI competence, we know that using generative AI systems is a technical skill . . . [more]

Posted in: Legal Information, Legal Technology

The Real Problem in Hallucination Cases Is Not the Failure to Verify

Cases keep cropping up in which counsel has used AI to create a court submission containing made-up cases. The common response from courts and the profession has been: "prompt, but verify." It's okay to use AI, just make sure the output is accurate.

I think this response misses the mark. But consider first how fixated we've become on the issue of verification, implying that this is all we need to be concerned about in deciding whether counsel should be using AI to write court submissions.

As Judge Moore wrote in a Federal Court case earlier this year:

The use

. . . [more]
Posted in: Legal Technology

Unregulated Tools, Unyielding Duties: AI Risk Management for Canadian Professionals

In my last column, I moved away from regulatory analysis to explore how artificial intelligence may affect specific functions within the legal profession. In this piece, I return to the theme of risk and broaden the discussion to consider the challenges AI presents across all regulated professions.

The rapid development of generative artificial intelligence has already begun to reshape practice across a wide range of professions. For regulated professionals in Canada, including lawyers, physicians, engineers, and others governed by statutory, ethical, and fiduciary duties, these advances bring both significant promise and considerable risk. However, the legal and regulatory frameworks are . . . [more]

Posted in: Legal Technology

How Profs and Students Are Using AI in Law Schools Around the World

Law schools everywhere are confronting the same issue: how to use AI to help rather than hinder student learning.

In an earlier column, I speculated on ways we might help law students foster good over bad uses of AI. A paper published this summer by Dutch law professor Thibault Schrepel surveys the growing literature on experiments with AI in legal education. His overview provides a more concrete sense of what better uses of AI might entail.

These applications all have potential pitfalls, but these too can be harnessed as part of the learning process. To begin with the most . . . [more]

Posted in: Legal Education, Legal Technology

Deceptive Dynamics of Generative AI: Beyond the “First-Year Associate” Framing

Guidance for lawyers on generative AI use consistently urges careful verification of outputs. One popular framing advises treating AI as a “first-year associate”—smart and keen, but inexperienced and needing supervision. In this column, I take the position that, while this framing helpfully encourages caution, it obscures how generative AI can be deceptive in ways that make it fundamentally dissimilar to an inexperienced first-year associate. How is AI deceptive? In short, generative AI can fail in unpredictable ways and sometimes in ways that mimic reliability, making errors harder to detect than those flowing from simple inexperience.

Before elaborating, three important caveats . . . [more]

Posted in: Legal Ethics, Legal Technology

Quantitative Assessment of Access to Justice Initiatives

Quantitative methods are at once well established and novel when it comes to access to justice. We've been reporting on our activities to funders, boards, and communities for decades, but we've also occasionally been complacent about what message we are conveying. When I think about data on the law and how we can use it better, I often think of John Snow and his search for the source of a cholera outbreak in London in 1854. Here you can see the original map that allowed him to identify the source as the water pump on Broad Street, which he created through . . . [more]

Posted in: Legal Information, Legal Technology

AI’s Impact on the Legal Profession: Takeaways From Microsoft Research for Canadian Lawyers

Over the last few columns, I have focused primarily on the regulation side of my work in artificial intelligence (AI) risk and regulation. That focus has reflected, in part, my concern about the current regulatory patchwork surrounding generative AI in Canada and the very real dangers of unregulated implementation of AI into our daily lives. That discussion will continue at a later date, but for the next few articles I plan to shift the focus to the research and perspectives on the risk management side of the equation.

The risks associated with AI implementation are not hypothetical. Many readers will . . . [more]

Posted in: Legal Technology

Should Courts Allow Counsel to Record and Transcribe In-Court Testimony on Their Phones?

In July, I was counsel in a voir dire in BC Supreme Court, where four police officers testified over three days. While the officers gave evidence, I took over 30 pages of handwritten notes. I could capture verbatim maybe 30 percent of what was said. The rest of the time, when answers went on too long or counsel and the witness talked over one another, I got only the gist. Yet precision was key.

At one point, we stood down for over an hour for the court clerk to go through the recording to find a . . . [more]

Posted in: Legal Ethics, Legal Technology, Practice of Law

Another Brilliant Idea! The Hidden Dangers of Sycophantic AI

Author’s Note: After I wrote this column, but a couple of days before it was published, OpenAI upgraded its GPT chatbot from version 4 to version 5. Among the negative reactions to the change was a sense that ChatGPT-5’s artificial personality had become more distant and less complimentary. As you’ll see below, I don’t think that’s a problem. But there are early indications that OpenAI might tweak the model again to reintroduce the earlier version’s “warmth,” which would make my warnings below more relevant again.

Something that many people have expressed concern about, when it comes to using . . . [more]

Posted in: Legal Technology, Practice of Law

Your Feelings, Their Profit: How AI Misreads Your Emotions and Sells Them to the Highest Bidder

As humans, we tend to navigate the world through emotion: quietly, instinctively, and sometimes unconsciously. What are emotions, if not the very fabric of how we live in the world? They’re how we feel, of course, but are also how we communicate, often without even realizing it. They drive our decisions: in relationships, in politics and in marketplaces. They connect us to each other and shape how we understand ourselves. But emotions are also deeply personal. While our faces might betray a flicker of joy or sadness, only we know the full story; the nuanced reasons why we feel what . . . [more]

Posted in: Justice Issues, Legal Technology

Algorithms Without Anchors: The High Stakes of North America’s AI Regulatory Void

In previous columns, I have examined the evolving trajectory of AI regulation and warned of the precarious path ahead. Regrettably, I must now report that the regulation of artificial intelligence in North America has become a project stalled by political circumstance. In both Canada and the United States, efforts to establish comprehensive governance frameworks for AI have encountered untimely political disruption: legislative dissolution in Canada and executive reversals in the United States.

This confluence of events has left two of the world’s most influential jurisdictions without durable regulatory mechanisms to manage the profound legal, ethical, and societal risks posed by . . . [more]

Posted in: Legal Technology


This project has been made possible in part by the Government of Canada | Ce projet a été rendu possible en partie grâce au gouvernement du Canada