Canada’s online legal magazine.

Archive for the ‘Legal Technology’ Columns

Keeping Hold of the Reins When Using AI

What many of us in law, legal education, and other fields still want to know at this point is: what is AI really good for? What does it do reliably well and better than we could do on our own? And when we use it for those purposes, what risks do we take on?

In the early days of ChatGPT, those risks were clear. AI hallucinated authorities and generated biased output grounded in its training data. But as models have improved and we’ve learned to guard against these problems, those concerns have become more manageable.

A different and more subtle . . . [more]

Posted in: Legal Technology, Practice of Law

Hallucinated References, Government Reports, and Managing Your Citations

Given the high value legal professionals and consultants place on research excellence, I am surprised that stories keep surfacing about the lack of rigour these professional groups exercise in creating work product. In addition to the ongoing stories of professional sanctions imposed on lawyers for including incorrect citations and other issues associated with the use of generative AI, there have been regular stories about the high cost of government report contracts and the use of AI to create them. Here are some articles on a report prepared by Deloitte for the Province of Newfoundland . . . [more]

Posted in: Legal Ethics, Legal Information, Legal Publishing, Legal Technology, Practice of Law

Beyond Regulatory Silos: Announcing the Canadian Centre for Responsible AI Governance

Over the past two years, much of my writing in this space has focused on the accelerating risks associated with artificial intelligence and the uneven state of AI regulation in Canada. I have written about stalled federal legislation, the growing role of privacy regulators, the increased risks of AI use for regulated professionals, and the early signs of AI related litigation beginning to surface in Canadian courts. Taken together, these developments point to a growing tension. Artificial intelligence is being deployed at speed, while the institutions tasked with managing risk remain fragmented, reactive, and unevenly equipped.

This column steps back . . . [more]

Posted in: Legal Technology

From Anecdote to Evidence: Why Students’ Experiences With Generative AI Matter

Generative AI is nearly impossible to avoid as a law student. Over the past few years, it has been embedded into many of the products commonly used for legal work (see, e.g., proprietary research platforms, Google, and Microsoft products). Whether welcomed or resisted, generative AI is now part of the legal information environment.

Many questions remain about how to prepare students, during their legal education, to use generative AI in their future practice. While technological competence is not the same thing as generative AI competence, we know that use of generative AI systems is a technical skill . . . [more]

Posted in: Legal Information, Legal Technology

The Real Problem in Hallucination Cases Is Not the Failure to Verify

Cases keep cropping up where counsel has used AI to create a court submission containing made-up cases. The common response on the part of courts and the profession has been: ‘prompt, but verify.’ It’s okay to use AI, just make sure it’s accurate.

I think this response misses the mark. But consider first how fixated we’ve become on the issue of verification — implying that this is all we need to be concerned about in deciding whether counsel should be using AI to write court submissions.

As Judge Moore wrote in a Federal Court case earlier this year:

The use

. . . [more]
Posted in: Legal Technology

Unregulated Tools, Unyielding Duties: AI Risk Management for Canadian Professionals

In my last column, I moved away from regulatory analysis to explore how artificial intelligence may affect specific functions within the legal profession. In this piece, I return to the theme of risk and broaden the discussion to consider the challenges AI presents across all regulated professions.

The rapid development of generative artificial intelligence has already begun to reshape practice across a wide range of professions. For regulated professionals in Canada, including lawyers, physicians, engineers, and others governed by statutory, ethical, and fiduciary duties, these advances bring both significant promise and considerable risk. However, the legal and regulatory frameworks are . . . [more]

Posted in: Legal Technology

How Profs and Students Are Using AI in Law Schools Around the World

Law schools everywhere are confronting the same issue: how to use AI to help rather than hinder student learning.

In an earlier column, I speculated on ways we might help law students foster good over bad uses of AI. A paper published this summer by Dutch law professor Thibault Schrepel surveys the growing literature on experiments with AI in legal education. His overview provides a more concrete sense of what better uses of AI might entail.

These applications all have potential pitfalls, but these too can be harnessed as part of the learning process. To begin with the most . . . [more]

Posted in: Legal Education, Legal Technology

Deceptive Dynamics of Generative AI: Beyond the “First-Year Associate” Framing

Guidance for lawyers on generative AI use consistently urges careful verification of outputs. One popular framing advises treating AI as a “first-year associate”—smart and keen, but inexperienced and needing supervision. In this column, I take the position that, while this framing helpfully encourages caution, it obscures how generative AI can be deceptive in ways that make it fundamentally dissimilar to an inexperienced first-year associate. How is AI deceptive? In short, generative AI can fail in unpredictable ways and sometimes in ways that mimic reliability, making errors harder to detect than those flowing from simple inexperience.

Before elaborating, three important caveats . . . [more]

Posted in: Legal Ethics, Legal Technology

Quantitative Assessment of Access to Justice Initiatives

Quantitative methods are at once well-established and novel when speaking about access to justice. We’ve been reporting on our activities to funders, boards, and communities for decades, but we’ve also occasionally been complacent about what message we are conveying. When I think about data on the law and how we can approach using it better, I often think of John Snow and his search for the source of a cholera outbreak in London in 1854. Here you can see the original map that allowed him to identify the source as the water pump on Broad Street, which he created through . . . [more]

Posted in: Legal Information, Legal Technology

AI’s Impact on the Legal Profession: Takeaways From Microsoft Research for Canadian Lawyers

Over the last few columns, I have focused primarily on the regulation side of my work in artificial intelligence (AI) risk and regulation. That focus has reflected, in part, my concern about the current regulatory patchwork surrounding generative AI in Canada and the very real dangers of unregulated implementation of AI into our daily lives. That discussion will continue at a later date, but for the next few articles I plan to shift the focus to the research and perspectives on the risk management side of the equation.

The risks associated with AI implementation are not hypothetical. Many readers will . . . [more]

Posted in: Legal Technology

Should Courts Allow Counsel to Record and Transcribe In-Court Testimony on Their Phones?

In July, I was counsel in a voir dire in BC Supreme Court, where four police officers testified over three days. While the officers gave evidence, I took over 30 pages of handwritten notes. I could capture verbatim maybe 30 percent of what was said. The rest of the time — when answers went on too long or counsel and the witness talked over one another — I got only the gist of it. Yet precision was key.

At one point, we stood down for over an hour for the court clerk to go through the recording to find a . . . [more]

Posted in: Legal Ethics, Legal Technology, Practice of Law

Another Brilliant Idea! The Hidden Dangers of Sycophantic AI

Author’s Note: After I wrote this column, but a couple of days before it was published, OpenAI upgraded its ChatGPT chatbot from version 4 to version 5. Among the negative reactions to the change was a sense that ChatGPT-5’s artificial personality had become more distant and less complimentary. As you’ll see below, I don’t think that’s a problem. But there are early indications that OpenAI might tweak the model again to reintroduce the earlier version’s “warmth,” which would make my warnings below more relevant again.

Something that many people have expressed concern about, when it comes to using . . . [more]

Posted in: Legal Technology, Practice of Law


This project has been made possible in part by the Government of Canada | Ce projet a été rendu possible en partie grâce au gouvernement du Canada