Unavoidable AI?: The Increasing Ubiquity of Generative AI and Lawyers’ Duty of Technological Competence
More than a year after the public release of ChatGPT, excitement continues to build about the use of generative AI in the delivery of legal services. Notes of caution persist, too, as examples of lawyers using AI badly continue to trickle out (see, e.g. here and here). Although the full impacts of generative AI on the work of lawyers are yet to be seen, there is clearly an immediate need to discuss the responsible and ethical use of AI by legal professionals.
Canadian lawyers have a general duty of technological competence. In previous Slaw columns, I’ve discussed two ways that this duty interacts with generative AI:
- If lawyers decide to use generative AI tools, they must ensure that they have sufficient knowledge and skill to avoid misuse. An obvious example of misuse would be using ChatGPT for legal research and then, without double-checking the results, submitting fake cases generated by the tool to a client or court; and
- At some point, certain generative AI tools may become “mainstreamed” to the extent that failing to use them could amount to incompetence. An appropriate analogy is the reality that lawyers cannot now refuse to do computerized legal research and rely only on print reporters, as they might have decades ago; a lawyer who misses an important case as a result could easily be found incompetent.
Underlying these two angles is the premise that lawyers will be actively choosing (or not choosing) to adopt generative AI tools. However, in significant ways, choice is starting to disappear from the equation; that is, instead of deciding whether, when and how to start working with generative AI, lawyers are finding generative AI pushed onto their desks, whether they like it or not.
For example, generative AI functionality is now becoming embedded into commonly used legal tech tools, such as practice management software and legal research platforms. Outside the world of legal tech, generative AI is also cropping up in commonly used word processing software, workplace collaboration tools and video conferencing platforms.
To be sure, in some of these cases, generative AI functionality is an opt-in premium feature that users must explicitly adopt and pay for. But, this isn’t always the case. Sometimes the generative AI is “just there”. Take, for example, Microsoft’s recent announcement that “it will introduce a new, dedicated [keyboard button] to launch its AI tool, Copilot, on Windows 11 laptops and PCs.” Another example: CanLII users looking at Saskatchewan case law may have noticed the new “AI Case Analysis” tab appearing on their screen. In 2023, CanLII started a pilot project experimenting with using AI to generate case summaries and summarize legislation. The project started with Saskatchewan data, but it will soon be rolled out to Alberta, Manitoba and PEI. Also, generative AI features that are “premium” choices now may well become built-in defaults in the short-to-medium term.
As generative AI becomes increasingly ubiquitous in all these ways, it becomes harder for lawyers to avoid interacting with it. This reality means that gaining competence in generative AI is also increasingly unavoidable for lawyers. This does not mean that lawyers must become coders or tech whizzes; it does mean that lawyers must develop a basic understanding of how generative AI works and where the ethical hotspots lie.
Building this sort of literacy is, in my view, a profession-wide project involving multiple stakeholders. It starts with law schools integrating the topic into their curricula. It continues with legal workplaces ensuring that they offer sufficient internal training on AI and/or facilitate access to external education opportunities. Workplace policies regarding appropriate AI use will also be increasingly essential. Law societies, too, have a role in providing guidance to lawyers on their professional obligations when using generative AI in the delivery of legal services.
Ensuring lawyers can engage responsibly and productively with generative AI is a dynamic and ongoing project. New products and technical developments seem to appear almost daily, and the underlying regulatory dimensions are far from settled. New regulation may affect which tools are available to lawyers and how such tools can be used.
The legal profession’s journey with generative AI has just begun, and the future of this technology in legal practice is unpredictable. But this is a journey that we are now all on – time to buckle up and continue our education!
I think a lot of confusion comes from not knowing the difference between “generative” AI and “extractive” AI.
Generative AI describes a service like ChatGPT, which can generate a piece of text drawing on vast amounts of information from the internet, using recognized, predictive word patterns (which may or may not suit your purpose).
Extractive AI pulls relevant information from a controlled set of information, and can be programmed to pinpoint or footnote back to the sources used. So far, most (if not all?) of the research tools and services designed specifically for the legal market are extractive. The CanLII AI function on CanLII creates a preconfigured case summary based on six specific aspects of each case. Lexis and Westlaw are starting to market AI tools that pull answers to natural language research questions from within their caselaw and secondary source data sets.
With either type of AI, the risk of incorrect information or hallucinations is still there, though perhaps less likely in an extractive AI environment. To be honest, the risk of an incorrect result is inherent in any legal research project, regardless of what tools are used, if the tools aren’t used properly. Even the best or newest research tools, used incorrectly, will never generate correct results.
Law librarians specialize in knowing legal research tools and how and when to use them effectively and accurately. Despite more and more information becoming digitized, accessible, and mineable, librarians remain as relevant as ever and will continue to be the best resource for ensuring any legal research is done accurately, even in an AI world. Linton Weeks said it best: “In the nonstop tsunami of global information, librarians provide us with floaties and teach us to swim.”