Someone on LinkedIn Sold Me These Magic Beans: Generative AI and Legal Research
The hype around generative AI and legal research continues, and it seems everyone has an opinion. There are concerns about the use of AI in practice, but less clarity about how to find sustainable solutions. It is, however, apparent that we need to consider the risks of using these systems, especially when they were not designed for the uses to which they are being put. This is particularly important because general-purpose tools like ChatGPT are likely to continue to be developed, and, given the complexity of navigating legal information, if they can be used with law they will be.
The first solution is to develop more sophistication about what tools can do and how they work. It may feel like things are changing quickly, and they are, but this work has been developing for a long time. Ada Lovelace laid out the principles of computer programming in the 1840s, and the work building AI systems in the 2020s continues research that started in the 1950s, 1970s, and 1980s. Applications and approaches first proposed in the 1980s have been particularly fruitful in recent years, as computing has become capable of delivering on those concepts in cost-effective ways.
Beyond proofs of concept, AI tools have been widely deployed in the legal sector for some time now. Common applications include translation, transcription, e-discovery, search, and document drafting. We’ve been using these tools for several years and talking about their implications for longer. Any tool that doesn’t explicitly state that it meets confidentiality or data security requirements should be assumed not to. Google Translate, for example, saves the text of translated documents in users’ browser history, and that text is also used as training data for Google to improve its systems.
This doesn’t mean that we shouldn’t use these tools, but when someone tries to sell you magic beans, by which I mean things we are told are valuable without being told precisely what they do or how they do it, it’s reasonable to be cautious.
There is a substantial difference between free general-purpose tools and purpose-built ones, whether those are designed for expert users by reputable companies specializing in technically challenging subjects or created for the public by known agencies. Part of the problem is that people aren’t being savvy about the tools they are working with in high-risk situations like litigation.
These systems require consideration and close human oversight, and lawyers are generally familiar with this. We know that the work of articling students or first-year associates needs to be checked closely to verify it is correct before it is used in court, at least until they have earned the trust to be less closely watched. Generative AI systems should be checked at least as carefully as the work of a student who shows promise but sometimes makes things up for no apparent reason. They also have a similar potential upside: students improve, and the AI systems will too.
ChatGPT, for example, is not intended for use in legal practice, and the verification such a product would require was not part of its development process. It was designed as a loss leader so OpenAI can sell other products. In contrast, many purpose-built AI drafting and research tools are well established and reliable.
How these tools will affect work depends a great deal on whether the legal problems involved are simple or complex. There is little likelihood that complex practice areas with often unique issues, like Supreme Court advocacy and constitutional law, will be automated. However, there are real opportunities to help the large number of people facing problems that feel insurmountable to them but turn on relatively simple legal issues, like traffic tickets, eviction appeals, and other common legal problems.
We can hope to keep these issues in perspective: we are used to people making mistakes and not checking everything. Many cases are regularly cited for particular points of law without being checked each time to see whether they actually say what they are cited for. I have been asked to look up cited cases like this and had to reply: “I can’t see that it does say that, but this is the place where it’s supposed to say it.”
It is astonishing that anyone would rely on a free, general-purpose tool for legal practice in the ways people seem to have been doing in recent months. To a large degree this reflects a lack of technical savvy and understanding of what the tools are doing. These are still valuable tools. ChatGPT is good at many things, but claiming ignorance about new tools while using them in ways that don’t control risk is a limited approach, as are directives that ban them outright. AI is already intimately integrated into our work, and the only viable solution now is to learn how to use it in appropriate ways.
This column is based on my notes from the session “Can the Chatbot Write My Factum? AI and Legal Research” at the CLEBC Legal Research Conference held in Vancouver on October 17, 2023. Thank you to Justice David Masuhara and Abdi Aidid for presenting on the panel with me and discussing these issues. I’d also like to thank Julia Lawn, Megan Vis-Dunbar, Raymond Lee, the other speakers, attendees, and CLEBC staff for organizing a great event.