
The ChatGPT Lawyer: Promises, Perils, and Practicalities

ChatGPT was launched by OpenAI, an American AI research and deployment company, in November 2022. The free, easy-to-access tool quickly captured the public’s attention. Described in the New York Times as “quite simply, the best artificial intelligence chatbot ever released to the general public”, ChatGPT hit one million users within five days and reached an estimated 100 million users by January 2023. In response to ChatGPT’s launch, both Microsoft and Google have announced plans to offer new AI-enhanced, chat-based search tools.

It wasn’t long before lawyers started paying attention. A quick Google search yields over 40,000 hits for “ChatGPT” and “the practice of law”, including dramatic headlines such as: “ChatGPT: Will AI Replace Lawyers?”, “Will ChatGPT make lawyers obsolete? (Hint: be afraid)” and “ChatGPT and the Practice of Law: Ignore at Your Peril.” In January, the CEO of a consumer legal tech company (whose tool is reportedly built on the same underlying GPT-3 technology as ChatGPT) offered $1 million to anyone with a case before the United States Supreme Court “to wear AirPods and let our robot lawyer argue the case by repeating exactly what it says.” This month, a judge in Colombia attracted media coverage after using ChatGPT to assist him in preparing written reasons in a court case involving health insurance coverage for an autistic child.

Does the release of ChatGPT mean a brave new world of robo-lawyers and robo-judges? The answer is clearly “no”, for the reasons elaborated below. But asking about complete obsolescence isn’t really the right or most interesting inquiry (see “what is a strawman argument?”). More relevant and pressing questions concern how this technology may start to impact, rather than supplant, the work of lawyers. This column takes on these questions, including by exploring important intersections with legal ethics and access to justice.

What is ChatGPT?

If you haven’t already used it, an easy way to explore ChatGPT is to log in and try the tool’s free research preview. You will see a chatbot interface where you can ask ChatGPT questions or give it instructions, and it will very quickly present you with its response.

Part of ChatGPT’s appeal is its ability to provide relatively well-written and correct answers across a wide array of subject areas, including answers to legal questions. The tool has been able to generate responses that pass some law school exams and components of American bar exams.

ChatGPT also has an uncanny ability to “mash up” tones and styles of writing. You can prompt it, for example, to “describe losing your sock in the dryer in the style of the declaration of independence” or to “draft a cease-and-desist letter in Shakespearean language”. It completes these unusual tasks with surprising success.

At one level, this can all feel somewhat magical, and perhaps worrisome if you are a lawyer (or judge) wondering whether you will soon be obsolete.

On a panel about ChatGPT and the practice of law, American engineering professor Kris Hammond observed that ChatGPT is “outrageously compelling because for years we’ve only known of one thing in the world that can generate language and that was us and so we look at the thing and we think, oh my god, it is like us.” However, as Hammond acknowledges as he continues, “there is a moment where you have to realize…no it’s not [like us]”.

It’s Not Like Us: The Illusory Robo-Lawyer

Understanding why ChatGPT is “not like us” sets the stage for a more nuanced conversation about the future of this technology that moves beyond dramatic questions about lawyer extinction.

ChatGPT is built on a large language model, GPT-3.5. A basic explanation of how ChatGPT works:

It is trained on a large dataset of text, such as books and articles, to understand and generate human language. When given a prompt, it uses patterns it has learned from the training data to generate a response. It can answer questions, write stories, and have conversations, but its responses are based on patterns it has seen in the training data, rather than its own understanding of the world. It’s like a language-based computer program that can answer your question or generate text if it has seen a similar pattern in the data on which it was trained.

Another simple explanation:

[W]hat ChatGPT is always fundamentally trying to do is to produce a “reasonable continuation” of whatever text it’s got so far, where by “reasonable” we mean “what one might expect someone to write after seeing what people have written on billions of webpages, etc.”

These high-level descriptions do not capture all the nuances of ChatGPT or how it was created (for example, ChatGPT was also refined through human feedback and contains guardrails designed to refuse inappropriate requests, see here). Also, there is a lively debate among experts about what exactly large language models are doing and what they might be capable of doing in the future.
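For readers who want a more concrete feel for the “reasonable continuation” idea, the toy sketch below learns which words tend to follow which in a tiny sample of text and then extends a prompt one word at a time. To be clear, this is a deliberately simplified illustration of pattern-based text generation, and the sample sentences, variable names and function are invented for demonstration; ChatGPT itself relies on a vastly larger neural network (a transformer) trained on enormous amounts of text, not a simple word-pair table.

    import random
    from collections import defaultdict

    # A tiny, invented "training" text (for illustration only).
    corpus = (
        "the court held that the contract was void . "
        "the court held that the claim was dismissed . "
        "the contract was signed by both parties ."
    ).split()

    # Record which words have been seen following each word: a crude stand-in
    # for the statistical patterns a large language model learns from its data.
    next_words = defaultdict(list)
    for current, following in zip(corpus, corpus[1:]):
        next_words[current].append(following)

    def continue_text(prompt, extra_words=8):
        """Extend the prompt by repeatedly picking a word seen after the last word."""
        words = prompt.split()
        for _ in range(extra_words):
            candidates = next_words.get(words[-1])
            if not candidates:
                break  # no learned pattern to continue from
            words.append(random.choice(candidates))
        return " ".join(words)

    print(continue_text("the court held"))
    # Possible output: "the court held that the claim was dismissed ."

The sketch “knows” nothing about the meaning of the words it strings together; it only reproduces patterns it has seen.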

However, knowing that patterns and relationships in language feature significantly in this technology can help explain why tools like ChatGPT not only generate accurate responses to questions but also sometimes provide, as acknowledged by the tool’s creators, “plausible-sounding but incorrect or nonsensical answers.”

For example, the tool has had difficulty completing basic math equations. It sometimes gets basic facts about the world wrong, such as when it told a user who asked “what is the fastest marine mammal?” that “the fastest marine mammal is the peregrine falcon”. In AI parlance, serving up assertive, yet clearly wrong, answers like these is called “hallucination”.

Returning to the legal context, ChatGPT will sometimes simply hallucinate case law. In fact, I was easily able to generate an example of this by asking ChatGPT about a favourite topic of mine: judges using pop culture references in decisions:

ChatGPT’s response sounds like a good example. The problem is that a case involving Gamehaus and Blizzard Entertainment that includes a reference to Moby Dick does not appear to exist.

In light of these limitations, it becomes abundantly clear that ChatGPT isn’t going to replace legal professionals anytime soon. And, in fact, in December, Sam Altman, the CEO of OpenAI, advised:

Aside from the technical limitations of this tool, there are a whole host of other reasons why “robots” aren’t ready to replace human lawyers. Not least, lawyers do a lot more than mechanically analyze the law or draft text. As Kristin Hodgins observed in a recent tweet:

But, just because ChatGPT and other tools based on large language models currently have major limits does not mean that lawyers should ignore them. At least for certain practice areas, Monica Goyal is surely right in observing that “lawyers who can embrace these technologies will realize the benefits over others that will not.”

Using ChatGPT in the Practice of Law: Tailored and Assistive Interventions

So, how might lawyers work with tools that use large language models? There are already some legal tech products available:

  • Drafting materials: Even before ChatGPT’s launch, there were tools based on GPT-3 technology that could do things like suggest language for contracts or prepare first drafts of court materials.
  • Legal research: Some legal research tools have announced that they have started integrating GPT-3 technology to assist in drafting legal answers for their users (see, e.g., here).
  • Communication: There are also tools that use GPT-3 technology to assist lawyers with preparing first drafts of emails (see, e.g., here for a tool that can do this, among other things).

The above list of tools is meant to be illustrative. A more comprehensive inventory can be found here.

Other legal use cases that have emerged in recent weeks, but which are not necessarily attached to specific legal tech tools, include legislative drafting and legal marketing.

Also notable: last week, global law firm Allen & Overy made headlines by announcing that it was “deploy[ing] GPT-based legal app Harvey firmwide” to assist with a wide array of legal work, including contract analysis, due diligence, litigation and regulatory compliance.

The available tools generally present themselves as being assistive rather than offering a wholesale replacement for lawyer judgment and work. In the case of drafting tools, they are best considered as providing a helpful “first pass” in preparing documents.[1] Meanwhile, other sorts of tools look to help lawyers with relatively narrow and specific tasks: take, for example, a tool that can quickly tell a lawyer what legal test was used to decide a particular case (see, here).

It is not yet clear how powerful and useful legal tech tools built on large language models might become. That said, we are undoubtedly closer to the beginning of the story of how this technology may impact legal practice than to its end.

New types of jobs are certainly on the horizon. For example, there is growing discussion of “legal prompt engineering”, and Mishcon de Reya LLP, a large international law firm, has even posted a job opening for a “GPT Legal Prompt Engineer” who can, among other things, “design and develop high-quality prompts for a range of legal and non-legal use cases, working closely alongside our data science team” (prompts, in this context, are basically the questions or directions that you give to the system).
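To give a flavour of what such a prompt might look like in practice, below is a short, hypothetical sketch of a reusable prompt template of the kind a legal prompt engineer might draft: the fixed wording sets out the task, audience and constraints, and the placeholders are filled in for each matter before the text is sent to a language model. The wording, field names and word limit are all invented for illustration and are not taken from any particular product or firm.

    # A hypothetical, reusable prompt template for a case-summary task.
    # The fixed instructions constrain audience, scope and format; the
    # placeholders are filled in for each matter before being sent to a model.
    PROMPT_TEMPLATE = """You are assisting a lawyer. Summarize the court decision below in plain
    English for a client with no legal training. Keep the summary under {word_limit} words,
    identify the legal test the court applied, and flag any deadlines mentioned.

    Decision text:
    {decision_text}
    """

    def build_prompt(decision_text: str, word_limit: int = 200) -> str:
        """Fill in the template's placeholders to produce the prompt sent to the model."""
        return PROMPT_TEMPLATE.format(decision_text=decision_text, word_limit=word_limit)

    print(build_prompt("The appeal is dismissed. The application judge correctly applied ...", 150))

Designing, testing and refining templates of this kind against many example documents is, presumably, the sort of iterative work such a role would involve.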

Ethical Issues?

As this technology starts to be used in legal practice, what ethical issues ought to be on the minds of lawyers and legal regulators? There are a few obvious intersections:

  • Competence & Quality of Service: Lawyers have an obligation to provide competent legal services to their clients (FLSC Model Code Rule 3.1-2). There are, obviously, good reasons for lawyers to be wary of simply trying to “plug and play” ChatGPT into their practice as a means of answering legal questions – no one wants to advise a client based on made-up cases. That said, more tailored and narrowly deployed tools that are specifically built for lawyers can come with improved reliability and guardrails. For example, the dreaded “hallucinations” can be reduced (for further discussion, see, for example, here and here).
  • Efficiency: Lawyers are also required to provide efficient legal services (FLSC Model Code Rule 3.2-1). If sufficiently reliable technological tools can drastically, or even materially, reduce the time necessary for certain tasks like initial drafting, then a question is raised about whether and when there might be a positive professional obligation to use such tools.
  • Confidentiality: Lawyers must protect client confidentiality (FLSC Model Code Rule 3.3-1). In this area, there are, again, significant concerns with using a general purpose, public tool like ChatGPT. Indeed, ChatGPT’s website asks users to “please don’t share any sensitive information in your conversations” and notes that conversations may be reviewed for training and quality assurance purposes. Of course, not all uses of this technology necessarily engage confidential client information – like, for example, asking a tool to summarize or synthesize a case. Also, this is another area where bespoke tools have an advantage. The founders of Harvey have commented, for example, on the firewalls and other protections that their tool uses to prevent confidential information travelling between firms, or even intra-firm between client matters.
  • Delegation and Supervision: Trickier issues arise in relation to lawyers’ obligations on delegating work. Rule 6.1-1 of the FLSC Model Code states that “[a] lawyer has complete professional responsibility for all business entrusted to him or her and must directly supervise staff and assistants to whom the lawyer delegates particular tasks and functions.” What happens when a lawyer essentially “delegates” a judgement-based task to a technological tool? In some sense, this isn’t a new concern. Consider, for example, the use of technology-assisted review to classify documents in e-discovery. But, at some point, as delegation to computers in legal practice becomes more significant and commonplace, a question arises about whether new rules are necessary to clarify the level of due diligence and transparency to clients that is required.
  • Unauthorized Practice of Law: While lawyer regulators have been fairly criticized in the past for aggressively policing those outside of the legal profession who try to provide innovative legal services to the public, there is a legitimate public protection concern here. Many members of the public will not be able to independently assess whether a confidently presented answer from ChatGPT or a ChatGPT-like tool is correct in law or whether it is misleading or flat-out wrong. Regulatory attention is required, particularly if direct-to-public AI chatbots emerge that are specifically marketed as providing legal advice.

Access to Justice: Access to What, for Whom and by Whom?

New technological tools are often accompanied by the hope that they might contribute to improved access to justice. While it is too early to know where the access to justice balance may lie with tools like ChatGPT, there are reasons to be optimistic about the possibility of helpful, targeted use. As observed by Quinten Steenhuis:

ChatGPT is excellent at summarizing and extracting relevant information from documents provided to it, translating legalese into plain English, and helping us quickly analyze thousands of existing court forms so we can identify ways forms can be simplified and made more user-friendly.

At the same time, as technological capacity advances, we might also be concerned about the further stratification of legal services. Although there is a free version of ChatGPT, the tailored, higher-quality and more privacy-protective tools that law firms are incorporating come with a cost. Even ChatGPT now has a paid premium version. As tools become more powerful and more pervasive, we need to pay attention to the divide between the legal AI that wealthy individuals can afford and the legal AI available to those of low or more modest means.

We also ought to be on guard against technological tools being used to paper over some of the fundamental changes needed to ensure that everyone receives appropriate legal help – which will include, in some (albeit certainly not all) cases, full representation by a human lawyer. A “let them have bots” attitude can be damaging and dangerous.

Finally, as legal technology tools become more integrated into legal service delivery, we should also be wary of giving subtle but outsized roles to private interests in the legal system. In some cases, tools will be built on top of common models developed by small sets of, or even single, companies. The underlying technology will not be neutral – no technology is. Those developing and training the models will be making choices along the way, such as what data, guidance and guardrails to include. Such choices will presumably then bleed into the outputs of any subsequent tool – in the case of large language models, this would seemingly include outputs based on determinations about what language is “best” or “inappropriate” in certain contexts. As these outputs make their way into legal advice, pleadings, legislative drafts or perhaps even written judicial decisions, we might fairly wonder how the very “language of the law” is itself being shaped and by whom.

Last Thoughts

ChatGPT has captured the attention of lawyers, and rightly so. The capacity that it has shown is impressive and there are already interesting ways that the underlying technology is being deployed in legal practice. While the future of large language models and lawyering is not yet clear, it is clear that legal AI is not going away. AI usage in law is now modest, but it is also increasing.[2] Lawyers and legal regulators need to take notice and ensure that they are equipped to engage with the current and emerging realities of AI-empowered legal service delivery.

 _______________

[1] The use of “first pass” in this context is borrowed from Professor Gabe Teninbaum’s account in a recent podcast.

[2] This characterization is adapted from language used in this article.
