AI and Legal Ethics 2.0: Continuing the Conversation in a Post-ChatGPT World
Six months ago, I wrote a column about ChatGPT and other tools using large language models (“LLMs”). My aim there was to introduce this technology to readers and briefly outline intersections with legal ethics and access to justice issues. This column provides an update on this topic, including a deeper dive into legal ethics considerations.
I. What are we talking about?
My previous column included a basic overview about how ChatGPT and other tools built on LLMs work. I reshare the following two quotes as a starting point here:
A basic explanation of how ChatGPT works:
“It is trained on a large dataset of text, such as books and articles, to understand and generate human language. When given a prompt, it uses patterns it has learned from the training data to generate a response. It can answer questions, write stories, and have conversations, but its responses are based on patterns it has seen in the training data, rather than its own understanding of the world.”
Another simple explanation:
“[W]hat ChatGPT is always fundamentally trying to do is to produce a ‘reasonable continuation’ of whatever text it’s got so far, whereby ‘reasonable’ we mean ‘what one might expect someone to write after seeing what people have written on billions of webpages, etc.’”
These are, of course, simplifications that do not capture all the nuances of how these models are created and work. A more thorough and technical explanation can be found in this video primer by Professor Harry Surden.
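For readers who would like a slightly more concrete picture of the “reasonable continuation” idea, the toy sketch below shows next-word prediction using a tiny, invented probability table. To be clear, this is an illustration of the general concept only: real models work with sub-word tokens, learn their probabilities from enormous training corpora, and are vastly more sophisticated than anything shown here.

```python
import random

# Toy "model": for a given preceding word, the probability of each possible next word.
# Real LLMs learn probabilities like these (over sub-word tokens, not whole words)
# from billions of documents; the words and numbers below are invented for illustration.
NEXT_WORD_PROBS = {
    "the": {"court": 0.5, "contract": 0.3, "lawyer": 0.2},
    "court": {"held": 0.6, "found": 0.4},
    "found": {"that": 1.0},
    "held": {"that": 1.0},
    "that": {"the": 1.0},
    "contract": {"was": 1.0},
    "was": {"void": 0.5, "valid": 0.5},
    "lawyer": {"argued": 1.0},
    "argued": {"that": 1.0},
}

def continue_text(words, steps=6, seed=0):
    """Repeatedly pick a plausible next word given the last word generated so far."""
    rng = random.Random(seed)
    words = list(words)
    for _ in range(steps):
        options = NEXT_WORD_PROBS.get(words[-1])
        if not options:
            break  # no learned continuation for this word
        choices, weights = zip(*options.items())
        words.append(rng.choices(choices, weights=weights)[0])
    return " ".join(words)

print(continue_text(["the"]))
# Prints a fluent-sounding continuation (e.g. "the lawyer argued that the contract was"),
# chosen by pattern and probability rather than by looking anything up or checking
# whether it is true.
```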
Two further caveats. First, a tool like ChatGPT is not created only by feeding it data. Human feedback is also used to improve the quality and appropriateness of outputs (see here for more detail). Second, there are vigorous debates among experts about how capable (and perhaps even dangerous) this sort of technology is or could become. One strand of these debates centres on whether such tools can (or will eventually be able to) engage in something akin to human reasoning. There are strong views on both sides.
II. Lawyer-use of ChatGPT and LLM-empowered legal tools
Following its November 2022 release, some lawyers have started to use ChatGPT in their legal work, sometimes with problematic results.
For example, an American lawyer made headlines in May after citing fake cases in their submissions to a New York court. It eventually came to light that the source of the fake cases was ChatGPT, with the submitting lawyer explaining, “I heard about this new site, which I falsely assumed was, like, a super search engine.” The lawyer admitted they “did not comprehend that ChatGPT could fabricate cases.” The lawyer and another lawyer involved with the court brief at issue were fined $5,000 by the court.
That ChatGPT fabricated cases is not surprising to those familiar with the technology. Because it has “seen” many citations and can replicate citation formatting conventions, ChatGPT can present users with a case citation that looks real but does not refer to an actual case. As noted above, the tool’s outputs are based on statistical modelling rather than “looking up” answers in a database.
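To illustrate this difference in a very simplified way, consider the toy sketch below: one function assembles a citation-shaped string purely from formatting conventions, while the other checks whether a citation exists in a (hypothetical and tiny) database. The case names, citations, and “database” are all invented for illustration; no real research service is being queried.

```python
import random

# A hypothetical, tiny "database" of citations that actually exist.
# In practice, this role is played by services like CanLII, Westlaw or Lexis.
KNOWN_CITATIONS = {
    "Doe v Roe, 2015 ONCA 123",
    "R v Example, 2019 SCC 45",
}

def invent_citation(seed=None):
    """Assemble a citation-shaped string purely from formatting conventions.

    This loosely mimics why an LLM's output can *look* like a real citation:
    the format is easy to reproduce even when no such case exists.
    """
    rng = random.Random(seed)
    plaintiff = rng.choice(["Smith", "Tremblay", "Acme Holdings Inc"])
    defendant = rng.choice(["Jones", "Northern Airlines", "City of Somewhere"])
    return f"{plaintiff} v {defendant}, {rng.randint(2005, 2022)} ONSC {rng.randint(100, 9999)}"

def exists_in_database(citation):
    """A database lookup answers a different question: does this case actually exist?"""
    return citation in KNOWN_CITATIONS

fabricated = invent_citation(seed=1)
print(fabricated, "->", exists_in_database(fabricated))  # plausible-looking, but False
print("Doe v Roe, 2015 ONCA 123 ->", exists_in_database("Doe v Roe, 2015 ONCA 123"))  # True
```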
Bottom line? “ChatGPT is not a reliable legal research tool. Do not use it for this purpose.”
Lawyers need to understand these limitations.
However, the story of legal AI doesn’t end here. There is more to consider when it comes to using AI tools in legal practice, for at least three reasons: (1) legal research is not the only way lawyers might use ChatGPT; there are less risky applications; (2) ChatGPT is not the only LLM-empowered tool available to lawyers; other tools are built specifically for the legal services context, with tailored training and controls; and (3) tools that use LLMs are not the only type of legal AI used by lawyers; for example, for over a decade, AI has been deployed to help lawyers with e-discovery and to detect patterns in case law with predictive analytics tools.
Given that this column focuses on LLM-empowered tools, I’ll address points 1 and 2 in more detail below. For more on point 3, you can see some of my previous writing, here and here.
(a) How are lawyers using ChatGPT?
While the American lawyer who misused ChatGPT for legal research has attracted a great deal of attention, other lawyers are using ChatGPT for a variety of tasks beyond legal research, including:
- Marketing: ChatGPT can write a draft of a blog or social media post, or suggest catchy language for a firm’s holiday card.
- Correspondence: ChatGPT can generate a first draft of a client email or help assistants prepare drafts of routine correspondence to courts.
- Editing: ChatGPT might offer suggestions on improving or “tightening up” first drafts of various legal documents.
- Visualizing information: ChatGPT could suggest ways to present evidence in a table as part of a factum.
- Brainstorming: ChatGPT could help generate possible risks associated with a client’s business activity or provide an initial outline or set of questions for discoveries.
These are all “real life” examples of actual uses that I’ve seen lawyers share online (see, e.g. here and here) or talk about at legal conferences.
These uses of ChatGPT don’t carry the same risks as using it for legal research; the “fake case” problem doesn’t arise. That said, some of these use cases could raise concerns about client confidentiality. The ChatGPT privacy policy is clear that the information a user inputs does not remain private and may be used to train the tool unless the user opts out. Concerns about users leaking sensitive business information through ChatGPT have arisen in other industries.
(b) Other LLM-empowered tools
ChatGPT is also not the only AI tool powered by an LLM that lawyers can use. An increasing number of tools, trained and purpose-built for the legal services context, are available.
In February 2023, global law firm Allen & Overy made headlines by announcing that it was “deploy[ing] GPT-based legal app Harvey firmwide”. A month later, PricewaterhouseCoopers announced that it was giving thousands of legal professionals access to Harvey. Casetext’s CoCounsel tool, marketed as using AI to do “document review, legal research memos, deposition preparation, and contract analysis in minutes”, has also attracted significant attention and high-profile users (see, e.g. here).
These are just two examples; the list of LLM-empowered legal tools being newly developed and marketed is long (see, for example, this list from February 2023) and continues to evolve. Large legal research companies are also moving into this space. LexisNexis has announced Lexis+ AI, “a new product that uses large language models (LLMs), including GPT-4, to answer legal research questions, summarize legal issues, and generate drafts of documents such as demand letters or client emails.” Last summer, Thomson Reuters acquired Casetext (maker of the CoCounsel tool referred to above) for $650 million. Some law firms are even building their own internal tools using LLMs (see here and here).
Increased reliability is a focus for those developing tools tailored to legal practice. A variety of techniques are used to improve reliability, including pairing LLM-empowered tools with legal databases to produce more accurate results. For example, on the question of “hallucinations” (such as presenting users with fake cases), a LexisNexis representative noted the following in discussing its new Lexis+ AI product:
“I don’t think it [is] possible to warrant, at this stage of AI development, that any model can [be] hallucination-free…but we try to minimize it to the greatest degree possible by interconnecting the model and its answers to content that it’s exposed to from LexisNexis in real time as that query interaction is taking place.”
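The general technique being described, often called retrieval-augmented generation, works roughly as follows: relevant material is retrieved from a trusted database first, and the model is then instructed to answer from (and cite) that material. The sketch below is a bare-bones outline of that pattern under my own assumptions, not LexisNexis’s or any other vendor’s actual implementation; the mini “corpus”, keyword scoring, and prompt wording are invented, and the call to the language model itself is omitted.

```python
# A minimal sketch of retrieval-augmented generation: before asking the model
# anything, fetch relevant passages from a trusted corpus and instruct the model
# to answer only from those passages. The corpus contents below are invented.
LEGAL_CORPUS = {
    "Doe v Roe, 2015 ONCA 123": "Discusses the reasonable person standard of care in negligence.",
    "Smith v Jones, 2018 BCSC 456": "Addresses limitation periods for breach of contract claims.",
}

def retrieve(question, corpus, top_k=2):
    """Rank documents by naive keyword overlap with the question (illustration only)."""
    q_words = set(question.lower().split())
    return sorted(
        corpus.items(),
        key=lambda item: len(q_words & set(item[1].lower().split())),
        reverse=True,
    )[:top_k]

def build_grounded_prompt(question, corpus):
    """Assemble a prompt that ties the model's answer to retrieved source material."""
    sources = "\n".join(f"- {cite}: {text}" for cite, text in retrieve(question, corpus))
    return (
        "Answer the question using ONLY the sources below, and cite them.\n"
        f"Sources:\n{sources}\n\n"
        f"Question: {question}"
    )

print(build_grounded_prompt("What is the standard of care in negligence?", LEGAL_CORPUS))
# The assembled prompt would then be sent to an LLM; grounding the answer in
# retrieved sources reduces, but does not eliminate, the risk of hallucination.
```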
These tailored tools also use a variety of techniques to improve privacy. As noted on the webpage for Casetext’s CoCounsel, that tool,
“uses dedicated servers to access GPT-4, meaning your data isn’t sent to ‘train’ the model as part of publicly accessible knowledge. Your and your clients’ information stays private and is secured by bank-grade AES-256 encryption.”
These added measures to increase accuracy and privacy make tailored legal AI tools more “fit for purpose” than ChatGPT, especially for legal research and drafting tasks. While the full set of use cases for which these tools can be used effectively, reliably and securely is still unclear, a lot of resources are being dedicated to finding out. I noted six months ago that “we are undoubtedly closer to the beginning of the story of how this technology may impact legal practice rather than the end.” This is still the case.
And lawyers, or at least some subset of them, are engaged. In a recent LexisNexis survey, 53% of the 610 Canadian lawyers who responded stated that they had used, or were planning to use, “generative AI” (another way of describing LLM-empowered tools) for legal purposes.
III. Intersections of legal AI and legal ethics
(a) Lawyer competence
One of the obvious intersections between legal ethics and LLM-empowered legal AI tools relates to a lawyer’s duty of competence. In most Canadian jurisdictions, lawyers now have an explicit duty of lawyer technological competence, requiring them to have an “understanding of, and ability to use, technology relevant to the nature and area of the lawyer’s practice and responsibilities.”[1] In the few hold-out jurisdictions that have not yet adopted this duty, there is little doubt that the long-standing general duty of lawyer competence, which includes, among other things, “adapting to changing professional requirements, standards, techniques and practices” requires lawyers to understand and use relevant technology.[2]
What does competent practice mean in an age of AI and LLM-empowered tools? As a starting point, it is important to avoid misuse. The story of the American lawyer presenting fabricated cases is a stark example of what can go wrong when a lawyer doesn’t properly understand the tool they are using. We also ought to be on guard for less obvious errors. In an article titled “Hallucination is the last thing you need”, the authors warn of “common law contamination with subtle, non-obvious errors”. Totally fake cases should be easy for courts to spot, but what about smaller changes in the words of a quoted passage? In the legal field, where so much depends on the particular words used, the potential for this sort of “contamination” is concerning. For example, an AI tool might substitute the word “fair” for the word “reasonable”. This change may make little or no difference in non-legal contexts but could have unintended and dramatic effects in law: a court applying a “fair person” standard of care in a negligence case may well reach a different result than one applying a “reasonable person” standard.
These accuracy concerns circle back to the discussion above. What it means for a lawyer to use an AI tool competently will turn on the context and the task at hand. The risks and stakes associated with asking a tool to generate holiday card language are vastly different from those arising from using it to brainstorm discovery questions, which are different again from those involved in trying to locate a case that will help win your matter or drafting a final contract from scratch. Some people see the most promising (and possibly less risky) uses of LLM-empowered legal tech tools as extractive rather than generative (see, e.g. this podcast discussion). For an example of an extractive use, see this LinkedIn post discussing using an AI tool to “hoover through the 500 page deposition of former President Donald Trump, extracting everything he said related to inflating his net worth, which is one of the NY AG’s principal claims in its civil fraud lawsuit” and noting the tool’s ability to provide results in minutes, as opposed to the hours it would likely otherwise have taken to complete the task.
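To make the extractive/generative distinction a bit more concrete, here is a deliberately crude, non-AI stand-in for an “extract everything said about X” task. The transcript lines and keywords are invented; a real LLM-based tool would rely on far more sophisticated semantic matching than simple keyword filtering, but the shape of the task, pulling existing passages out of a document rather than generating new text, is the same.

```python
# A crude illustration of an *extractive* task: pull the passages mentioning a
# topic out of a long transcript. The transcript and keywords are invented.
TRANSCRIPT = [
    "Q: Please state your name for the record.",
    "A: The net worth figures were prepared by my accountants.",
    "A: I had no involvement in preparing the statements.",
    "A: The valuations reflected what I believed the properties were worth.",
]

KEYWORDS = {"net worth", "valuation"}

def extract_relevant(lines, keywords):
    """Return only the lines that mention any of the keywords."""
    return [line for line in lines
            if any(kw in line.lower() for kw in keywords)]

for line in extract_relevant(TRANSCRIPT, KEYWORDS):
    print(line)
# Unlike a generative task, nothing new is written: the output is limited to
# text that already appears in the source document.
```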
Benefits and risks turn not only on what one is doing with a particular tool; it also matters which tool a lawyer uses. Doing legal research with a tool that is connected to a legal database and provides links to actual cases is different from trying to search for cases or statements of law with a general, all-purpose public tool like ChatGPT.
Another important angle on the competence issue: when does failing to use a particular legal tech tool amount to incompetence? While there isn’t, in my mind, a compelling argument that lawyers must now use LLM-empowered AI tools to practice competently, this could change in the future if reliable and secure tools become widely available and adopted. We have reached this tipping point with other legal technologies in the past, for example, the transition from doing legal research with print resources to using computerized resources (e.g. CanLII, Quicklaw and/or Westlaw). As an Alberta Court of Queen’s Bench (as it then was) judge recognized in 2010, “the practice of law has evolved to the point where computerized legal research is no longer a matter of choice.” In that context, lawyers had to adjust and start using certain newly available tools.
To some extent, the issue of AI use may end up being somewhat forced on lawyers if and when AI functionality becomes embedded within common tools that they are already using, such as word processing software, email programs (see, e.g. here), practice management software, or commonly used legal research databases. Engaging with AI may become hard to avoid, making it necessary for lawyers to have the requisite skills and an awareness of limitations and risks.
(b) Lawyer efficiency and fair/reasonable fees
Lawyers are also subject to a duty of efficiency. Rules of professional conduct across Canada explicitly include an obligation to provide efficient legal services.[3] If there is a sufficiently accessible, reliable, and secure AI tool that can radically, or even materially, reduce the time that a lawyer takes to do a task, then there would seem to be a strong argument that a lawyer’s efficiency obligation mandates its use (or the use of similar tools).
Relatedly, lawyers have a professional obligation to only charge fair and reasonable fees.[4] The prospect that AI might significantly reduce the time it takes lawyers to complete certain tasks has led to increased interest in moving away from billable hours and toward adopting alternative fee arrangements (see, e.g. here). But what does ethical value pricing (by way, for example, of a flat fee) look like in a scenario in which a lawyer can complete a task that used to take ten hours in only one? Charging a flat fee for this task that is equivalent to ten billable hours doesn’t seem reasonable given the vast reduction in time spent, but, arguably, neither is only charging for the equivalent of one hour if we consider the necessity of applying the lawyer’s legal training and expertise (and the costs associated with acquiring this training and expertise). The market will surely do some work sorting out what “reasonable” means in this new context, but that doesn’t absolve lawyers from considering what ethical codes require when setting fees, especially when dealing with less powerful clients.
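For what it’s worth, the arithmetic gap at issue can be made explicit with some invented numbers (a hypothetical $300 hourly rate and the ten-hours-to-one-hour scenario above). The sketch below shows only the size of the gap that a “fair and reasonable” flat fee would have to land within; it takes no position on where within that gap the fee should sit.

```python
# Illustrative arithmetic only: the hourly rate is invented; the ten-hours-to-one-hour
# scenario comes from the discussion above.
HOURLY_RATE = 300        # hypothetical rate, dollars per hour
HOURS_BEFORE_AI = 10     # time the task used to take
HOURS_WITH_AI = 1        # time the task takes with an AI tool

fee_before = HOURLY_RATE * HOURS_BEFORE_AI   # $3,000 on a pure time basis
fee_after = HOURLY_RATE * HOURS_WITH_AI      # $300 on a pure time basis

print(f"Time-based fee before AI: ${fee_before:,}")
print(f"Time-based fee with AI:   ${fee_after:,}")
print(f"A flat fee would have to be justified somewhere between ${fee_after:,} and ${fee_before:,}")
# The ethical question raised above is where in that tenfold gap a "fair and
# reasonable" fee should land, and on what basis.
```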
(c) Lawyer-client confidentiality
A lawyer’s duty of client confidentiality is also engaged. As noted above, the legal tech industry is aware that lawyers have strong obligations to protect client confidentiality and is deploying a variety of techniques to mitigate confidentiality concerns with legal AI tools. It is also worthwhile to consider whether existing client confidentiality rules are sufficient in a digital era. The current confidentiality rule requires that “a lawyer at all times must hold in strict confidence all information concerning the business and affairs of a client acquired in the course of the professional relationship and must not divulge any such information unless [certain narrow exceptions apply].”[5] As I’ve noted elsewhere, in a digital era, it can be very challenging to use standard tools or to operate without some exposure of client confidential information, whether as a result of inadvertent data sharing with third-party service providers or because of sophisticated attacks by malicious actors. The current confidentiality rules don’t meaningfully engage with or explicitly account for these risks. The American Bar Association’s Model Rules of Professional Conduct include language mandating that lawyers “make reasonable efforts” to prevent inadvertent or unauthorized disclosure. The commentary to this rule elaborates:
The unauthorized access to, or the inadvertent or unauthorized disclosure of, information relating to the representation of a client does not constitute a violation … if the lawyer has made reasonable efforts to prevent the access or disclosure. Factors to be considered in determining the reasonableness of the lawyer’s efforts include, but are not limited to, the sensitivity of the information, the likelihood of disclosure if additional safeguards are not employed, the cost of employing additional safeguards, the difficulty of implementing the safeguards, and the extent to which the safeguards adversely affect the lawyer’s ability to represent clients (e.g., by making a device or important piece of software excessively difficult to use).
In my view, the contextual and proportionate approach described in this commentary provides more realistic and helpful guidance in a digital era than a simple hard-and-fast rule that only tells lawyers they must protect client information.
(d) Lawyer candour
Another place where American rules may offer a helpful template is in relation to lawyer candour with clients. In Canada, our candour rules focus, in large part, on the need to be candid with a client about their legal position.[6] The American Bar Association’s rules recognize the importance of this type of candour but also mandate that lawyers “reasonably consult with the client about the means by which the client’s objectives are to be accomplished” (emphasis added). Several U.S. commentators have argued, quite reasonably in my view, that this rule requires lawyers to disclose their use of AI tools to their clients. Indeed, in discussing this rule in a 2019 resolution, the American Bar Association stated:
A lawyer’s duty of communication under Rule 1.4 includes discussing with his or her client the decision to use AI in providing legal services. A lawyer should obtain approval from the client before using AI, and this consent must be informed. The discussion should include the risks and limitations of the AI tool. In certain circumstances, a lawyer’s decision not to use AI also may need to be communicated to the client if using AI would benefit the client. Indeed, the lawyer’s failure to use AI could implicate ABA Model Rule 1.5, which requires lawyer’s fees to be reasonable. Failing to use AI technology that materially reduces the costs of providing legal services arguably could result in a lawyer charging an unreasonable fee to a client.
This reasoning seems equally applicable in the Canadian context. Law societies ought to consider issuing guidance specific to this issue.
(e) Supervision and delegation
One of the trickiest ethical questions about using AI tools in legal practice is what constitutes appropriate supervision and delegation. Rules of professional conduct state that lawyers retain “complete professional responsibility” for all matters that they delegate and contain restrictions on what work can be delegated to “non-lawyers”.[7] Obviously, lawyers must remain responsible for their work product, whether or not they have assistance and whether that assistance comes from other humans or from machines. But what does appropriate supervision mean when a lawyer uses AI tools, especially when it isn’t always entirely clear how such tools arrive at a particular output or conclusion? Elsewhere, I’ve suggested that, in the case of AI tools, law societies should consider “introducing a new due diligence rule that requires lawyers to take reasonable steps to ensure that the technology they are using is consistent with their professional obligations.” Additionally, the issue of delegation raises some new questions when it comes to LLM-empowered legal AI tools, given their potential to do significant work across a range of lawyering tasks. Just because a machine may be technically able to complete a task does not necessarily mean that the task should be taken out of human hands. If LLM-empowered legal AI tools become more ubiquitous and powerful, determining what tasks are essential for lawyers to do (and why) will become an increasingly pressing issue.
(f) Administration of justice
Finally, when it comes to AI tools, it is crucial that lawyers be engaged with the broader ethical, legal, and normative issues that arise in this context. The production and use of AI tools raise very serious concerns in relation to copyright, privacy, environmental impacts, labour practices, and bias. Under rules of professional conduct, lawyers have an ethical duty to “try to improve” the administration of justice, which includes “a basic commitment to the concept of equal justice for all within an open, ordered and impartial system.”[8] Lawyers and law societies have an important role to play in ensuring that the use of AI in our legal system is appropriate, fair, just and equitable. The American Bar Association has already urged lawyers to address the emerging issues relating to the use of AI in the practice of law, including “bias, explainability, and transparency of automated decisions made by AI” and the “ethical and beneficial usage of AI”.
Conclusion
It has been a whirlwind year for AI, resulting in a vast amount of commentary, experimentation, and financial investment relating to the use of LLM-empowered legal AI tools. These developments raise important questions about lawyers’ ethical obligations, the fundamentals of which are not new but which do require new consideration and application in the context of this particular technology.
Helpfully, many stakeholders, including Canadian law societies and courts, have already started to seriously engage with these questions. We can also potentially benefit from work in other jurisdictions. The Task Force on Responsible Use of Generative AI for Law, hosted at MIT, has already come out with draft principles that aim to ensure “factual accuracy, accurate sources, valid legal reasoning, alignment with professional ethics, due diligence, and responsible use of Generative AI for law and legal processes.” Several American state bars are also actively considering what guidance they can and should provide to lawyers regarding AI usage (see here for some more discussion).
As developments in legal AI technology continue, so must our conversations about how lawyers’ ethical obligations are impacted. Stay engaged and stay tuned!
_________________
[1] See Federation of Law Societies of Canada, Model Code of Professional Conduct, Rule 3.1-2, Commentary [4A] and [4B], and corresponding provincial and territorial rules.
[2] See Federation of Law Societies of Canada, Model Code of Professional Conduct, Rule 3.1-1(k).
[3] See Federation of Law Societies of Canada, Model Code of Professional Conduct, Rule 3.2-1.
[4] See Federation of Law Societies of Canada, Model Code of Professional Conduct, Rule 3.6-1.
[5] See Federation of Law Societies of Canada, Model Code of Professional Conduct, Rule 3.3-1.
[6] See Federation of Law Societies of Canada, Model Code of Professional Conduct, Rule 3.2-2.
[7] See Federation of Law Societies of Canada, Model Code of Professional Conduct, Rule 6.1-1 and 6.1-3.
[8] See Federation of Law Societies of Canada, Model Code of Professional Conduct, Rule 5.6-1 and commentary thereto.