
AI and Legal Ethics 3.0: Looking Beyond Professional Conduct Rules and Towards Independence of the Bar

Following ChatGPT’s public release in November 2022, there has been growing discussion about how generative AI intersects with lawyers’ professional obligations as found in codes of conduct. I published some early thoughts on this topic in Slaw.ca (see here and here). Several Canadian law societies have now also published guidance. Providing this sort of information is essential, in my view, for building needed tech literacy in the profession, particularly as AI becomes increasingly integrated into common tools that lawyers already use on a daily basis. Unfortunately, we have already seen that appropriate AI use by lawyers is not inevitable (see, for example, the instances of fake cases being included in submissions to courts). We need to keep talking and learning about how lawyers’ core ethical obligations, such as competence and confidentiality, intersect with AI use.

However, we should also look at ethical questions that reach beyond the four corners of conduct codes. One important area to explore, in my view, is lawyer independence. Independence of the bar is generally equated with independence from undue government incursions. In the words of the Supreme Court of Canada, “the independence of the Bar from the state in all its pervasive manifestations is one of the hallmarks of a free society.” But, lawyer independence has also been framed as including appropriate independence from “any source”, in addition to independence from public authorities.

How might the use of AI in the delivery of legal services impact lawyer independence? The answer to this question begins with recognizing how much human intervention can go into developing AI tools, whether it be the training and other infrastructure that guides a tool toward “better answers” or “better language”, the choices made about what guardrails to put in place to prevent “bad responses”, or the choices about what data to include.

To be sure, we need human intervention to have usable and useful AI tools. But, as generative AI comes to be deployed in the legal services context – particularly as it might start having a more significant role in locating answers to legal questions via legal research platforms or determining what language should be used in legal documents such as contracts or pleadings via drafting tools – it is important to recognize that the outputs of such tools are informed, to some extent, by prior choices made by other humans.

At the highest level of generality, we can consider questions of lawyer independence in relation to generative AI under the banner of “AI alignment.” AI alignment is the broad term for “the process of encoding human values and goals into large language models to make them as helpful, safe, and reliable as possible.” Often, those “encoding human values and goals” into large language models are working for a small set of private companies that are, in turn, motivated by their own private interests. The values and goals encoded into these models are not necessarily aligned with the values and goals “encoded” into the legal profession.

At its most intense, the AI alignment question has been posed in terms of possible existential threats (e.g. not letting the robots take over). But, in the case of lawyer independence, there are more subtle issues that are arguably of more immediate concern. Take, for example, the internal instructions that an AI legal tool might be given by its developers to guide it towards “helpfully” answering a legal question that a user submits. Such instructions are generally not visible to users but could contain built-in priorities about what precedents to favour, what follow-up questions (if any) to ask the user, how much detail to provide in the output or which details to emphasize. These “ingredients” are then baked into the answer and consequently shape our understanding of what the law is on a given topic and how that law is best described.
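
To make this more concrete, the sketch below (in Python) shows, in deliberately simplified form, how hidden instructions can sit between a lawyer’s question and a model’s answer. The instruction wording and the compose_prompt helper are entirely invented for illustration; I am not suggesting that any particular product works this way.

```python
# Hypothetical sketch only: the instruction text and helper below are invented
# to illustrate how a developer's hidden "system" instructions can shape what
# a legal AI tool produces, without the user ever seeing those choices.

HIDDEN_SYSTEM_INSTRUCTIONS = """
You are a legal research assistant.
- Prefer appellate-level decisions from the last ten years.
- Do not ask the user follow-up questions; answer in a single pass.
- Keep answers under 300 words and avoid heavily qualified language.
"""

def compose_prompt(user_question: str) -> str:
    """Combine the developer's hidden instructions with the user's question.

    Only user_question comes from the lawyer using the tool; everything else
    reflects prior choices about which precedents to favour, how much detail
    to give and what tone to take.
    """
    return f"{HIDDEN_SYSTEM_INSTRUCTIONS}\n\nUser question: {user_question}"

if __name__ == "__main__":
    # In a real tool, this combined text (not just the question) is what the
    # underlying model would actually be asked to answer.
    print(compose_prompt("Is a verbal contract enforceable in Ontario?"))
```

Nothing in this sketch is nefarious, but every line of the hidden instructions reflects a choice that the lawyer relying on the output never sees.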

Similarly, the text produced by large language models is a product not only of the dataset used but also of human choices and training, including, in some cases, the manual scoring of outputs according to what language might be “best” or “appropriate” in certain contexts. As tools with these encoded preferences are then used in legal service delivery, whether to produce legal advice, pleadings or contracts, potential concerns arise about how the very “language of the law” is itself being shaped and by whom.
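
A toy example may again help. The snippet below is not any vendor’s actual training pipeline, and the sample clauses and scores are made up; it simply illustrates the basic mechanic of humans scoring alternative outputs so that certain phrasing comes to be preferred over others.

```python
# Toy illustration only: invented prompts, candidate texts and scores.
# The idea is that human reviewers rate alternative outputs, and the
# higher-rated phrasing is what later training teaches the model to prefer.

preference_data = [
    {
        "prompt": "Draft a limitation of liability clause.",
        "candidates": [
            {"text": "The Supplier shall not be liable for indirect or consequential damages.", "score": 4},
            {"text": "Supplier isn't on the hook for any knock-on losses.", "score": 1},
        ],
    },
]

def preferred_output(record: dict) -> str:
    """Return the candidate text that human reviewers rated highest."""
    return max(record["candidates"], key=lambda c: c["score"])["text"]

for record in preference_data:
    print(preferred_output(record))
```

Multiply that small act of scoring across enormous numbers of examples and the reviewers’ preferences become part of how the tool “writes” the law.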

The types of influence on the practice of law discussed in the paragraphs above are, no doubt, subtle and not the product of any sort of conspiracy by private interests to control how law is practiced. Moreover, there are ways of responding productively, at least in part, to these sorts of concerns, including providing more transparency and ensuring that, where relevant, AI tools used in the legal services context are designed with this context in mind and in consultation with legal experts.

At the same time, the overall and well-worn point here stands: no technology is neutral, and we are better served when we are as clear-eyed as possible about this reality. As the legal profession considers using AI tools to perform more tasks that were traditionally the realm of lawyers’ own judgement and thinking processes—including possibly giving more control over to AI agents—the alignment issue and what it means for lawyer independence should feature in our conversations.

There is also a related concern about dependence here. As more legal AI tools get built and used—many of which may run on the same small set of underlying models built by a few AI companies—concerns may arise about the degree to which the legal profession becomes dependent on those AI companies continuing to operate and provide access to their models. Again, there are possible ways to address these sorts of concerns, including, in some cases, building tools in a way that allows them to swap between underlying models if that becomes necessary. But, the fact that a concern isn’t insurmountable doesn’t mean that we ought not to be attentive to it. Often, our attention to the concern is what prompts the inclusion of mitigation measures.
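
For readers curious about what “building tools in a way that allows them to swap between underlying models” might look like, here is a minimal sketch. The provider classes and their generate() methods are placeholders rather than real vendor APIs; the point is only that the legal tool talks to an interface, not to any one company’s model.

```python
# Minimal sketch of a model-agnostic design: the drafting tool depends on a
# generic TextModel interface, so the underlying provider can be swapped.
# ProviderA and ProviderB are stand-ins, not real vendor integrations.

from typing import Protocol


class TextModel(Protocol):
    def generate(self, prompt: str) -> str: ...


class ProviderA:
    def generate(self, prompt: str) -> str:
        return f"[Provider A draft for: {prompt}]"


class ProviderB:
    def generate(self, prompt: str) -> str:
        return f"[Provider B draft for: {prompt}]"


class DraftingTool:
    """Depends only on the TextModel interface, not on any one vendor."""

    def __init__(self, model: TextModel) -> None:
        self.model = model

    def draft_clause(self, instructions: str) -> str:
        return self.model.generate(instructions)


tool = DraftingTool(ProviderA())
print(tool.draft_clause("mutual confidentiality clause, two-year term"))

tool.model = ProviderB()  # swap providers without rewriting the tool itself
print(tool.draft_clause("mutual confidentiality clause, two-year term"))
```

This is ordinary software design (an abstraction layer), but it is also a hedge against the dependence concern described above.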

None of this should be taken as an “anti-AI” plea on my part. To the contrary, I wholeheartedly believe that the legal profession should always be evaluating how it can better serve clients and advance the public interest, including exploring innovative technological tools and solutions. The call here is simply for us to be careful about what parts of our work we are off-loading to machines (and the humans who develop them), to be critical about the trade-offs we are facing and to call on industry to adopt best practices, particularly in the area of transparency. Those who develop AI tools have choices to make, but so do lawyers.

Comments

  1. Great article Amy! There is certainly lots to think about when using AI to create or edit written material. I don’t see it as that much different in principle from lawyers’ traditional reliance on precedents created by (who even knows, most of the time?). We can’t just blindly use what we didn’t create ourselves. We have to apply our own professional judgment to everything that goes out under our names.

    Jeff Surtees
    Centre for Public Legal Education Alberta
