Column

On Legal Ethics and Artificial Intelligence

There continues to be extensive discussion about artificial intelligence and law, and concerns are regularly raised about the ethical and moral issues it presents. So I was happy when Marcelo Rodríguez invited me to join a panel at this year’s American Association of Law Libraries Conference on “Legal Ethics in the Use of Artificial Intelligence,” with Kristin Johnson and Steven Lastres, and Kim Nayyer moderating. Here’s the session description:

There is a pressing need for both innovators creating the datasets as well as users such as law librarians and attorneys to be aware of the ethical implications of using artificial intelligence (AI). Despite the fact that the American Bar Association (ABA) and state bars have no specific ethics opinions on the use of AI by lawyers, existing ethics rules do apply, such as duty of competence, duty to supervise, and others. The ability to understand AI, its results, and the impact on litigation is not only beneficial for attorneys, it may be required by legal ethics.

Since I wrote up my thoughts on the topic, I thought I would expand on them here.

AI is a difficult topic because there are simultaneously strong ethical reasons to push forward and to pull back. First the push: there is a clear access to justice problem, and even clients who can afford the legal system are pushing for more reasonable pricing. AI has the potential to drive productivity gains that will help resolve these problems. Then the pull: these technologies are untried, most of us don’t really understand why they work the way they do, and they are difficult to audit or confidently verify. It may be impossible to really know where their recommendations come from when machine learning is used, and it is natural to think that our ethical obligation is to hesitate in adopting them.

These considerations are important even before we account for the overwhelming influence of hype around AI. Many reports tell people that their jobs will disappear and that they will be left behind; the implication is that by adopting these tools now they may be able to navigate these changes better.

The easiest AI applications to justify are those that provide services to knowledgeable people who can assess the information they are given and use it to make their jobs easier or to gain insights they might find difficult to access another way. Good examples of these kinds of applications include e-discovery, search, document assembly, and form filling.

AI applications are more difficult to justify when they are based on questionable data or make claims that cannot be supported by the technology or its implementation. Bad examples include machine learning that affects people’s lives in opaque ways without providing an opportunity for review. Hopefully, as artificial intelligence tools are further developed and more widely adopted, there will be better ways to assess them and whether they are appropriate.

Ideally this will eventually lead to better tools that both members of the legal community and members of the public can use to do what they need. This may look like AI or it may just look like a thoughtful website with information in plain language and usable forms.

If machine learning is going to be widely used in law, careful thought will have to go into how data is made available and how it is used. Government bodies, such as parliaments, courts, and tribunals, as the originators of this data, need to take the lead in making sure it’s available in a way that works for the community as a whole. I think there’s an important discussion to have about how savvy people need to be about the tools, how transparent suppliers need to be about what data and programs they’ve used in development, and how much the tools can be relied on.

Developers want to drive technological change, and they want to be successful in one way or another. In many cases they are impatient and don’t want to wait for the right time (and maybe without their pushing it never would be the right time). It is then up to governments and professional governing bodies to set limits on these applications in the way they would with another innovation, such as an alternative business model. Eventually there will need to be oversight of these tools, especially if the people using them are not going to develop the expertise to confidently select tools that will work for their needs. This could take many forms, such as government or professional body regulation, or certification from an issuing body that guarantees that a product meets some criteria for quality.

I said it last year in relation to AI and bias, and I think it continues to be one of the most important ways to make sure that we are using AI in an acceptable way: we should not be developing experimental AI tools to be used on the poorest people who don’t have a choice or reasonable access to recourse if they don’t work the way they should. AI applications should be developed for people who have options and who can hire lawyers and appeal decisions if they need them. This will ensure that they are being used appropriately.

Thank you to Kristin Johnson and Steven Lastres for speaking with me on the panel, and to Kim Nayyer for moderating the session. I learned something from all of you. Thank you also to Marcelo Rodríguez for developing this and so many other conference sessions. Identifying important topics and finding people with the expertise to help us all understand them better has been a great gift to the community.
