Column

The Future of Law and “Intelligent” Technologies: Prophecies, Technologies and Opportunities – Part 1

Career prophets have been announcing the demise of the legal profession for decades. According to some, the traditional lawyer will soon go the way of telephone operators, bank clerks and travel agents. Recently, the prophecies have taken a new turn. While all the previous threats (outsourcing, offshoring, legal Taylorism) of course remain, we are now being forewarned about a new source of disruption: “intelligent software” in law. Once again, the prophets insist that those who are not already into it are laggards, and by now we should all know what happens to laggards.

In this post, I address the deployment of artificial intelligence (AI) systems in law in a much lighter way. Law firms and intern positions are not about to disappear overnight, and no career move is called for in the coming weeks. I believe that, overall, we have more opportunities than risks, and that nobody can be very far behind once we really take stock of where we are. I intend to address the future and the potential impacts of AI but, being no prophet, I will limit my prognostication to the very short term.

I counted a dozen posts on SLAW tackling AI and law matters over the last twelve months. Suffice it to say that even though most, if not all, of us are not artificial intelligence specialists, we feel the need to debate, to ponder, to reflect on the risks arising from the use of AI in the legal domain, and also that we do not want to miss the potentially huge benefits of any big technological change. I personally think that we are still at the very beginning of the development of intelligent legal information systems. For sure, there is a lot of noise, and it is quite normal that start-up ventures overpitch their AI offerings as ready for prime time. AI is everywhere; AI is “in the air”. Some commentators will even suspect some AI magic behind any piece of quality engineering. We can see an illustration of this “AI hypothesis” in Artificial Intelligence in Law: The State of the Play 2016:

Artificial intelligence is hard at work in the law – for example, in legal research,

[…]

Lexis® and Westlaw® have applied natural language processing (NLP) techniques to legal research for 10-plus years. No doubt Bloomberg BNA does as well. After all, the core NLP algorithms were all published in academic journals long ago and are readily available. The hard (very hard) work is practical implementation against good data at scale. Legal research innovators like Fastcase and RavelLaw have done that hard work, and added visualizations to improve the utility of results.

So, if it is good, no doubt there is some AI involved. Such reasoning is the mirror image of what has been called the “AI effect”. Indeed, while some see AI everywhere, others will never encounter a successful AI system implementation anywhere. This is because AI aims at making computer systems do things that only humans can do; as soon as a computer becomes able to do one of them, the AI part dissolves. The task can now be done by a computer, so it is no longer AI!

Tremendous AI successes have fallen victim to this AI effect. Optical character recognition (OCR) and voice recognition software products are AI-based, but they are now considered trivial. The benefits of OCR software in the legal field are huge: just for the sake of illustration, CanLII’s historical case collections could not have been built without this advance in character recognition. Many of us also enjoy using voice recognition technology on our smartphones, on our laptops or in dictation software. This remarkable technology, too, owes everything to AI research. (I remember, as if it were yesterday, Quebecois friends describing how they twisted their tongues to get Dragon software to transcribe the draft reasons for judgment they were dictating… alone in a room.) With only these two technologies, plus the spell-checkers so commonplace that we tend to overlook them today, we are already reaping huge benefits from the last thirty years of AI research.

Those are generic technologies. The application of AI techniques to specific legal tasks and problems remains at a very early stage. Despite the hype, once the generic technologies just mentioned are excluded, AI techniques and AI-based software are not much in use in Canadian law firms today (and probably not much more in the US). However, some legal software products based on AI technologies are starting to be used. E-discovery teams may use software supporting predictive coding of discovery material, built on AI technologies. We can also safely guess that some Canadian law firms are using AI-enhanced systems for contract analysis during due diligence or compliance reviews. Knowledge management systems in use in law firms may rely on AI techniques to classify stored knowledge and enhance its retrieval. But I believe that is about it: there is no “artificially intelligent lawyer” yet. To complete the picture, beyond the law firms, legal database providers use various natural language processing (NLP) tools and implement specialized heuristics to ensure good performance of their information retrieval systems. For instance, we at Lexum use rules and heuristics to help identify risks related to publication restrictions on decisions, while other software uses inference to establish relationships between legal citations, to improve the ranking of search results, and so on.
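To make that last example concrete, here is a minimal sketch of what a rule-based flag for publication restrictions might look like. The patterns and names below are hypothetical illustrations, not Lexum’s actual rules, which are far more extensive:

```python
import re

# Hypothetical patterns suggesting a decision may be subject to a
# publication restriction; a production rule set would be much larger
# and jurisdiction-aware.
BAN_PATTERNS = [
    re.compile(r"publication ban", re.IGNORECASE),
    re.compile(r"ban on publication", re.IGNORECASE),
    re.compile(r"ordonnance de non[- ]publication", re.IGNORECASE),
    re.compile(r"shall not be published", re.IGNORECASE),
    re.compile(r"s\.?\s*486\.4", re.IGNORECASE),  # e.g. Criminal Code, s. 486.4
]

def flag_publication_restriction(decision_text: str) -> list[str]:
    """Return the patterns that matched, so a human editor can review the flag."""
    return [p.pattern for p in BAN_PATTERNS if p.search(decision_text)]

# This decision would be flagged for editorial review before publication.
sample = ("Pursuant to s. 486.4, information that could identify the "
          "complainant shall not be published in any document.")
print(flag_publication_restriction(sample))
```

The point of such a system is modest: it decides nothing by itself, it merely flags decisions for human review, which is exactly the kind of narrow, well-bounded task where rules and heuristics already pay off.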

These developments are certainly interesting, even at times fascinating, but we are still far from clients interacting with a robot lawyer. Significant benefits will come as NLP and automatic classification techniques become commoditized in the coming years, but substantive legal AI technologies are still nascent and, for now, applied only to narrow domains. Machine learning can help you search bankruptcy matters in a specific legal environment, but if you ever want to search bankruptcy matters in a neighboring jurisdiction, you will have to train the machine on a large enough sample appropriate for that other place.
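That retraining burden can be sketched with a generic text-classification pipeline. This is a toy illustration under assumed corpus variables (ontario_docs, quebec_docs and their labels are hypothetical); it does not describe any vendor’s actual system:

```python
# A jurisdiction-specific topic classifier knows only the labeled sample
# it was fitted on. Requires scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def train_topic_classifier(documents, labels):
    """Fit a bag-of-words classifier on decisions from one jurisdiction."""
    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2)),
        LogisticRegression(max_iter=1000),
    )
    model.fit(documents, labels)
    return model

# A model fitted on one province's bankruptcy decisions...
# ontario_model = train_topic_classifier(ontario_docs, ontario_labels)
# ...learns that province's vocabulary, statutes and citation habits.
# For a neighboring jurisdiction, you must collect and label a fresh,
# comparably sized sample and fit a second model:
# quebec_model = train_topic_classifier(quebec_docs, quebec_labels)
```

Nothing in the fitted model transfers automatically: its features are the words of one corpus, so a different jurisdiction, with different statutes and possibly a different language, starts from scratch.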

Yet, despite current limitations, I share the belief that AI will become a big thing in the legal field, if nothing else because law is built on information, and today a large part of our information is computer-mediated. In such a world, software matters. The evolution of the software we use, and the ongoing introduction of AI research results into it, will affect what we do and how we do it. As we go, new and more ambitious heuristic-based or learning-capable systems will have an impact on the profession, but also on the legal system itself. The stakes are higher for such new software systems. To use the vocabulary of the day, AI systems can be terrific and amazing, or bad; in the latter case, they can have very negative impacts. Let’s consider two illustrations of the risks before turning our attention to the benefits that can be expected.

Search engines used to return “calculated” results: search results exactly matching the user’s query, nothing less, nothing more. New search technologies, such as those inviting “natural language queries”, go beyond that and make decisions about the meaning of the query. Brian Sheppard (mentioned on SLAW here) observed that:

“[a]lgorithm-powered searches can improve a search, but they can make important search decisions automatically and without the searcher’s knowledge. As a result, they lack transparency, particularly compared to terms-and-connectors searches.”

To Sheppard, it might matter:

Companies want to show that their natural-language searching is more efficient than their competitors’ are. One tactic is to shorten the stack of potentially relevant cases for lawyers to wade through. The trend is for companies to emphasize “answers” over the comprehensiveness of cases provided […]

We might be witnessing the start of an efficiency race, with competitors speeding towards ever-shorter results based on ever-simpler inputs.

The broader consequences of a race are unclear. On the one hand, it could increase productivity, potentially leading to reduced lawyers’ fees. On the other hand, it could increase error, which might occur when overaggressive, secret algorithmic choices cut out vital cases.

There is a loss of control when the tool takes command of some of the decisions. In some circumstances, this may be exactly what we want, but in others it can be risky. In all fairness, it must be said that the current generation of search systems offering “terms-and-connectors searches” can also return incomplete results, when the terms of relevant cases do not correspond to those used in the query, but at least the deal is more transparent.
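The transparency difference can be made concrete with a toy contrast. Both functions below are invented for illustration; the second merely stands in for whatever opaque scoring a natural-language engine applies:

```python
# Terms-and-connectors style: a case is returned if and only if it
# satisfies the query, so a missed case is always explainable by a
# missing term.
def boolean_match(text: str, all_of: list[str]) -> bool:
    lowered = text.lower()
    return all(term.lower() in lowered for term in all_of)

# Stand-in for a natural-language engine: some scoring function whose
# internals (and cut-offs) the searcher never sees.
def opaque_rank(text: str, query: str) -> float:
    overlap = set(text.lower().split()) & set(query.lower().split())
    return len(overlap) / (len(query.split()) or 1)

cases = [
    "The debtor filed for bankruptcy protection.",
    "The insolvent company sought creditor protection.",
]
# The boolean result is auditable: the second case misses "bankruptcy".
print([boolean_match(c, ["bankruptcy"]) for c in cases])  # [True, False]
# The ranked result gives no such account of why a case was cut or demoted.
print(sorted(cases, key=lambda c: -opaque_rank(c, "bankruptcy filing")))
```

The second case above is exactly the “incomplete results” scenario: a relevant decision phrased in synonyms escapes the boolean query, but at least the searcher can see why.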

In a second example, we can see that the stakes get significantly higher when software products venture onto more substantive legal terrain. Last June, the New York Times reported on a significant backlash against the use of a software system called COMPAS (developed by Northpointe Inc.) to assess defendants’ risk levels for sentencing purposes in many states. The software uses a proprietary algorithm that massages information obtained from the defendant or taken from his record to produce a report used to support decisions on pre-trial release, sentencing and parole. A recent report from ProPublica strikingly illustrates the actual consequences of entrusting software with such an advisory role: its conclusions appear strongly biased against African-Americans. Using data from Broward County, Florida, ProPublica journalists discovered that, even though the software’s predictions were right 61 percent of the time, it erred quite differently with respect to black and white offenders. (Northpointe Inc. disputes ProPublica’s analysis, but refuses to disclose its proprietary methodology.) The following table is from the ProPublica report:


Prediction fails differently for black defendants:

                                             White    Black
Labeled Higher Risk, But Didn’t Re-Offend    23.5%    44.9%
Labeled Lower Risk, Yet Did Re-Offend        47.7%    28.0%

Source: Machine Bias, J. Angwin, J. Larson, S. Mattu and L. Kirchner, ProPublica, May 23, 2016
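The disparity behind that table is straightforward to express in code: instead of looking only at overall accuracy, compare the rate of each kind of error across groups. A minimal sketch, using invented records rather than ProPublica’s actual data:

```python
# Compare error rates across two groups. Each record is a pair of
# booleans: (predicted_high_risk, actually_reoffended). The records are
# invented to mimic the shape of the published table, not real data.
def error_rates(records):
    fp = sum(1 for pred, actual in records if pred and not actual)
    fn = sum(1 for pred, actual in records if not pred and actual)
    negatives = sum(1 for _, actual in records if not actual)
    positives = sum(1 for _, actual in records if actual)
    return fp / negatives, fn / positives  # false-positive rate, false-negative rate

group_a = ([(True, False)] * 45 + [(False, False)] * 55 +
           [(True, True)] * 72 + [(False, True)] * 28)
group_b = ([(True, False)] * 23 + [(False, False)] * 77 +
           [(True, True)] * 52 + [(False, True)] * 48)

for name, group in (("group A", group_a), ("group B", group_b)):
    fpr, fnr = error_rates(group)
    print(f"{name}: labeled higher risk but did not re-offend: {fpr:.0%}; "
          f"labeled lower risk yet did re-offend: {fnr:.0%}")
```

A model applied to two groups can show very similar overall accuracy while distributing its mistakes very differently between them, which is precisely what makes accuracy alone a misleading yardstick here.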

Without a doubt, if we are to give a more substantive role to software systems in the law, we should know what we are doing. At a time when the use of “intelligent” systems in law is at a nascent stage, it appears wise to reflect upon their risks and dangers. I would add that any system designed to advise or to produce legal conclusions should be able to justify its advice or conclusions on the basis of well-established sources of law and verifiable facts. Proprietary approaches may not lend themselves to the required level of transparency. Furthermore, AI systems that are based on machine learning and cannot explain how they reach their results may not be appropriate for producing substantive legal conclusions. Of course, the potential of AI technologies remains undisputed in situations where justifications are not required.

In the second part of this blog post (to be published on Monday, February 20th), we will turn our attention to a study on the effect of new AI technologies on lawyers’ employment, and we will explore some of the benefits to be obtained from these systems in the legal world.

— Daniel Poulin, Lexum

Comments

  1. The danger of AI is algorithms that are designed on historic/past patterns of decision-making in case law, which may be inherently flawed/biased and do not address systemic/institutional problems and biases.

  2. Jean, you are right. I would add that in many legal situations, even when based on well-founded past decisions, software should be “transparent”: one must be able to find out the reasons for its conclusions. Software based on data mining and learning algorithms does not perform well in this regard.