Column

The Use of Artificial Intelligence to Facilitate Settlements Through ODR

Fellow Slaw columnist Omar Ha-Redeye recently wrote a blog entry on how artificial intelligence is making its way into the Canadian legal community more slowly than (some) expected because the data repositories behind SOQUIJ, CanLII, and other caselaw search engines are simply too limited in size to allow for true predictive capacities. In other words, there are too few decisions to generate reliable trends that A.I. can identify. This has led Mr. Ha-Redeye to question how successful and useful a tool like Premonition will be on the Canadian market.

According to a press release announcing the arrival of the Premonition database in Canada:

“Premonition is an Artificial Intelligence system that mines Big Data to find out which Attorneys win before which Judges. According to the company, “It is a very, very unfair advantage in Litigation.” Premonition’s system can identify top performing Lawyers by win rate, case type, case duration and, most importantly, Judge. The company claims that the Attorney/Judge pairing to be worth 30.7% of case outcome on average. “The old saying that ‘A good lawyer knows the law, but a great one knows the Judge,’ is true,” says Guy Kurlandski, CEO.”

Notwithstanding the fact that “winning” is an extremely relative concept in litigation (having a court rule in your favour, but award only half the amount in play, could be construed as both a win and a loss), data mining court decisions to retrieve usable statistics is but one way artificial intelligence can be used by lawyers and other stakeholders in the legal process.

Before going any further, however, we need to clarify a few intertwined concepts that can cause some confusion when studying the probable impacts of artificial intelligence on the legal process.

The first thing one needs to remember is that bots (i.e. software using A.I.) are not actual robots. When referring to “robot lawyers”, one shouldn’t imagine C-3PO in a gown holding briefs, but rather a more intuitive version of CanLII or SOQUIJ. Secondly, artificial intelligence encompasses a variety of technologies that range from software using simple search algorithms to advanced applications (such as the one behind IBM’s Watson), and beyond. Therefore, when the Merriam-Webster dictionary defines artificial intelligence as “the power of a machine to copy intelligent human behaviour”, it is, in our view, making a broad overgeneralisation and, we dare say, an ill-fated attempt to give human traits to software. We are reminded of the wise words written by Justice Mahoney in Apple Computer, Inc. v. Mackintosh Computers Ltd.:

“The principal difficulty which this case has given me arises from the anthropomorphic character of virtually everything that is thought or said or written about computers. Words like “language”, “memory”, “understand”, “instruction”, “read”, “write”, “command”, and many others are in constant use. They are words which, in their primary meaning, have reference to cognitive beings. Computers are not cognitive. The metaphors and analogies which we use to describe their functions remain just that.”

Machines are not really copying intelligent human behaviour; rather, software is being created that can “learn” by comparing newly uploaded data with previous models and establishing commonalities. This is what is commonly referred to as machine learning. Therefore, even the most advanced forms of machine learning (often referred to as deep learning) are not really functioning like the human brain.
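To make the “comparing new data with previous models” idea concrete, here is a deliberately tiny sketch of one of the simplest machine learning techniques, nearest-neighbour classification: a new case is labelled by finding the most similar previously seen case. The features and outcomes below are invented for illustration; real systems use far richer inputs.

```python
import math

# Hypothetical past cases: (claim amount in $1000s, days in litigation) -> outcome
past_cases = [
    ((50.0, 120.0), "settled"),
    ((200.0, 400.0), "judgment"),
    ((30.0, 90.0), "settled"),
    ((180.0, 350.0), "judgment"),
]

def predict(features):
    """Return the outcome of the closest past case (Euclidean distance)."""
    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    nearest = min(past_cases, key=lambda case: distance(case[0], features))
    return nearest[1]

print(predict((40.0, 100.0)))   # resembles the small, short cases -> "settled"
print(predict((190.0, 380.0)))  # resembles the large, long cases -> "judgment"
```

There is no cognition here, only arithmetic on stored examples, which is precisely Justice Mahoney’s point.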

This sidebar shouldn’t be understood as a rejection of machine learning as a legal tool; we simply wish to warn against attributing unjustified qualities or defects to these types of technological advances.

Our brief overview of the vocabulary associated with A.I. also serves to demonstrate that those who claim that artificial intelligence will not be useful for lawyers for a few years are using a very narrow definition of the term. While it’s true that predictive tools such as Premonition are just starting to appear in the legal landscape, SOQUIJ, Quicklaw, and the like have used basic A.I. for years.

As Yann LeCun, Yoshua Bengio, and Geoffrey Hinton (some of the foremost experts in the field) put it, machine learning can be used to “identify objects in images, transcribe speech into text, match news items, posts or products with users’ interests, and select relevant results of search”.

Using this broad definition, it becomes obvious that many of these uses for machine learning are already part of the legal landscape. Dictation apps are being used by more and more firms, caselaw search engines will suggest cases based on certain underlying patterns, etc. That being said, this is just the tip of the iceberg. In fact, imagine how these same technologies could be put to use in a dispute resolution context.
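The “suggest cases based on underlying patterns” capability mentioned above can be sketched with one of the oldest tricks in information retrieval: ranking documents by word overlap (cosine similarity over bags of words). The case summaries below are invented, and commercial search engines use far more sophisticated representations, but the principle is the same.

```python
import math
from collections import Counter

# Hypothetical case summaries a search engine might index
summaries = {
    "Case A": "strata corporation dispute over repair costs",
    "Case B": "employment dismissal without reasonable notice",
    "Case C": "strata parking dispute between owners",
}

def cosine(a, b):
    """Cosine similarity between two texts, treated as bags of words."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    norm = math.sqrt(sum(v * v for v in ca.values())) * \
           math.sqrt(sum(v * v for v in cb.values()))
    return dot / norm if norm else 0.0

def suggest(query):
    """Return case names, most similar summary first."""
    return sorted(summaries, key=lambda name: cosine(query, summaries[name]),
                  reverse=True)

print(suggest("strata dispute over parking"))  # Case C ranks first
```

Even this naive ranking surfaces the strata cases before the unrelated employment matter.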

And this is where artificial intelligence links up with our research on online dispute resolution. Let us take the example of British Columbia’s Civil Resolution Tribunal, where individuals locked in a strata dispute can log on to the CRT website to try and settle their dispute. As a reminder, the process begins with using the Solution Explorer to help understand the context of the case before entering into an online negotiation phase. Should negotiations hit a wall, a facilitator joins parties to try to bring them together. Should the facilitator fail, the Tribunal will render a decision.

Now, imagine if, instead of having to type in their complaint, litigants were able to speak to the system and be guided through the resolution process (think of an ODR version of Siri). Imagine if, through a study of similar patterns, the system could also give parties (including the facilitator and tribunal) statistics as to the success rate of similar claims, the amounts granted, etc. These tools could facilitate settlements, make the tribunal’s work easier, and create a certain predictability that is often lacking in ODR procedures. In other words, incorporating machine learning capacities into ODR applications made available to the courts could go a long way in making the judicial system more accessible for all Canadians. In our opinion, this is the way to truly harness the power of A.I. That is why the Cyberjustice Laboratory is collaborating with experts in machine learning to hopefully develop the next generation of ODR applications for the courts.
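The kind of statistic such a system might surface to the parties can be sketched in a few lines: a success rate and a typical award among past claims of the same type. All of the data below is invented for illustration; a real tribunal system would draw on its own case records.

```python
from statistics import median

# Hypothetical resolved strata claims: (claim type, succeeded, amount awarded)
resolved = [
    ("repair", True, 4000), ("repair", False, 0), ("repair", True, 2500),
    ("noise", True, 500), ("noise", False, 0),
]

def claim_stats(claim_type):
    """Success rate and median award among successful claims of this type."""
    similar = [c for c in resolved if c[0] == claim_type]
    wins = [c for c in similar if c[1]]
    rate = len(wins) / len(similar) if similar else None
    award = median(c[2] for c in wins) if wins else None
    return rate, award

rate, award = claim_stats("repair")
print(f"Success rate: {rate:.0%}, median award: ${award}")
```

Placed in front of both parties early in the process, even a summary this simple could anchor expectations and nudge negotiations toward settlement.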

Of course, until that work is done, we can always be grateful that artificial intelligence allows us to see who has bragging rights for the best win/loss record…

Comments

  1. I love that my post has kickstarted so many conversations (see Mona Datt’s post as well here).

    To be clear, I don’t take the position that AI cannot be useful in Canada at all. I only frame the problem in such a way that it will not fully achieve its potential given the inherent limitations in our legal system, and how data is created through it.

    Although I deliberately chose to illustrate this ongoing problem by contrasting it to the American jurisdiction through a new entrant into our market, the reality is that this problem exists in any industry or jurisdiction where filtered data samples are limited, and this is a far more common problem than most people realize.

    What’s needed to strengthen the rigour of the analysis is better statistical tools to test and validate data properly. Any statistical analysis of the AI offerings currently available (likely anywhere) fails this rigorous type of examination. That’s not to say that this type of work should not continue, or that it cannot be useful to practitioners. It’s simply a reminder that any conclusions drawn from such data should be very carefully qualified, and used with reservations. Failure to do so will result in erroneous assumptions.

    Robert J. Parnell has created an excellent paper on this subject, When Big Legal Data Isn’t Big Enough – Limitations in Legal Data Analytics, which was released just a few weeks ago.

    Parnell states,

    Overall, the quantitative analysis of legal data is much more challenging and error-prone than is generally acknowledged. Although it is appealing to view data analytics as a simple tool, there is a danger of neglecting the science in what is basically data science. The consequences of this can be harmful to decision making. To draw an analogy, legal data analytics without inferential statistics is like legal argument without case law or rules of precedent — it lacks a meaningful point of reference and authority.

    Very few lawyers have a thorough background in scientific research or advanced statistical modelling, so it’s an industry that’s perhaps particularly prone to overly broad generalizations about what AI can do, without fully understanding the data underlying the models.

    The utility of emerging technologies will only properly be realized if we recognize not only what they are capable of doing, but also what they are not. A failure to scrutinize technology in this way is to buy into the false promise of technology (a parallel we’ve seen in education) and, in the long term, to respond with hostility and skepticism when it doesn’t produce as we initially expected it to.
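The small-sample concern raised in the comment above can be illustrated numerically: a “win rate” computed from a handful of cases carries a very wide confidence interval. The sketch below uses the normal approximation for a proportion, with invented figures.

```python
import math

def win_rate_interval(wins, cases, z=1.96):
    """Approximate 95% confidence interval for a win rate
    (normal approximation to the binomial)."""
    p = wins / cases
    margin = z * math.sqrt(p * (1 - p) / cases)
    return max(0.0, p - margin), min(1.0, p + margin)

# 7 wins out of 10 looks impressive, but the interval is enormous:
lo, hi = win_rate_interval(7, 10)
print(f"10 cases: {lo:.0%} to {hi:.0%}")

# The same 70% rate over 1000 cases is far more informative:
lo, hi = win_rate_interval(700, 1000)
print(f"1000 cases: {lo:.0%} to {hi:.0%}")
```

With Canadian-sized caselaw samples, many attorney/judge pairings fall into the first category, which is exactly why conclusions drawn from them need careful qualification.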

  2. All of this is wishful thinking without cheap access to the valuable data.
    (Will judges write reasons in XML? Will counsel correspond exclusively
    through machine-readable media?)
    These companies should make themselves useful and write me an app
    that streams lawyer jokes to my cell phone – I might pay a few bucks
    a month, if it’s any good.