Cross Examining the Artificial Intelligence

As much as I’m an enthusiast for technology, including artificial intelligence, I find that my energies are far more productive discussing the limitations, challenges, and pitfalls of blindly embracing technology in law. It’s only through these discussions that we can use the technology intelligently.

Most of the use of artificial intelligence to date has been in developing predictive techniques in law, but the possibilities of this technology are only beginning to be explored. Although copyright law may pose some barriers to AI development, machine learning could also be the key to better data security.

The applications in legal practice are only starting to emerge. Luis Millan describes in the cover story of Canadian Lawyer how the legal industry is on the cusp of a major transformation,

Fuelled by Big Data, increased computing power and more effective algorithms (a routine process for solving a problem or performing a task), AI has the potential to change the way that legal work is done, the way that law firms conduct business and the way that lawyers deal with clients. A number of technologies under the umbrella of artificial intelligence, such as machine learning, natural language processing, expert systems (the ability to emulate decision-making of a human expert) and others, allow computers to perform things that normally require human intelligence. Artificial intelligence systems, also known as augmented intelligence or cognitive computing, can be used to do many of the tasks lawyers routinely perform in areas such as compliance, contract analysis, case prediction, document automation and e-discovery. According to proponents, the emerging technologies will do it cheaper, faster and more efficiently, a development some law practitioners find disconcerting.

At the Canadian Corporate Counsel Association (CCCA) National Conference on April 4, 2017, Yves Faguy interviewed Benjamin Alarie, CEO of Blue J Legal, at a session sponsored by Thomson Reuters. Alarie noted that law students today are in a fortunate position to anticipate future legal developments over the course of their careers, but that the entire profession is not going to change overnight because of artificial intelligence. It’s because of these changes that calls are increasing for both law students and lawyers to learn how to code.

On May 15, 2017, LexisNexis hosted a forum at Ryerson University’s Legal Innovation Zone on Lawyering in the 21st Century. Zev Eigen, a Los Angeles lawyer who focuses on complex data analytics and spoke at the conference, indicated,

The future of law won’t demand that lawyers know how to build those equations themselves… The future will be about knowing how to benefit from the information such math can provide.

During Alarie’s interview, Faguy cited David Brooks’ 2013 piece, which described the growing philosophy of the day as “data-ism,”

We now have the ability to gather huge amounts of data. This ability seems to carry with it certain cultural assumptions — that everything that can be measured should be measured; that data is a transparent and reliable lens that allows us to filter out emotionalism and ideology; that data will help us do remarkable things — like foretell the future.

Faguy posed Brooks’ dilemma to Alarie, asking in which circumstances lawyers should follow intuition, and in which circumstances they should follow the data. Alarie described an approach where the practitioner would still use their intuition, but verify it through analytical techniques.

Steve Lohr in the New York Times seems to concur,

An artificial intelligence technique called natural language processing has proved useful in scanning and predicting what documents will be relevant to a case, for example. Yet other lawyers’ tasks, like advising clients, writing legal briefs, negotiating and appearing in court, seem beyond the reach of computerization, for a while.

But natural language processing, typically employing techniques like Naive Bayes text classification, is still a very rudimentary form of machine learning. These techniques do not employ the same creative faculties that legal practitioners often use to create new law or develop a test case.
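To make the point concrete, here is a minimal sketch of what Naive Bayes text classification actually involves, in the vein of flagging documents as relevant or not relevant for e-discovery. The example documents, the labels, and the use of the scikit-learn library are my own illustrative assumptions, not a description of any particular vendor’s product:

```python
# A minimal sketch of Naive Bayes text classification for document review.
# The documents and labels below are invented for illustration; scikit-learn is assumed.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy training set: documents already reviewed and labelled by a lawyer.
documents = [
    "termination clause breach of contract damages",
    "quarterly marketing newsletter and event photos",
    "notice of default and demand for payment",
    "holiday party catering menu",
]
labels = ["relevant", "not_relevant", "relevant", "not_relevant"]

# The classifier only weighs word frequencies; it performs no legal reasoning.
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(documents, labels)

print(model.predict(["demand letter alleging breach of the termination clause"]))
# -> ['relevant']
```

The model does nothing more than weigh word frequencies against past labels; there is no understanding of doctrine, strategy, or the possibility of arguing for a change in the law.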

Predictive algorithms in law are currently being trained exclusively on existing case law. Alarie described a future development of machine learning that would rely on broader forms of data, including news sources and other information, to better contextualize judgments, and then provide predictive outcomes from these richer sources of data to identify new or emerging areas of law.

Alarie described this broader approach of using analytical tools as an artificial intelligence 2.0, but the reality is that it is already in use outside of the legal industry. Will Knight notes in MIT Technology Review that deep learning is already being used to guide important decisions in many sectors, including medicine, finance and manufacturing, but he also notes the challenges with these tools,

The workings of any machine-learning technology are inherently more opaque, even to computer scientists, than a hand-coded system. This is not to say that all future AI techniques will be equally unknowable. But by its nature, deep learning is a particularly dark black box.

In other words, the proper use of machine learning allows the algorithm to be continuously checked against held-out “test sets” of data, continuously improving, but also continuously changing the resulting model. Even the creators of the algorithm may never be able to fully understand how it comes to its conclusions. It’s in this mystery that the legal quandaries emerge.
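As a rough illustration of that cycle, the sketch below retrains a small neural network as more data arrives and checks it against a held-out “test set” each time. The data is randomly generated and scikit-learn is assumed; the point is that the fitted weights change with every pass, and none of them amounts to a human-readable explanation:

```python
# A minimal sketch of the train/verify cycle described above: the model is
# refit as more data arrives and evaluated against a held-out "test set".
# The data is randomly generated for illustration; scikit-learn is assumed.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 10))            # 600 cases, 10 numeric features
y = (X[:, 0] + X[:, 3] > 0).astype(int)   # a hidden rule the model must learn

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=500, random_state=0)
for batch_end in (150, 300, len(X_train)):      # retrain as more data becomes available
    model.fit(X_train[:batch_end], y_train[:batch_end])
    print(f"trained on {batch_end} cases, "
          f"test accuracy = {model.score(X_test, y_test):.2f}")

# The fitted weights exist, but they do not explain any individual prediction.
print([w.shape for w in model.coefs_])
```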

While a self-driving car operated by artificial intelligence that crashes into a pole might be properly addressed by traditional notions of manufacturer’s liability, the use of machine learning in other areas of society that end up in litigation may be much more complex, especially where it is used to assist decision making based on data processing that can never be fully understood by the user.

Consider, for example, a human resources practitioner who uses an algorithm weighing employee productivity, attendance, performance reviews, and other attributes to make determinations of just cause for dismissal. Reliance on the data output alone would complicate litigation, as the human making the decision would still have to justify the basis for doing so. The decision-maker would be unable to explain why the algorithm came to the determination that it did, but would still be entirely liable for the decision made. In this context, lawyers would have to employ experts to examine the algorithm, or even consider proceedings against the developer of the software itself, in the appropriate circumstances.
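A hedged sketch of that hypothetical, with invented data and feature names and scikit-learn assumed: the tool hands the practitioner a single risk score, with no reasons attached that could later be articulated in a courtroom.

```python
# Hypothetical HR scoring tool: a model trained on past employee records
# produces a dismissal-risk score with no accompanying explanation.
# Data and features are invented for illustration; scikit-learn is assumed.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
# Columns: productivity score (0-100), days absent, average performance review (1-5)
past_employees = rng.uniform([0.0, 0.0, 1.0], [100.0, 30.0, 5.0], size=(200, 3))
# Historical outcomes; here tied to low review scores purely for illustration.
past_dismissed = (past_employees[:, 2] < 2.5).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=1)
model.fit(past_employees, past_dismissed)

# The practitioner receives a single probability for the employee in question,
# with no reasons that could later be justified in litigation.
employee = [[62.0, 12.0, 2.4]]
print(f"predicted dismissal risk: {model.predict_proba(employee)[0, 1]:.2f}")
```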

Even more troubling is that machine learning can still learn fallacious, prejudicial, and even discriminatory patterns from humans, without anyone realizing that this is happening. Sometimes the data matches our intuition because both of them are simply wrong.

Researchers at Stanford University employed a word-embedding association test (WEAT) to analyze how computers embed words, and revealed biases around gender, race, ethnicity, and many other grounds. The ways that we convey prejudice through our language are still not fully understood by us, so when algorithms contextualize data, we do not even realize that they are learning our biases too.
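The mechanics of a WEAT-style measurement are simple enough to sketch. The tiny three-dimensional “embeddings” below are invented for illustration (real tests use vectors trained on massive text corpora, such as GloVe or word2vec), but they show how the test compares the relative closeness of target words to two sets of attribute words:

```python
# A minimal sketch of the word-embedding association test (WEAT) mechanics.
# The 3-dimensional vectors are invented; real embeddings have hundreds of dimensions.
import numpy as np

def cosine(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(word, attrs_a, attrs_b, vecs):
    """Mean similarity to attribute set A minus mean similarity to attribute set B."""
    return (np.mean([cosine(vecs[word], vecs[a]) for a in attrs_a])
            - np.mean([cosine(vecs[word], vecs[b]) for b in attrs_b]))

vecs = {  # hypothetical embeddings
    "engineer": np.array([0.9, 0.1, 0.2]),
    "nurse":    np.array([0.1, 0.9, 0.2]),
    "he":       np.array([0.8, 0.2, 0.1]),
    "she":      np.array([0.2, 0.8, 0.1]),
}

# WEAT-style statistic: do the target words sit closer to one attribute set?
effect = (association("engineer", ["he"], ["she"], vecs)
          - association("nurse", ["he"], ["she"], vecs))
print(f"association difference: {effect:.2f}")
```

A positive difference suggests the embedding has absorbed a gendered association between the occupations and the pronouns, even though no one programmed it to do so.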

Algorithms making risk assessments for criminal offenders are already being criticized for their impact on bond amounts and sentencing, disproportionately hurting racialized minorities, even when subsequent data contradicts the predictive findings. The predictive data used has proved remarkably unreliable in forecasting violent crime, and yet may soon be used across the American justice system.

Knight describes some of the reactions in other jurisdictions to the incomprehensible nature of black box machine learning,

There’s already an argument that being able to interrogate an AI system about how it reached its conclusions is a fundamental legal right. Starting in the summer of 2018, the European Union may require that companies be able to give users an explanation for decisions that automated systems reach. This might be impossible, even for systems that seem relatively simple on the surface, such as the apps and websites that use deep learning to serve ads or recommend songs. The computers that run those services have programmed themselves, and they have done it in ways we cannot understand. Even the engineers who build these apps cannot fully explain their behavior.

Proponents still insist that with enough scrutiny, enough revision, and enough testing of the algorithms used, the predictive abilities of machine learning can improve. That may be true, but what is clear is that in the interim, artificial intelligence will be used in some very non-intelligent ways, supporting business decisions that are erroneous because they lack proper context, or impacting the justice system in ways that perpetuate injustices. Challenging data outcomes may even help us better understand our own hidden biases.

During this transition period, it’s clear that we will need legal practitioners who are not only well versed in the social contexts of business and law, but who are also technologically savvy enough to understand the shortcomings of artificial intelligence. Given the expected prevalence of these technologies in all sectors, artificial intelligence will not just be a way to improve how we do law; it will be part of the subject matter of legal disputes themselves.

The benefit to lawyers of the future is not just in knowing how to benefit from the information provided by math, but in understanding how the math itself works, and how it does not. All those who chose a legal career because they assumed there was no math involved may want to choose again, or at least brush up a bit on their skills.

At the very least, they’ll need it when they’re cross examining the machine.


Comments

  1. Simon Stockdale

    Suggesting lawyers should learn to code is like saying architects should learn to make steel beams or window glass. A little bit of coding knowledge may be useful background learning, but what lawyers need to understand is how logic and AI systems work in general and specifically how any particular tool they use works. Without that knowledge, they’ll allow machines to replace rather than augment their professional skills.

  2. Simon,

    We had a similar discussion about this recently on LinkedIn.

    This premise is based on the function that the lawyer is engaged in. If a lawyer is involved in a review of a commercial contract relating to a property, then a lawyer would have little need to know what an architect does. But if the lawyer is in litigation over a building involving shoddy beams or window glass, those technical components would indeed be pertinent.

    Similarly, nobody expects lawyers involved in medical malpractice to go to medical school. But at least a basic understanding of anatomy and physiology, and likely much more, is necessary to properly litigate these files.

    The point in this piece is that the ubiquity of AI across all sectors in the years to come guarantees that some legal disputes will also involve the use of these systems, and very likely, their use inappropriately based on faulty premises. Lawyers don’t necessarily need to do the coding themselves to handle these matters, but they should have a general overview.

    To go back to your original example, if an algorithm was used to develop a faulty building, or even to construct a deficient contract, understanding the technology behind either of these would be pertinent to the proceedings. To make things more challenging, Google just announced that its AutoML AI, which focuses on the type of deep learning I describe above, has now become better at creating AI than Google’s own engineers.

    Given the relative inexperience and lack of exposure that most lawyers have had to this subject matter, it’s likely to be a particular area of need for continuing education and professional development.