It’s the End of Judges, Not the Lawyers


At yesterday’s inaugural Law Tech Camp in Toronto, Prof. Daniel Katz of Michigan State University College of Law spoke on “Computing and the Law.” The topic was covered in The Globe earlier this week:

The crucial question for anyone heading into a court battle – whether a mighty corporation or a mere slip-and-fall victim – is the same, every time: “What are my chances?”

Depending on the answer, a client will decide to settle, or to stand and fight. The answer generally comes from across the oak-panelled desk of a lawyer, who draws on wisdom gained from years of practice, experience with similar cases and knowledge of the law to make a judgment call.

But as Deep Blue showed on the chess board and Watson recently demonstrated on the TV quiz show Jeopardy, it turns out robots might also be better at calculating those odds than humans. Computers, using complex algorithms to analyze tens of thousands of similar cases and decisions, can now be used to predict the outcome of court fights, according to Daniel Katz, an assistant law professor at Michigan State University.

Katz’s slides are available here. My notes from Katz’s talk, and some commentary stemming from additional reading and further conversations, are below. (My slides from my Law Tech Camp presentation are here).

Background

Katz began by saying there is nothing exceptional about law. All of the excuses that lawyers currently make against automation and the development of predictive systems were made previously by the financial sector, and they did not prevent those technologies from being adopted there.

The reason why things are starting to change now in law is due to general counsel, who are sick of paying different fees for different clients. General counsel are going to lead law’s innovation. And quantitative legal prediction is the ultimate disruptive technology.

Big Data and Moore’s Law

This is the era of big data, and what is true for science is also true for industry. Law is probably the last field to embrace it. The question, then, is: how do you ride the wave?

We carry data in our brains to make these estimates, but we can’t scale the way computers do. And data is only half of it: algorithms can compute at a scale that is impossible for humans.

Moore’s Law describes an exponential growth in computing technology, and these changes will only accelerate further.

The Soft AI Revolution

Programs like Watson are soft artificial intelligence (AI). They don’t have empathy or feel anything, but can win at Jeopardy. The game is now on for lawyers.

Historically, data review was an area that employed a lot of lawyers, but not anymore.

We are now creating a virtual supply chain in the legal profession. When we have an e-discovery problem, we are more likely to find an expert and outsource the problem rather than trying to figure it out internally. The law firm is now just the maestro, telling the violins of the various virtual legal services when to play. At the ABA Tech Show there are countless vendors, each selling one small slice and pushing innovation in their own respective spheres.

Eventually these changes will have to come about through law schools as well. The traditional top-level law schools won’t take on the risks, so don’t expect the innovation to come from them. Instead, you can expect an upstart school to lead this new revolution.

Prediction

Katz used the example of weather systems to demonstrate the predictive limitations of complex systems. Weather can only be accurately forecast about 10 to 15 days in advance. If you want predictions beyond that you need an almanac, which is not a high-grade form of prediction but still provides some useful information. For example, looking at June 18, 2012 in Toronto, we can fairly safely assume there will not be 16 inches of snow. In other words, if you have an almanac and someone else does not, that can be a big deal.

Lawyers are in the weather business, not the tide business. Tides follow a much more predictable pattern of cycles; the inherent variability of weather systems is what matters. We can predict certain things at certain times of the year, but at other times of the year we can only predict a level of chaos. Any properly constructed algorithm has to account for the same problems we find in these other areas. Insurance companies, for example, already have extensive databases on slip-and-fall claims to help set the price of premiums.

The quality of the inputs will always affect the quality of the predictions. The market does not always price things properly, which is why there are arbitrage opportunities for astute investors. Improving the quality of the inputs will improve the predictive quality of any algorithm.

Mental Models vs. Aggregation

Human beings are actually horrible at aggregating data with our brains. It’s far too time-consuming, and we don’t do it effectively or accurately.

We cannot read thousands of cases efficiently. Computerized systems will invariably demonstrate superior research and analysis.

Quantitative Legal Prediction

Quantitative legal prediction already exists. Over 10 years ago, studies examined the voting trends of the courts by pitting groups of law professors against an algorithm.

No surprise, the algorithm performed better than the accumulated expertise of the academics.
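To make the idea concrete, here is a minimal sketch of how such an algorithm might be trained to predict case outcomes from coded features. This is not the model from those studies: the features, the synthetic data, and the relationships between them are all invented for illustration.

    # A minimal sketch (not the model from the studies above) of predicting
    # case outcomes from coded features. All data here is synthetic.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 5000

    # Hypothetical features: log of claim amount, whether the plaintiff has
    # counsel, a jurisdiction code, and the judge's past plaintiff win rate.
    X = np.column_stack([
        rng.normal(11, 1.5, n),
        rng.integers(0, 2, n),
        rng.integers(0, 5, n),
        rng.uniform(0.2, 0.8, n),
    ])

    # Synthetic outcomes loosely tied to two of the features, for illustration only.
    p = 1 / (1 + np.exp(-(0.8 * X[:, 3] + 0.4 * X[:, 1] - 0.3)))
    y = rng.binomial(1, p)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = LogisticRegression().fit(X_train, y_train)
    print("Accuracy on held-out cases:", model.score(X_test, y_test))

In a real system the coded features would come from thousands of actual decisions, and a categorical feature like jurisdiction would be one-hot encoded rather than fed in as a bare integer.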

Disputes vs. Decisions

The potential of predictive technology is that it will compel more parties to settle. We will need to figure out bargaining in the shadow of the law by considering the strength of a case.

We already have this process in place, but how much is subject to aggregation? Machines are dumb, but they can scale far better than we can. There are lots of factors that matter, including time scales and complexity.

But we also have to be aware of other metadata that we may not have information on. We can’t make a model for something if we don’t know what the model is.

A standard client memo in the future may include a statistical portrait of 10,000 similar cases and their outcomes. Could it potentially become an issue of malpractice to not provide this information to a client to help them decide about their case? Could law societies ask lawyers why they didn’t pull a statistical report?
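As a purely hypothetical illustration of what such a statistical portrait could look like, the sketch below summarizes a made-up dataset of 10,000 similar cases; the column names, distributions, and figures are invented and carry no empirical weight.

    # Hypothetical "statistical portrait" for a client memo: how similar cases
    # resolved and what the awards looked like. All 10,000 records are invented.
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(1)
    cases = pd.DataFrame({
        "resolved_by": rng.choice(["settlement", "trial"], 10000, p=[0.9, 0.1]),
        "award": rng.lognormal(10.5, 0.8, 10000).round(2),
    })

    # Count, spread, and key percentiles of awards by mode of resolution.
    portrait = cases.groupby("resolved_by")["award"].describe(percentiles=[0.25, 0.5, 0.9])
    print(portrait)

The point is not the numbers, which are fabricated, but the form: a memo could pair the lawyer’s judgment call with a distribution of outcomes rather than a single anecdote.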

Assessing Similarity

Assessing similarity between cases is often the hardest part of building predictive algorithms. There is a science to similarity, one which has been applied repeatedly online: Amazon’s collaborative filtering, Google’s spell check, and the Music Genome Project behind Pandora. These systems have a vector of attributes and millions of clicks. There is no grand theory of similarity; the key is in the clicks. Essentially, these companies have outsourced the assessment of similarity to the crowd.

We already have citation systems that are essentially recommender systems: people who cite one source tend to cite certain others as well. The way we develop these distance or similarity functions is not unlike a Facebook friend wheel.
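One simple way to build such a similarity function, sketched below purely as an illustration, is to compare cases by the authorities they cite, using cosine similarity; the case names and citation lists in the example are placeholders, not real data.

    # Rough sketch: compare cases by the authorities they cite, using cosine
    # similarity over citation counts. Case names and citations are placeholders.
    import math
    from collections import Counter

    def cosine_similarity(cites_a, cites_b):
        a, b = Counter(cites_a), Counter(cites_b)
        dot = sum(a[c] * b[c] for c in set(a) & set(b))
        norm_a = math.sqrt(sum(v * v for v in a.values()))
        norm_b = math.sqrt(sum(v * v for v in b.values()))
        return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

    case_citations = {
        "Case A": ["Donoghue v Stevenson", "Mustapha v Culligan", "Hill v Hamilton-Wentworth"],
        "Case B": ["Donoghue v Stevenson", "Mustapha v Culligan", "Resurfice v Hanke"],
        "Case C": ["R v Jordan", "R v Morin"],
    }

    for other in ("Case B", "Case C"):
        score = cosine_similarity(case_citations["Case A"], case_citations[other])
        print(f"Similarity of Case A to {other}: {score:.2f}")

On this measure, Case B looks like a close neighbour of Case A while Case C does not; scale the same idea up to full citation networks and click data and you have the beginnings of a recommender system for precedent.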

Discussion

The question I had for Prof. Katz related to the quality of inputs, as I understand litigation in Ontario. The vast majority of personal injury and civil cases are issued with a jury notice, and consequently rarely result in a reported decision with a trial amount unless the case is appealed on quantum, costs, or an unrelated point of law. Furthermore, the vast majority of civil cases already settle, for amounts that are rarely disclosed to the public or to third parties. Gathering a large enough body of information to create a robust database capable of accurate predictive analysis would appear challenging in our jurisdiction, where we have a much smaller volume of cases than most American jurisdictions.

Katz responded by stating that there are already a number of assumptions litigators rely upon that could be tested through statistical analysis. Jury awards could be compared with non-jury awards, settlement figures compared with trial figures, and other variables involved in litigation could be tested as well.

Garry Wise graced us with his presence, well after the event had concluded, and had a number of interesting observations. The Ontario Court of Appeal has nearly codified the Spousal Support Advisory Guidelines (SSAG), which were originally designed as advisory (hence the name) to help judges make decisions. A similar trend may occur with the statistical determination of quantum of damages, thereby removing the need for litigants to go to court in the first place.

If this were to occur, these changes would not result in The End of Lawyers, but rather “The End of Judges” as we know them today, according to Wise. Quantitative legal prediction could help alleviate the pressure on the courts and transform legal practice completely. It will certainly not spell the end of lawyers, but it may signal the end of legal practice as we currently understand it.
