Machines Regulating Humans: Will Algorithms Become Law?

Benjamin Alarie, Osler Chair in Business Law at the University of Toronto and CEO of Blue J Legal, gave a lunchtime presentation at Osgoode Hall Law School last Tuesday. The session was based on the paper “Regulation by Machine,” co-written with Anthony Niblett and Albert Yoon and delivered at the 30th Conference on Neural Information Processing Systems (NIPS 2016) in Barcelona, Spain.

The paper looks at how machine learning could be used to improve the regulation of human activity. This runs counter to the usual view of legal scholars, who have been “preoccupied” with the regulation of “automated systems.” The authors propose that using machine learning in this way is a “promising approach to regulation that has been under-appreciated thus far in the literature.”

Alarie had hoped to frame his remarks by playing us a short video on the evolution of Formula 1 racing video games, but as luck would have it (and as seems especially likely when you are speaking to a group about technology), the video did not work. I believe this is the video he would have shown us if the tech stars had aligned.

It’s a fascinating illustration of how the technology has evolved over the last 40 years, starting out with 8-bit black-and-white graphics and progressing to the realistic behind-the-wheel experience we see today. Alarie’s point was that we are at the level of the 1976 F1 game when it comes to machine learning and law.

He also noted that the time between milestones in the development of artificial intelligence has been compressing: IBM’s Deep Blue at chess (1997); Watson at Jeopardy! (2011); DeepMind’s AlphaGo at Go (2016); and Libratus, the poker-playing AI system that recently beat four top-class human poker players at no-limit Texas hold ’em (2017).

Machine learning has the potential to “unlock the common law system,” Alarie said. It is well suited to the task because of the built-in feedback loop of shared precedents established in case law, which creates a unique and evolving data set in each area of the law. For human agents to use this today, however, we need to curate the data, observe what we have, and apply it to the new situation. Machines can look at all of the case law at once and analyze everything at the same time. The outcome of this process is that the algorithms used will essentially “become the law.”

He explained it like this: “Law is what the courts do; if you predict what the courts do, you’re predicting the law.” Blue J Legal has achieved a 90% accuracy rate for fact-based dispute resolution. The example he provided was based on the relationship test established in Wiebe Door Services Ltd. v. M.N.R., which evaluates whether someone like a massage therapist should be taxed as an employee or an independent contractor. These determinations are expensive and time-consuming for humans to make, but machine learning algorithms can consider the entire corpus of case law in minutes.

The facts for the evaluation would be collected using a questionnaire derived from the prevailing legal test. The resulting analysis would include a summary explanation along with references to similar cases, which helps address the “interpretability problem”: we humans need a level of confidence in the system, so we want to see how and why it arrived at its results. These results then inform our decision on whether or not to take the case to court.
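To make the workflow concrete, here is a minimal sketch of the idea in Python. It is not Blue J Legal’s method (their models and features are not public); it assumes a hypothetical, hand-made set of past cases scored on Wiebe Door-style factors, and uses a simple nearest-neighbour vote so that the prediction comes packaged with the most similar precedents, which is what supplies the interpretability described above.

```python
from math import sqrt

# Hypothetical past cases: questionnaire answers (1 = factor points to
# "employee", 0 = points to "contractor") plus the court's actual outcome.
PAST_CASES = [
    ("Case A", {"control": 1, "owns_tools": 0, "chance_of_profit": 0, "risk_of_loss": 0}, "employee"),
    ("Case B", {"control": 0, "owns_tools": 1, "chance_of_profit": 1, "risk_of_loss": 1}, "contractor"),
    ("Case C", {"control": 1, "owns_tools": 1, "chance_of_profit": 0, "risk_of_loss": 0}, "employee"),
    ("Case D", {"control": 0, "owns_tools": 1, "chance_of_profit": 1, "risk_of_loss": 0}, "contractor"),
]

def distance(a, b):
    """Euclidean distance between two fact patterns over shared factors."""
    return sqrt(sum((a[k] - b[k]) ** 2 for k in a))

def predict(answers, k=3):
    """Predict the outcome for a new fact pattern and return the k most
    similar past cases, so the user can see *why* the system decided."""
    ranked = sorted(PAST_CASES, key=lambda c: distance(answers, c[1]))
    neighbours = ranked[:k]
    votes = [outcome for _, _, outcome in neighbours]
    prediction = max(set(votes), key=votes.count)
    return prediction, [name for name, _, _ in neighbours]

# A new fact pattern gathered from the questionnaire:
new_facts = {"control": 1, "owns_tools": 0, "chance_of_profit": 0, "risk_of_loss": 1}
outcome, similar_cases = predict(new_facts)
print(outcome, similar_cases)
```

A real system would of course use far richer features and a trained model, but the shape of the output (a classification plus the precedents that drove it) is the point.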

In this sense, the algorithms that help us make these decisions are codifying the law in computer code. Alarie said it is a way to operationalize the standards used in the law. There will still be “boundary cases” that require human intervention. Overall, however, legal costs would be reduced, the consistency of decisions would improve, and we would end up with fairer and more equitable judgments, with less human bias and error.
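The “boundary case” idea can be sketched as a simple routing rule: a hedged illustration, not anything the talk specified, in which a stand-in confidence measure (here, the share of similar past cases agreeing on the outcome) decides whether the matter is resolved automatically or flagged for human review.

```python
def route(votes, threshold=0.8):
    """votes: outcomes of the most similar past cases, e.g. ["employee", ...].
    Returns a routing decision: automated when the precedents clearly agree,
    human review when they are split (a "boundary case")."""
    top = max(set(votes), key=votes.count)
    confidence = votes.count(top) / len(votes)
    if confidence >= threshold:
        return ("automated", top, confidence)
    return ("human_review", top, confidence)

print(route(["employee"] * 9 + ["contractor"]))               # clear-cut pattern
print(route(["employee", "contractor", "employee", "contractor"]))  # boundary case
```

The threshold itself is a policy choice: set it high and more disputes go to humans; set it low and more are decided by the machine.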

A very lively Q&A followed the presentation, and I will report on that discussion in a follow-up post.


  1. Certain simple algorithms are already used for lower-risk, step-by-step business processes to run an automated quality assurance check, ultimately saving everyone time and money. For example, our organization has developed a simple online checker for basic infrastructure drawings. It checks for certain required symbology to meet a very basic drawing submission standard. It isn’t true machine learning, but it red-flags applications and gives an estimated charge from our organization if the person does not revise their drawing to meet the standard.

    Applied in the legal world, the most palatable use would be an application process with automated screening that gives the client or applicant an estimated cost and additional effort. However, there needs to be an easy way online to escalate more complex AI assessments directly to a consultation with a human being. A final decision provided strictly by AI is problematic, because certain areas of law need human assistance and certainty: there can easily be human fallout from, for example, divorce or child custody decisions with far-reaching implications if an AI decision is provided without human interpretation and advice.

    More than ever, considering AI points to the reality that large areas of law exist within cultural contexts.

    However, AI may be useful for speeding up research analysis when drafting changes to the law. It would otherwise take time-consuming, fine-grained case law analysis to pin down accidents where drivers killed a cyclist yet were never fined, or were fined very little. Yet in some European countries the approach is that a driver needs to prove they were not at fault in killing or injuring a cyclist.

    I made my comment re concerns on another post: