R.I.P. Van Winkle?

While many might have given up on AI in law because of its special challenges, interest is stirring nearby. IBM’s Watson has ingested millions of pages of journals, medical evidence and patient records, and is allegedly better at diagnosing cancer than human doctors. Meanwhile Google has been buying up the world’s top machine learning experts and their companies, the latest being UK-based DeepMind. That is partly explained by the fact that Google’s engineering director Ray Kurzweil is a relentless inventor and an artificial intelligence pioneer. While Google helps make us smarter now, Kurzweil believes that AI will come to supplement our mental capacities more directly. It could be a guardian angel to the infirm, and both speed up and extend the reach of the expert.

The recent surge in interest in AI stems from three developments:

  1. Increases in computing power.
  2. The development of programming styles better suited to AI.
  3. The redefinition of AI’s goals towards the achievable, and away from the vague ideal of building a thinking machine.

The goal of an artificial intelligence, or expert system, is to make the knowledge of an expert available when the actual expert is not. Because it is usually very difficult for experts to articulate the steps they take to solve a problem, it is almost impossible to build an expert system with standard programming techniques.

Modern expert systems are split into two parts: the knowledge base and the inference engine. The knowledge base consists of human knowledge, painstakingly put into a form which can be manipulated by the other part of the program. Ideally, the inference engine is completely separate from the knowledge base, and can thus be used with different knowledge bases (such an engine is called an “expert system shell”). The inference engine is a program which follows a rigidly prescribed sequence of steps laying out how the knowledge base is to be manipulated.

The most popular form for the knowledge base is the production rule, usually of the form: IF (some situation) THEN (some action). This approach is based on psychological research of the 1950s and 1960s.
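To make the two-part structure concrete, here is a minimal sketch (not FINDER or any actual system – the rule names are invented for illustration) of a production-rule knowledge base and a forward-chaining inference engine in Python:

```python
# Knowledge base: IF (all conditions hold) THEN (assert conclusion).
# Kept separate from the engine below, as the article describes.
RULES = [
    ({"found_on_premises", "finder_not_occupier"}, "occupier_may_claim"),
    ({"goods_unclaimed", "finder_took_possession"}, "finder_has_possessory_title"),
    ({"finder_has_possessory_title", "no_better_claim"}, "finder_keeps_goods"),
]

def infer(facts, rules):
    """Forward-chaining inference engine: repeatedly fire any rule
    whose conditions are all satisfied by the known facts, until no
    new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Example: a finder takes possession of unclaimed goods and
# no one has a better claim.
derived = infer({"goods_unclaimed", "finder_took_possession", "no_better_claim"}, RULES)
print("finder_keeps_goods" in derived)  # True
```

Because the engine knows nothing about the content of the rules, the same `infer` function could be reused with a completely different knowledge base – the “shell” idea described above.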

FINDER is an expert system which gives legal advice on problems involving a finder’s right to possession of unclaimed goods. Expert systems can be built at different levels. FINDER requires the user to make decisions based on legal precedent; for instance, a user may have to decide whether a certain party was the “occupier” of premises. Later versions may give the system the ability to decide that question itself. The use of legal expert systems is likely to go through five phases:

  1. By professional lawyers in areas where they are not entirely up to date.
  2. By paralegals where the amounts involved in a matter preclude the expense of a fully qualified lawyer.
  3. Directly by non-professional users – possibly to determine whether it is worth consulting a lawyer.
  4. By non-professional users to replace advice from professionals.
  5. To resolve disputes: it will become an electronic judge.

In considering the last phase, one should remember that, because of the costs involved, for many the choice may be between machine justice and no justice at all. Particularly when systems are used by non-professionals, the advice given by expert systems is likely to be accepted uncritically.

One thing I need to mention, and I wonder whether anyone would otherwise have noticed my time travel: apart from the first paragraph, the above was written in 1986. It was an abstract I wrote of an article called “Will justice fall to bits?” by Dr Alan Tyree, published in the Current Affairs Bulletin of March 1986 at pages 13-18. The author is a respected researcher into the legal application of artificial intelligence techniques.

One day the professional standard of care may include consulting an expert system. Expert systems should be used as a starting point to help the lawyer analyse a case or make a decision. Not all a lawyer’s expertise can be embodied in the systems, nor is every area of law suited to expert systems.

One of the most advanced expert systems created to address a legal problem is SAL, which stands for System for Asbestos Litigation. It evaluates plaintiffs’ claims for asbestos exposure. The system uses the facts of a particular case, along with the expertise and problem-solving techniques extracted from attorneys and claims adjusters, to produce a dollar value for the case. Conceivably, a lawyer who settles an asbestos case for a value below that recommended by SAL could be sued for malpractice.

Oops, there I go again: the above two paragraphs I wrote in 1987 as an abstract of an article by Nancy Blodgett in the ABA Journal of 1 January 1987 at p. 70. The title was “Artificial intelligence comes of age”, which was probably the only part one could take issue with.

Other comments by Nancy included:

  1. The more legal expertise captured in an expert system, the more profitable and efficient a firm will be.
  2. They speed up an expert’s own process or bring expertise to a lawyer who doesn’t have it.
  3. Law firms can charge the client less but increase profitability.
  4. One day the professional standard of care may include consulting an expert system.

I suspect that in the age of NewLaw, those comments will now be taken more seriously than they were.

Despite the publicity at the time, the enthusiasm of pioneers, and numerous tools, the blame for the lack of uptake cannot lie solely at the feet of lawyers and clients.

In The Modern Law Review of March 1986 at pages 168-194, a bloke by the name of Susskind highlighted some of the many problems which help explain why legal AI has produced far fewer positive results than comparable efforts in other disciplines:

  1. No commercially available, satisfactorily operating expert systems in both statute and case law.
  2. No guidelines available to overcome the problems already successfully tackled by pioneers.
  3. Minimal jurisprudential input to the field; much of the work has been produced from a computational perspective.
  4. The prototypes cannot be instructed in natural language, but require input in a computer language or very restricted English, and/or fixed responses to questions asked of the user.
  5. Neglect of interfacing expert systems with existing computerised legal information retrieval databases.

The problem with rule-based systems was the effort required, not only in development but also in the less obvious task of maintenance – all of it expert-intensive. The breakthrough has been the bottom-up “machine learning” approach, as opposed to top-down “expert teaching”. It took us a while to achieve it, but once the machines get it, there will be no stopping them, maybe literally. Think of the exponential potential of “life long learning” for machines.
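To illustrate the contrast, here is a sketch of a bottom-up learner – the classic 1R algorithm, which induces a single IF/THEN rule from labelled examples instead of having an expert write and maintain it. The case data below is entirely invented for illustration:

```python
def one_rule(cases, outcome_key):
    """1R learner: for each feature, build a rule mapping each of
    its values to the majority outcome, then return the feature
    whose rule makes the fewest errors on the training cases."""
    features = [k for k in cases[0] if k != outcome_key]
    best = None
    for f in features:
        table = {}
        for c in cases:
            counts = table.setdefault(c[f], {})
            counts[c[outcome_key]] = counts.get(c[outcome_key], 0) + 1
        rule = {v: max(counts, key=counts.get) for v, counts in table.items()}
        errors = sum(1 for c in cases if rule[c[f]] != c[outcome_key])
        if best is None or errors < best[2]:
            best = (f, rule, errors)
    return best[0], best[1]

# Hypothetical past case outcomes (made up for this sketch).
cases = [
    {"on_premises": True,  "claimed": False, "winner": "occupier"},
    {"on_premises": True,  "claimed": True,  "winner": "occupier"},
    {"on_premises": False, "claimed": False, "winner": "finder"},
    {"on_premises": False, "claimed": True,  "winner": "finder"},
]
feature, rule = one_rule(cases, "winner")
print(feature, rule)  # on_premises {True: 'occupier', False: 'finder'}
```

On this toy data the learner picks `on_premises` as the most predictive feature, effectively discovering a rule an expert would otherwise have had to hand-code – the bottom-up versus top-down distinction the paragraph describes.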

All the recent talk of disruption in the business of law is nothing compared to what lies not far ahead as the pace of change accelerates dramatically. As Susskind often points out, the only uncertainty will be time’s winged chariot, no doubt driven by Google.


  1. David Collier-Brown

    I suspect the machine-learning systems will be utterly frustrating to work with. As they don’t “know” how they arrived at a decision in logical terms, they’re unable to explain why something might be true. Telling a judge that “the mechanical guru says my client is in the right” is not going to be terribly helpful (:-))

    We’ll probably need some kind of backwards-searching tool that can take a pronouncement by the tin guru and figure out the possible paths that could have got it there.

  2. Ricardo Barrera

    Won’t the judge be the mechanical guru?

  3. David J. Bilinsky


    I agree with you. With Moore’s Law marching on towards ever increasing computing power and computational methods increasing as well, we are on a path towards greater and greater power to analyze more and more complex problems. I disagree with David Collier-Brown’s point that the system would not be able to explain why it arrived at a decision. It is a closed system and as such it should be able to log/document each step in the process.

    We have already reached the point where virtually any garden-variety laptop or desktop can outplay us at chess, and supercomputers can outperform us at games such as Jeopardy. We use other supercomputers to attack complex problems such as weather prediction and nuclear reactions.

    Currently Tianhe-2 is ranked as the world’s fastest supercomputer on the TOP500 list, having achieved a performance level of 33.86 petaflops on June 16, 2013 (Wikipedia). Tianhe-2 performs 33.86 thousand trillion floating point operations per second.

    Given that the TOP500 list tracks 500 ever-faster supercomputers (with lesser machines dropping off), it seems inevitable that this ever-increasing computing power will eventually munch up all sorts of problems currently on the ‘too hard’ pile. I suspect that analyzing legal problems will be only a matter of time.



  4. “[v]irtually any garden variety laptop or desktop can outplay us at Chess” – for what it’s worth, I don’t know the rules or techniques of chess; in other words, I really don’t know how to play chess, yet I have won against the computer on a number of occasions on the 2nd level of difficulty. Where does that leave us?

  5. I tend to agree with Dave Bilinsky’s conclusion but not, I think, with his reasons. It’s true that computing “power” is continuing to increase (and, I think, continuing to increase exponentially); but we’re still at the stage of more or less “brute force” problem solving. IBM’s Watson is extremely “knowledgeable” but not very smart, even though guided by interesting algorithms. The trick will be to enable “a computer” to think and not just check its databanks recursively. We might never get there — though I suspect that we will. Anyone interested in this now-neglected notion, once called AI, might read an Atlantic piece about the redoubtable Douglas Hofstadter (he of Gödel, Escher, Bach fame) and his lonely work on computer thinking.