Column

Can Computer Programs Produce Legal Arguments?

“Today, we look for information. In the future, information will look for us,” says Dr. Ya-Qin Zhang, president of Baidu, one of China’s largest Internet companies and a leader in global artificial intelligence (AI).

AI systems have generated much speculation, and have many lawyers, including myself, wondering whether lawyers could be replaced by robots. Personally, I thought the headlines saying lawyers would be replaced by computers were a bit exaggerated. Take persuasion and negotiation, formulating legal arguments in court, or assessing the credibility of a witness, for example. I find it hard to believe that a computer could reproduce the cognitive work necessary for exercises such as these.

There is support for my perspective on the impact of AI on law: a 2013 Oxford study, The Future of Employment: How Susceptible Are Jobs to Computerisation? The authors considered new technologies such as machine learning and advances in robotics in their examination of 702 occupations, assessing the likelihood of each being replaced by computers. Specific to lawyers, the study drew the following conclusions and deemed the field of law less susceptible to computerization:

  • Legal writing will be automated but persuading will not (page 4);
  • The time required for paralegals and junior lawyers to do their tasks will be reduced by legal research tools that include text- and data-mining algorithms. Time will also be saved by computers that can review thousands of legal documents and use language analysis to identify legal concepts within them; and
  • Use of these tools will assist decision makers, like lawyers, to make more accurate and informed decisions.
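The kind of language analysis the study describes, scanning documents for phrases that signal legal concepts, can be sketched in a few lines of Python. The concept lexicon below is invented purely for illustration; real document-review tools use far more sophisticated language models.

```python
# Toy sketch of concept-spotting in legal documents: flag a document with a
# concept label when any of that concept's cue phrases appears in the text.
# The lexicon is a made-up example, not drawn from any real product.
LEGAL_CONCEPTS = {
    "indemnification": ["indemnify", "hold harmless"],
    "termination": ["terminate this agreement", "notice of termination"],
    "confidentiality": ["confidential information", "non-disclosure"],
}

def tag_concepts(document: str) -> set[str]:
    """Return the set of concept labels whose cue phrases appear in the text."""
    text = document.lower()
    return {
        concept
        for concept, cues in LEGAL_CONCEPTS.items()
        if any(cue in text for cue in cues)
    }

doc = ("Either party may terminate this Agreement upon 30 days' notice. "
       "The Receiving Party shall protect all Confidential Information.")
print(sorted(tag_concepts(doc)))  # ['confidentiality', 'termination']
```

A tool like this only finds surface phrases; the study's point is that even such shallow analysis, applied to thousands of documents at machine speed, saves junior lawyers substantial review time.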

Fast-forward four years to 2017. Recent articles on advancements in AI in the legal context have me reconsidering my position. First, there appears to be a rapidly growing area of research called argument technology that might eventually lead to a computer being able to argue, persuade, and negotiate. Second, it appears that scientists were able to design an algorithm that made a computer capable of passing a law bar exam, an incredible result.

A recent BBC article notes a surge in research into argument and persuasion, and asks: if AI can handle the most human of tasks, such as navigating the minefield of subtle nuance, rhetoric, and even emotion, could it take us on in an argument? Until I saw this article, I did not think we would see machines that could argue. The BBC article states that AI could “advance decision-making on everything from how a business should invest its money, to tackling crime and improving public health. Giving evidence is certainly a part of the process, but social rules, legal requirements, emotional sensitivities, and practical restraints all influence how advocates, jury members and judges formulate and express their reasoning.”

Research in argument technology began by thinking about how to model aspects of human arguments, and has been advanced by a rapid increase in the amount of data available to train computers in the art of debate. According to the article, “work is now underway to capture how such exchanges work and turn them into AI algorithms” and leverage “philosophy, linguistics, computer science, and even law and politics in order to get a handle on how debates fit together.”

Though the research is still in its infancy, it is far enough along that the BBC is partnering with the Centre for Argument Technology (ARG-Tech) to provide analysis of several debates, to be piloted in conjunction with BBC programming. ARG-Tech is based in the school of computing at the University of Dundee and at the Institute of Philosophy and Sociology of the Polish Academy of Sciences. Its researchers are interested in many aspects of argumentation and offer a free argument-analysis tool called OVA, or Online Visualization of Argument, which boasts 10,000 users.
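To give a flavour of what argument analysis involves, here is a minimal sketch of an argument represented as claims linked by support and attack relations. This data model is purely illustrative, an assumption of mine, and is not OVA's actual representation.

```python
# Illustrative data model for argument analysis: statements connected by
# "supports" (premises) and "attacks" (counter-arguments) relations.
from dataclasses import dataclass, field

@dataclass
class Statement:
    text: str
    supports: list["Statement"] = field(default_factory=list)  # premises backing this claim
    attacks: list["Statement"] = field(default_factory=list)   # counter-arguments against it

claim = Statement("The defendant breached the contract.")
claim.supports.append(Statement("Delivery was 60 days late."))
claim.supports.append(Statement("The contract made time of the essence."))
claim.attacks.append(Statement("A force majeure clause excused the delay."))

# With the structure made explicit, an analyst (or an algorithm) can ask
# structural questions: is every claim backed by at least one premise, and
# has every attack been answered?
print(len(claim.supports), len(claim.attacks))  # 2 1
```

This is the sense in which current tools analyze how well an argument is formed, mapping its structure, rather than inventing new arguments.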

Second, a CBC article reports that Randy Goebel, a Canadian professor at the University of Alberta, partnered with Japanese researchers to create an AI algorithm capable of passing the Japanese bar exam. If you consider the cognitive work that goes into answering bar exam questions, this is an amazing result. The article goes on to say the work is progressing: the team is now building AI software that “could weigh contradicting legal evidence, rule on cases and predict the outcomes of future trials”.

What is really interesting about the two examples above is that they demonstrate advances in AI in the legal context that, just four years earlier, the Oxford study had deemed highly unlikely. Further, we lawyers may be able to use AI software to improve the legal arguments we make, the way we present evidence in court, and the way we draft legal documents. Full AI is still best left to Hollywood, in my opinion. However, today and in the near future, AI has the potential to augment legal work and help lawyers make better decisions.

Comments

  1. Giving an answer to a question by a robot is a resulting ‘perfection’. The programmer has coded the answer(s) to match the question(s). That’s all.

    Even if the human user is selecting from multiple choices after the robot triages the general question into more specific ones, it is just like the automated/robot phone answering services we experience today for customer/technical service.

    Negotiation and persuasion can be part of an innovative or non-predictive response that the programmer did not code into the artificial intelligence/robot software. Human beings make that quantum leap to solve difficult problems with a nuanced understanding of a client’s personal history, culture, and language.

    A. Another analogy, from a discussion I got into with a computer coding geek who has some natural musical ear:
    Digital music vs. a high-quality instrument + performer:
    Digital music is perfect, but can sound “dead” without the nuance/quality of an actual instrument and the performance style of the musician.

    B. Since some of us make art on the side:
    Adjusting colours on graphics software is not at all the same as painting a picture with different unexpected colour mixes, hues and medium/paint viscosity in different parts of the picture. “Control” by the artist in graphics software (to me) is far more mechanical vs. the freedom to use tangible pigment on the canvas.

  2. The fact is, yes, AI is a tool. However, can AI replace lawyers in the future? Probably, depending on the society: robotic lawyers could function well in an autocratic society. Robots can be programmed to carry out the letter of the law, especially if the aim is to use law to suppress opposition or questioning of the law. It would depend on who controls information and how information is distributed.

    Is the above scenario far-fetched? Suppose information is fragmented into various databases, there is no means of universal search, and most people are skeptical about where information is located. Where the majority of knowledge or information is owned or possessed by a few entities and accessible or retrievable only by those who can afford access, does that make for a more progressive and democratized society? I’m not sure whether the argument is about robots versus humans or democracy versus autocracy. I remain hopeful, however, of a future where AI is used for a more balanced distribution of information and resources.

  3. Jean, you say, “Giving an answer to a question by a robot is a resulting ‘perfection’. The programmer has coded the answer(s) to match the question(s). That’s all.”

    With respect, you’re talking about programming in the classic sense as it has existed for decades. But that’s not how advanced AI works. The topic of discussion is software that applies an algorithm to learn: not just to give the answer coded to the question, but to figure out the questions.

    I think a better observation is that a human being processes masses of information about being human (a social being in a material world), and that processing happens with efficiency that is orders of magnitude greater than AI can muster. Hooray for biology.

  4. In my experience, most lawyers already lack the ability to craft a decent legal argument. Maybe the machines will be able to do better. But luckily I doubt it will happen before I retire. :-)

  5. I’m familiar with Professor Goebel’s work, and the software he helps develop cannot pass the Japanese bar exam. His software just comes closer than the others in that regard.

  6. For the moment, an AI decision-maker or litigator probably would make the answer to the question posed by application of Hanlon’s Razor more obvious.

  7. A. Lawyer: Still, one has to train the machine by asking the same legal questions. I agree with Verna that it depends on who asks the questions and who controls the answers.

    I expect a lawyer not to be overly linear. Just like an engineer in some areas of engineering: after studying all the soil conditions, the weather… it’s STILL an art to figure out how to place that building properly.

    See: https://quamproxime.com/2017/04/08/educating-for-the-ai-enabled-economy/

    “This reveals something about both the nature of AI and the nature of the division of labor in contemporary capitalism arising from industrialism. First, it helps us understand that intelligent systems are best viewed as idiot savants, not Renaissance Men. They are specialists, not generalists. As Tom Mitchell explains in the opening of his manifesto on machine learning:
    “We say that a machine learns with respect to a particular task T, performance metric P, and type of experience E, if the system reliably improves its performance P at task T, following experience E. Depending on how we specify T, P, and E, the learning task might also be called by names such as data mining, autonomous discovery, database updating, programming by example, etc.”
    Confusion about super-intelligent systems stems from the popular misunderstanding of the word “learn,” which is a term of art with a specific meaning in the machine learning community. The learning of machine learning, as Mitchell explains, does not mean perfecting a skill through repetition or synthesizing ideas to create something new. It means updating the slope of your function to better fit new data. In deep learning, these functions need not be simple, 2-D lines like we learn in middle school algebra: they can be incredibly complex curves that transverse thousands of dimensions (which we have a hard time visualizing, leading to tools like t-SNE that compress multi-dimensional math into the comfortable space-time parameters of human cognition).”
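    Mitchell’s point that machine “learning” means updating a function to better fit new data can be shown in miniature: a one-parameter line y = w·x whose slope w is nudged by each data point via gradient descent on squared error. (This toy example is my own illustration, not from the quoted article.)

```python
# "Updating the slope of your function to better fit new data," in miniature:
# a one-parameter linear model y = w*x, fit by gradient descent on squared
# error. Each (x, y) pair nudges w; nothing resembling human "learning" occurs.
def update(w: float, x: float, y: float, lr: float = 0.1) -> float:
    error = w * x - y          # how far the current line misses this point
    return w - lr * error * x  # gradient step on squared error

w = 0.0                        # initial slope: a flat line
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # points that lie on y = 2x
for _ in range(50):            # "experience E": repeated passes over the data
    for x, y in data:
        w = update(w, x, y)
print(round(w, 3))  # 2.0 -- the slope has converged to fit the data
```

In Mitchell’s terms: the task T is predicting y from x, the performance metric P is squared error, and the experience E is the stream of (x, y) pairs; the system “learns” because P reliably improves with E.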

  8. The argument technology research is aimed at examining the parts of an argument to understand how well it is formed; it is not at the point of actually crafting legal arguments, and I do not think we are at that stage. What interests me, though, is how much research activity there is in artificial intelligence generally. It is mind-boggling to see how quickly AI is developing in many areas, not just law.
    Secondly, I see all these distinct and separate areas of research as Lego pieces that at some point in the future may be stitched together to work as one. We can already see programmers stitching together separate cloud applications to create workflow solutions in the business context.
