The International Association for Artificial Intelligence and Law (IAAIL) is a non-profit organization that has organized academic conferences on artificial intelligence and law every two years since 1987.
The most recent International Conference on AI and Law (ICAIL) was held this June in Montréal, QC, hosted by the CyberJustice Lab at the University of Montréal Faculty of Law. I had the opportunity to attend, presenting the results of my work as part of the ABA Innovation Fellowship program in 2018/2019, which was sponsored by the Canadian legal practice management software company Clio.
The conference is an academic event. It features people who are computing academics, people who are legal academics, and people who are both. Attendees also included a considerable contingent of practicing lawyers, and representatives of software companies that sell AI services to them.
With the conference a few weeks in the rear-view mirror, here are a few thoughts that stand out for me today.
“Explainability” is the Brass Ring
Right now, in the academic AI and law community, the next big goal is explainability. Tools that can make predictions using machine learning techniques and good data are no longer news. But the techniques we have can’t explain or justify those predictions in the way that the legal system anticipates. Techniques for explanation or justification are a very active area of research across all kinds of AI, but in AI and law they are a major preoccupation.
Older, rule-based AI techniques have a great deal of explainability built in, but they have challenges in terms of power and usability for legal practitioners. There were people at the conference trying to solve those problems in innovative ways, too, and they have made impressive progress.
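To make the "built-in explainability" point concrete, here is a minimal sketch of how rule-based reasoning yields a justification for free. The rules, facts, and threshold are invented for illustration; this does not reproduce any system presented at the conference.

```python
# Minimal rule-based reasoning sketch with a built-in explanation trace.
# The rules and facts below are hypothetical, invented for illustration.

RULES = [
    # (conclusion, premises) -- the conclusion holds if all premises hold
    ("eligible_for_benefit", ["is_resident", "income_below_threshold"]),
    ("income_below_threshold", ["income_under_30k"]),
]

def infer(facts):
    """Forward-chain over RULES, recording why each conclusion holds."""
    derived = {f: "given as a fact" for f in facts}
    changed = True
    while changed:
        changed = False
        for conclusion, premises in RULES:
            if conclusion not in derived and all(p in derived for p in premises):
                derived[conclusion] = "follows from " + ", ".join(premises)
                changed = True
    return derived

def explain(goal, derived):
    """Walk the trace back down to the supporting facts."""
    lines = [f"{goal}: {derived[goal]}"]
    for conclusion, premises in RULES:
        if conclusion == goal:
            for p in premises:
                lines.extend(explain(p, derived))
    return lines

derived = infer({"is_resident", "income_under_30k"})
print("\n".join(explain("eligible_for_benefit", derived)))
```

Because every conclusion carries the rule and premises that produced it, the "explanation" is a byproduct of the inference itself, rather than something reconstructed after the fact, which is exactly the property machine learning predictors lack.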
In Law, “AI” and “Machine Learning” are Not the Same Thing
Speaking of older AI techniques, in almost any other context, you would expect an AI conference to be swamped with people talking about machine learning, reinforcement learning, deep learning, adversarial systems, and all of the data-based AI techniques that have become what people think of as AI.
But it seems that the legal problem domain is different. Machine learning approaches were discussed in only about half of the conference papers. A surprisingly large part of the conversation at ICAIL was about rule-based reasoning techniques, ontology, and similar methods that would have been familiar ground for the people who presented at the inaugural ICAIL in 1987.
“SkyNet” this is Not
Here’s a sampling of some of the papers that were presented, restated as “the thing I’m trying to get the computer to do [better].”
- Write headnotes for cases.
- Be able to tell the difference between the next page in the same document, and the first page of a new document.
- Find and use synonyms for words used in legislation.
- Model obligations that arise after a more important one has been violated.
- Find the case that supports a decision, and the paragraph in that case that states the ratio.
- Find the section of statute that applies to a decision.
That’s not an exhaustive list, but I hope it’s representative. If you reread that list as “things artificial intelligence cannot yet do adequately,” you realize that the scope of what AI is capable of doing in the legal domain, right now, is quite limited.
Interestingly, there are things that AI can do in the legal realm that human beings cannot. Those include the ability to detect signs of bias, or over-consideration of irrelevant factors, in large databases of decisions. Significant work on that topic was presented at the previous ICAIL in London, and I was disappointed to see little in the same vein this year.
Rules as Code is Gaining Momentum in Canada
Staff from Transport Canada and the Canada School of Public Service participated on a panel on artificial intelligence and the administrative state, discussing a “Rules as Code” pilot project they are undertaking. Several other federal government staff attended the conference as well.
With the federal government taking the lead, expect governments across the country to start asking themselves whether laws should be written so that they are easier for both people and computers to use. For details on Rules as Code, see my last column.
There is a Generational Shift, and a Communication Gap
There is a new generation of law and technology practitioners, advocates, and scholars coming into the field, but they were mostly not at ICAIL. Some, like Legal Hackers organizer and Stanford CodeX fellow Jameson Dempsey, were there, asking questions about how to bridge that gap.
But I left concerned.
Unless academic conferences like ICAIL can effectively engage with these young turks who publish on GitHub and not in peer-reviewed journals, they risk losing relevance in the future.
On the other side of the same coin, there is a wealth of wisdom in organizations like IAAIL that is effectively hidden behind ivory-tower walls. Academic journals and blog posts (including this one) cannot do the job.
There is a lot to gain if we can find a way to get academics and practitioners to talk to one another.
Change is Slow, Fruit Hang Low, Long Way to Go
It is frustrating to think that the freely accessible technologies Canadian tax expert David Sherman used 34 years ago, when he presented his LLM thesis work at the inaugural ICAIL, have barely changed at all.
But the other thing that hasn’t changed is that there is a lot of low-hanging fruit here. There are real opportunities to make life better for Canadians, and to enhance access to justice. In fact, the technology has improved, and the opportunities to use it are only greater.
Maybe after 34 years, it’s time to start treating technology for access to justice as an issue of justice infrastructure, and stop waiting for altruists or capitalists to bridge the gap. Because we haven’t moved much, and there is a long way to go.