What makes CLEBC publications so popular with BC lawyers? There’s no doubt our talented editorial and production staff make a great contribution, but I believe the magic ingredient is the combined wisdom of our excellent authors and editorial boards.
Once the authors have made their submissions, their work is carefully reviewed—page by page—by the legal editor and the editorial board. Editorial board meetings usually involve spirited discussions of law and practice. The implications of recent decisions, the significance of new statutory requirements, a new practice or understanding that has arisen within the BC legal community—all are flagged and discussed to ensure accuracy. The final product is a reliable guide to the practice of law in BC.
So can that wisdom be captured by the algorithms used for artificial intelligence?
Information and speculation about the use of AI in law have certainly been a constant this past year. Along with everyone else, I wonder about the effect of artificial intelligence on our world of legal information.
Some key themes emerged:
- The algorithm is key, but it’s not neutral. Tim Knight has made a great contribution to this discussion in his columns here and here. He reminds us that there should be some accountability around the content of the algorithms used in AI. A recent column at 3 Geeks also calls for algorithm accountability. The author reports on research by Susan Nevelow Mart in which different legal research databases were searched using the same search terms and data content. The results were surprisingly (or alarmingly) inconsistent, with only about 7% of the cases returned in all six database results. Understanding the bias written into the algorithm is essential to knowing how accurate or effective it is.
- Understanding the size and quality of the data set is also essential to evaluating the reliability of the results; this is the topic of Robert Parnell’s excellent paper When Big Legal Data Isn’t Big Enough: Limitations in Legal Data Analytics. Omar Ha-Redeye also addresses this problem. For example, is there enough data coming from the BC courts and legislature to support effective AI? CanLII contains just over 20,000 decisions from the BC Court of Appeal (that’s every decision since 1990). The court’s 2015 annual report tells us that the court disposed of 850 appeals in 2015, 230 of which were criminal appeals. The small size of our data sets may mean that it will be difficult to get useful results from AI for our jurisdiction.
- There’s a tension between the general trend toward uniformity and the location-specific practice that often develops. We see this in BC when we discuss how law is practiced in downtown Vancouver compared to the smaller and more remote communities. As long as there’s practice specific to a particular jurisdiction, there will be a need for information about those practices.
On the other hand, changes in the way law is used and understood may make local practice less relevant. It’s become very easy to research law from other provinces or other countries. Greater access to information from other jurisdictions means that different ways of thinking about or resolving problems are more readily available. After all, dispute resolution professionals everywhere are looking for tools and ideas that will lead to early resolution, wherever those ideas come from.
- A couple of columnists have addressed the effect of AI on the work of lawyers and information professionals. To what extent will the human element still be required? Jamie Baker explains the three main levels of natural language processing, predicts that computers will not be able to master high-level pragmatic language processing (that is, mastering context) until about 2100, and cautions:
What we all must be cognizant of is the notion of “premature disruption”—losing valuable players (lawyers and librarians) before the technology is truly ready to replace them. ROSS, for example, is really good at PR, but stakeholders must be careful to avoid premature disruption. ROSS is a great tool that will augment lawyers and librarians, but it will not replace us. And it won’t replace us for the foreseeable future.
Meanwhile, Louis Mirando asks: what are the possible consequences of AI for law librarians? He speculates that they will need to develop different skills: “All librarians will need to improve their technological literacy, especially coding and text markup, better to understand and assist with the transformation of information into data.”
So will the capabilities of AI replace the insight we find in excellent secondary sources? At some point will the need for wisdom or agreed practice (as reflected in CLEBC publications) be eliminated or become irrelevant?
Of course, it’s too soon to say whether or when AI will replace secondary sources from a small jurisdiction. It may be that by the time high level pragmatic language processing is successful, CLEBC publications will be just a happy memory. On the other hand, our content may well become part of the data that is being examined.
(For a further and very interesting take on these issues, see Colin Lachance’s recent column Racing to Nirvana—Legal Research Edition and the robust discussion in the comments; his column was published just as I was completing this one.)