
7 Reasons to Celebrate Legal Research and the Fact That It’s Here to Stay

In his book What Technology Wants, Kevin Kelly (the founding executive editor of Wired magazine) writes:

History is rife with cases of misguided technological expectations from the inventors themselves. Thomas Edison believed his phonograph would be used primarily to record the last-minute bequests of the dying. The radio was funded by early backers who believed it would be the ideal device for delivering sermons to rural farmers. Viagra was clinically tested as a drug for heart disease. The internet was invented as a disaster-proof communications backup. Very few great ideas start out headed toward the greatness they eventually achieve.

We don’t seem to have the Edison phonograph problem with AI. The de facto thinking is that it can replace any cognitive task done by humans, and the more tedious the task, the stronger the “technological expectation” that it will be replaced. With that in play, legal research has been singled out as one of the prime targets of a robot takeover. This post is about my belief that a takeover is not quite the right way to think about what AI will bring to research.

Two warnings:

  • My last posts were complete bores, so this one is meant to be more irreverent and festive. Think of it as office Christmas party talk—very geeky party talk.
  • After reading an earlier version of this post, my colleague Sarah Sutherland wrote: “My only criticism of your Slaw column is that I think you miss acknowledging the simple things – applying rules to straightforward issues, automated compilation of draft documents, etc. that AI will make a big difference on. You are betraying your background as a big firm lawyer by assuming big fancy problems are the main problems in law.” As always, Sarah, you are obviously right. I had tried to address this with the second endnote below and other caveats. Fewer and fewer problems will need a bespoke (trying to score a few points on my Richard Susskind bingo card here) intervention by a lawyer, but my guess is that there will still be a significant market for custom work in law, and this post celebrates the fact that research will remain a significant part of that.

1. “Not all those who wander are lost”

Sometimes your search doesn’t immediately return what you want, but what’s not immediately relevant is not always just noise. It might be exactly where you find the out-of-the-box analogy that will help you win a difficult argument or steer the law in a new direction, or that makes you realize there was an entire area you had ignored with your earlier framing of the research question. And you would have missed it had you delegated the research completely and filtered out these results in the first place.

Legal research has an element of serendipity, and that’s not inherently bad. To quote Tolkien’s poem “Song of Aragorn”:

All that is gold does not glitter,

Not all those who wander are lost;

The old that is strong does not wither,

Deep roots are not reached by the frost.

I suspect that one area where AI will provide more than a merely incremental gain over today’s keyword searching is in predicting the areas in which you should search, or anticipating your next search.

We’ll attribute more value to the AI that provides ideas and hunches about what to do next than to the one that seeks to deliver answers in well-rounded packages.

2. Research provides more than “just” answers to the main legal questions in the case

When I was still in practice, I almost always had a window open in Word and a window open in CanLII (most of the time) or on other services (the extremely rare times something wasn’t on CanLII). I frequently stopped mid-sentence when writing a legal document and, even for the most basic principles in my field, asked myself “Is this right? Let me CanLII that just to make sure”. This iterative back and forth between drafting and searching generally doesn’t require more than keyword searching in a current legal database. I think I used search services at least as much for this as for researching the big issue of the merits of the case.

3. Research helps drafting

Sometimes research helps drafting. Countless times I stopped, again mid-sentence, and asked myself “Is this the right formulation? There must be a case where a learned judge said that more elegantly than my mere mortal self”. A search engine that tried too hard to provide a definitive answer to a legal problem would be as annoying as asking C-3PO a simple question only to get a 500-word answer in the most obsequious language. [1]

4. It’s often more important to find questions than answers

The biggest problem with the idea that research can be completely separated from the rest of legal work and delegated to an AI-based service is that it oversimplifies legal matters [2]. Most of the time, in my experience at least, you start by searching the initial question, only to discover that there are many more questions to search and think about. Research is an iterative process, not a visit to the Oracle. You might have the best AI-powered search, but somebody will still need to look at the results and likely search again with new parameters, or reformulate, or search something completely different. You might not be using search tools the same way, and you might be using completely new tools, but you’ll still be conducting legal research, and to that I say: “Good for you”.

5. Law is not the only thing that changes the law

I’m confident that, one day, AI will be good at identifying trends in decision-making and new ideas generated in the literature, and will use this to predict upcoming changes in the law. For now, AI will (at best) return the law as it is currently understood, based on what has already been done in the field. Saying that AI will soon eliminate the need for lawyers to conduct research is pretty close to saying that the law never changes and that it is not influenced by anything other than what has already been said in past cases.

It’s a truism that societal changes, science, and clever lawyers finding good arguments in different fields, jurisdictions, or legal systems can influence the evolution and application of legal concepts. This is what lawyers are supposed to do: understand the world and how it changes, and steer the law accordingly. As such, we will still need to question the answers an AI system provides as potentially too conventional and insufficiently informed about the context in which it operates [3]. When you challenge the answer AI gives you as disconnected from the world, change the parameters of your research accordingly, and iterate, you will be doing legal research. Again, good for you.

6. Sometimes, you need to find “Stranger Things”

Countless times in my practice I searched for the one decision that said something very specific and gave me a possible argument against a whole body of cases that I found subtly inapplicable to my situation. AI will get better at dealing with edge cases, but I suspect its tendency to generalize will often be a nuisance in this kind of hunt.

By definition, AI is a model: an approximation based on a series of weights given to a certain number of factors. It took 10 million cat pictures for computers to build a model allowing them to identify cats [4], and cats haven’t really changed for millennia. In comparison, Canadian courts issue about 40,000 decisions a year, and the law is a moving target. There are about 50 top-level subject titles in the Canadian Abridgment, so on average courts issue about 800 decisions a year in each field (not all fields of the law are equally jurisprudentially rich, so many fields will be thinner than this). Even with access to the entire court record and to all the content in a firm’s KM system, the relative scarcity of legal data means there’s going to be a lot of generalization if you use AI to build a general-purpose robot researcher. Most of the time, this generalization is good. But sometimes the last thing you want is the product of generalization. No amount of AI would have been able to tell Dustin (from Stranger Things) what Dart was, because Dart is an edge case.
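For the quantitatively inclined, here is a back-of-envelope sketch (a minimal illustration in Python, using the rough figures above as inputs, not actual training-data requirements) of how thin per-field legal data is next to the famous cat corpus:

```python
# Back-of-envelope comparison using the column's rough figures.
# These are illustrative inputs, not actual training-data requirements.

CAT_PICTURES = 10_000_000      # images behind the 2012 cat-recognition experiment [4]
DECISIONS_PER_YEAR = 40_000    # approximate Canadian court decisions per year
SUBJECT_TITLES = 50            # top-level subject titles in the Canadian Abridgment

# Average yearly output per field of law.
decisions_per_field = DECISIONS_PER_YEAR / SUBJECT_TITLES
print(f"Decisions per field per year: {decisions_per_field:.0f}")  # ~800

# Years of decisions a single field would need to match the cat corpus,
# ignoring that the law (unlike cats) keeps changing under the model.
years_needed = CAT_PICTURES / decisions_per_field
print(f"Years to match the cat corpus: {years_needed:,.0f}")  # ~12,500
```

On those (admittedly crude) numbers, a single field of law would need millennia of decisions to rival the cat corpus, which is the point: the model will have to generalize.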

To stick with the Netflix theme, there’s a Black Mirror episode that provides a good analogy (skip this paragraph if you don’t want spoilers). It starts with a couple driving to their cottage. “How Deep Is Your Love” plays on the radio. The guy likes it and hums along. The girl says “but you hate disco”. He answers, “Yes, but I like this one song”, and he proceeds to enthusiastically sing it by heart. The guy dies shortly after this. While she’s in mourning, a friend convinces her to order an AI-powered replica of him (created by analyzing all their chat logs, social media profiles and Skype calls). She likes the robot at first, but then she starts noticing flaws. At some point she takes the robot for a drive. “How Deep Is Your Love” plays on the radio. The robot says something like “I hate this song”. The AI had missed the quirky exception to the deceased’s general hatred of disco, and it’s the last straw that makes her decide to get rid of it.

Most of the time, I was looking for the “How Deep Is Your Love” element of my case. This is where I provided value, and I would have been annoyed by any tool insisting on a model answer when what I needed was precisely the tiny exception that the model generalizes away.

7. It’s like doing your own stunts

We sometimes hear suggestions to the effect that research is tedious. My introduction suggests as much. I think this is mostly wrong.

I (of course) know many lawyers, old and young, and the best ones spend the most time doing their own searches. Steve McQueen did his own stunts (except, disappointingly, the very best one), and the Steve McQueens of the law do their own research. They frequently delegate research, but never before spending some time searching themselves, to help them think about the case and identify the areas where more research time is needed. They genuinely like this activity, which has the potential to give them fresh ideas or help them stumble on the cases that end up providing the key argument. Most of their time spent doing it is absolutely indistinguishable (at least for the purpose of segregating it into distinct time entries) from time spent doing legal reasoning. They often look as excited as Indiana Jones in the “X marks the spot” scene.

Yes, there’s sometimes grunt work in research. This happens most often, I found, when not working with such a Steve McQueen type of lawyer. Better technology could eliminate some, though probably not all, of it: there will still be cases where you need to turn over all the proverbial rocks. That part isn’t the best, but most of the time, research is one of the most fun parts of the process.

It may well be that clients are now more reluctant to pay for hours of “grunt” research (or any research, for that matter). Stuff you can’t bill, if not inherently despicable, can become a drag. But I suspect the problem is that we have (1) delegated and organized research too poorly for too long, and (2) once clients figured out (1), overcompensated by conceding too easily that research is completely distinguishable from legal reasoning and, as such, easy to delegate to machines.

Setting aside billing, targets, and the frenzy and constant emergency mode imposed on lawyers (and most professionals) these days, research is one of the most enjoyable parts of legal practice, as it should be. It’s the activity that can help you find both the silver bullet in the other party’s case and the Bullitt in you.

To go back to Kevin Kelly, whom I cited in my introduction, he also writes:

The predictivity of most new things is very low. The Chinese inventor of gunpowder most likely did not foresee the gun. William Sturgeon, the discoverer of electromagnetism, did not predict electric motors. Philo Farnsworth did not imagine the television culture that would burst forth from his cathode-ray tube. Advertisements at the beginning of the last century tried to sell hesitant consumers the newfangled telephone by stressing ways it could send messages, such as invitations, store orders, or confirmation of their safe arrival. The advertisers pitched the telephone as if it were a more convenient telegraph. None of them suggested having a conversation.

(…)

We make prediction more difficult because our immediate tendency is to imagine the new thing doing an old job better. That’s why the first cars were called “horseless carriages.” The first movies were simply straightforward documentary films of theatrical plays. It took a while to realize the full dimensions of cinema photography as its own new medium that could achieve new things, reveal new perspectives, do new jobs.

It’s obvious that AI will bring many new tools to legal work and research (it already does), but our tendency to see it initially as something that will merely do an old job better is very similar to Kelly’s examples in the quote above. Cars did not just replace horses; they led to highways, increased mobility, drive-ins, motels, the democratization of family vacations at the beach, suburbs, traffic jams and air pollution. AI in legal search (and in the law, and everywhere else) will lead to completely unforeseeable concepts and strategies, and will create new problems. Research doesn’t seem to me like something that will ever become outdated as a way to find the ideas that will lead to these new concepts and solve these problems.

[1] At the risk of looking like someone who reads one thing and one thing only (Wired and its writers), the October 2017 edition of the magazine has two pieces on how search and, more generally, interacting with information systems can be made annoying by presumptuous or overly chatty bots. I didn’t find them online (at least not somewhere I would link to), so I risked a DMCA notice and scanned them here and here.

[2] At least those entrusted to lawyers. What type of problems should be entrusted to lawyers in the first place instead of being solved (or prevented) with AI-based systems is an entirely different and valid question.

[3] And this might be perfectly fine. As Kevin Kelly, again, writes, “if you want a human mind, make a baby.” I will risk a political joke by saying that there may be a market in the U.S. for a deliberately “context-independent” AI among originalists.

[4] That was more than five years ago, so it might be much less now at the pace at which AI progresses: http://www.nytimes.com/2012/06/26/technology/in-a-big-network-of-computers-evidence-of-machine-learning.html
