Applying Artificial Intelligence in the “Legal Aid Space”

“I’ve never been more optimistic or more excited about where we’re going with technology than I am today. This is just an absolute, I mean, we’re in the middle of a revolution and I think it’s going to completely impact how we deliver legal services. And I actually think that in this space, in the nonprofit legal aid space, we’re going to make great strides in this, and kind of lead the way in many ways.” –IV Ashton


IV Ashton is a Chicago-based attorney and founder of LegalServer, a “web-based case management platform for the legal aid community.” He’s also involved with an initiative called Houston.AI, which began with the realization that emerging technologies based on artificial intelligence and machine learning had tremendous potential to improve access to legal services. As Ashton says in a Houston.AI promotional video, there was a realization that “this is the moment in time when we can apply that technology and actually make a substantial difference.”

In March Ashton gave a presentation to the Legal Services National Technology Assistance Program that has recently made its way to their YouTube channel. He begins with a very accessible primer on artificial intelligence, reviewing the effects of Moore’s Law, improvements in sensor technology, and the emergence of the Internet of Things, all of which combined essentially add up to a world full of intelligent, connected objects.

He observes that “a lot of people are really nervous about what’s happening in our society and how quickly things are changing.” One way we can look at it, he says, is to understand activities in terms of what humans do best compared to activities that are better suited for machines to do. He refers to the H+ Humalogy Scale (developed by Future Point of View) as a way to evaluate this comparison. Take a look at this FPOV blog for more information on this idea.

Ashton says, “Artificial intelligence is not a technology that is coming: it’s here.” However, there is “artificial intelligence” and then there is what’s called “artificial general intelligence.”

Artificial general intelligence would mean you could, “talk to a machine and the machine understands what you’re saying, has emotional intelligence, [and] is able to detect … what feelings you’re having and give you an answer back … you could ask it anything. It’s like talking to a human being.” That form of artificial intelligence is not likely to appear until at least 2029.

There are “expert systems” where humans predefine the logic present in the system. And expert systems will continue to be an important part of problem solving. Then there is “machine learning” where the machine acquires “information and rules for using information.”

“… they literally learn [to] reason. They use rules to derive conclusions and they … self correct. So, that means that computers are basically writing their own programs now. And the efficiencies that we’re getting by using something like machine learning and learning algorithms is huge in this space.”

Question-answering platforms like IBM’s Watson are what’s characterized as “artificial narrow intelligence.” This consists of a number of “single distinct tasks” that are “chained together” in series. There are many examples of that kind of artificial intelligence around today.
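The “single distinct tasks chained together” idea can be sketched in a few lines. This is a toy pipeline invented for illustration; the step names and rules are stand-ins, not anything described in the talk.

```python
# A sketch of "artificial narrow intelligence" as a chain of single,
# distinct tasks. Each function is a crude stand-in for a real model;
# the point is the series composition, not the steps themselves.

def transcribe(audio):
    # Stand-in for speech-to-text: here, just normalize the text.
    return audio.lower()

def classify(text):
    # Stand-in for an intent classifier (hypothetical keyword rule).
    return "eviction" if "evict" in text else "other"

def route(intent):
    # Stand-in for a routing step acting on the classified intent.
    return "route to housing unit" if intent == "eviction" else "general intake"

def pipeline(audio):
    """Narrow tasks chained in series, Watson-style."""
    return route(classify(transcribe(audio)))

print(pipeline("My landlord is trying to EVICT me"))  # -> route to housing unit
```

Each link in the chain is narrow on its own; the apparent intelligence comes from the sequence.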

Ashton talks about different types of machine learning:

  • “supervised learning” (where humans are involved and provide examples) also known as classification systems;
  • “unsupervised learning” (things are clustered together based on their relation to other things and math is used to sort and figure things out); and
  • “deep learning” (using neural nets that attempt to mimic how human brains work).
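The supervised/unsupervised distinction in the first two bullets can be shown with toy data. Everything here is invented for illustration (the numbers, the labels, the tiny algorithms); no ML library is used.

```python
# Supervised vs. unsupervised learning on toy 1-D data.

def nearest_neighbor_classify(labeled, point):
    """Supervised: humans supplied (feature, label) examples; a new
    point is classified by its closest labeled example."""
    return min(labeled, key=lambda ex: abs(ex[0] - point))[1]

def two_means_cluster(points, iterations=10):
    """Unsupervised: no labels; points are grouped purely by their
    relation to each other (a tiny 1-D k-means with k=2)."""
    c1, c2 = min(points), max(points)
    for _ in range(iterations):
        a = [p for p in points if abs(p - c1) <= abs(p - c2)]
        b = [p for p in points if abs(p - c1) > abs(p - c2)]
        c1, c2 = sum(a) / len(a), sum(b) / len(b)
    return a, b

# Supervised: examples labeled by a (hypothetical) human reviewer.
labeled = [(1, "eviction"), (2, "eviction"), (9, "benefits"), (10, "benefits")]
print(nearest_neighbor_classify(labeled, 8.5))  # -> benefits

# Unsupervised: same numbers, no labels -- the math finds the groups.
print(two_means_cluster([1, 2, 9, 10]))  # -> ([1, 2], [9, 10])
```

The classifier can only answer with labels a human provided; the clusterer discovers structure but can’t name it, which is exactly the trade-off the bullets describe.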

He says that one way to think about this is to think of a continuum with supervised learning on one side and neural nets on the other. Supervised learning means you can explain the answers you get, but with neural nets no one can explain why an answer might be right or wrong. Not a particularly comforting thought.

Artificial intelligence is very useful for prediction: for example, the likely outcome of a case, or the likelihood of recidivism. There are now “litigation banks” that have analyzed cases and can tell you which will generate the most money, or how many hours it might take to handle a case …

“… based on who’s assigned to it and what’s the level of service of the case and the type of case and the legal issues in the case, and then we’re also trying to predict how many days will this be open. So that actually is starting to get more narrowed down now in terms of being able to predict a capacity of an organization.”
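The kind of capacity prediction quoted above can be sketched as a regression over historical cases. The data, the single feature, and the model below are all invented for illustration; real systems would use many features (case type, assignee, level of service) and far more than four cases.

```python
# Predicting days-a-case-stays-open from hypothetical historical data,
# using one-feature ordinary least squares in pure Python.

def fit_line(xs, ys):
    """Least squares for one feature: y ~ slope * x + intercept."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

# Invented closed cases: (number of legal issues, days open).
issues = [1, 2, 3, 4]
days = [30, 50, 70, 90]

slope, intercept = fit_line(issues, days)
predicted = slope * 5 + intercept  # forecast a new five-issue case
print(round(predicted))  # -> 110
```

Summed over an organization’s open caseload, forecasts like this are what let you start “predicting the capacity of an organization.”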

He cautions that algorithm bias remains a problem and he refers to Cathy O’Neil’s book Weapons of Math Destruction as a good source for an overview of the problem. We need to be careful, because data can be inherently biased.

Ashton then provides a few promising examples of what these technologies can do.

He starts with Casetext. This is a “predictive service” where you can,

“… upload a document, a legal brief to it, and it will tell you what the likely case law and arguments from the other side are going to be or what you missed … based on your jurisdiction and based on that you know in this brief that you’ve written. Of all the observations that we’ve seen before people have cited these cases and you didn’t cite these cases; or, you cited these cases and no-one’s ever cited those cases before and then you know [what the] opposite of that is and [that] the response brief is likely going to cite these arguments.”

He also mentions the Toronto-based company Thistoo, “Ontario’s #1 divorce resource.” They’ve analyzed about 58,000 cases and use factors like income, number of children, the length of the marriage, etc., and “based on data from other cases that are similarly situated, tell you what the likely outcome of your case is, how long it’s going to take, and how much it’s gonna cost.”

Another important role that artificial intelligence can play is trying to figure out when a person needs a lawyer. For example,

“[if] you get public housing and you’re being evicted, it’s a lot more important that you get an attorney perhaps than if you have private housing. Because the consequence of losing [is] not only are you out of your apartment but you’re also out of the subsidy that allowed you to pay for that apartment.”

He finishes up by talking about three applications of artificial intelligence and natural language:

  • Natural language processing: e.g. using classification to figure out what the legal issues are. This takes into account whether you’re a client or a lawyer and provides that perspective. In victim-of-crime cases, for example, algorithms are being developed to identify the type of victimization, e.g. sexual assault, domestic violence, financial fraud, etc., and then determine whether a case is eligible for funding under the Victims of Crime Act (VOCA).
  • Natural language understanding: there are more effective translation algorithms that use entity extraction to better understand the meaning and context of the words in a sentence. Ashton mentions an earpiece that’s under development that will be able to translate a language for you. For example, he says, “getting into a cab in Paris and saying something in English and having the cab driver understand it in his or her natural language and then saying something back and having you understand that in your natural language.” Entity extraction can also identify types of documents: “is this an eviction notice?, is this a pleading?, is this a citation?, is it a charging document in a criminal case…?” and then, “if it’s an eviction notice who’s the landlord in this notice?, what’s the address in this notice?, who’s the tenant?, how much do they owe?, when do they have to move out?” The implications of this for improving the legal aid intake process are significant.
  • Natural language generation: this is somewhat new; instead of understanding language, i.e. taking unstructured data and trying to apply structure to it, natural language generation does the opposite; the machine takes structured data and tries to create something else, some “writing” that looks like it was written by a human. Ashton says, “there are a lot of examples.”
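The entity-extraction questions in the second bullet (“who’s the landlord?, how much do they owe?”) can be illustrated with a toy extractor. The sample notice and the regex patterns below are invented; production systems use trained NLP models, not hand-written patterns.

```python
# Pulling structured fields out of a (hypothetical) eviction notice
# with regular expressions -- a crude stand-in for entity extraction.
import re

NOTICE = """NOTICE TO VACATE
Landlord: Acme Properties LLC
Tenant: Jane Doe
Amount owed: $1,250.00
You must move out by 2024-06-01."""

def extract_entities(text):
    patterns = {
        "landlord": r"Landlord:\s*(.+)",
        "tenant": r"Tenant:\s*(.+)",
        "amount": r"Amount owed:\s*\$([\d,\.]+)",
        "move_out": r"move out by\s*(\d{4}-\d{2}-\d{2})",
    }
    # Keep only the fields actually found in the document.
    return {name: m.group(1).strip()
            for name, pat in patterns.items()
            if (m := re.search(pat, text))}

print(extract_entities(NOTICE))
# -> {'landlord': 'Acme Properties LLC', 'tenant': 'Jane Doe',
#     'amount': '1,250.00', 'move_out': '2024-06-01'}
```

An intake system that answered even these few questions automatically would save a screener from re-keying every notice by hand.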

“You might be surprised to know that a lot of sports articles, written on certain websites, are written by computers. So, when the game ends a computer generates a narrative that is published online that you might read. And you say like, ‘Oh wow, you know, the Cleveland Cavaliers won a thrilling game against the Chicago Bulls and LeBron James scored 32 and had a buzzer beater,’ or whatever. And you’re reading it as though you would read any sports article, and all that’s being generated by a machine.

…when this hits in the law it is going to completely change everything. Forget about asking the questions, here’s the draft based on, who the judge is, what type of case it is, what we know about this case, this is your best argument we think, and that computer will have written it and then you’ll edit it, and the efficiencies gained by that …”
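The structured-data-in, narrative-out direction Ashton describes can be sketched with a simple template. Real NLG systems (and the draft-pleading scenario he imagines) are far richer; the box score and wording below are invented for illustration.

```python
# Natural language generation, toy version: structured game data in,
# sports-recap prose out.

def game_recap(game):
    return (f"The {game['winner']} won a thrilling game against the "
            f"{game['loser']}, {game['winner_score']}-{game['loser_score']}. "
            f"{game['star']} scored {game['star_points']} points.")

box_score = {
    "winner": "Cleveland Cavaliers", "loser": "Chicago Bulls",
    "winner_score": 98, "loser_score": 96,
    "star": "LeBron James", "star_points": 32,
}
print(game_recap(box_score))
```

Swap the box score for case metadata (judge, case type, issues) and the template for argument text, and you have the shape of the machine-drafted brief he predicts.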

Ashton concludes by suggesting we might do well to think of artificial intelligence as “intelligent assistants, or an intelligent assistant, something that is going to make us better at what we do, make us more efficient and hopefully allow us to serve more people, because that’s what we’re ultimately after.”
