Who Dunnit? Artificial Intelligence and Unauthorized Practice

OK, I’m going to talk about AI and unauthorized practice in just a second, but first…

Who can resist those stories with the teen genius? The wunderkind trope. That Dutch teen with the Boomy McBoomface contraption setting out to heal our polluted oceans. That Mark Zuckerberg fella circa 2004, with the other face thingy.

Who is not in awe of an uncalloused mind lit by bedazzling precociousness and disarmingly naive ambition?

Take Joshua Browder, for instance. He’s surely that kid—our teen wonder—for legal automation. He taught himself to code at age 12 and first came to glory two years ago (at the age of 18) when he launched a website to help UK motorists fight parking tickets. Here was a kid who built a simple website tool. All you had to do was enter your name and a penalty charge number, click the least unlikely Hail Mary excuse from a list of 12 possible reasons why (or ways how) you didn’t park your car against a fire hydrant or whatever, and his website generated your appeal in a matter of seconds.

Would it work? Maybe… even probably? (how’s 64% odds?) Would it cost you to try? Nope!
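To demystify the trick a little: a tool like this needn’t involve any deep AI at all. Here is a minimal sketch of a rule-based appeal generator in Python, assuming nothing about Browder’s actual (non-public) code; the excuse codes and letter wording below are invented purely for illustration.

```python
# Hypothetical sketch of a template-filling appeal generator, in the spirit
# of Browder's parking-ticket tool. Excuse codes and letter wording are
# invented here; the real DoNotPay logic is not public.

APPEAL_TEMPLATES = {
    "signage_unclear": "the parking restriction was not clearly signposted at the location",
    "meter_broken": "the payment meter at the location was out of order",
    "medical_emergency": "I was attending to a medical emergency at the time",
    # ...nine more canned excuses would round out a list of 12
}

def generate_appeal(name: str, penalty_charge_number: str, excuse_code: str) -> str:
    """Fill a boilerplate appeal letter from three user inputs."""
    if excuse_code not in APPEAL_TEMPLATES:
        raise ValueError(f"unknown excuse code: {excuse_code}")
    return (
        f"Dear Sir or Madam,\n\n"
        f"I am writing to appeal penalty charge notice {penalty_charge_number}. "
        f"I believe the penalty was issued in error because "
        f"{APPEAL_TEMPLATES[excuse_code]}.\n\n"
        f"I respectfully request that the charge be cancelled.\n\n"
        f"Yours faithfully,\n{name}"
    )

print(generate_appeal("Jane Doe", "PCN12345678", "meter_broken"))
```

The mechanics matter for what follows: if plain template-filling like this counts as “drawing” a legal document or “giving legal advice”, the unauthorized-practice question stops being hypothetical.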

If he wasn’t the Spartacus of his generation, he was at least the Spartacus of Camden Council’s by-law enforcement division, and later those of other cities. He vanquished millions of dollars in fines. And then, looking more like an actual crusader, he set his sights on fighting homelessness and evictions in the UK.

Since that time, the young Londoner has hopped the pond, enrolled at Stanford, and brought—in a stroke of dramatic irony—the revolution to the colonies. New York and Seattle have since fallen to the automaton fury of his DoNotPay chatbot. So too has Equifax come under siege. The robot lawyer will even automatically sue Equifax for you in any of the 50 states… or at least fill out some of the forms.

It may not be a “panacea”, but Browder’s bots have begun to take a bite out of, or at least show teeth to, a big problem. You know, that Access to Justice one?

The problem suffered by folks who can’t or won’t pay lawyers?

This is where we come in, Dear Reader. Folks with that monopoly to practice.

I sometimes write about artificial intelligence, and muse on its impact on the legal services sector. Last year I left some questions hanging in a post called “Of Cybernetic Shysters, AI and Guardians of the Rule of Law”.

One question that I had in mind back then was “Can robots even practice law?” Sure, in an existential sense, but what about in a merely definitional one?

But the better question still: “Even if a robot can practice law, is that even prohibited by our regulatory scheme?”

So when The Vancouver Sun ran the following headline last week:

Entrepreneur launches Robot Lawyer chatbot in four Canadian cities

… I took notice.

On its surface the article is about how Browder plans to unleash an army of cybernetic shysters to eat the backlog of unprofitable legal problems that we jealously hoard here in Vancouver and elsewhere in Canada. Sounds scary, right?

But at the root of the article, I see it’s actually a story about what the Law Society of BC plans to do about it. Or better yet, what they even can do without changing the law around unauthorized practice.

Actually, I think we might be in trouble.

Currently in BC, Part 2 of the Legal Profession Act says this (emphasis added):

Authority to practise law

15 (1) NO PERSON, other than a practising lawyer, is permitted to engage in the practice of law, [except the discrete list of those who are not lawyers but can anyway]

[…]

(6) The benchers may make rules prohibiting lawyers from facilitating or participating in the practice of law by persons who are not authorized to practise law.

You will notice I liberally emphasized the words “NO PERSON”? It doesn’t say robots or computers.

Well, perhaps the ambit of the term “practice of law” might save us?

No. Earlier, in the definitions, the LPA explains what is and what is not meant by the “practice of law”. It does include:

(a) appearing as counsel or advocate,

(b) drawing, revising or settling [the usual legal docs]

(c) doing an act or negotiating in any way for the settlement of, or settling, a claim or demand for damages,

(d) agreeing to place at the disposal of another person the services of a lawyer,

(e) giving legal advice,

(f) making an offer to do anything referred to in paragraphs (a) to (e), and

(g) making a representation by a person that he or she is qualified or entitled to do anything referred to in paragraphs (a) to (e).

It does not say:

(h) installing or causing to be installed a computer program that is capable of doing anything referred to in paragraphs (a) to (g), unless [carefully worded exceptions]

What I’d ask readers to reflect on are two questions:

  1. Do you agree that the LPA has no hope of shoehorning a chatbot, i.e. some computer code processing inputs, within the plain and ordinary meaning of the word “person”?
    Because if so, we have a problem.
  2. Has anyone considered something like the above addition to the definition of practising law?

Paul? I’m asking you especially!

– Find Nate Russell on Twitter

Comments

  1. For now, as Browder’s focus is on traffic tickets (I assume minor infractions such as speeding and parking), this would appear to affect paralegals more than lawyers. I’m curious whether this technology has had any effect on the paralegal profession and/or market; since paralegals are a lower-rate profession, any such effect should be evidence of the possible effect on lawyers as the technology develops.

  2. Liability for the results of automation presents a number of potential theoretical issues. But AI and robotics aren’t (yet) self-initiating. People put these systems into place and use them (or offer them) to perform functions. As a practical matter, I don’t think courts will hesitate to impute legal responsibility to those people for the functions performed (including, in appropriate circumstances, the unintended consequences of those functions).

  3. On Keith’s point that a court would hold responsible the person who controlled/programmed/made available the software for any damages arising from bad chatbot advice… I agree.

    But tort liability, and especially the possible ways common law could be shaped via the courts to address bad results from AI, is different from regulatory scope.

    Here I was thinking about the limited powers of the regulators who enforce the Legal Profession Act — specifically their statutory powers to tackle persons “engaged in” the listed acts that amount to “practice of law”.

    Obviously “engaging” is one concept that needs to be clarified. And obviously I can use a special tool to engage in an activity. If you call me and your phone makes noises that sound like legal advice, and if I’m making those same sounds with my mouth on the other end… it’s still me engaging in the practice of law. It’s not the phone.

    But how remote are we prepared to get? And in AI, who is driving the intelligence? Right now, the specialist AI systems are seeded with actual human expertise and data, so maybe we can say it’s those people and companies (IBM, Google, etc.) who are “engaged” behind the machine’s actual “doing of the thing.”

    But what are we (I mean regulators tasked with protecting the monopoly on legal services) going to do when the next-gen DeepMind AI bypasses the step of requiring human knowledge to research an opinion? What happens when the learning algorithms achieve superhuman performance with no human input?

    I mean, at some point the concept of a “person” being “engaged” just like any tool-user simply won’t apply to the applications of AI.

    So back to the 7 positive acts listed as “practice of law”. I think we need an eighth that describes the programming, installation or configuration, etc. of AI to do the other seven.

  4. When AI becomes truly autonomous and begins interacting autonomously with humans, it will require legal personhood and human oversight in order to integrate effectively with society at large. The corporation already provides an entire framework for giving this autonomous AI a legal personality (for the purposes of liability, I’m thinking one corporate entity per autonomous AI) and human oversight (in the form of a board of directors).

    In any event, any AI will consist of software that runs on hardware that is the property of some person (individual or corporate) who will bear the risk of legal liability in the usual, age-old way.