Of Cybernetic Shysters, Artificial Intelligence and Guardians of the Rule of Law

“Here I make an intelligent being out of a bunch of old wires, switches and grids, and instead of some honest advice I get technicalities! You cheap cybernetic shyster, I’ll teach you to trifle with me!”
And he turned the pot over, shook everything out onto the table, and pulled it apart before the lawyer had a chance to appeal the proceedings.

– The Cyberiad

Happy Monday! Like F. Tim Knight, I am getting back on the “blogwagon” this morning with an overdue post, also about AI, following the session I was a panelist on at the recent Canadian Association of Law Libraries Conference a few weeks back.

Artificial intelligence in the legal industry is about to explode. While some lawyers have their heads down, waiting for the fallout, more are oblivious, just sitting at their desks right beside those big old windows. Sadly, only a handful of legal types are even in the bunker, or anywhere near the trigger switch.

While global leaders in the AI movement proactively research AI safety (and in fact explicitly recognize that this entails “many areas, from economics and law to technical topics”), the leaders of the legal establishment trail behind. Go to a legal tech conference and you will notice conversations about AI are not so much proactive as paralytic. They centre on anxieties about job losses among lawyers. If there is any debate at all, it tends to be between skeptics, who doubt AI will achieve capabilities equivalent to a human lawyer in anything like the near future, and those with more terrifying visions of grim disruption. Neither of these conversations resembles planning.

I sensed the fear at the Legaltech conference in Toronto last year when one presenter introduced the audience to Ray Kurzweil’s law of accelerating returns, which anticipates a time not so long from now when thinking machines will outperform human intelligence. His tone was far from fear-mongering, but it is hard to talk omelettes without, you know, that part about the eggs.

At the ABA TECHSHOW earlier this year, Michael Mills, co-founder of Neota Logic (whom Tim also mentioned in this morning’s post), veered away from cautionary tales about strong AI to talk modestly about expert systems, which live farther down the AI food chain. Perhaps this decision was born of courtesy, given the discomfort AI wake-up calls tend to cause, but if you read Mills’ very informative three-part piece on the state of play in artificial intelligence and law you will notice that he is not that bullish on the “medieval master craftsman model” for legal services compared to whatever might emerge from the AI experimentation going on right now.

From experience, having presented on this topic at a couple of conferences here in my backyard (last year’s Pacific Legal Technology Conference and, a couple of weeks ago, the CALL/ACBD Conference), I would say law folks are at best pessimistic about their role in the AI revolution, and at worst in utter denial that it is even happening. I would also say that while the doubters are in decline, the conversion to camp fear is not always pretty.

Media coverage of AI relies heavily on fearful tropes from science fiction. A better alternative, I am beginning to think, would be for the legal establishment to start assembling its stakeholders to get a hand on that trigger switch before the whole thing blows. Or, to use the better analogy, to help steer the AI rocketship towards goals that support the Rule of Law and other public interests, and in accordance with safeguards that protect them. The legal profession, after all, is awarded its monopoly to sell legal services not because it is inherently privileged and entitled to live without fear of disruption, but because the democratic social contract holds that a monopoly is part of the bargain struck, in the best interests of society at large, to have an ethically principled, independent guild of knowledge workers protect the raft of civil rights, personal freedoms and democratic principles that keep law and order afloat.

Fear of the Cybernetic Shysters 

A somewhat humorous example of the legal profession’s angst around AI comes from a market research report released last year. The report triggered a rather hyperbolic article about how “artificial intelligence is on its way to, well, killing off lawyers.” The backstory is that Altman Weil, the consultancy that ran the survey, had asked influential law firm leaders back in 2011 if they could imagine a law-focused AI replacing human practitioners within five to ten years.

In 2011, 46% of those who answered could not see that kind of disruption ever happening, let alone within the decade.

When asked again for the 2015 survey, those skeptics had dwindled to 20.3% of respondents, who represented the top brass at large US firms. Put another way, almost 80% now think machines will be doing timekeepers’ jobs at some point between 2020 and 2025.

Survey graph (copyright: Altman Weil 2015)

The 3 Geeks and a Law Blog responded with a funny post titled Stop AI Madness, which chided:

Here’s the thing, the question is flawed on many levels, but primarily because none of the answers are correct. It’s just as wrong to believe that any of these jobs will be specifically replaced by computers as it is to believe that they will never be replaced by computers.

The correct answer is: AI will enhance, change, and restructure what it means to work in a law firm. It will change the nature of the work that lawyers and staff do. It may reduce the workload so that fewer individuals are needed, or it may make it possible for more individuals to do that much more work, but it is quite unlikely that people working in law firms in 5 or 10 years will be doing exactly what they do now.

[…]

It’s time to cut the hysteria surrounding artificial intelligence in law.

Cutting the hype is easier said than done, however.

AI-phobia has been a sci-fi staple since the advent of the genre. The excerpt at the top of this post is from a short story by Stanislaw Lem, first published in 1967. But earlier still, in the early 1940s, when Isaac Asimov wrote the Three Laws of Robotics, the amorality and otherness of machine intelligence was already a villainous prop. Asimov actually saw it as a clichéd theme he wanted to distance himself from. The Three Laws of Robotics would allow him to tell a story without playing out the cliché that when humanity plays God to produce intelligent, self-aware machines, these creations will invariably “turn stupidly on [their] creator for no purpose but to demonstrate, for one more weary time, the crime and punishment of Faust.”

But boy, is that cliché sticky. The legal media’s treatment of AI stories shows little has changed. The rather blunt visual metaphors in this collage of creepy robots and holograms accompanying recent news about ROSS and other advances in AI speak for themselves:


Perfunctory collage of robot lawyers. If you catch an article about the future of AI, make sure to look for the anvil-like subtlety of the stock art.

A self-aware, superintelligent AI lawyer may not have the box-office appeal of a space-crazy HAL 9000 or a nuke-happy Skynet, but the prospect of one elicits a similar dread in the hearts of the legal profession.

Beyond AI’s Faustian Tropes

The foreboding silhouette of machine intelligence is unnerving, especially as it now looms close. But there are more options than just ignoring its presence, or wringing our hands in helpless distress. On the larger stage, titans of science and industry are promoting a more sensible approach. It’s not all that complicated in principle: it’s essentially about taking the time to come up with wise precautions to manage AI before humans lose the initiative.

Stephen Hawking and others got attention with an open letter about research priorities for AI, specifically the need for policy and safeguards to manage the risks of increasingly capable AI. Elon Musk has signed the open letter along with thousands of other influencers, and has donated $10m to the Future of Life Institute in part to fund this research. Bill Gates has openly placed himself in the camp of those concerned that AI must be met and managed.

As Jaan Tallinn, a founding engineer of Skype and co-founder of the Future of Life Institute, puts it:

“Building advanced AI is like launching a rocket. The first challenge is to maximize acceleration, but once it starts picking up speed, you also need to focus on steering.”

As I mentioned above, these leaders explicitly refer to the legal dimension this work requires. One document has even begun to identify some of the law and ethics issues that are important for steering the rocket.

What we have not seen yet is any indication that this call to action has even registered with the legal establishment, even though it is the profession, the law societies and the judiciary (not the tech entrepreneur and scientific communities) who are the acknowledged champions of the Rule of Law. If I am wrong, please show me where that discussion is taking place. We certainly cannot steer anything if we do not acknowledge it even exists.

As we move past the I, Robot source material, we must also move past the purely existential questions about whom AI will replace among us (law librarians or legal researchers? junior associates? senior partners? law firms entirely?), and look to articulate cohesive goals related to the needs of Justice and the Rule of Law, not just the job types and business grooves of our medieval master craftsman model.

We should ask questions like:

  • If AI could perform all of the various jobs of a lawyer (research, drafting, negotiation, argument, advocacy), would the Rule of Law still be properly protected?
  • Whose duty is it to identify the interests of, and the impacts on, justice as the AI rocket gains power?
  • What are the goals of justice that we want to steer the AI rocket towards?
  • And, perhaps: who will steer the AI rocket if the profession, the establishment, the guardians of the Rule of Law won’t help do it?

Reports from the CBA’s Legal Futures Initiative represent a good start in this whole debate, but we could do more than just express our anxieties around AI. We could start looking at what our own role requires of us, and maybe start steering the rocket by setting forth goals.

I enjoyed reading Michael Mills’ three-part series from earlier this year, Artificial Intelligence in Law: The State of Play 2016, the third installment of which explains the whole series:

The first post in this series discussed current developments in Artificial Intelligence (AI) generally and its application to law. In the second post, we took a closer look at how legal research and ediscovery are being impacted by AI; and in this final installment, we’ll take a look at other AI tools and companies at work in areas of law today.

IBM’s Deep Blue computer beat chess grandmaster Garry Kasparov almost 20 years ago, and IBM’s Watson bested the human champions of Jeopardy! in 2011. Still, many doubted the significance of these achievements when it comes to real lawyer work. Machines excel at predictable, objective, positive, mathematical, consequential tasks. But humans will always reign over the realms of normative values, judgment, pluralistic and analogical reasoning, intuition and creative intelligence. Right?

Within the last few months (since the final installment of Mills’ State of Play series, in fact), two more hurdles have been cleared:

  1. In March 2016, Google’s DeepMind AI (AlphaGo) decisively beat one of the world’s best human players, Lee Se-dol, at the 3,000-year-old game of Go, “an ancient, abstract game” that according to one article I read is “of such staggering depth, nuance, and complexity that it’s long been considered impossible for computers to master.” More frightening still, the machine taught itself how to be good.
  2. In May 2016, ROSS, the artificially intelligent attorney based on the newest iteration of IBM’s Watson, landed a “job” with BakerHostetler in its bankruptcy practice.

This moves us closer to the age of artificial general intelligence, when AI algorithms will allow machines to truly learn and improve on their own. This is distinct from artificial narrow intelligence, which is where we have been sitting for some time with video games and now self-driving cars: not yet adaptive systems so much as reaction machines designed to follow pre-set rules and decision trees.
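To make that narrow-versus-adaptive distinction concrete, here is a minimal sketch in Python. The triage task, rules and thresholds are entirely hypothetical inventions of mine, not drawn from any real legal AI product; the point is only the difference in kind between a fixed decision tree and a system that revises its own rule from feedback:

```python
# A toy contrast between a "reaction machine" and a crude adaptive system.
# All rules and numbers here are hypothetical, for illustration only.

def narrow_triage(claim_value: int, has_precedent: bool) -> str:
    """A reaction machine: pre-set rules that never change."""
    if claim_value < 10_000:
        return "small claims"
    return "litigation" if has_precedent else "research"

class LearningTriage:
    """A (very crude) adaptive system: it revises its threshold from feedback."""

    def __init__(self, threshold: int = 10_000, step: int = 500):
        self.threshold = threshold  # starts as a guess, then moves
        self.step = step

    def route(self, claim_value: int) -> str:
        return "small claims" if claim_value < self.threshold else "litigation"

    def feedback(self, claim_value: int, correct_route: str) -> None:
        # Shift the boundary toward whatever the supervising human says.
        if self.route(claim_value) != correct_route:
            if correct_route == "small claims":
                self.threshold += self.step
            else:
                self.threshold -= self.step

if __name__ == "__main__":
    print(narrow_triage(5_000, True))        # always "small claims", forever
    triage = LearningTriage()
    triage.feedback(12_000, "small claims")  # a human corrects the machine
    print(triage.threshold)                  # 10500: the rule itself has moved
```

The first function will answer the same way forever; the second changes its own behaviour in response to experience, which is the (still embryonic) quality that separates adaptive systems from reaction machines.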

Robotic automation has already transformed manufacturing and assembly lines, for instance: work where the processes are linear and easy to define, and chaotic variables are minimal.

But human labour’s monopoly in more complex areas, like the trucking industry, is now up for review. Self-driving trucks could put millions out of work in the US alone. As AI fills the spectrum of automation, knowledge workers should not be too smug. Merely assuming that abstract, pluralistic modes of thought, analogy and argument will remain our domain may not be a solid enough anchor to ride out the storm.

Consider this example. Before DeepMind won four out of five Go matches, its opponent, legendary master Lee Se-dol, was said to be too “intuitive, unpredictable, creative, intensive, wild, complicated, deep, quick, chaotic” for the machine. Lee Se-dol himself said that “there is a beauty to the game of Go and I don’t think machines understand that beauty.”

Is Law Too Beautiful for Machines?

This might be a question of faith. It might also be a question of hubris. Michael Mills adds this:

Cognitive technologies in the law are riding a wave of ever-smarter algorithms, infinite scaling of computer power by faster chips and cloud-clustered servers, intense focus by companies led by seasoned experts, and an ever-greater demand from clients for cheaper, faster, better services.

Mills further observes, and I agree, that lawyers are “educated to precedent, alert to their peers, wary of failure and hence reluctant to experiment.” These are not forward-looking skills; they are backward-looking. Is this why the legal establishment is not engaging in proactive planning for AI? There is probably a whole study that could be done on the cognitive dissonance that hobbles our professional mindset around change. We are a very hype-resistant species of knowledge worker, not swayed by fads but also (some would say) hidebound by tradition.

The computer scientists, entrepreneurs and innovators leading AI are driving the hype, but they are also dictating change and disruption whether the legal establishment desires it or not. We in the establishment (lawyers and judges) will need to work contrary to our own programming if we are to see our own way forward and protect what matters.

We will also need to up our game in the AI debate: define what matters, and why anyone should care.

So, What Would the End of the Rule of Law Mean for Me?

The AI talk so far in legal circles has been occurring at the fringes, much of it pessimistic and fraught with self-interested worries like “when will AI come for my job?” Our institutions need to get together to re-examine how the legal establishment’s original mandate, as guardians of the Rule of Law, can be harnessed to meet the existential challenge that AI poses to that mandate, if indeed it can. The Future of Life Institute has begun investigating AI safeguards at the broader level. The law and policy shaping the development of AI are monstrously important, and cannot be left to the engineers and entrepreneurs alone.

I am optimistic that when we ask the right questions, and engage in the real debate (one hinged on fundamental topics like the Rule of Law, rather than the palaver over who is going to be fired first), we will still see a strong role for lawyers as users of AI. We may also find a more convincing argument in favour of our continued relevance, even after the machines are able to write our memos for us. I believe that if the legal establishment starts this discussion, we will also begin to strengthen our competitive advantage in the areas where we are strongest (ethics, conflicts of interest, law reform, etc.) and avoid the mistake of playing the Future game according to rules set by others with different skill sets (business consulting firms, accounting and the pure tech sector, to name a few).

In a follow-up post I will delve into what I think some of our near-term plays with AI might be, on issues ranging from copyright to specific legal professions legislation.

– Find Nate Russell on Twitter
