Thursday Thinkpiece: Litigating Artificial Intelligence

Periodically on Thursdays, we present a significant excerpt, usually from a recently published book or journal article. In every case the proper permissions have been obtained. If you are a publisher who would like to participate in this feature, please let us know via the site’s contact form.

Litigating Artificial Intelligence

Authors: Jill Presser, Jesse Beatson, Gerald Chan
ISBN: 978-1-77255-764-0
Publisher: Emond Publishing
Page Count: 368
Publication Date: May 2021
Regular Price: $149 (softcover)

Excerpt: Chapter 17: Coda–Litigating in an Era of Thinking Software [Footnotes omitted]

Artificial intelligence (AI) has migrated from our imaginations into government departments, courtrooms, consumer products, and litigation tools. We are said to be living in an “age of automation,” but this undersells AI’s potential. The word “automation” conjures images of machines engaged in rote manufacturing processes. AI is far more sophisticated. Increasingly, AI systems can learn and adapt without interventions by human programmers. A more apt description is that we are in an era of “thinking” software.

AI is today’s professional reality. Complex tasks that historically required human thinking and judgment are now outsourced to AI. Doctors analyze radiology and dermatology images with the assistance of AI. AI tools assess bank loan applications, university rankings, and job candidates. Government departments and the financial world could not operate as they do today without the assistance of AI. Lawyers are following this trend, and many now use AI to predict court outcomes, uncover patterns in documents, and reveal litigation risks in contracts. Courts and public agencies are experimenting with AI as a resource to inform judgments about criminal sentencing, social benefits, and immigration, among other high-stakes mandates.

To believe that one’s job description is immune from AI outsourcing likely commits the “AI fallacy”: the belief that the only way machines can be intelligent is by emulating the thought processes of human specialists. AI churns out answers, but it gets from point A to point B differently than a human would. Time will tell if AI systems will replace or complement human workers or simply occupy a different niche in the workforce altogether.

Moving forward, AI will permeate all aspects of our lives. It will be more common to live in “smart homes” that record and respond to the behaviours, habits, and preferences of their occupants. Hospitals are becoming technologically savvier, and AI health bots could eventually join hospital and clinic staff to dispense medicines to patients, take x-rays, and perform ultrasounds. Fully self-driving cars and delivery drones will become the norm. Facial recognition technology will continue to be used by police forces and citizens. Ultimately, we may even see “general AI,” akin to C-3PO from Star Wars or Samantha (AI personal assistant) from Her, that is conscious, communicative, and self-aware. The possibilities of inviting these inventions into our lives are nearly boundless.

As AI advances, we may stop seeing it as “just another tool in the toolbox of IT experts.” AI will only get better at making its own decisions, as opposed to predictably following basic programming. If AI is no longer a direct extension of human will, legal paradigms will need to evolve.

Where do we assign legal and ethical responsibility for harms caused by AI-based products? Courts are far from reaching a consensus on how to allocate liability between an AI manufacturer and a product user where damages occur. To date, legal responsibility has not been assigned to AI systems themselves. Courts will inevitably have to grapple with software that acts autonomously and makes decisions without human intervention.

Some have speculated that AI may eventually be given an elevated status under the law. One of the more radical propositions is granting “personhood” to AI. A range of legal rights flow from personhood. If AI were ever granted legal personality, akin to the legal fiction that corporations are “persons,” this could have wide-ranging implications for legal regimes and principles, as well as for society broadly.

AI is already making an impact on our world, which is why it is important for us to be alive to how AI functions and “thinks.” Notably, the public sector uses AI to render legal judgments and inform operational and policy decisions. AI can do magical things. But we don’t always know how the magic works. If we want to audit a spreadsheet sum, we can easily review the formula used. However, AI, particularly machine learning, does not leave the same clearly readable footprints. Given the “black box” nature of some AI systems, we might not have the capacity to dissect their results or recommendations. This characteristic of AI may be incompatible with legal principles demanding transparent decision-making.

Canada has been relatively slow in deploying AI in courts, tribunals, and other government departments. This presents Canadian lawyers and jurists with the opportunity to study and learn from other jurisdictions, particularly the United States. Advocates and community members have raised issues about AI systems that determine the allocation of home assistance for adults living with a disability (Idaho); detect children at risk of child abuse and neglect (Pittsburgh); flag unemployment insurance fraud (Michigan); provide analytics to police departments about areas where crimes are likely to occur (Los Angeles); and support judges in making bail and sentencing determinations (Wisconsin). Many of these systems have made errors that resulted in litigation. An issue that is bound to vex courts and governments is how to achieve transparency and oversight of AI-aided decisions.

From a practitioner’s perspective, AI is also transforming what it means to provide legal services in areas such as e-discovery, contract drafting and analysis, and litigation outcome prediction. Lawyers in all practice areas will need to adapt and learn new skills (including data science and data literacy) to remain competent. Codes of professional conduct will likely need to be amended. AI/algorithmic competency may become a professional, ethical, and training obligation.

As AI becomes more sophisticated, some consumers of legal services may eventually bypass lawyers and law firms altogether and go directly to legal technology vendors for legal advice. In such cases, law societies and other regulators may view those who create AI programs as alternative legal service providers (ALSPs). Although ALSPs are permitted in some jurisdictions, in others they are considered unauthorized practitioners of law. On the one hand, it could be in the public interest to prevent or discourage non-lawyers (i.e., software developers) from building systems that essentially act as a lawyer. On the other hand, seeking to prohibit and push out novel options for legal guidance, which can facilitate access to justice, might be viewed as a self-serving attempt by lawyers to protect their monopoly on the provision of legal services.

AI could empower those who cannot afford lawyers or access legal aid. Assisted by software that can digest reams of judicial precedent, court procedure, and other legal texts, self-represented parties could advance their own matters.

Advocacy will be reassessed in our AI world. We offer this book as a road map to assist legal professionals in contending with existing AI and the AI of the future. Lawyers and judges have an obligation to promote an accessible, transparent, and fair justice system. To that end, AI will bring a unique set of challenges and opportunities. For example, AI decision-making threatens to import systemic biases; meanwhile, AI litigation tools are poised to increase access to justice and democratize law. Ultimately, the intersections of AI and law call for nuanced study and analysis.

Disruption from AI will transform and even revolutionize society. Beyond the justice system, the very ways in which we live and interact with each other will be affected. Impacts will be enormous, for good and for ill. We are long past the birth of the information age; we are in an era of “thinking” software. Let’s pay attention to what this means. It is not business as usual. And the stakes are high.
