Book Review: Leading Legal Disruption: Artificial Intelligence and a Toolkit for Lawyers and the Law

Leading Legal Disruption: Artificial Intelligence and a Toolkit for Lawyers and the Law
Giuseppina D’Agostino, Aviv Gaon and Carole Piovesan
2021 Thomson Reuters
ISBN 978-0-7798-9871-8

In an age when so many of the clients I assist are implementing machine learning and other artificial intelligence (AI) systems, this book, Leading Legal Disruption: Artificial Intelligence and a Toolkit for Lawyers and the Law, is a welcome resource.

The book is a collection of essays that explore different facets of artificial intelligence and that, together, build a more robust understanding of many of the practical and policy challenges these systems pose.

The title promises a toolkit and delivers in several ways. As one might expect in these early days of AI implementation, most chapters address policy issues across a wide range of areas, which will be useful to those seeking policy debate. Most authors are affiliated with institutes or universities and provide useful scholarly perspectives on these issues. Many chapters use European examples, but a number also draw on US and other sources (including some Canadian ones) for policy consideration or regulation.

Practicing lawyers may benefit from the excellent first chapter, which identifies the risks and shortcomings of AI and demystifies the technology. The chapter on intellectual property provides a good overview of patentability issues for software-enabled inventions and touches on copyright. Several chapters also address the duty of technological competence, which is increasingly part of the Model Code of many law societies. Litigators may benefit from the treatment in Chapters 7 and 8 of different theories of liability when AI does harm.

The three editors, Giuseppina (Pina) D’Agostino, Aviv Gaon and Carole Piovesan, set the stage by noting that the application of AI systems generates many novel legal and policy issues and requires us to rethink many traditional legal and moral concepts.

The book begins strongly with Chapter 1, written by Oleg Brodt, Michael Khavkin, Lior Rokach, Asaf Shabtai and Yuval Elovici, which explores security, and the lack of it, in AI systems. This chapter is particularly helpful because the authors address jargon and explain concepts such as artificial intelligence, machine learning, big data, and natural language processing. They explain several common machine learning tasks and categorize different types of machine learning, and they also explore deep learning, or artificial neural networks.

Importantly, the authors examine a number of ways that AI systems can be tricked, such as distorting the learning phase by contaminating the training dataset, or interfering with the inference phase through perturbed inputs. They also examine several kinds of attacks by which AI systems can be made to reveal sensitive personal information.

The authors then explore bias in AI systems, which arises when the data on which a model is trained misrepresents the population. They also discuss the challenges of explainability and interpretability of AI systems. In each case the authors identify related legal issues, generally drawn from EU or US sources. The authors’ objective is to have the reader see that “AI technologies are no more than fancy computational statistics programs that try to assign probabilities to different outcomes based on the analysis of vast amounts of historical data”.

Chapter 2 is written by Jonathon Penney and provides a high-level review of legal scholarship on AI. He identifies three key areas of that writing: AI in legal process and practice, AI in government and administration, and AI in the private sector. The author identifies a tendency of legal writers to treat AI as neutral, objective and unbiased, and/or to anthropomorphize it, and he notes an emerging, more critical trend that seeks to avoid some of the harms those tendencies may cause.

Chapter 3 is written by Jordana Sanft, William Chalmers and Maya Medeiros and provides a very high-level look at some intellectual property issues for AI innovation. The authors explore the patent law rules for patenting computer-implemented inventions in both Canada and the US, then turn to copyright and some additional types of IP. They explore the basic questions of whether a non-human can be an inventor or author.

Chapter 4 is written by Ryan Abbott and expands the patent discussion by exploring the issue of obviousness, looking from a policy perspective at whether inventive machines should result in changes to patentability standards. The author argues that we are in a transition from human to machine inventors and introduces a number of ways this transition may change patentability standards. Extensive US citations are provided.

AI systems require large volumes of data to train their models. This data may be gathered by scraping the data of others or from public sources. Teresa Scassa wrote Chapter 5, addressing data scraping from websites and the kinds of legal issues that arise, especially when scraping is done for competitive purposes. The chapter covers the different causes of action available in Canada, such as trespass to chattels, as well as actions grounded in privacy law, copyright law, contractual restrictions, or criminal law.

Ian Stedman wrote Chapter 6, which explores the role of AI systems in the context of ethical legal practice. The author provides an approach lawyers can use to better assess the decision-making process for acquiring a software system. Finally, the author explores many of the questions and challenges lawyers must address under the duty of technological competence that most law societies have introduced into their Model Codes.

AI systems can cause harm. In Chapter 7, Yaniv Benhamou and Justine Ferland look at many issues of attributing liability for damage caused by AI systems, stemming from the large number of stakeholders, AI’s increased autonomy, the lack of explainability (the black box problem), and the lack of foreseeability or predictability. A wide range of liability systems is explored. The authors conclude with policy-oriented solutions ranging from granting legal personality to the AI, creating new forms of strict liability for operators of high-risk technologies, and applying vicarious liability principles to operators of autonomous systems, to extending product liability to services and creating compulsory insurance schemes. Options presented within current liability regimes include enhanced duties of care and approaches to allocating liability among tortfeasors.

In Chapter 8, Karni Chagal-Feferkorn examines the application of the law of negligence to AI systems, building on the work of the previous chapter. The author explores how one might apply the law of negligence to the AI itself, whether the AI can or should owe a duty of care, and how a reasonableness standard can remain a suitable tool for liability assessment. The author also considers how such a standard can be adapted so that the reasonableness analysis includes both an algorithm-reasonableness element and a manufacturer-reasonableness element.

In Chapter 9, Greg Hagen introduces smart contracts. These are, technically, not AI but decision systems (computer programs) located on a blockchain. The author introduces some reasons for using smart contracts and explores some of the legal issues that can arise from errors made in their execution.

In Chapter 10, Salil K. Mehra looks at the individual autonomy principle behind contract law and suggests that AI systems will require a reformulation of contract law’s foundational rules, such as offer, acceptance and consideration.

Maya Peleg, Shai Yom-Tov, Dor Nahshoni and Dov Greenbaum explore applications of AI systems in Chapter 11, including in diagnostics, outcome prediction, healthcare management, image analysis and biopharmaceutical development, with a focus on personalized medicine. While not specific to AI, the authors also explore drug approval and drug patent questions in some jurisdictions.

The application of AI in the field of financial technology (FinTech) is explored in Chapter 12 by Moran Ofer and Ido Sadeh. The authors provide an overview of numerous use cases of AI in FinTech, including decision algorithms in trading and financial advice, finance platforms and fundraising mechanisms, payment systems, and cryptocurrencies. They conclude by looking at new risks and approaches to the regulation of FinTech.

The use of AI systems in the taxation field to address general tax avoidance activities is examined in Chapter 13 by Blazej Kuzniacki and Kamil Tylinski. Drawing on European sources, they propose an AI model to be integrated into the general anti-abuse rule (GAAR). While not specific to Canada, the authors explore limits to implementation, such as the GDPR and the European Convention on Human Rights, that readers can consider for their own home jurisdictions.

Efforts to use AI systems to improve the quality of regulators’ decision making are examined by Anthony Niblett in Chapter 14. The author identifies a number of key risks to be considered when regulators use AI in decision-making systems.

An overview of the European Union’s efforts to regulate different aspects of AI systems is provided by Jan De Bruyne and Brahim Benichou in Chapter 15.

The implementation of AI systems in the legal profession is introduced by Angie Raymond, Chris Harper and Dakota Coates in Chapter 16. The authors use the lens of the American Bar Association’s Model Rules of Professional Conduct, especially the duties of confidentiality, competence and supervision. This discussion is relevant for Canadian lawyers, as many Canadian law societies’ Model Codes now also include a duty of technological competence.

The impact of AI systems on human rights is explored by Vivek Krishnamurthy in Chapter 17. He examines the ways AI may impact human rights, focusing first on the risk of algorithmic bias, where any bias embedded in the training data may be perpetuated in the AI model. His second focus is that the training data may violate privacy rights or be used to re-identify anonymized data by linking it with other sources. He urges organizations to conduct human rights due diligence to account for and prevent human rights impacts in their use of AI systems.
