Book Review: Standards for the Control of Algorithmic Bias: The Canadian Administrative Context
Several times each month, we are pleased to republish a recent book review from the Canadian Law Library Review (CLLR). CLLR is the official journal of the Canadian Association of Law Libraries (CALL/ACBD), and its reviews cover both practice-oriented and academic publications related to the law.
Standards for the Control of Algorithmic Bias: The Canadian Administrative Context. By Natalie Heisler & Maura R Grossman. Boca Raton, FL: CRC Press, 2024. 108 p. Includes bibliographic references and index. ISBN 9781032550220 (hardcover) $64.95; ISBN 9781003428602 (eBook) $24.95.
Reviewed by Marnie Bailey
Manager, Knowledge Services
Fasken Martineau DuMoulin LLP
Artificial intelligence (AI) is in the news everywhere you look, with much talk about its ability to replace lawyers (and librarians!), or at least to streamline simpler tasks such as summarizing cases or articles. But what happens when AI moves into the realm of decision making? How can we ensure there is no bias in AI-based decisions? How do we protect human rights when there are no “humans” involved in making the decisions?
In Standards for the Control of Algorithmic Bias: The Canadian Administrative Context, Natalie Heisler, Managing Director for Responsible AI at Accenture, and Maura R. Grossman, a research professor in the School of Computer Science at the University of Waterloo, review the preconditions that must be met before machine learning (ML) can be used for automated decision making (ADM) in the administrative realm. They argue that many uses of ML ADM can cause more harm than good because of their potential for unjustified disparate impact; that is, disparate impact or unintentional discriminatory outcomes “for which no operational justification is given” (p. 3).
The book comprises five chapters. In Chapter 1, the authors compare the European Union’s proposed regulation for harmonised rules on AI under the Artificial Intelligence Act (Procedure 2021/0106/COD) with Canada’s Directive on Automated Decision-Making, which applies to federal administrative bodies possessing decision-making authority conferred by legislation regulating the rights, privileges, or interests of external clients. They then examine the link between equality rights and ADM, using a case study of Wisconsin’s Correctional Offender Management Profiling for Alternative Sanctions (COMPAS). COMPAS has been used in sentencing since 2012, but a later analysis of its decisions found the very racial disparities the automated system was supposed to reduce: Black offenders were given longer sentences than white offenders guilty of the same offenses. The authors then examine how such decision making will hold up to judicial review. Currently, there is no precedent for ADM determinations, and the process for judicial review is slow. Compliance with standards will help mitigate algorithmic bias in the programming of the ML ADM process. Heisler and Grossman go on to propose three dimensions of control: mitigating the creation of biased predictions, evaluating predictions for the influence of algorithmic bias, and measuring disparity.
Chapter 2 speaks to the foundational principles of administrative law—transparency, deference, and proportionality—and discusses how they are equally important in ML ADM. The authors use a scenario to review ideas for data standards, such as construct validity, input data, knowledge limits, measurement validity, and accuracy of input data. They further discuss the standards for evaluation of predictions—namely, accuracy/uncertainty and individual fairness—and include a table of proposed standards.
Chapter 3 discusses monitoring decisions as a requirement for determining how much discrimination, if any, is present in the automated decisions. Heisler and Grossman speak to the prima facie test for discrimination in the Supreme Court of Canada’s decision in Fraser v Canada (Attorney General), 2020 SCC 28, and review legislative and policy approaches to measuring disparity. They further discuss the types of disaggregated data required to ensure there is no discrimination in the decisions and provide a table of standards for measuring disparity.
Chapter 4 provides an overview of the standards framework and consolidates the standards listed in the previous chapters. Heisler and Grossman discuss how to implement the standards and remind us that this is only part of an agency’s approach to ADM. Stakeholders, social scientists, ethics specialists, data scientists, and quality specialists, as well as legal and data privacy experts, should all be involved, as there is a need for broad and diverse perspectives when creating standards and measuring decisions.
Chapter 5, the conclusion, speaks to the requirement of mitigating disparate impacts of ML ADM, reiterating that it must not infringe on human rights and that standards regulating these types of decisions must be created using both technological and legal perspectives. The quality of predictions must be the focus, and disparity in the ML ADM outcomes must be constantly measured and accounted for. Standards should also be made publicly available. The authors conclude that more study is needed before ML ADM is used.
Standards for the Control of Algorithmic Bias: The Canadian Administrative Context is a concise text that speaks to the potential harms involved with ML ADM. The authors lay out a very logical and simple process that should be initiated prior to any decision making. Nine pages of bibliographic references to resources cited throughout are also included at the end of the book. I recommend this text to anyone interested in the ways AI can be used to automate decisions, particularly to those building the systems.