
Artificial Intelligence and Law Reform: Justice System

Artificial intelligence (AI) is sometimes thought of as a cure for the complexities of the world, but perhaps even more often as a threat to humans. Stephen Hawking said that “[w]hereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.”

At a somewhat less general level, a good deal of concern has been expressed about the impact of artificial intelligence on the law, and notably on the criminal justice system. My own musings are here. That article considered the evolution of AI from painstaking mimicry of human decision-making to machine learning, where computers review huge amounts of data, identify the patterns in them, and work out how to achieve specified ends. It also considered some of the problems encountered in doing so, including the familiar but still important limitation that a computer can only do what it is told. If there are policy limits on what counts as an acceptable solution, the machine has to be told about them.

A dramatic example is asking a computer to increase the number of fish caught, and having it recommend draining the lake.
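
To make the point concrete, here is a toy sketch in Python – every policy and number in it is invented – of an optimizer that, told only to maximize the catch, picks the draining option, and behaves sensibly only once the unstated policy limit is spelled out.

```python
# Toy sketch: the objective as literally specified vs. the objective
# with the unstated policy limit added. Policies and numbers are invented.
policies = [
    {"name": "keep to a sustainable quota", "fish_caught": 500, "lake_survives": True},
    {"name": "add more boats", "fish_caught": 900, "lake_survives": True},
    {"name": "drain the lake", "fish_caught": 5000, "lake_survives": False},
]

# Told only to maximize the catch, the optimizer picks the draining option.
naive = max(policies, key=lambda p: p["fish_caught"])

# Told that the lake must survive, it picks the best acceptable option.
constrained = max(
    (p for p in policies if p["lake_survives"]),
    key=lambda p: p["fish_caught"],
)

print("Maximize catch only:", naive["name"])                       # drain the lake
print("Maximize catch, lake must survive:", constrained["name"])   # add more boats
```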

Some years ago, a lot of people were very enthusiastic about the potential of AI in the justice system: it promised to let the system base its decisions on a better understanding of what actually happens, with less human discretion and thus – it was thought – less susceptibility to discrimination and arbitrary judgments. Then it turned out that the available data sometimes normalized the results of other social or justice system problems.

For example, AI might extrapolate – accurately – from justice system data that certain crimes in the system tended to be committed by certain types of people. Based on that extrapolation, it might “find” that a person of that type in a particular case was more likely to commit such a crime, or more likely to repeat an offence. However, the data “showing” this tendency might be the result of social or human factors, like police practices or jury bias, that did not reflect the individuals about whom judgments were to be made, or even the group to which they belonged.
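
The mechanism can be sketched in a few lines of Python. Everything in the sketch is invented for illustration: two groups with the same underlying rate of offending, one of which is policed much more heavily, and a “risk” figure derived from the arrest records that result.

```python
# A rough sketch of "bias in, bias out": both groups offend at the same
# underlying rate, but group B is policed more heavily, so a "risk" figure
# learned from arrest records alone scores group B as far riskier.
# Groups, rates and population sizes are invented for illustration.
import random

random.seed(0)

POPULATION = 10_000
TRUE_OFFENCE_RATE = 0.05                 # identical for both groups
DETECTION_RATE = {"A": 0.2, "B": 0.6}    # group B attracts more police attention

arrests = {group: 0 for group in DETECTION_RATE}
for group, detection in DETECTION_RATE.items():
    for _ in range(POPULATION):
        offended = random.random() < TRUE_OFFENCE_RATE
        if offended and random.random() < detection:
            arrests[group] += 1

for group, count in arrests.items():
    print(f"Group {group}: arrest-based 'risk' = {count / POPULATION:.3f}")
# Group B comes out roughly three times "riskier", although by construction
# the two groups behaved identically.
```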

As a result, many of those who were enthusiastic about algorithmic analysis, say 20 years ago, became much less enthusiastic about its results in practice.

Cathy O’Neil, whose thorough examination of such issues appeared in 2016 under the brilliant title Weapons of Math Destruction, set out four factors producing harmful results, in a 2017 article about whether algorithms could lie.

  1. Unintentional problems that reflect cultural bias, e.g. results that reflect unstated bias in the records.
  2. Algorithms that go bad due to neglect, e.g. scheduling part-time workers in ways that leave them no opportunity to arrange child care or pursue further education, or failing to check the quality of results before using the algorithms widely.
  3. Nasty but legal algorithms, e.g. targeting poor people for lower-quality goods and services, or raising prices for those who seem willing or able to pay more.
  4. Intentionally nefarious or outright illegal algorithms, e.g. mass surveillance tools that allow targeting of legal protesters, or tools that detect regulatory testing and adjust results accordingly (think of Volkswagen and emissions controls).

A number of the policy challenges and possible ways forward were reviewed in the text Responsible AI with contributions from around the legal world. My view of that text is here.

Among the several writers on Slaw who have commented on these issues, the many contributions of F. Tim Knight, as recently as 2020, are worth noting, including a number of conference reports.

A knowledgeable analysis of the use of AI by law enforcement was published in September 2020 by the Citizen Lab and the International Human Rights Program at the University of Toronto, under the title To Surveil and Predict: A Human Rights Analysis of Algorithmic Policing in Canada. An executive summary is here.

Recently the law and policy have been reviewed by the Law Commission of Ontario with its characteristic thoroughness. Under the general rubric of Digital Rights, its AI work has three parts: criminal justice, civil justice, and regulatory uses, notably consumer protection. (There is also a good deal of work being done in Canada on AI and privacy, worth a column here on its own. A recent comment by Martin Kratz outlines some issues.)

The first LCO report to appear is on criminal justice. Besides the full report, there is a very useful Executive Summary, along with some background studies and notes on preparatory workshops. (The Executive Director of the LCO, Nye Thomas, did a note on Slaw at the time of the main workshop.) The Executive Summary notes that the particular focus of the report is pre-trial algorithmic risk assessments.

Drawing on the very extensive American work in the field, and such Canadian material as is available, the Commission highlights ten issues for attention, in principle before any such activity is implemented. (It is a bit late for that timing, in some cases.) The overall goal might be described as “technological due process.” (The following texts are excerpted from the Executive Summary.)

Issue #1: Bias In, Bias Out

Because the training data or “inputs” used by risk assessment algorithms – arrests, convictions, incarceration sentences, education, employment – are themselves the result of racially disparate practices, the results or scores of pretrial risk assessments are inevitably biased.

Issue #2: The “Metrics of Fairness”

Risk assessment controversies in the US have demonstrated how different measures of statistical fairness are crucial in determining whether an algorithm should be considered discriminatory or race-neutral.
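
The conflict can be shown with a small worked example of my own (the numbers are hypothetical, not drawn from any real tool): a tool can flag people with the same precision and the same sensitivity in two groups, yet if the groups’ base rates in the historical data differ, their false positive rates must differ too. Which of those measures counts as “fairness” is exactly what the US controversies were about.

```python
# Worked example, with hypothetical numbers, of conflicting fairness metrics.
# The tool has the same precision (75% of flagged people reoffend) and the
# same sensitivity (60% of reoffenders are flagged) in both groups, yet the
# false positive rates diverge because the base rates differ.

def false_positive_rate(n, base_rate, sensitivity=0.6, precision=0.75):
    reoffenders = n * base_rate
    non_reoffenders = n - reoffenders
    true_positives = reoffenders * sensitivity
    flagged = true_positives / precision
    false_positives = flagged - true_positives
    return false_positives / non_reoffenders

print(false_positive_rate(1000, base_rate=0.5))  # 0.20 for the higher base rate group
print(false_positive_rate(1000, base_rate=0.2))  # 0.05 for the lower base rate group
```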

Issue #3: Data Transparency

[T]he lack of transparency about data and how these tools work … often are part of a larger “black box” critique of AI and algorithms.

Issue #4: Data Accuracy, Reliability and Validity

… issues such as the reasonableness or appropriateness of a dataset, whether or not a dataset is sufficiently accurate or reliable, and the characteristics selected by developers as most relevant can have important practical and legal consequences. American debates also reveal that questions about data accuracy, reliability and validity are not technical questions best left to developers or statisticians.

Issue #5: Data Literacy: Risk Scores and Automation Bias

… the determination of what constitutes a low, medium or high score is an explicit policy choice, not a statistical or technical outcome.
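
A minimal sketch of that point, using invented scores: nothing statistical changes between the three runs below; the only thing that changes is the policy choice of where the “high risk” line is drawn, and with it how many people are recommended for detention.

```python
# The same hypothetical risk scores, three different policy choices of
# where "high risk" begins. The statistics are identical; the outcomes are not.
scores = [3, 4, 4, 5, 5, 6, 6, 6, 7, 8]   # invented scores out of 10

for cutoff in (6, 7, 8):
    high_risk = sum(1 for s in scores if s >= cutoff)
    print(f"'High risk' means {cutoff}+: {high_risk} of {len(scores)} flagged for detention")
```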

Issue #6: The Distinction Between Predictions, Law and Policy

In the pretrial context, the developers of decision-making frameworks (sometimes called a “release matrix” before the trial) must consider some of the following issues:

  • Does the release matrix conform with constitutional law, relevant statutes, judicial decisions and practice guidelines?
  • What conditions or recommendations are suggested for high, medium or low risk scores?
  • What risk score justifies pretrial release or pretrial detention?
  • How should the release matrix account for local services?

These are complicated, contested and consequential questions of law, criminology, social policy and social services.

Issue #7: Best Practices in Risk Assessments

A consistent theme in these proposals is the need to incorporate the principles of equality, due process, the presumption of liberty, community participation, transparency and accountability into all aspects of pretrial risk assessment.

Issue #8: Public Participation

This participation must include technologists, policy makers, law makers, and, crucially, the communities who are likely to be most affected by this technology.

Issue #9: Algorithmic Accountability

Collectively, these proposals represent a robust regime for addressing legal accountability regarding data, transparency, bias and due process concerns in AI and algorithms in criminal justice.

Issue #10: The Limits of Litigation

Litigation, while obviously necessary to address specific cases, is insufficient to address the systemic statistical, technical, policy and legal issues that have been identified in this report.

The LCO does not conclude that AI cannot be used in criminal justice, or elsewhere. Canada should build on the improving understanding of both the limits of, and the policy options for, the management of data through AI. Several such advances are described. Our system should not, however, allow AI analysis to be used in risk assessment before the risks of that use can themselves be effectively managed.

If one needed to summarize the recommendations in two sentences, one would say that the product of AI should always be taken as a kind of measurement (of the data, of the possibilities) but never as the decision itself. The decision must be left to human beings with an understanding of the data and the real human society for which decisions must be made.

The LCO advocates comprehensive law reform that avoids the apparently simple data-driven solution to what are complex problems. Litigation challenges to results in individual cases will not suffice. Individual litigants will never have the resources to evaluate and challenge the systemic use of data to their prejudice. (If the use works to their benefit, they presumably will not be inclined to challenge it.) It is therefore an access to justice issue, as well as a societal fairness one.

While reform should not be piecemeal, in the sense of disconnected, it will have to be incremental, in the sense of gradual, as we better understand issues and solutions. The report is a solidly researched and analysed start at such reform.
