Column

Proprietary Algorithms for Public Purposes

It is now generally recognized that “code is law”: how computers process the millions of on/off, yes/no signals in their binary universe can have legal effects beyond their obvious output. Deciding how computers handle data they receive is a matter of choice, and those choices have consequences. These consequences arise whether or not the software writers, the coders, are aware of their choices or assumptions.

Two developments have brought the coding issue back to the fore in public discussion. The first is the computerization of what used to be purely mechanical devices. The analysis of physical phenomena is now performed and communicated electronically. This increases the opportunity for hidden or unconscious assumptions to play a role in the results of the analysis.

The second is artificial intelligence (AI): as computers train themselves, draw conclusions from big data too voluminous for human-powered analysis, and build on those conclusions to ask and answer further questions, the assumptions about the data and the conclusions become more remote from the knowledge and control of the systems’ designers.

David Canton foresaw this issue as a key one for 2017 in his new year’s predictions on Slaw.ca:

Another AI issue we will hear about in 2017 is embedded bias and discrimination. AI makes decisions not on hard coded algorithms, but rather learns from real world data and how things react to it. That includes how humans make decisions and respond and react to things. It thus tends to pick up whatever human bias and discrimination exists. That is a useful thing if the purpose is to predict human reactions or outcomes, like an election. But it is a bad thing if the AI makes decisions that directly affect people such as who to hire or promote, who might be criminal suspects, and who belongs on a no-fly list.

And do the algorithms lie?

The present note focuses on the implications for the criminal justice system. The state prosecutes someone based on the output of a machine, such as a breathalyzer or a speed-limit radar. Even if the machine has been used the way it was supposed to be used, how does it work? How are the input data converted to output data? What are all the factors that can influence the output?
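To make the point concrete, here is a deliberately simplified sketch – not the firmware of any real instrument, which remains proprietary – of one conventional step in breath testing: converting a measured breath alcohol concentration into an estimated blood alcohol concentration using an assumed blood-to-breath partition ratio. The commonly cited 2100:1 ratio is itself an embedded assumption; individuals’ actual ratios vary, which is precisely the kind of factor a defendant might wish to probe.

    # Illustrative sketch only; the figures and function names are invented for this
    # column and are not taken from any actual device.

    PARTITION_RATIO = 2100  # assumed: 2100 mL of breath carries the alcohol of 1 mL of blood

    def estimated_bac(breath_alcohol_g_per_ml: float) -> float:
        """Estimate blood alcohol concentration in g/100 mL from breath alcohol in g/mL."""
        return breath_alcohol_g_per_ml * PARTITION_RATIO * 100

    # A breath reading of 0.00000038 g/mL converts to roughly 0.08 g/100 mL, the common
    # per se limit. A subject whose true partition ratio is lower than 2100:1 would have
    # a lower actual blood alcohol level than the device reports.
    print(round(estimated_bac(3.8e-7), 3))  # prints 0.08

Nothing in that single output number tells the court whether the embedded ratio, the calibration routine or the rounding rules were appropriate for the person tested.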

AI is a more recent development. It is used to review and understand masses of data about human conduct, both commercial and social. Specialized uses are directed at criminal behaviour, to learn patterns of conduct, and – in some places – to compare the background of the defendant with the data to predict whether the person is still a risk to society. The analysis can be used before sentencing, to judge severity, or after, to consider eligibility for parole.

It is arguable that well-known machines that measure only physical inputs, such as breathalyzers and radar guns, can be taken as proven – though as they become driven by computers, that may change. The desire to challenge these processes has led to demands by criminal defendants in the United States to access the computer code of the breathalyzers to review their accuracy. Decisions have been divided on those demands. In some cases the devices are certified as effective by the state, usually subject to meeting regulatory standards that may have to be extended to the source code.

The case is harder when human behaviour is being predicted, as Canton notes. Not only may humans act in unpredictable ways, or overcome ‘statistical’ probabilities by the exercise of will, but the data on which the computers’ predictions are based may be suspect. Consider crime statistics: the data on which a computer will base its predictions may reflect factors not logically connected with particular defendants: arrest patterns that match or do not match the characteristics of the accused person; the impact of poverty or race on conviction rates of people ‘comparable’ to the accused; hard-to-quantify characteristics of accused or convicted people like educational achievement or religious practices.

What are the computers learning from these data? How are they applying this ‘learning’ to individuals?
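A purely hypothetical sketch makes the mechanism visible (the groups, rates and model below are invented for illustration and are not drawn from COMPAS or any real product): a model trained on recorded arrests rather than on actual behaviour learns to score the more heavily policed group as ‘riskier’, even when the underlying conduct of the two groups is identical.

    # Toy illustration of bias inherited from training data; all numbers are invented.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 20_000

    # Two groups with identical true reoffending rates...
    group = rng.integers(0, 2, n)            # e.g. two neighbourhoods
    true_reoffend = rng.random(n) < 0.20     # 20% for everyone

    # ...but group 1 is policed more heavily, so its reoffending is recorded more often.
    detection_rate = np.where(group == 1, 0.9, 0.3)
    recorded_rearrest = true_reoffend & (rng.random(n) < detection_rate)

    # The model sees only a proxy for group membership and the *recorded* outcome.
    model = LogisticRegression().fit(group.reshape(-1, 1), recorded_rearrest.astype(int))

    # Predicted "risk" now differs sharply despite identical behaviour: about 0.06 vs 0.18.
    print(model.predict_proba([[0], [1]])[:, 1])

The scores alone give no hint that the disparity comes from policing patterns rather than from the individuals assessed; that only becomes apparent on examining the inputs, the training data and the code.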

The question is further complicated – as we see in this column – by the development of such computers by private enterprises, sold or licensed to governments for public purposes. What are the limits of the private rights, and what are the public responsibilities, for the coding?

United States

The implications of such uses of technology in the USA have been noted with concern here, here and here.

Naturally enough, defendants and inmates subject to this analysis want to know how the machine arrives at its conclusions: what factors does it include and how does it weigh them? What does it leave out?

When they ask, in the United States they are often told that they cannot check the computers’ processes because of private rights in the software. Questions may be asked about how the tests were administered and about the chain of custody of the data, but not how the innards of the devices function. This information is proprietary, it is said, and revealing it in an open courtroom risks opening secret processes to the manufacturer’s competitors.

This secrecy is surely problematic. An accused person has to know the case he or she is facing, and this cannot be done properly with a machine output about which the owner, and the state in the owner’s place, says “trust us”.

Much of the recent debate in the U.S. has focused on the case of Loomis v Wisconsin. Both the trial judge and the state Court of Appeal held that the defendant was not entitled to examine the source code or algorithms governing the private analysis that concluded that the defendant was a bad risk to reoffend. In June 2017, the Supreme Court refused to review that conclusion.

It is worth noting, however, that the courts and the government are not denying all information about the analysis. An amicus brief filed with the Supreme Court (at the Court’s request) by the Department of Justice noted that “actuarial risk assessments” had been used in trials since 1923, and that the National Center for State Courts favoured their use. A study is being done for the Center but is not yet complete.

In the Loomis case, the DOJ pointed out that the defendant had had a full account of the questions asked by the machine, the answers to which were generally matters of public record. In addition, the trial judge said that he would have arrived at the same conclusion on sentencing based on the ample evidence without the machine. Indeed, the report from the machine cautioned against its being used for sentencing, its strength allegedly lying in designing treatment programs for offenders. In short, Loomis was not a good case to determine the issue of disclosure of algorithms.

On that general issue, however, the DOJ concluded:

[T]hat is not to say that the use of actuarial risk assessments at sentencing will always be constitutionally sound. Some uses of an undisclosed risk-assessment algorithm might raise due process concerns—if, for example, a defendant is denied access to the factual inputs about his criminal and personal history, or if his risk scores form part of a sentencing “matrix” or establish a “presumptive” term of imprisonment. See Pet. App. D53-D54 (distinguishing the use of COMPAS at sentencing from its use in bond determinations and other settings). As this Court has often recognized, “due process is flexible and calls for such procedural protections as the particular situation demands.”

It is fair to note that some people see the use of these AI processes optimistically, as likely to make it easier for judges to decide not to lock some people up. (The authors are not representing someone accused of a crime, though.)

[The researchers] estimate that for New York City, their algorithm’s advice could cut crime by defendants awaiting trial by as much as 25 percent without changing the numbers of people waiting in jail. Alternatively, it could be used to reduce the jail population awaiting trial by more than 40 percent, while leaving the crime rate by defendants unchanged. Repeating the experiment on data from 40 large urban counties across the U.S. yielded similar results.

As a bonus, gains like those were possible while simultaneously shifting the jail population to include a smaller proportion of African-Americans and Hispanics.

Canada

No Canadian cases similar to Loomis have come to hand. The question of proprietary algorithms would fall to be determined under the Crown’s disclosure obligations set out in R v Stinchcombe, [1991] 3 SCR 326. Any inculpatory or exculpatory evidence in the hands of the Crown must be disclosed to the defence, whether or not the Crown plans to use it at trial.

The question in this case is whether the algorithms of the devices under discussion are in the hands of the Crown or of third parties – the designers, owners or licensors of the devices. The “Principles of Disclosure” in the deskbook of the Public Prosecution Service of Canada say this (in section 4.1, “third party information”, footnotes omitted):

Information in the possession of third parties such as boards, social agencies, other government departments, rape crisis centres, women’s shelters, doctors’ offices, mental health and counselling services or foreign law enforcement agencies is not in the possession of Crown counsel or the investigative agency for disclosure purposes. Where Crown counsel receives a request for information not in their possession or the possession of the investigative agency, the defence should be advised that these records are in the possession of a third party in a timely manner in order that the defence may take such steps to obtain the information as they see fit. Even where third party records are physically in the possession of the Crown, disclosure is not automatic. Unless the person to whom the information pertains has waived his or her rights, that person still has a privacy interest in the records.

If the Crown is put on notice or informed of the existence of potentially relevant information in the hands of a third party, including information pertaining to the credibility or reliability of the witnesses in a case, the Crown’s duty to make reasonable inquiries of that third party is triggered. The third party is not obligated to provide them to the Crown on request. Crown counsel must disclose the request to the defence who may choose to bring an application for disclosure of the third party records.

The Supreme Court in R v McNeil, 2009 SCC 3, held that the court may order production of records in the hands of third parties. “Ultimately, what is required … is a balancing of the competing interests at stake in the particular circumstances of the case.” (para. 35) A number of limits may apply to disclosure: the privacy interests of unrelated individuals, informer privilege, and others. Commercial interests of third parties are rarely if ever mentioned in this context, but they are likely to be considered, given that public sector access to information statutes invariably mention them.

A discussion of claims of privilege compared to confidentiality is found in Horton, “Is there a difference between privilege and confidentiality?”

In any event, one should not give undue weight to the argument that releasing this kind of information to defence counsel will lead to reverse engineering of the devices and thus to unfair competition. The liberty of the citizen should not yield before the needs of commerce. A balance is needed, but Stinchcombe provides the standard for setting it.

Moreover, the law frequently deals satisfactorily with this kind of question. Counsel give express undertakings not to disclose sensitive information to their own clients, for example in intellectual property litigation or international trade disputes. Sometimes information about national security investigations is given to counsel but not to their clients. Implied undertakings routinely limit the use of information acquired in examination for discovery.

The Court in McNeil expressly contemplates just this kind of order.

[T]he court may find it necessary to make a production order subject to redactions or other conditions. In addition, when just and appropriate to do so, the court may well impose restrictions on the dissemination of the information produced for purposes unrelated to the accused’s full answer and defence or prosecution of an appeal. (para. 46)

Conclusion

This discussion raises broader questions of the acceptance of scientific evidence and the use of technology in court. When, if ever, may one stop having to prove the technology itself, as well as its application, in every case? Is there a point at which the reliability of the machines may be a matter of judicial notice? When is it fair for the legislature to step in and decree a technology “settled”?

It is not news that courts and counsel need to know how computers work and the effect of their use, in order to apply existing law to them. This becomes increasingly important as the machines teach themselves ‘things’ that their programmers did not decide or expect.

It may be inevitable that the state will buy limited rights to use the technology it needs, rather than building the technology itself or acquiring exclusive rights to it. The suppliers thus retain legitimate interests in their ability to market the technology. This situation cannot be allowed to impair the rights of accused persons to know the case they are facing.

It appears that authorities in both Canada and the US are aware of the issues – and will be reminded by the press and the legal profession if they forget – and that the law has the means to ensure fair process in both countries.

It may nonetheless take some time for the issues to be properly framed, and to distinguish serious requests to explore the capacities of new technology from fishing expeditions.

Comments

  1. Interesting article, as always, John.

    You comment in the conclusion about judicial notice and the ‘reliability’ of machines controlled by software. No judge has ever determined what they mean by ‘reliability’ of a machine controlled by software code, as I indicate in chapter 6 of Electronic Evidence (4th edn, Institute of Advanced Legal Studies for the SAS Humanities Digital Library, School of Advanced Study, University of London, 2017), which is also an open source text http://ials.sas.ac.uk/digital/humanities-digital-library/observing-law-ials-open-book-service-law/electronic-evidence.

    The vignette I have written for this edition, entitled ‘Business Records’ sums up the position with respect to spreadsheets by way of an example.

    If any judge, lawyer or legal academic has written on the ‘reliability’ of machines controlled by software code, I will be delighted to be made aware of such work.

    Stephen Mason

  2. Wow, it is exciting to see how the computing world and the world of law are intertwining! Great article!

    http://butzlaw.ca/

  3. For those who like to take their information from video – here’s an update of the concerns expressed in the note above.

    The Accuracy, Fairness, and Limits of Predicting Recidivism (Harvard Berkman video, 6 March 2018; 56 mins) – Algorithms for predicting recidivism are commonly used to assess a criminal defendant’s likelihood of committing a crime. Proponents of these systems argue that big data and advanced machine learning make these analyses more accurate and less biased than humans. However, our study shows that the widely used commercial risk assessment software COMPAS is no more accurate or fair than predictions made by people with little or no criminal justice expertise.

    [This note from MIRLN 4 – 24 March 2018]

  4. The Supreme Court of Canada decided in June 2018 that algorithms designed to evaluate a prisoner’s likelihood of reoffending could not be used for an indigenous prisoner without being specifically designed to apply to such a person. Ewert v Canada.

    Here is a description of and comment on the case by Professor Scassa of the University of Ottawa.