Column

Regulating Artificial Intelligence and Automated Decision-Making

The Law Commission of Ontario (LCO) has been reviewing the principles and impact of artificial intelligence (AI) in the Canadian justice system for some years. Its three points of focus have been the use of AI in criminal justice, in civil justice and in government. A report on the criminal justice aspects was issued in late 2020; it was described on Slaw.ca here.

The second report is on government uses, under the title Regulating AI: Critical Issues and Choices. As with the criminal paper, there is a helpful Executive Summary as well.

Regulating AI presents many challenges, starting with defining just what needs to be regulated. Definitions of AI can be very broad, while regulations should be precise, both to be enforceable and to be consistent with legal principle. The LCO speaks of AI and “ADM”, that is, automated decision-making; the two terms seem to be largely interchangeable in the report.

The report offers a long list of the kinds of actions for which governments around the world are using AI and ADM. Besides the detection and analysis of criminal activity, one finds the allocation of government benefits, the setting of priorities for access to public services such as housing, education and health, the determination of priority for immigration, and hiring and employee-evaluation decisions.

The report recommends a combination of “hard” and “soft” law for regulation. “Hard” law consists of firm rules with prohibitions and penalties. “Soft” law takes the form of ethical guidelines: “this is what you should do.” The European Commission’s High-Level Expert Group on Artificial Intelligence has taken a soft law approach, “intended for flexible use.” But what if those using AI – including governments – are not inclined to be ethical?

Hard law could prohibit, for example, the use of facial recognition software, on the basis that the potential for abuse is just too serious to count on proper restrictions being imposed.

In Canada, the federal government has recently issued a Directive on Automated Decision-Making to make its own uses transparent and fair. The LCO takes a positive view of this Directive as a whole but points out that it applies only to the federal government itself and not to the private sector or other levels of government, and even some parts of the federal structure are exempted. And it notes Professor Teresa Scassa’s observation that a directive gives no private rights and provides for no independent enforcement.

The LCO recommends a mixed model, with some hard and some soft provisions. One hopes of course for a “smart mix” of methods. The hard provisions would address overall direction and public accountability mechanisms. Guidelines, standards and best practices can have their uses as well, “to supplement or expand upon mandatory legal obligations.”

The content of the regulations will depend on an assessment of the risks presented by various applications of AI. The EU group divided systems into high risk and low risk. The LCO agrees with other critics it cites in finding this too simple: a high or low rating leaves too much space in between. The LCO prefers the Canadian federal directive, which has four risk levels but which also “establishes baseline requirements that apply to all ADM systems, regardless of impact [i.e. risk] level.” Among these requirements are notice to those affected by a proposed system, employee training and human intervention in the operation of the AI.
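Purely by way of illustration, one might picture that structure as a tiered scheme in which a baseline set of obligations attaches to every system and further obligations are added as the assessed level rises. The level groupings and requirement names in the following sketch are invented for the example; they are not taken from the Directive itself.

    # Illustrative only: a hypothetical tiered-requirements scheme in the spirit
    # of the federal Directive on Automated Decision-Making. A baseline applies
    # to every ADM system; extra obligations attach as the impact level rises.
    # The groupings below are invented for this sketch, not quoted from the Directive.

    BASELINE = {
        "notice to affected persons",
        "employee training",
        "provision for human intervention",
    }

    ADDITIONAL_BY_LEVEL = {
        1: set(),                                            # lowest assessed impact
        2: {"plain-language explanation of decisions"},
        3: {"peer review", "human review before a final decision"},
        4: {"senior-level approval", "public reporting on performance"},
    }

    def requirements(impact_level: int) -> set:
        """Obligations attaching to a system assessed at a given impact level."""
        return BASELINE | ADDITIONAL_BY_LEVEL[impact_level]

    # Even the lowest-impact system carries the baseline obligations.
    assert BASELINE <= requirements(1)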

Accountability and transparency are essential to proper legal regulation of AI. Accountability is not possible without sufficient transparency. This can be achieved, says the LCO, by a mix of disclosure, impact assessments and procurement policy.

Disclosure is best done by “AI Registers” that identify and document the use of AI and ADM systems by government. The government of Ontario has a public catalogue of “algorithms, tools and systems powered by data across the Ontario Public Service.” It is interesting that some of the better models the LCO finds are at the federal and provincial levels in Canada. The federal system at least is getting some academic attention. The Commission recognizes that the extent of disclosure will depend on the use and impact of a system.

For this reason, designers of AI systems need to do an impact analysis, much as data managers must do for privacy. The federal government’s Algorithmic Impact Assessment puts roughly 60 questions to the proponents of a proposed system to expose its risks, and the results are to be published. The LCO sets out some key components of such assessments, notably the following (a rough sketch of the scoring idea follows the list):

  • A clear description of the purpose and objectives of the AI system;
  • Assurances on compliance with the Charter and human rights legislation;
  • A description of how an individual may challenge or appeal from a decision based on the AI;
  • Compliance with best practices in data collection, retention, management and testing;
  • Appropriate public participation in the design, development and evaluation of AI and ADM systems.
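To make the mechanics concrete, here is a minimal sketch of how answers to a questionnaire of that kind might be scored into one of four impact levels, which could then drive the tiered obligations pictured earlier. The questions, weights and thresholds are invented for the illustration; the real Algorithmic Impact Assessment has its own questions and its own scoring rules.

    # Illustrative only: scoring a short yes/no questionnaire into one of four
    # impact levels. The real Algorithmic Impact Assessment asks about 60
    # questions with its own scoring; the questions, weights and thresholds
    # here are invented for the sketch.

    QUESTIONS = {
        "Does the system make or recommend decisions about individuals?": 3,
        "Would those decisions be difficult to reverse?": 2,
        "Is personal or sensitive data used to train the system?": 2,
    }

    def impact_level(answers: dict) -> int:
        """Map yes/no answers to a 1-4 impact level by simple additive scoring."""
        score = sum(weight for question, weight in QUESTIONS.items()
                    if answers.get(question))
        if score == 0:
            return 1
        if score <= 2:
            return 2
        if score <= 4:
            return 3
        return 4

    worst_case = {question: True for question in QUESTIONS}
    print(impact_level(worst_case))   # -> 4; the published result would then
                                      #    drive the obligations sketched above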

Procurement policy enters the picture because of the risk that elements of AI, or whole systems, may be subject to claims of intellectual property that would prevent disclosure and examination of how the systems work. This risk was analysed on Slaw here. The LCO points out that “outsourcing AI and ADM design does not absolve a government from their legal obligation respecting human rights, due process and/or procedural fairness.”

The report then goes on to address fairness in more detail. The risks of bias had already been discussed in the previous report on criminal justice. The LCO takes favourable note of the Ontario government’s 2019 publication, Promoting Trust and Confidence in Ontario’s Data Economy. This risk may be the most widely recognized of those presented by AI, so many examples exist of how to meet it.

Among them are legislative commitments that AI development will be consistent with constitutional and anti-discrimination laws. One may ask whether such commitments are needed or are just window-dressing, given the mandatory nature of the constitution and most human rights laws: one is not permitted to be inconsistent with them. The LCO admits that such provisions are “potentially technically unnecessary” but says they would nonetheless “provide greater legal certainty and accountability.” Maybe they are a means of allaying public distrust, but purely symbolic legislation risks undermining trust rather than shoring it up.

Because the data sets used to train AI learning systems have been suspected, often rightly, of containing biased information, the LCO recommends extensive disclosure of a system’s “source and use” of data, including

  • Training data
  • Description of design and testing policies and criteria
  • Factors that tools use and their weighting
  • Scoring criteria
  • Outcome data that validates tools
  • Definitions of what the AI instrument forecasts
  • Evaluation and validation criteria and results.

Regular audits should test systems for accuracy, effectiveness, efficiency and bias.
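What such an audit might actually compute is easy to picture. The following sketch compares accuracy and favourable-outcome rates across groups and flags a divergence using the familiar four-fifths rule of thumb; the records, group labels and threshold are invented for the illustration, and the LCO does not prescribe any particular metric.

    # Illustrative only: a periodic audit comparing accuracy and
    # favourable-outcome rates across groups. The records, group labels and
    # the four-fifths (0.8) rule of thumb are invented for the sketch; the
    # LCO does not prescribe a particular metric.

    from collections import defaultdict

    # Each record: (group, predicted outcome, actual outcome); 1 = favourable.
    records = [
        ("A", 1, 1), ("A", 1, 0), ("A", 0, 0), ("A", 1, 1),
        ("B", 0, 1), ("B", 0, 0), ("B", 1, 1), ("B", 0, 0),
    ]

    stats = defaultdict(lambda: {"n": 0, "correct": 0, "favourable": 0})
    for group, predicted, actual in records:
        stats[group]["n"] += 1
        stats[group]["correct"] += int(predicted == actual)
        stats[group]["favourable"] += predicted

    for group, s in stats.items():
        print(group,
              "accuracy:", s["correct"] / s["n"],
              "favourable rate:", s["favourable"] / s["n"])

    # Flag possible adverse impact if one group's favourable-outcome rate falls
    # below 80% of another's (a common rule of thumb, not a Canadian legal test).
    rates = {group: s["favourable"] / s["n"] for group, s in stats.items()}
    if min(rates.values()) < 0.8 * max(rates.values()):
        print("audit flag: favourable-outcome rates diverge across groups")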

The federal Directive, according to the LCO, has been praised as constituting “fairness by design” (an obvious echo of the well-known Privacy by Design principle) – the effect being to require fairness to be taken into account at the front end, rather than simply imposed by the courts applying administrative law principles to the results. This is consistent with the LCO’s general preference for systemic law reform that prevents problems, over judicial correction after the harm has been done.

That said, the federal Directive comes under some criticism for not stating, for example, a standard of review, a perpetually difficult question in Canadian administrative law. The limits it places on its own application may prevent it from bringing useful checks to what are nonetheless public uses of AI. It does not apply to criminal justice, where one might think it is dearly needed. And of course, it does not apply to provincial or municipal systems. Whether Ontario’s guideline cited earlier is a good substitute is not clear.

While the LCO advocates a comprehensive mix of hard and soft law to secure procedural fairness and due process, we may all need to be content with what its criminal justice study of AI predicted we would likely see: an incremental development of policy over time.

The report concludes with thoughts on how to design effective oversight of governments to promote compliance with the principles stated thus far. “External experts” are needed, though it is not clear where they might come from or where they would get their authority. There needs, says the report, to be public representation in the cross-section of experts and stakeholders, including those most likely to be affected by the AI system in question.

How such activities might fit within a system already featuring privacy, human rights and integrity commissions remains to be designed. Acknowledging the need for recourse or even a right to recourse does not in itself create recourse. While there are existing means of resisting harmful effects of AI – general administrative and constitutional law, for example – these are not likely to be accessible or effective enough.

The Law Commission is looking for comments and suggestions on the matters raised in the report. Feel free to try out a draft of yours in comments on this Column.
