Column

Automated Decision-Making and the Civil and Administrative Justice System

Author: Susie Lindsay, The Law Commission of Ontario

The impact of automated decision-making in the civil and administrative justice system requires deliberate and considered policy and legal guidance.

On December 10th, the Law Commission of Ontario (LCO) brought together lawyers, developers, policymakers, academics, and community advocates for an informal and collaborative discussion of the issues and implications of artificial intelligence (AI) and automated decision-making (ADM) in Ontario’s civil and administrative justice system.

These issues are pressing because AI is already being used in civil and administrative government decision-making in the U.S. and Europe. AI technologies are being deployed in the areas of government benefits, public health, education, housing, child welfare, immigration, and numerous criminal justice applications.[1] What is notable about these examples is that they fall squarely within the areas of greatest concern to access to justice advocates: “poverty law”, human rights law, child welfare law, criminal law, and refugee/immigration law.

To date, Canadian governments appear to have taken a more deliberate approach to introducing AI and ADM systems. The most notable Canadian example is a system developed by Immigration, Refugees and Citizenship Canada (IRCC) that automates decision-making to assess a “portion of the temporary residence business process by training a model capable of recognizing key factors at play in decision making on visitor visa e-Applications.”[2] This system automatically triages applications and “recommends” which applications should be approved at this stage or referred for further review.

The LCO believes that AI and automated decision-making systems represent a new frontier for access to justice. These technologies present significant challenges – and opportunities – for traditional models of human rights, legal regulation, dispute resolution, and due process. It is incumbent on policymakers, advocates, and justice system leaders to understand the impact of this technology and to act thoughtfully.

The December 10th forum was organized around three major, related issues:

  • How to ensure due process/procedural fairness in the use of these systems;
  • How to address issues of bias and discrimination; and
  • How to regulate these systems most effectively.

The forum started with presentations on three significant cases where AI has been deployed to assist with or replace government decision-making: the determination of Medicaid benefits in Arkansas; the evaluation of teachers in Houston; and the investigation of potential benefits fraud in the Netherlands. We then discussed with administrative law and human rights experts the legal issues that arise in this context, such as disclosure, notice, transparency, bias, and discrimination. Finally, we heard about efforts made to date to regulate these systems. The discussion focused on the Government of Canada’s Directive on Automated Decision-Making (the Directive)[3] and Algorithmic Impact Assessment (AIA)[4] as important, concrete initiatives designed to address due process/procedural fairness in AI and automated decision-making. We also looked at the New York City Automated Decision Systems Task Force as an important American example of a regulatory initiative.[5]

Attendees participated in an afternoon workshop to apply their specific expertise to these issues. Overarching themes emerged across all of the discussions. Concerns about the use of such systems were widespread and all-encompassing, though opportunities emerged as well.

Many participants commented on the need for clear disclosure of the use of new AI tools and of the rationale and objective for introducing them. Comprehensive participation in the creation and rollout of ADM systems in government decision-making was another focus. Specifically, the populations affected by the new tools should be consulted about their development, although there was some apprehension as to how such participation could best be achieved. The role of the decision-maker, the impact of new systems on the humans involved in the process, and the need for reasons explaining how a decision was made were all addressed.

Data management was another major issue. AI tools are only as good as the data they rely on. Is the data valid? Reliable? Current? Sufficient? Existing data that is biased or discriminatory will produce results that are biased or discriminatory. However, this concern could be balanced against the idea that AI tools may offer the potential to track and expose bias. Further, introducing a new method of making government decisions creates an opportunity to scrutinize existing regulatory systems.

The LCO notes that significant law reform takeaways include the following:

  • Discussions and creation of guidelines for “Ethical AI” and “AI Policy” are not sufficient. New legislation and regulations are needed.
  • The areas most urgently in need of new law include: clarity regarding liability for AI systems; rules about government procurement; and regulations around transparency and disclosure.
  • New rules or regulations should focus on government action, not technology. They should include clear definitions and should be focused narrowly so as to be effective.

The LCO is drafting a report on the event and emerging issues, which is expected for release in early 2020. For more information, please visit the LCO website at https://www.lco-cdo.org/en/our-current-projects/law-reform-and-technology/

 Susie Lindsay
Counsel, Law Commission of Ontario

_________________________

[1] For examples in the U.S., see https://ainowinstitute.org/nycadschart.pdf. For examples in Europe, see AlgorithmWatch, Automating Society – Taking Stock of Automated Decision-Making in the EU, online: https://algorithmwatch.org/wp-content/uploads/2019/01/Automating_Society_Report_2019.pdf.

[2] IRCC, “Augmented Decision-Making @ IRCC”, Presentation to the Symposium on Algorithmic Government (April 24, 2019), online: https://www.canada.ca/content/dam/ircc/documents/pdf/english/services/ai-agenda/cantin-eng.pdf (IRCC Presentation).

[3] Government of Canada, Treasury Board Secretariat, Directive on Automated Decision-Making, online: https://www.tbs-sct.gc.ca/pol/doc-eng.aspx?id=32592.

[4] Government of Canada, Treasury Board Secretariat, Algorithmic Impact Assessment, currently published as Beta v.0.7, online: https://www.canada.ca/en/government/system/digital-government/modern-emerging-technologies/responsible-use-ai/algorithmic-impact-assessment.html.

[5] See generally https://www1.nyc.gov/site/adstaskforce/index.page.
