Discussions about modernizing the justice system often focus on how to adapt technology to replicate the bricks-and-mortar experience.
But how might the institutions and decision-makers themselves adapt to work with the emerging technology?
Legal scholar Tania Sourdin talks about three primary kinds of technology in the context of the justice system:
- Supportive – things like online legal applications that support and advise people using the justice system
- Replacement – things that replace the role of people, such as e-filing technologies and online dispute resolution
- Disruptive – things that fundamentally alter the way legal professionals work, including artificial intelligence or other decision-making programs that change the judicial role
Much has been written about the way AI is encroaching on lawyers’ turf, and whether that should be a concern for the profession. Less has been written about AI taking over the role of judges as gatekeepers of justice.
In many parts of the U.S., various forms of AI have been embraced as a way to deal with court backlogs. In Los Angeles, Gina the Avatar helps people deal with traffic citations. Gina is more like an app than an algorithm, but algorithms are also used in some jurisdictions for work such as helping parole boards assess the likelihood of recidivism.
Where AI comes under increased scrutiny is when it is used to supplant the decision-making role of judges and arbitrators. Arguments against machine-based decisions range from the removal of a human from the decision-making process to the lack of transparency in the algorithms: it can be impossible to tell what biases are built into the system. The CBA Immigration Law Section commented on this issue in 2019.
Consider this: machine learning engineers and data scientists claim they can tell not only what country a coder is from by the way the code is written, but even the coder's gender. Every coder carries their own biases, conscious or unconscious. For example, a Google image search for "professional haircuts" returns images of primarily white people with straight hair, while a search for "unprofessional haircuts" returns images of primarily dark-skinned people with curly hair. The algorithm that produced those results learned to associate professionalism with white people and straight hair, reflecting the biases of the people and data behind it.
Before we can turn something as important as sentencing or legal decision-making over to AI, we have to find a way to strip this kind of bias out of the process.
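The mechanism by which bias seeps into such systems can be shown in miniature. The sketch below is purely illustrative: the feature names and skewed labels are hypothetical, and the "model" is just a majority-vote counter, but it demonstrates how a system trained on biased human judgments will faithfully reproduce that skew in its predictions.

```python
from collections import Counter, defaultdict

# Hypothetical training labels that encode a human labeller's bias.
# The proportions here are invented for illustration only.
training_data = [
    ("straight_hair", "professional"),
    ("straight_hair", "professional"),
    ("straight_hair", "unprofessional"),
    ("curly_hair", "unprofessional"),
    ("curly_hair", "unprofessional"),
    ("curly_hair", "professional"),
]

# "Train": count how often each label appears for each feature value.
counts = defaultdict(Counter)
for feature, label in training_data:
    counts[feature][label] += 1

def predict(feature):
    """Return the most frequent label seen for this feature value."""
    return counts[feature].most_common(1)[0][0]

# The model reproduces the skew in its training data.
print(predict("straight_hair"))  # "professional"
print(predict("curly_hair"))     # "unprofessional"
```

Nothing in the code mentions race or hair texture as such; the bias lives entirely in the labelled examples, which is precisely why it is so hard to detect by inspecting the algorithm alone.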
“Courts, tribunals and other dispute resolution bodies, like other segments in society, are likely to introduce AI progressively for uses ranging from judgment writing to decision-making,” the Task Force report says. “It is incumbent on us to consider the disparate impact these technologies may have on marginalized groups, in particular systemic racism against Black Canadians and Indigenous people in the criminal justice system. Finding inclusive platforms – and insisting on inclusiveness in technology itself – is the essence as we digitize the justice system.”
What other transformations might we see? Some have suggested that dispute resolution models that encourage a more active or inquisitional role by the decision-maker may be preferable. As the task force report says, “Active management by judges and administrative tribunal members can be more advantageous in the digital age and could palliate to some extent the proclivities of technologies and procedural challenges feared about moving online.”
Either way, nothing is static: the justice system, too, must evolve with the times.
CBA President Brad Regehr and Past President Vivene Salmon are Co-Chairs of the Canadian Bar Association’s Task Force into Justice Issues Arising from COVID-19. The task force will release its report at the CBA’s Annual General Meeting on Feb. 17, 2021.