A Taxonomy of Judicial Technological Competence

Earlier this month, the Canadian Judicial Council published updated ethics guidance for federally appointed judges. The new Ethical Principles for Judges substantially revises a 1998 document of the same name. Among the revisions is a caution that judges must be technologically competent. The section addressing judicial diligence and competence includes the following statement:

3.C.5 Judges should develop and maintain proficiency with technology relevant to the nature and performance of their judicial duties.

This provision on technological competence is a welcome addition to the Principles. Two years ago, I argued in a Slaw column that there should be a formally recognized duty of technological competence for Canadian judges. Since that time, the need for judicial technological competence has only increased. New, more powerful technological tools have emerged, and the COVID-19 pandemic has radically accelerated the use of some technologies by courts. Including technological competence in ethics guidance for judges is long overdue. The Principles mark an important step forward in this regard.

Now that an obligation of judicial technological competence has been recognized, there remains the question of what, exactly, this obligation entails. There will be obvious cases where judges fall far short. Take, for example, a New York judge who faced complaints for, among other things, not using his court email account for three years, using paper forms of his own design rather than a required online process for certain court matters, and failing to use a provided computer and software program for financial and case administration (h/t to Bob Ambrogi, who mentioned this example in his blog here). But what about the margins? And what about judges who want to proactively reflect upon or improve their technological knowledge and skills to ensure they are meeting their ethical obligations?

The 5As of Judicial Technological Competence

A few months ago, I wrote a column outlining a taxonomy for lawyer technological competence. Here, I do the same for judges. As with the lawyer taxonomy, my goal is forward-looking: what are the current and emerging technological issues that judges need to be aware of? And, as before, I offer an alliterative format: a “5As” taxonomy for judicial technological competence.

(1) Being alert to technologically-based risks

First, modern judges need to be Alert to technologically-based risks. For example, courts are not immune from cybersecurity attacks. As recently acknowledged by the American non-profit National Center for State Courts, “[i]n spite of good prevention efforts, every court will almost certainly face a cybersecurity incident including data breach or cyberattack.” Just last month, Alaskan courts saw their operations severely disrupted by a cyberattack that involved malware being placed on their systems. Although individual judges are not (generally) involved in decisions regarding the technical security features of a court’s digital systems, cybersecurity implicates everyone working within the court system: human errors, such as opening malicious links or attachments in emails, are notoriously responsible for the vast majority of data breaches (see, e.g., here). Judges need to be aware of the types of cybersecurity threats that their courts may face and the steps necessary to guard against them.

Beyond the common types of cyberattacks, such as phishing and ransomware attacks, new threats are emerging that require judicial attention. For example, in 2020, as many court proceedings moved online, courts were unfortunately introduced to the phenomenon of Zoom-bombing. Another set of emerging threats comes in the form of deepfakes, which involve the use of sophisticated technology to create video or audio that falsely impersonates an individual. Concerns about the potential use of deepfake evidence in courts are increasing, with at least one recently reported case of doctored audio being presented in a family law case (h/t to Benoît Dupont for this example). The contemporary context of judging is undeniably one that requires judges to be aware of how technology can be deployed in ways that threaten the security and integrity of court proceedings.

(2) Being an ethical “avatar”

Second, the technologically competent judge must maintain an ethical online presence or, in the alliterative language of this taxonomy, must understand what it means to be an Avatar judge. The use of social media platforms by judges has been the subject of commentary for many years (see, for example, this 2013 column in Slaw and this 2015 discussion paper by the Canadian Centre for Court Technology). The recently updated Principles, however, mention social media for the first time. Among other things, the Principles caution that judges should:

  • “be aware of how their activities on social media may reflect on themselves and upon the judiciary and should be attentive to the potential implications for their ability to perform their judicial role”;
  • “exercise great caution in their communications and associations within these networks, including expressions of support or disapproval”; and
  • “be vigilant to avoid inappropriately acquiring or receiving out-of-court information related to the parties, witnesses or issues under consideration in matters before them.”

Although attention to judges using social media is not new, the Principles reflect a change in approach and tone. As recently as a decade or so ago, it was common to find opinions – even from sources otherwise considered progressive on technology – suggesting that judges shouldn’t be using social media platforms at all or, at the very least, that they needed to be extremely restrictive in their approach. Implicit in the revised Principles is an acknowledgement that judges do use social media and that doing so is okay, although there are trouble spots to watch out for. Indeed, we are perhaps approaching the point where social media use by judges might be seen as not only inevitable but also productive. The Principles note that it is important for judges not to be unnecessarily isolated from their communities. They also acknowledge that judges are well placed to educate the public about the courts and the administration of justice. Engaging with social media can be a way for judges to increase connections with their communities and make both themselves and the law more accessible.

In many ways, the ethical concerns around judicial social media use echo those that arise “in real life”. Whether interacting online or in person, judges need to guard against being improperly influenced, engaging in improper activity in their private lives, or inappropriately acquiring information relating to an ongoing matter. The Principles themselves note that “social media activities are subject to the overarching principles that guide judicial behaviour.” At the same time, however, social media brings new dynamics – particularly the ease with which things can be captured or shared online, and the ways people can become associated with others (through, for example, friending, liking, or retweeting) – that require judges to understand the mechanics of how their social media platforms work. This is an area where judges can run into trouble. For example, in 2020, a new hearing was ordered in a Wisconsin child custody case after the judge had accepted a Facebook “friend request” from the mother while the case was ongoing, and she had interacted with his social media account on numerous occasions (see here and here for more details). While there is an important upside to judges using social media, ongoing competence training, adherence to best practices, and diligence in monitoring accounts are required.

(3) Ensuring adequate acquaintance with relevant technological tools and issues

Third in the taxonomy is the need for judges to be adequately Acquainted with the technologies and technological issues that arise in their courts. The advent of virtual hearings requires judges to have adequate facility and comfort with the tools being used for those hearings, like Zoom. Aside from being able to function in these platforms (knowing, for example, how to share one’s screen or move between rooms), appropriate caution also needs to be exercised, given that moments spent on a virtual platform can sometimes feel more private than they actually are. For example, there have been a handful of cases where judges made offhand or sotto voce comments about witnesses or lawyers during virtual hearings that were then heard by those very witnesses and lawyers (see here and here for examples).

Technological competence can also be relevant in respect of evidence that may be tendered in court. In my column about lawyer technological competence, I noted the need for litigators to become acquainted with various types of algorithmic evidence, such as probabilistic genotyping and facial recognition. The same holds true for judges. Aside from evidence that engages these types of advanced technology, there is a whole set of other evidentiary issues arising from now-commonly used technology, ranging from interpreting emojis to navigating e-discovery technology and practices (see here for a survey of American federal court judges in which 70% of respondents indicated that they needed more training in the area of e-discovery). These realities reinforce the importance of the statement in the Principles that judges “should develop and maintain proficiency with technology relevant to the nature and performance of their judicial duties.”

(4) Using technology to audit patterns and practices

Fourth, in the near-to-medium future, judges will likely have access to new technological tools that will allow them to learn more about meaningful patterns in their judicial behaviour. In this sense, modern judges can be seen as Auditors. In a recent article co-authored with Professor Jena McGill, I canvass the implications of the impending broad availability of judicial analytics tools. Currently available commercial judicial analytics tools already market themselves as being able to identify such things as: the language and precedents that a judge finds most compelling; the “specific logic” that a judge tends to use; how likely a user’s case is to prevail before a given judge; and how long a judge takes to decide a certain type of case. Such tools also have the potential to identify patterns that suggest bias in judicial decision-making. Earlier this year, LexisNexis launched its Context tool in Canada, which it claims “applies unique, breakthrough technology to reveal the specific phrases persuasive to judges in cases similar to yours.” In our article, we predict that judicial analytics tools will become more powerful and readily accessible in the near-to-medium future, resulting in unprecedented public insight into judges and the work of judging.

In addition to increased public insight, judicial analytics tools may give judges opportunities to glean insights into their own patterns and practices of which they might otherwise be unaware. These insights could be used by judges to engage in self-reflection or to modify their judging behaviour, where appropriate. To be sure, current judicial analytics tools have limitations that significantly constrain the extent to which judges can meaningfully use them to “audit” their practices. As we discuss in our article, however, such limitations are unlikely to be permanent. As judicial analytics tools become more powerful and accessible, the remaining challenge may be the degree to which the judiciary is receptive to the information these tools provide. Regardless of whether judges embrace such tools, it is likely that the public will be drawing and publicizing its own insights and conclusions. It may be better for judges to engage with these tools proactively and critically, as warranted.
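
None of the commercial analytics products disclose their internals, but the core idea of “auditing” one’s own patterns can be illustrated simply: aggregate a set of decision records and compute descriptive statistics by category. Below is a minimal sketch in Python, using entirely invented data and field names, of the kind of self-audit a judge or court administrator might run; real tools operate on far richer data and far more sophisticated models.

    # Minimal illustrative self-audit over hypothetical decision records.
    # All data and field names here are invented for illustration only.
    from collections import defaultdict
    from statistics import mean

    # Hypothetical records: (case_type, days_to_decision, outcome)
    decisions = [
        ("family", 45, "granted"),
        ("family", 60, "denied"),
        ("family", 30, "granted"),
        ("civil", 120, "granted"),
        ("civil", 95, "denied"),
    ]

    # Group the records by case type.
    by_type = defaultdict(list)
    for case_type, days, outcome in decisions:
        by_type[case_type].append((days, outcome))

    # Report average time-to-decision and grant rate per case type.
    for case_type, rows in sorted(by_type.items()):
        avg_days = mean(days for days, _ in rows)
        grant_rate = sum(o == "granted" for _, o in rows) / len(rows)
        print(f"{case_type}: avg {avg_days:.0f} days to decide; {grant_rate:.0%} granted")

A pattern surfaced this way (say, a markedly lower grant rate in one category of case) is only a starting point for reflection, not proof of bias: base rates and case mix matter.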

(5) Being attentive to how automated decision-making and artificial intelligence are used in the justice system

Fifth and finally, modern judges need to be Attentive to how automated decision-making and artificial intelligence (AI) are being used in the justice system. In August 2019, the American Bar Association House of Delegates adopted a resolution that:

[U]rge[d] courts and lawyers to address the emerging ethical and legal issues related to the usage of artificial intelligence (“AI”) in the practice of law including: (1) bias, explainability, and transparency of automated decisions made by AI; (2) ethical and beneficial usage of AI; and (3) controls and oversight of AI and the vendors that provide AI.

How judicial systems are incorporating or may in the future incorporate AI tools into court processes has been a topic of considerable discussion in Canada and internationally. The Law Commission of Ontario recently published a report on regulating AI that serves as a great primer. As noted in that report, there are already examples of governments around the world using AI or automated decision-making to, among other things, evaluate access to public services or benefits, determine immigration eligibility, and recommend parole eligibility or conditions.

The issue of potential bias in AI tools was thrust into the spotlight a few years ago when ProPublica published a report detailing “significant racial disparities” in a bail algorithm used in the United States, including the observation that “the formula was particularly likely to falsely flag black defendants as future criminals, wrongly labeling them this way at almost twice the rate as white defendants.”[1] Serious concerns also arise in relation to transparency and due process. Understanding how and why an AI tool yielded a particular result can be challenging for a variety of reasons. The operations of AI tools may be opaque (the “black box” issue) or shielded because of proprietary algorithms. AI tools don’t and can’t give reasons for a decision in the same way that judges can and do in court processes. Being aware of the risks (and potential benefits) of using AI and automated decision-making in the justice system is an important part of the judiciary’s role in fostering and maintaining the equal and fair administration of justice in Canada.
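
To make the “twice the rate” finding concrete: the statistic at issue is the false positive rate, i.e., the share of people who did not reoffend but were nonetheless flagged as high risk. The short Python sketch below uses invented numbers, not data from the actual study, purely to show the arithmetic behind that kind of group comparison.

    # Illustrative computation of false positive rates by group.
    # The records are invented to show the arithmetic, not drawn from
    # ProPublica's actual analysis.
    def false_positive_rate(records):
        """Share of non-reoffenders who were flagged as high risk."""
        non_reoffenders = [r for r in records if not r["reoffended"]]
        flagged = [r for r in non_reoffenders if r["flagged_high_risk"]]
        return len(flagged) / len(non_reoffenders)

    # Two hypothetical groups, none of whom went on to reoffend.
    group_a = ([{"flagged_high_risk": True, "reoffended": False}] * 4
               + [{"flagged_high_risk": False, "reoffended": False}] * 6)
    group_b = ([{"flagged_high_risk": True, "reoffended": False}] * 2
               + [{"flagged_high_risk": False, "reoffended": False}] * 8)

    print(f"Group A false positive rate: {false_positive_rate(group_a):.0%}")  # 40%
    print(f"Group B false positive rate: {false_positive_rate(group_b):.0%}")  # 20%

Here, Group A’s false positive rate is double Group B’s even though the tool could still look “accurate” on aggregate measures, which is precisely why explainability and transparency matter when such tools touch the justice system.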

Conclusion

The CJC’s recognition of judicial technological competence in its updated Principles is an important and timely development. Indeed, this aspect of the updated Principles has already been lauded internationally, with American lawyer and legal journalist Bob Ambrogi writing in his blog that “[p]erhaps those who set ethical standards for judges in the United States will take a cue from their Canadian counterparts and institute the duty of tech competence. Especially after the past year, no one can argue it is not needed.”

Here, I have suggested a 5As taxonomy of judicial technological competence, under which modern judges need to: (1) be Alert to technologically-based risks; (2) understand what it means to be an Avatar judge (i.e., maintain an ethical online presence); (3) ensure that they are adequately Acquainted with relevant technological tools and issues; (4) use technology to Audit their judicial patterns and practices; and (5) be Attentive to how automated decision-making and artificial intelligence are used in the justice system. How we understand the contents and contours of judicial technological competence will no doubt continue to evolve. It is clear, however, that the issue of judicial technological competence is here to stay. A continuing conversation is needed about what this obligation requires.

___________

[1] As helpfully summarized in a different Law Commission of Ontario report, titled The Rise and Fall of AI and Algorithms in American Criminal Justice: Lessons for Canada, “The ProPublica analysis was vigorously debated on both methodological and policy grounds. Nevertheless, the ProPublica article ‘supercharged’ the emerging debate on risk assessments and racial bias.”

Comments

  1. The most important sentence in Amy Salyzyn’s post above is the last one, which points to the vagueness of the several official requirements for lawyers and judges to have “technical competence.” If those requirements include sufficient knowledge of the technology that produces the most frequently used types of evidence for legal proceedings and services, they are unworkable. The technical literature that lawyers and judges would have to read is too vast and fast-changing, and we don’t have the technical training necessary to understand it.
    And therefore, lawyers’ CPD/CLE requirements to maintain competence cannot adequately provide it. The technology’s fast-changing nature means, for example, that a cross-examination challenging a particular electronic source of evidence this year probably won’t be adequate next year. The advice of technical experts is required, which few clients can afford.
    “Electronic” means software source code, and source code has high error rates. As a result, approximately 95% of the pop-up messages we frequently receive stating, “an update has been installed, please re-start your computer to make it operative,” involve corrections to errors in software. And, to reduce production costs, insufficient development time is now spent eliminating the likely sources of such errors in new electronic systems and devices before they are marketed. Instead, the errors are dealt with by a marketing strategy touting “new editions and improved models” that one is strongly urged to buy; see Professor Sean F. Johnston’s Techno-Fixers: Origins and Implications of Technological Faith (McGill-Queen’s University Press, 2020).
    Several of the articles listed on my SSRN Author’s Page deal with these issues and with the complexities of the electronic technology that produces very frequently used types of evidence, such as business records and cellphone tracking evidence. That page is at:
    https://papers.ssrn.com/sol3/cf_dev/AbsByAuth.cfm?per_id=1398484 .
    SSRN is an open-access repository; its papers are freely available to download.