
Risk Management Revisited (Again): Navigating the Frontier of AI Regulation

I am very happy to be writing for SLAW again after a 10-year absence. I ended my time with SLAW in 2014 writing about general practice management issues, and I return in 2024 with a specific focus on risk management for artificial intelligence. My last column, posted in 2014, bore the title “Risk Management Revisited”. In that post I briefly discussed the value of risk management for law firms and set out some basic steps that firms could take to begin the risk management process. I also observed at the time that “[u]nfortunately, in my experience, while lawyers play the role of risk managers for their clients, the legal services industry itself has been slow to adopt formal risk management processes”.

Much has changed in the world of risk management in the intervening decade. Among other events, COVID-19 significantly disrupted the flow of commerce and brought into stark relief the value of planning for unexpected events. Legal service providers who had adopted formal risk management processes were better equipped to deal with the fallout of COVID-19, while those who had not were made keenly aware of the importance of considering risk in advance.

More recently, another significant development has brought risk management to the forefront of thinking for law firm managers: the advancement and regulation of generative artificial intelligence. The release of ChatGPT in November 2022 revealed to the public both the great potential and the significant risks posed by artificial intelligence, and governments the world over are now grappling with how to regulate this important technology. In Canada, the Artificial Intelligence and Data Act (AIDA) was tabled as part of Bill C-27, the Digital Charter Implementation Act, 2022. The primary aim of the AIDA is to manage the risks associated with AI systems that could have profound implications for individual rights and societal norms in Canada. At its core, the AIDA imposes a risk management framework on various actors within the AI landscape, including those who design, develop, manage, and make AI systems available for use. The risk-based approach adopted in the AIDA aligns with similar approaches evolving internationally, including work in the European Union and within the Organisation for Economic Co-operation and Development (OECD). The AIDA is a complex framework, and I will be discussing specific aspects of it in future columns, but for the purposes of this post I would like to briefly summarize the risk management framework being contemplated.

The AIDA is a work in progress and has already undergone amendments from its original text. In its current form, however, the AIDA would require that proactive measures be adopted to identify, assess, and mitigate risks. The framework is guided by a set of core normative principles: human oversight and monitoring, transparency, fairness and equity, safety, accountability, and validity and robustness. Under the AIDA, certain commercial users of AI will be obligated to develop appropriate internal governance processes and policies. Based on my understanding of the current version of the AIDA, these processes and policies align well with established risk management practices that I and many others in the field have deployed over the past decades.

Many of the AIDA requirements in this regard will apply only to “high-impact systems”, that is, systems that have the potential to cause significant harm to individuals or society. The AIDA proposes seven initial classes of systems that would be deemed high impact, including systems related to employment decisions, the provision of services, biometric information processing, content moderation, healthcare and emergency services, decision making by courts or administrative bodies, and law enforcement. It is important to note that this is not a closed list, and a number of factors are also set out to assess whether additional systems could be considered high impact. Whether the high-impact designation will apply to the internal use of AI systems by law firms remains an open question in my mind and will likely depend on a number of contextual factors.

In its companion document to the AIDA, the government notes that “[t]he AIDA is one of the first national regulatory frameworks for AI to be proposed.” Accordingly, the process set out involves extensive consultation, with a projected coming into force no sooner than 2025. In the meantime, there will be considerable discussion of the relative merits of regulating AI in the manner proposed. Personally, I have some definite concerns about certain aspects of the AIDA; however, I was pleasantly surprised by, and continue to be supportive of, the adoption of longstanding risk management principles in this newly developing field. I look forward to continued dialogue with SLAW readers and contributors on this important topic.

_________

Disclosure: Generative AI was used in the development of this post.

Comments

  1. I always enjoyed your columns, Michael. Welcome back!!

  2. Michael Litchfield

    Thank you, Kari. Happy to be contributing again and it’s great to hear from you!

  3. Michael Jakeman

    AI use and regulation are exciting topics but challenging to follow. The background you’ve provided is appreciated and I hope that there is more information like this to keep practicing lawyers updated on developments important to the profession. It’s a brave new world.