Unregulated Tools, Unyielding Duties: AI Risk Management for Canadian Professionals

In my last column, I moved away from regulatory analysis to explore how artificial intelligence may affect specific functions within the legal profession. In this piece, I return to the theme of risk and broaden the discussion to consider the challenges AI presents across all regulated professions.

The rapid development of generative artificial intelligence has already begun to reshape practice across a wide range of professions. For regulated professionals in Canada, including lawyers, physicians, engineers, and others governed by statutory, ethical, and fiduciary duties, these advances bring both significant promise and considerable risk. Legal and regulatory frameworks, however, are not keeping pace. Within this legislative gap, professionals remain fully bound by their existing professional obligations, even as the tools they are expected to evaluate and manage become more complex and less transparent.

This article considers the implications of this regulatory gap, examines the risks associated with the uncritical adoption of AI, and proposes practical risk management strategies for professionals and their regulatory bodies. As innovation continues to accelerate, the need for thoughtful scrutiny at the intersection of technological adoption and professional responsibility becomes increasingly pressing.

The Regulatory Gap in Canada: Professional Duties in an Evolving Landscape

As has been previously discussed in this column, Canada currently lacks a comprehensive legislative framework governing the use of artificial intelligence. The most developed initiative to date, the proposed Artificial Intelligence and Data Act (AIDA), was introduced as part of Bill C-27 and sought to implement a risk-based regulatory regime for “high-impact” AI systems. Although AIDA represented a significant legislative step forward, it did not proceed to enactment before Parliament was prorogued in early 2025 and has not, at this time, been reintroduced.

In the absence of specific legislation, regulated professionals must continue to rely on pre-existing legal and ethical frameworks to guide their conduct. These include statutory privacy obligations, fiduciary and common law duties, and the professional codes administered by self-regulatory bodies. The elevated responsibilities imposed on professionals, grounded in the public interest and in the protection of vulnerable individuals, remain fully operative and arguably become more salient as professionals navigate the uncertainties of emerging technologies. While most regulatory bodies have begun to examine the implications of AI, many have yet to issue detailed or binding guidance. As a result, professionals must act with heightened caution and independent judgment. The current regulatory gap does not diminish their legal or ethical duties; rather, it increases the importance of deliberate, defensible, and accountable decision-making in the adoption of AI.

Why Regulated Professionals Must Exercise Heightened Caution with AI

Regulated professionals occupy a distinctive position of trust and accountability within Canadian society and are subject to ethical, statutory, and fiduciary obligations that exceed those of non-regulated actors. These obligations are designed to protect clients and patients, uphold the integrity of professional services, and preserve confidence in self-regulating systems. When professionals adopt new technologies, particularly those that are novel, powerful, and unregulated, such as generative artificial intelligence, they must assess the associated risks in light of their heightened professional responsibilities. The fact that a tool is widely accessible or commercially available does not absolve professionals of the obligation to meet their elevated standard of care.

Unlike private enterprises that may deploy AI systems within the bounds of general legal norms, regulated professionals remain personally responsible for the outcomes of their work, including results generated or influenced by AI. This includes an expectation that professionals will understand the capabilities and limitations of the tools they use, preserve the confidentiality of sensitive information, and ensure that independent professional judgment remains central. Generative AI tools, especially those that operate on opaque or proprietary platforms, present significant risks. These risks include the potential generation of false or misleading information, inadvertent disclosure of confidential data, and the uncritical automation of decisions that require human oversight. In professional settings, such outcomes may result in disciplinary proceedings, civil liability, reputational harm, or injury to clients and the broader public.

Recent incidents make clear that the risks associated with AI use in professional practice are not being taken as seriously as they should be. In the week before this article was written, two high-profile examples of AI-related professional misconduct were reported in the media. In the first, global consultancy Deloitte faced public criticism after a report prepared for the Australian government was found to contain factual errors attributed to the uncritical use of generative AI. In the second, the Alberta Court of Appeal addressed concerns about a lawyer who had retained a third-party contractor to prepare a factum that seemingly contained AI-generated errors. The Court affirmed that, regardless of delegation, lawyers remain fully responsible for materials filed under their name, stating: “…if a lawyer engages another individual to write and prepare material to be filed with the court, the lawyer whose name appears on the filed document bears ultimate responsibility for the material’s form and contents…” (Reddy v Saroya, 2025 ABCA 322 at para 83).

As public awareness of AI-related harms grows, it is likely that regulators will take an increasingly rigorous approach to oversight of AI use within the professions. Accordingly, regulated professionals must approach AI-assisted work with the same diligence, care, and scrutiny that govern all acts performed under the authority of a professional licence.

Risk Management Strategies for Regulated Professionals and Oversight Bodies

In the current environment, characterized by rapid technological change and limited regulatory guidance, regulated professionals and their governing bodies can rely on well-established risk management principles to guide the ethical and responsible integration of AI into practice. Risk management in this context refers to an ongoing process of identifying, assessing, mitigating, and monitoring the risks associated with the use of AI in professional environments. The overarching objective is to ensure that AI strengthens, rather than compromises, professional competence, client or patient trust, and the protection of the public.

For individual professionals, effective risk management begins with a thorough evaluation of the AI tools under consideration. This includes understanding the tool’s capabilities and limitations, reviewing its outputs before relying on them, and implementing internal safeguards that maintain confidentiality and uphold professional judgment. Clear usage protocols should address data protection, transparency with clients or patients, and documentation of AI-assisted decisions. In some cases, informed consent may require disclosing the use of AI. Maintaining records of these practices supports professional accountability and provides a defensible record in the event of an inquiry or complaint.

Professional regulators also have an essential role in facilitating responsible practice. Although updated regulatory frameworks may take time to develop, interim measures can be adopted. These may include the issuance of practice advisories, updates to codes of conduct, and the development of ethical guidance specific to AI use. In jurisdictions where multiple regulators confront similar challenges, this may also include collaborative efforts to share model policies and risk management strategies.

By anchoring the adoption of AI in proven risk management principles, both professionals and regulators can respond to technological change in a manner that is practical, proportionate, and aligned with the core values of the professions. This approach does not require deferring action until legislation is enacted; rather, it calls for the application of existing ethical and regulatory tools to a new and evolving set of circumstances.

Conclusion

In the absence of comprehensive legislation, the responsibility for ethical and legally defensible AI use falls squarely on the shoulders of individual professionals and their regulatory institutions. The allure of efficiency and innovation cannot outweigh the foundational obligations that define regulated practice: competence, confidentiality, accountability, and the protection of the public interest. It is incumbent upon professionals to scrutinize the tools they employ and to apply the established principles of risk management to this emerging field. Likewise, regulators must provide timely guidance to assist professionals in navigating the complexities of AI within existing professional standards. In an era when technological innovation exceeds the pace of regulatory and ethical oversight, professionals must recognize that caution is not merely advisable; it is a professional imperative.
