Over the past decade, many commentators, myself included, have argued that lawyers should have a duty of technological competence. This duty now exists: in October 2019, the Federation of Law Societies of Canada amended its Model Code rule on competence to include explicit reference to technological competence. Several provincial and territorial law societies have incorporated this amendment into their respective codes, and more will hopefully soon follow suit.
The fact that there now exists a formal duty of technological competence raises the question of what, exactly, this duty entails. What does it require of lawyers? In a strict sense, these questions will only be answered if and when Canadian law societies issue specific guidance or bring public disciplinary proceedings against lawyers for alleged breaches of the duty of technological competence. In the meantime, I’ve been thinking about how we might frame our understanding of a lawyer’s duty of technological competence. This column offers an initial, 6-part (alliterative!) taxonomy for thinking about technologically competent lawyering.
First, the need to be an Automated Lawyer. Here, I am referring to the use of technology by a lawyer to work more efficiently and, in some cases, to generate better results. This category requires lawyers to be proficient in using things like practice management tools to assist with billing and conflicts checks, as well as automated forms. There is an obvious connection between a lawyer automating some tasks and long-standing obligations to provide diligent and efficient services to clients (Model Code r. 3.2-1, r. 4.1-1) and to only charge fair and reasonable fees (Model Code r. 3.6-1). While using such tools is not yet a mandatory part of legal practice, it will likely be considered obligatory in the future, especially given the added Model Code commentary noting that “a lawyer should develop an understanding of, and ability to use, technology relevant to the nature and area of the lawyer’s practice and responsibilities.” The rise of “computerized” legal research as an integral part of competent legal practice is a helpful example of how lawyers must shift what tools they use as different tools become available and more ubiquitous. As noted in a 2010 Alberta Court of Queen’s Bench decision:
In some of the older cases, computer-assisted research was treated as an alternative to hard-copy legal research and it was held that counsel should be responsible for the cost of choosing this alternative…
With great respect to those decisions made at an earlier time, I think that the view of computerized legal research as a mere alternative is no longer consonant with the reality of current legal practice. Such research is now expected of counsel, both by their clients, who look to counsel to put forth the best possible case, and by the courts, who rely upon counsel to present the most relevant authorities. Indeed, it might be argued that a lawyer who chooses to forgo computerized legal research is negligent in doing so. This is particularly so given that many law firms and indeed governments are now cancelling hard copy subscriptions to legal resources in favour of the electronic versions. The practice of law has evolved to the point where computerized legal research is no longer a matter of choice.
In 2020, there are surely other automation tools available to lawyers that should be considered “no longer a matter of choice.” Many lawyers now automate aspects of their practices and have done so for some time.
Second, the duty of technological competence should include an obligation to be an Alert Lawyer. Here, we shift away from thinking about the potential benefits of using technology and towards potential risks. Technology-based risks can flow from malicious or benign sources. With respect to malicious sources, there is an increasing awareness among lawyers of the need to guard against third parties using electronic means to inappropriately access or take confidential client information or trust funds. Nonetheless, we continue to regularly hear about law firms suffering cyber-ransom or phishing attacks. On the benign front, while there is more awareness about the need for lawyers to avoid things like the inadvertent disclosure of metadata and protecting client information when crossing the border with electronic devices, it is not clear how many lawyers actually follow best practices. The area of technology-based risks also has clear connections to long-standing lawyer obligations: this time, the obligation to protect confidential client information (Model Code r. 3.3-1). Although we do not (yet?) have any law society regulations that mandate specific technical safeguards, there is surely an existing obligation on lawyers to be alert and to safeguard client information and trust funds against technology-based risks.
Third, the technologically competent lawyer must be cognizant of what it means to be an Avatar Lawyer, which is meant to refer to a lawyer’s online presence. For many years, one of the main focuses in this area was lawyers and social media. While social media is still an important topic (see, for example, the references to social media in the Advocates’ Society’s recent civility and professionalism guidance), the issue of digital delivery of legal services also requires significant attention. The digital delivery of legal services and its attendant benefits and risks have been on the legal profession’s radar for some time, but the COVID-19 pandemic has moved the issue up the ladder of importance. What constitutes appropriate professional practice in relation to virtual court appearances, remote commissioning and notarizing, and client identification and verification is now firmly on the agenda of courts, governments, and law societies across the country.
Although questions remain about which current practices will extend to the post-pandemic context, it is clear that we will not be returning to an entirely pre-March 2020 world when it comes to digital legal services. Indeed, some changes have already been permanently codified (see, for example, the discussion here). Going forward, lawyers will have to adjust to doing more of their work over technological platforms. For many, this will mean developing new skills and complying with a new obligation to keep closely abreast of technological changes relevant to the digital delivery of legal services.
These first three categories have fairly firm connections to what it means to be a technologically competent lawyer in 2020. How courts and law societies address lawyers who fail to meet the obligations that fall within these categories will cement the boundaries of lawyers’ contemporary duty of technological competence. The next three dimensions of technologically competent lawyering discussed below are more nascent, in my view, and relate to the increasing use of artificial intelligence in the legal domain.
The fourth type of lawyer in my proposed typology is the Augmented Lawyer. In using this label, I borrow from a recent article that explores how AI will reshape the work of lawyers (h/t Richard Moorhead). In using the term “augmented lawyering” I mean to refer to lawyer-use of AI-enabled tools that perform what might be termed “judgement-based” tasks that lawyers would have previously done themselves. An example is e-discovery tools that use predictive coding to deploy “complex algorithms to mimic the document selection process of knowledgeable, human document review.” In the area of legal research, there is an increasing array of AI-powered tools that do things such as generate answers to legal questions inputted by users or offer predictions about how a court would rule on a particular set of facts. Additionally, there are several AI-powered contract analysis tools that are commercially available, which can quickly analyze contracts in order to highlight pertinent information such as particularly relevant terms or missing clauses (here is one example).
There are a variety of emerging ethical questions about lawyer-use of AI-powered tools. Will the use of such tools eventually be necessary in order to meet the standard of technological competence? Is there a risk of inadvertent sharing of client information if client information is embedded in the data inputted into legal AI tools? What are the requirements of a lawyer’s duty of supervision if they “delegate” legal tasks to an AI-powered tool, the operations of which may be opaque (the “black box” issue) or shielded because of proprietary algorithms? These ethical questions are fairly new but are relevant to tools that some lawyers are actually using and that are likely to see increasing use in the near to medium term. I’ve outlined some initial thoughts on these questions in a chapter on AI and Legal Ethics that will be published soon. Also, an excellent overview of ethical and regulatory issues which arise in relation to lawyer-use of AI can be found in the Law Society of Ontario’s November 2019 Technology Taskforce Report.
Fifth in the taxonomy is the need to be an Acquainted Lawyer. Here, I am thinking of the increasing need for lawyers to be knowledgeable about emerging technologies in order to effectively represent their clients. This category is inspired by a “Litigating Algorithms” panel that was part of the Law Society of Ontario’s 2019 Special Lectures on Innovation, Technology and the Practice of Law, which included discussion of “how criminal and civil litigators should think about algorithmic evidence (e.g., probabilistic genotyping, risk assessments, predictive policing, facial recognition), including what disclosure to request, what experts to call, how to conduct an admissibility voir dire, and what remedies to ask for” and news of a forthcoming book of the same title. As Jill Presser has observed, a variety of challenges “arise when lawyers want to ensure clients can meaningfully test the reliability of AI tools”, including challenges relating to technological literacy:
[O]ver time, the law society is going to require that we acquire greater and greater technological literacy, but even with advanced technological literacy for justice system participants we’re going to need to know enough, as sort of a baseline threshold, to be alive to when there are issues that need further inquiry and enough to know when it’s time to call in the experts. It’s going to be essential for us to call in those experts when the time comes.
The context of litigating algorithms is an example of how lawyers are going to increasingly need to become acquainted with complex technological tools and their outputs in order to do their jobs properly. Not only will lawyers need some baseline technical knowledge, but they will also need to know how and when to bring in necessary expert assistance.
Sixth, and finally, a technologically competent lawyer is an Attentive Lawyer. In August 2019, the American Bar Association House of Delegates adopted a resolution that:
[U]rge[d] courts and lawyers to address the emerging ethical and legal issues related to the usage of artificial intelligence (“AI”) in the practice of law including: (1) bias, explainability, and transparency of automated decisions made by AI; (2) ethical and beneficial usage of AI; and (3) controls and oversight of AI and the vendors that provide AI.
This resolution points to a responsibility on the part of lawyers in relation to AI that extends beyond using technology in one’s practice or knowing about a technology that might impact a client’s case. In recent years, increasing attention has been drawn to how uses of AI and algorithms by police, administrative decision-makers and courts can reinforce and even generate systemic biases and discriminatory practices (see here and here for recent Canadian reports). The need to be attentive to these issues does not simply reside with government and policy officials who determine when and how such tools should be used. Lawyers also have an ethical duty to “try to improve the administration of justice”, which includes “a basic commitment to the concept of equal justice for all within an open, ordered and impartial system” (Model Code r. 5.6-1 and associated commentary). Being aware of the risks (and potential benefits) of using AI and algorithms in the justice system will soon be part of the legal profession’s obligation as champions and caretakers of the equal and fair administration of justice in Canada (if it isn’t already!).
The Canadian duty of technological competence for lawyers is only one year old. What exactly this duty encompasses and demands of lawyers remains to be seen and will, no doubt, continually evolve. I’ve suggested that we might think of lawyer technological competence through a “6As” taxonomy – modern lawyers need to be Automated, Alert (to technological risks), operate as Avatars (i.e. competently deliver services digitally), use AI to Augment their legal practices, be Acquainted with emerging AI technologies, and be Attentive to how AI is being used in the justice system. What do you think?
Charles Yablon & Nick Landsman-Roos, “Predictive Coding: Emerging Questions and Concerns” (2013) 64:3 S.C.L. Rev. 633 at 633.