Column

Trust No AI?: Updating the Duty of Competence for the Modern Lawyer

Author: Ana Qarri, JD/BCL Candidate (2021) at McGill Faculty of Law

The fear of being replaced by “robots” is not unique to our profession. Automation is predicted to impact even highly skilled workers. But the legal profession is well placed to ride the waves of artificially intelligent systems with confidence rather than panic.[1] We should not be concerned about being replaced; what should concern us are our AI assistants, particularly those marketed as case or litigation prediction tools.

The legal profession should embrace AI tools that improve efficiencies, access to justice and results for our clients. However, we must set thoughtful norms about how new and old lawyers alike should engage with these tools. Most pressingly, our duties to our clients should be clear: When should we be allowed to use these tools? When should we double-check their results? Where does liability fall if a mistake is made that harms our clients?

There is a growing trend of professional regulatory bodies adopting a duty of technological competence into their codes of conduct. This duty expands on the well-established duty of competence.[2] These norms, as they stand, are insufficient to protect both lawyers and clients; they were designed with information technologies in mind. AI tools pose novel risks, and as client-facing professionals, we have a duty to engage with the implications of these tools in our practice.

Among other types of predictions, legal analytics tools aim “to anticipate the chances of success of a case or for example the amounts of compensation in civil proceedings”.[3] I offer an example from Canada: Blue J Legal’s Tax Foresight and Labour and Employment tools predict the likely outcome of cases in these areas of law based on facts entered by the user. Practitioners in Canada’s Department of Justice have been among the tools’ early adopters.

Other tools, like Court Analytics (built by US-based Loom Analytics), claim to predict the likelihood of success before a judge. Software like Lex Machina and Motion Kickstarts provides predictive insights on “opposing lawyers, law firms, parties, judges” and identifies “which arguments … are likely to be most successful” before a specific judge.[4] It is worth noting that most predictive tech does not live up to the promises of the “futuristic headlines” it inspires.[5] While an AI assistant is not yet a staple for the average lawyer, predictive tools are increasingly leveraged in legal practice.[6]

Some argue that as artificially intelligent tools “become more accurate, lawyers may not only come to rely on [them], but may be legally required to consult these tools as part of their due diligence” [emphasis added].[7] This underscores one aspect of the duty of technological competence: that lawyers must adopt technologies that are essential to the competent representation of their clients.

It would be absurd, however, to suggest that a doctor must use a new technology yet remain uninformed of its risks. The same imperative exists in legal practice. Like the doctor-patient relationship, the integration of predictive tools in legal counseling must account for existing information asymmetries, power dynamics, and the vulnerability of the client who relies on our competent representation.[8]

AI tools sometimes risk amplifying existing patterns of inequality.[9] Consider guilty pleas as a concrete example. Many accused persons plead guilty to avoid expensive and time-consuming trials. A black box AI model will pick up on this correlation, but it cannot tell us about what scholar Rich Caruana calls the “unknown unknowns”, such as the fact that some groups may be more likely to plead guilty due to experiences that heighten their aversion to engaging with the justice system. This is both a problem of bias built into the algorithm, often unintentionally,[10] and an issue of explainability in artificially intelligent systems.

Legal professionals who use AI tools will also face concerns of automation bias: the tendency of humans to defer to machine decisions.[11] Lawyers may not easily “second guess” the suggestions that AI tools make or notice answers that inadvertently perpetuate existing biases. If a lawyer cannot view the model an AI tool is based on, should they be responsible for determining whether its inferences are reasonable? Even if they have access to the model, should they be required to understand its inputs and outputs on a technical level?

Demanding AI expertise from lawyers would defeat the purpose of introducing these tools in the first place. In many cases, it may be functionally unrealistic or impossible for “lawyers to check whether the software ‘gets it right’.”[12] The responsibility then shifts to professional oversight bodies to regulate the use of predictive AI in legal practice in a realistic manner.

The United States and Canada have each implemented a duty of technological competence. The American Bar Association’s Model Rule 1.1 [Comment 8] requires lawyers to keep abreast of the “benefits and risks associated with relevant technology”. The New York Rules of Professional Conduct interpret this duty to apply primarily to “technology the lawyer uses to provide services to clients or to store or transmit confidential information”. This suggests a concern with information technologies that pose risks to solicitor-client confidentiality.

Canada’s Model Code mirrors this approach: Commentary 4A of Rule 3.1-2 requires awareness of “the benefits and risks associated with relevant technology, recognizing … [the] duty to protect confidential information”. Neither the Canadian nor the American duty of technological competence has been applied to AI tools.[13] The European Union, for its part, does not appear to have an equivalent duty.

It will become increasingly important for lawyers to understand the risks of AI tools, as well as the shortcomings and limitations that flow from the data that trains them and the humans who design them.[14]

The information revolution challenged our duty of confidentiality; AI will challenge our duty of competent representation. Bringing the regulation of predictive tools in line with the already shaky concept of competence will require an ongoing commitment to responsive implementation. However difficult the task, it is at the core of the profession’s fiduciary duties to clients, whose right to competent representation should not be diluted as a result of technological complexity.

_________________


[1] See e.g. Daniel Martin Katz, “Quantitative Legal Prediction—or—How I Learned to Stop Worrying and Start Preparing for the Data-Driven Future of the Legal Services Industry” (2013) 62 Emory LJ 909 at 941.

[2] Alice Woolley, “The Lawyer as Fiduciary: Defining Private Law Duties in Public Law Relations” (2015) 65:4 UTLJ 285.

[3] Fabrice Muhlenbach & Isabelle Sayn, “Artificial Intelligence and Law: What Do People Really Want?” (Paper presented at ICAIL, Montreal, QC, 17–21 June 2019).

[4] Mark K Osbeck, “Lawyer as a Soothsayer: Exploring the Important Role of Outcome Prediction in the Practice of Law” (2018) 123:1 Penn State Law Review 41 at 86.

[5] Frank Pasquale & Glyn Cashwell, “Prediction, Persuasion, and the Jurisprudence of Behaviourism” (2018) 68:1 UTLJ 63 at 67.

[6] Kevin D Ashley, “A Brief History of the Changing Roles of Case Prediction in AI and Law” (2019) 36:1 Law in Context 93.

[7] Benjamin Alarie, Anthony Niblett & Albert H Yoon, “How Artificial Intelligence Will Change the Practice of Law” (2018) 68 UTLJ 106.

[8] Claudia E Haupt, “Artificial Professional Advice” (2019) 18:3 Yale Journal of Law & Tech 55 at 58.

[9] Ignacio N Cofone, “Algorithmic Discrimination is an Information Problem” (2019) 70:2 Hastings LJ 1389 at 1399; Daniel L Chen, “Judicial Analytics and the Great Transformation of American Law” (2018) 27 Artificial Intelligence and Law 15.

[10] Ignacio N Cofone, “Algorithmic Discrimination is an Information Problem” (2019) 70:2 Hastings LJ 1389 at 1399.

[11] Claudia E Haupt, “Artificial Professional Advice” (2019) 18:3 Yale Journal of Law & Tech 55 at 71.

[12] Mireille Hildebrandt, “Law as Computation in the Era of Artificial Legal Intelligence” (2019) 68 UTLJ 12.

[13] Jamie J Baker, “Beyond the Information Age: The Duty of Technology Competence in the Algorithmic Society” (2018) 69:3 South Carolina Law Review 557.

[14] Ignacio N Cofone, “Algorithmic Discrimination is an Information Problem” (2019) 70:2 Hastings LJ 1389 at 1409.
