Trust Fractures: AI, Law, and the Small Cracks Worth Watching
Discourse on AI and the law often centres on the prospect (or lack thereof) of catastrophic injury to existing legal institutions and structures. Will AI tools decimate the legal profession, replacing all the lawyers? Or will lawyers who use AI simply replace lawyers who do not? Will our courts be overrun by hordes of robo-judges? Or is human decision-making essential and here to stay? These debates have proved remarkably resilient. Versions of them have bounced around for years, shifting shape as the technology progresses and our anxieties evolve.
I’m all for looking at the big picture and asking big questions, particularly when done in a nuanced and informed way. But competing visions of AI’s potential for catastrophic injury to our status quo, and opinions about whether or not that would be a good thing, are already in generous supply. What about the potential for small cracks, the stress fractures formed under repeated pressure that, over time, can quietly compound? Are we looking enough at those?
The smaller impacts that AI is having on existing systems are diverse and contested. I’ve been thinking a lot recently about one specific aspect: how AI—and in particular, generative AI—can put, and is putting, stressors on trust in the legal context. These stressors, which I’ll call “trust fractures” (building on the stress fracture metaphor, in case anyone missed that!), are often out of the limelight.
Some examples:
- Trust in the fact-finding function of courts and tribunals is under strain as the public becomes increasingly aware that AI tools can now produce highly convincing but wholly fabricated audio-visual depictions of people and events. We are already seeing a small but growing number of self-represented litigants claiming that evidence brought against them has been “deep-faked”. Even where such claims are unfounded, they introduce a new layer of uncertainty into proceedings. Over time, this can erode confidence not just in specific pieces of evidence but in the evidentiary process itself.
- Trust in the legal analysis of courts and tribunals is also at risk. The more that lawyers and self-represented litigants indiscriminately plunk AI-generated authorities into court submissions, the greater the chance that made-up cases or wrongly described law will sneak into a reported decision. This has already happened in courts outside of Canada. And, when this happens, it is not just the affected litigants whose trust is shaken. Public confidence in the courts as competent and reliable arbiters of the law suffers, too.
- Trust in the legal profession is at stake. At the macro level, every headline about a lawyer being reprimanded for filing AI-generated fake cases undermines public trust in lawyers. As Jordan Furlong has said, “bringing lawsuits that cite non-existent cases leads the average person to regard lawyers as lazy, credulous, or completely indifferent to the truth.” At the micro level, lawyers are increasingly reporting that clients are arriving at their offices with pre-packaged AI analyses and that clients sometimes push back, distrustfully, when told that the analysis is incorrect or incomplete. The AI tools these clients use often produce material that is convincing and legal-sounding, yet incorrect.
- Trust among colleagues in legal workplaces is under pressure. Supervisors worry about, and sometimes see, “shadow AI” (i.e. unauthorized use of AI tools) that opens the door to liability. On the flip side, I’ve also heard of junior lawyers being pressured by senior colleagues to use AI in ways the junior lawyers view as clearly inappropriate. Such requests don’t seem to stem from ill intent but rather from a lack of understanding of the technology’s limitations and a desire to capitalize on the perceived efficiencies of AI tools. In both cases, colleagues are questioning each other’s judgment around AI in ways that can strain workplace trust.
We ought to take seriously the trust fractures wrought by AI. Just as professional athletes take preventive measures to build resilience against stress fractures, justice professionals (that is, lawyers and judges) should consider what we can do to build resilience in our work. I’ll offer three categories of suggestions. None of these are ground-breaking, but stress fractures are rarely prevented by dramatic interventions; they’re prevented by consistent, deliberate care.
First, education. Many of the trust fractures above stem from a failure to understand the basic nature of generative AI tools, which leads people to use these tools inappropriately or to reach for the wrong tool altogether. This is, for example, the primary driver behind the continued — and accelerating — stream of cases in which lawyers and self-represented litigants file problematic AI-generated authorities with courts and tribunals. It isn’t enough to know that “AI can make mistakes”, to quote a fine-print disclaimer that one popular AI chatbot includes. Understanding how generative AI works can help build intuitions about what types of mistakes AI might make and empower users to look for and avoid them.
Second, dialogue. Some trust fractures could be mitigated by more deliberate communication. For example, at the beginning of an engagement, lawyers ought to have conversations with clients about AI use. This means not only disclosing the kinds of AI tools the lawyer proposes to use (or not use) but also developing a mutual understanding of how, if at all, the client plans to use AI. This could include a candid discussion about the value and risks of the client presenting pre-packaged AI work to the lawyer, including cost and confidentiality implications, and whether such an arrangement serves the client’s interests. In legal workplaces, open dialogue about AI expectations and boundaries is essential, particularly between senior and junior colleagues, where power dynamics can make it difficult for concerns to surface organically.
Third, processes. Resilience is bolstered by having robust processes in place. For legal professionals, this includes frameworks to assist in thinking through AI’s accuracy and confidentiality risks in order to steer towards safer uses. It also includes having meaningful verification systems that go beyond simply telling someone to double-check AI outputs. Further, having AI policies in place – both in legal workplaces and courts – is important so that expectations around acceptable uses are established before problems arise, not after. In the case of AI-generated evidence and public trust, courts will need to embrace practical but meaningful measures to address litigants’ concerns. As Justice Jones observed in the recent R. v. Medow decision, “[c]ourts must seriously consider how to assist an economically disadvantaged self-represented accused person who disputes the authenticity of digital evidence to ensure a fair trial, without compromising their independence.”
None of this requires us to stop asking the big questions about AI and the future of law. The case that I am making here is that we should be watching for, and acting on, the small cracks too, before the fractures have a chance to deepen. The legal system depends on trust, and trust is far easier to protect proactively than to restore once lost.