Column

Keeping Hold of the Reins When Using AI

What many of us in law, legal education, and other fields still want to know at this point is: what is AI really good for? What does it do reliably well and better than we could do on our own? And when we use it for those purposes, what risks do we take on?

In the early days of ChatGPT, those risks were clear. AI hallucinated authorities and produced output that reflected biases in its training data. But as models have improved and we’ve learned to guard against these problems, those concerns have become more manageable.

A different and more subtle issue has now come into view.

Having discovered some of the things AI is good at — supporting research, drafting, and editing — we now face a different question: not just whether its output is accurate, but when effective use of the tool crosses the line into harmful over-reliance.

When a lawyer or a self-represented litigant cites cases that don’t exist, they aren’t over-relying on AI. They’re misusing it. Over-reliance entails something else. It overlaps with automation bias — the tendency to defer uncritically to a system’s output — but is not reducible to it.

We over-rely on AI not just when we accept its output as true without question, but when we allow it to perform work we shouldn’t be delegating to it at all — even if it’s work that AI can do well.

But precisely what should we not be delegating to AI? Here, we’re in new terrain.

For certain forms of writing — a personal email, an essay, a court decision — most of us have a strong intuition that relying on AI to do the drafting is wrong, even if the result is fluent and technically sound. These forms of writing are tied to deep-seated ideas about identity and reflection. Automated prose, however polished, leaves us cold. It may be correct, but it’s inhuman.

Yet in many cases there’s nothing wrong with relying on AI. Using it to transcribe an interview or summarize a case on CanLII to decide whether it’s worth reading closely can sometimes feel magical.

The trouble that many experienced users of AI are now encountering is that as these tools become more capable and we become more adept at using them, it becomes easier to slide into patterns of increasing delegation. And the more we do so, the more AI encroaches on the critical work we should be doing ourselves.

It becomes tempting, for example, in the course of a chat with AI to let it carry you from a brainstorm to an outline to a first draft, because it all happens so fast. The model can seem uncannily in sync with where you want to go. Its responses often end with suggestions for next steps, making it feel as though the system is always a step or two ahead of you. It can be hard to resist letting it take the lead.

Increasingly, in my conversations with colleagues about AI, the question is not what creative uses they are making of it, but what limits they are drawing around its use.

Clear lines to be drawn

Institutions are grappling with this problem by drawing formal lines. Newspapers, universities, and courts have adopted policies specifying when and how AI may or may not be used. The aim is not to ban these tools but to foster responsible and accountable use.

Should we, as individual users, do the same — commit to rules of thumb in advance?

We do this with other technology, from cell phones to social media. Taking a principled approach to using AI can help us avoid discovering, only after the fact, that we’ve over-relied on it: that we can no longer unsee an outline or draft a model has placed before us, one that now guides our thinking and crowds out other ideas we might have explored.

My rules won’t be the same as yours. They depend on the kind of work you do. I do mostly academic and journalistic writing. Different considerations apply in teaching and in the practice of law.

I’ll share a few of the rules I’m trying to follow, but preface them by articulating the overarching theme: make wide and varied use of AI, but use it with restraint and self-awareness.

My own rules of thumb

For research, I use tools like ChatGPT and Perplexity to gather and briefly summarize sources — but no more than that. I don’t rely on AI to interrogate those sources; I want to delve into them directly myself.

When it comes to writing and editing, I try to be even more cautious. AI is exceedingly good at producing outlines or supporting arguments. For that very reason, I avoid it at this stage. I would rather have the structure of a piece emerge organically from my own thinking, even if that process is slower or less efficient.

I might present my own outline to a language model and ask for further ideas or angles I may have missed. But I want to do the hardest part myself: shaping the argument.

Using AI to generate a first draft of anything but the most routine writing, such as a brief factual summary or a short email pitch, doesn’t work for the kind of writing I do. This is partly because it risks passing off AI-generated prose as my own expression, which is not what readers expect. More crucially, it allows the model to pre-empt my own voice. And discovering what I want to say is a big part of why I write in the first place.

When I edit with a language model, I ask it to do so “lightly rather than aggressively.” I want suggestions for how to improve a draft, tighten the odd sentence, or catch typos. I don’t want it to transform my writing into something that no longer sounds like me.

Lawyers and litigants

Some lawyers are comfortable using language models to “generate first drafts of contracts, pleadings, memos, and correspondence.” As one lawyer explains, when drafting, she will give the model “samples of my work and then my ideas. That way, the first draft is a lot further along than if I just gave it generic instructions.”

Even the Commissioner for Federal Judicial Affairs contemplates using AI to draft submissions, and one access-to-justice group touts AI’s value in helping to draft pleadings.

Much of this may be fine if lawyers “supervise and review all outputs generated by AI”. But I would single out using AI to draft court submissions. Even if the output is reviewed for accuracy, I don’t think relying on AI for this task is appropriate. Doing so, I believe, runs a real risk of breaching the duty of competence: drafting submissions calls for professional judgment about relevance, tone, and strategy, judgment that AI can’t replace.

Self-reps are another kettle of fish altogether.

But what’s the point?

So then, why bother using AI if it can so readily do more harm than good?

Because even when used cautiously, it’s still enormously helpful.

Even if I limit my use of AI in research to gathering and briefly summarizing sources on the open web, it is still a quantum leap more powerful than running searches on Google or dedicated databases. Many of the sources surfaced in a search using ChatGPT or Perplexity will be unhelpful. But more often than not, one or two will contain a wealth of relevant material (details, footnotes) that maps the lay of the land, so that I can choose where to go from there.

Language models may be over-eager writing assistants that need to be closely supervised. But for light editing or feedback on a draft, they can be indispensable. When I use AI in this way, it feels less like a replacement for my thinking and more like a demanding but helpful reader.

Easy for you to say

Will it be easy for people with weaker writing skills to use AI with restraint? Probably not.

In law school, undergrad, or high school, the temptation to rely on AI to summarize readings or complete assignments students should do themselves is obvious. It poses a real threat to their development.

Does my enthusiasm for AI rest on the fact that I have decades of reading and writing in law behind me? Is it easy for me to urge restraint because I already possess the skills that AI threatens to displace?

Perhaps.

But this, I think, is where we all now find ourselves, regardless of experience. AI has made everything from learning to research and writing both easier and harder. If you want to learn without being hindered by AI, you’ll need to learn restraint. And if you want to go on writing in your own voice, you’ll need to do the same.

I’m still using AI a lot. But I’m trying to use it cautiously and deliberately, with an eye to what I’m gaining and what I may be giving up. I’m trying to remind myself constantly: keep hold of the reins!
