
The Case for and Against Co-Authoring With AI

In recent posts, I have been skeptical about using AI to generate certain kinds of legal writing. I’ve drawn a distinction between using AI to edit or revise a document and using it to create one from scratch.

I take the view that even if you can avoid hallucinations, using AI to create a court brief is likely to raise issues of competence. And I’m not convinced it is well suited to drafting opinion or demand letters, because it tends to produce writing that comes across as flat, robotic, verbose, and overly formalistic.

But there’s another view out there that says I’m only seeing AI used poorly or ineffectively. On this account, AI can produce authentic writing so long as the user prompts it extensively and specifically enough to remain responsible for the ideas, arguments, stylistic choices, and so forth.

A few lawyers have shared examples of this approach with me — opinion and demand letters, emails — asking, in effect: isn’t this good enough? Look how much time I saved!

In March, New York lawyer Zack Shapiro made this case in a lengthy essay that went viral on X. As he notes in the essay itself, he wrote it with AI.

So am I just being a Luddite? Maybe. But I remain skeptical.

The ethical questions are important, but I want to set them aside and focus on quality and efficiency. My view is that if you use AI to write by prompting extensively, you won’t get a better document. You’ll get a different one. And people who care about writing will notice.

The difference matters most, I think, in scholarship, journalism, and court briefs. But what about opinion letters, client emails, or other lower-stakes writing where the case for efficiency is stronger?

Here too, I’m not entirely persuaded. I’ll take up two arguments for co-authoring with AI and explain why, in most cases, I think claims about quality and efficiency are overstated.

Shapiro’s beef with Klein

The best argument I’ve come across for co-authoring with AI appears in Shapiro’s essay. He frames it against Ezra Klein’s concern that writing with AI bypasses the hard part of writing — the struggle of figuring out what you think.

Shapiro’s reply is that Klein is wrong to assume that thinking disappears when you co-write with AI. The real cognitive work, he says, can happen before the prompt. You do the planning, analysis, strategy, and judgment yourself, and then use AI to execute on that thinking or show you what you missed. That, he says, is what he does in his own writing.

The choice, on this view, is not simply between writing that is “human-made” and writing that is “AI-made”; there is a third category in the middle: writing produced by a human who has done the thinking and used AI as a collaborator. For Shapiro, the real divide is between the slop produced by “someone feeding a one-sentence prompt into a chatbot” and “work where a human showed up with something worth saying, and used AI to say it better, faster, or at a scale they couldn’t reach alone.”

The point, on this view, is not to rein in your use of AI for certain tasks, as I have argued one should, but to learn to use it more thoughtfully when doing them.

Shapiro goes further. Doing this well requires more than a single careful prompt. It involves building reusable systems. He points to Claude’s “skills” feature, which allows background instructions to be run with every prompt — ones you refine over time through trial and error, so that more of your own judgment gets encoded into the system.
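To make this concrete: a skill boils down to a small instruction file the model consults when it drafts. Here is a minimal sketch of what a hypothetical demand-letter skill might contain; the file name and the directives are my own illustration, not Shapiro’s actual setup, and the frontmatter fields follow the SKILL.md format as I understand it.

```markdown
---
name: demand-letter-style
description: House style and guardrails for drafting demand letters
---

# Drafting instructions

- Open with one sentence stating the client's position; no throat-clearing.
- Keep sentences short and concrete; avoid "it's not X, it's Y" constructions.
- State the demand and the deadline plainly, once near the top and once at the end.
- Use only facts found in the file; flag any gap for the lawyer rather than filling it.
```

Each time a draft misses the mark, you edit the file rather than the prompt.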

After enough repetition, the result is a kind of personalized method for producing drafts. In theory, no form of writing is off the table.

Or is it?

Cracks in the model

It is certainly an interesting model, and many lawyers are experimenting with it. Some are persuaded that what they produce by co-authoring with AI is as good as or better than what they would have done on their own in the same amount of time.

I’m not so sure.

We are clearly at an inflection point. The AI-drafted demand letter, the court submission, even the court decision, is now a fact of life. But the practical questions remain: when you go all in on co-authoring and have AI generate a draft, is the output really as good as what you could produce yourself? And is it really more efficient?

For longer documents, like Shapiro’s essay on X, I don’t think it works. I began skimming after a few paragraphs; the writing struck me as hollow and verbose, with all the hallmarks of AI writing (it’s not this, it’s that, and so on) and none of the personality or warmth of a human voice.

I get the same impression reading opinion letters lawyers have shared with me, student papers, and online writing I suspect was produced with AI. It feels wordy, mechanical, formalistic, and inhuman.

So my response to Shapiro’s claim that skeptics like me are simply “doing it wrong,” and that AI used well can produce a new form of human-machine prose that is just as good but more efficiently made, is that it isn’t true.

Even the best language models can’t produce prose with the colour and quirkiness that a human voice conveys. And given the way they work, they likely never will.

But why does this matter?

The co-authoring model Shapiro contemplates wouldn’t work in scholarship or journalism because it doesn’t get around the problems of staleness, predictability, inhumanity, and so on.

But Shapiro and others who embrace the model suggest that law is different. Maybe what matters here more than warmth or personality is whether AI gets the job done.

Again, I’m skeptical.

I’m part of an earlier generation of lawyers for whom writing a good letter or submission is a point of pride, much like making an eloquent oral argument. Writing well helps build a reputation, and writing poorly can do the opposite.

A document that sounds like it was substantially produced by AI might suggest to the other side, the client, or the judge that you didn’t take the task seriously enough to do it in your own voice. It implies that you believe mechanical-sounding prose is an adequate substitute for you.

But maybe you didn’t have time. Maybe the letter you wrote with AI — in a busy legal aid family or criminal practice — is a letter that you would otherwise not have written.

This is a second argument for co-authoring with AI: don’t compare the AI-assisted letter to what you would have written on your own; compare it to the oral communication that would otherwise have had to suffice.

This is a powerful argument. AI is especially good at gathering and summarizing a mass of detail, and for that it’s invaluable. What’s wrong with slotting some of that into a letter or submission?

How about at least some reliance on AI for drafting? Yes, in some cases, I think that might work well.

Closing thought

I end with this question. If you were to read a file and simply sit down and dictate a short, less detailed letter, or briefly outline the gist of your submission, would it really take more than an hour? And would it not produce a better, more concise distillation of the key points you want to convey?

The same question applies to Shapiro’s broader defence of AI writing. If the goal is to produce something in your own voice, shaped by your own ideas and free of AI’s staleness and verbosity, would it not make more sense simply to write it yourself?

Maybe the question to ask is not “could I write this more efficiently with AI?” but rather “if I commit to writing something in a short space of time, what do I really want to say?”
