The AI Future of Law Is Already Here — It’s Just Not Evenly Distributed
Michael Geist had a lawyer on his Law Bytes podcast recently to talk about how AI is radically transforming his practice. For this long-time listener of one of the best law podcasts out there, the episode with New York lawyer Zack Shapiro was among the two or three most interesting and informative episodes I think Geist has ever done.
As someone who follows developments in legal AI closely, I found Shapiro’s insights into how to make the best use of AI outstanding. This is an episode that anyone interested in where law is headed — and concerned with not being left behind — can’t afford to miss.
Shapiro had three core insights to impart that, when combined, give you, in his words, “superpowers of the kind we have not seen in the law yet.” And he paints a vivid picture of how these powers have transformed his practice at a small transactional firm focusing on tech startups and investors. (Shapiro has conveyed some of these ideas in a few essays that have gone viral on X.)
I will briefly sketch his insights, which I think are entirely valid and empowering. But my larger aim here is to make the point that, as exciting and inspiring as this glimpse into the AI future may be, it’s a partial vision. It leaves a lot out.
It’s not unlike a news clip of a crowd protesting in front of city hall, which looks large close-up, but, as the camera pans out, is revealed to be not that big.
To be clear, the buzz and excitement here are real. Some superpowers are now attainable. But much of this is localized. It’s clustered in some areas of practice, for some things.
First, the good news.
Shapiro’s three insights
Stop using bespoke legal AI
In the past three months, he says, we’ve reached a tipping point. Off-the-shelf frontier models like Claude and ChatGPT are so good that it’s time to stop using AI tools tailored for law, such as Protégé, Westlaw, Harvey, and so on.
Shapiro sees these as unnecessary “wrappers” around a language model, with buttons and controls meant to assist you but that only get in the way. Remove all the clutter and learn to work with the best models directly.
Why? Because there’s something you need to be doing — constantly — that Harvey and Westlaw won’t let you do. You want to be using AI to teach you how to use it better, and you want to build a library of background instructions that run with every prompt, so as to make your output ever more responsive and accurate, and to automate more steps in your process.
Use AI to create a virtuous cycle of greater efficiency
Shapiro singles out a key feature of Claude that I don’t believe any other AI provider has a true equivalent of yet. For a while now, all the big platforms (ChatGPT, Gemini, etc.) have allowed you to upload sources to the model and to create ‘projects’ that carry specific instructions for all chats within them.
Claude has something unique called ‘skills,’ which are files you can create that contain plain-language instructions for how you would like certain things to be done or things Claude should keep in mind with every prompt (your style, voice, formatting preferences, etc). They can also contain code for complex operations, like opening a Word file, making requested changes, and creating a new file.
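To make this concrete, here is a minimal sketch of what one of these skill files might look like, assuming Anthropic’s documented format: a SKILL.md file with YAML frontmatter (a name and a description Claude uses to decide when the skill applies) followed by plain-language instructions. The contract-drafting specifics below are invented for illustration, not taken from Shapiro’s actual setup.

```markdown
---
name: contract-first-draft
description: Drafting conventions to apply whenever asked to generate or revise a first draft of a commercial contract.
---

# Contract first-draft conventions

- Define each capitalized term on first use, and use defined terms consistently thereafter.
- Follow the firm’s house numbering style: Article 1, Section 1.1, Section 1.1(a).
- Flag any clause where the governing law materially changes the analysis with [REVIEW].
- When revising a Word file, save the output as a new file; never overwrite the original.
```

A skill folder can also bundle scripts for the more mechanical steps (opening a Word file, applying edits, writing out a new copy), which is what lets the plain-language layer stay short.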
How does this help?
You can get Claude itself to make skills for you, critique the ones you’ve made, amend them, troubleshoot them, and so forth. You can also use them for more technical things, like moving files in a directory, comparing documents, etc. It’s not clear that other models can do this yet.
Shapiro describes a process in recent months of working with Claude to build and refine an elaborate set of skills that have automated stages in the creation, review, and revision of complex contracts using a host of special instructions and technical shortcuts. What used to take him four or five hours to generate a draft of a contract now takes minutes.
The quality of the output has only been improving, he says, as he harnesses Claude to make and revise skills, based on what has worked well and what hasn’t.
Your prompts need to be longer, much longer
The point is often made that the quality of output with AI depends on the length and specificity of your prompt. Shapiro is emphatic on this point. He says that the average length of his prompts is 2,000 words. In a Word doc, that’s roughly 8 pages double-spaced.
You need to load your prompts with detail. Everything should be in there: your client’s multifarious concerns, what the other side is likely to accept or not accept and other sticking points, prevailing law, key terms you think should be included, and so on.
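As an illustration only (this outline is my own, not one of Shapiro’s actual prompts), a 2,000-word drafting prompt might be organized along these lines, with each heading expanded into several paragraphs of detail:

```markdown
1. Role and task: what kind of agreement, for whom, at what stage of negotiation.
2. Client context: the client’s business, priorities, and particular concerns.
3. The other side: what they are likely to accept, and known sticking points.
4. Governing law: jurisdiction, prevailing law, any recent changes that matter.
5. Required terms: key clauses that must appear, with preferred language where it exists.
6. Output format: structure, numbering, defined terms, and what to flag for human review.
```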
Shapiro makes the general point that if you take the time to craft prompts that are extensive and nuanced enough, the quality of the output can and often will be comparable to what a mid-level or senior associate would produce. And once you begin working with a sufficiently large library of skills — or other background instructions — you’ll be producing at a pace never before possible, with no discernible loss in the quality of your work.
What this picture leaves out
I’m all over Claude and building a library of skills to generate a recursive cycle of greater efficiency.
But outside of a practice focused mainly on writing contracts and a few other areas, even the most bullish embrace of AI will result in productivity gains that are far less dramatic.
The fact is that for many other forms of practice — for lawyers who do research-heavy work, who spend most of their time negotiating, or who litigate in certain areas — AI will play a more limited, often peripheral role.
It was telling, I thought, that Shapiro gave only two concrete examples of how he uses AI in his practice: to generate contracts and to write opinion letters. (Near the end of the episode, he also described how he uses AI to help draft social media posts.)
I’m in touch with a litigation lawyer in BC who is experiencing something similar to Shapiro’s miraculous epiphany. He reports saving countless hours in recent months using AI to review documents in his employment practice, to generate demand and opinion letters, and to draft settlements. He also notes that his prompts tend to be extensive, in many cases well over a thousand words — and that he often gets AI to help him formulate better prompts.
So there is certainly an argument to be made that Shapiro’s insights apply to litigation and can lead to a significant boost in productivity.
But not in every kind of litigation, and not in every kind of law.
Over here in criminal law land, AI’s use has been far more limited. We get tons of disclosure from the Crown: lengthy documents, often terabytes of data on whole hard drives. Given the uncertainty around privacy, we grapple with whether we can upload these documents to any language model hosted in the cloud, bespoke or otherwise, and no one is sure whether sensitive client information is safe with AI.
The larger narrative in litigation is that lawyers’ reliance on AI to draft court submissions has been a disaster. I don’t think AI is well suited to writing submissions — or opinion letters, for that matter — even if a lawyer confirms that the law is summarized accurately. These are things, I’ve argued, that lawyers need to do on their own, to ensure they’re done competently.
And while I’m seeing progress in using AI to assist with legal research — including the possibility of using Claude to look things up for you on CanLII — the efficiency gains there, though significant, are not as striking as in Shapiro’s case.
And of course, for lawyers who spend a good portion of their time dealing with clients and negotiating with opposing counsel or appearing in court, AI will not bring about a radical transformation of their practice.
The AI revolution is coming, but it will be jagged and uneven. Some parts of the future will look distinctly like the past.

