What if Legal AI Doesn’t Need Legal Data?

A few recent data points about AI and the law, along with one bracing conclusion.

  1. At the end of February, American lawyer Zack Shapiro published an article on LinkedIn titled “The Claude-Native Law Firm.” It described how his two-person firm is powered by customized “skills” that capture and encode his legal frameworks and judgment into Anthropic’s Claude AI, enabling Claude to deliver legal outputs rapidly and transferably across the firm. This interview with LawDroid’s Tom Martin relates what Shapiro is doing and why it’s potentially momentous: It suggests that properly and thoroughly instructed general-purpose Gen AI might prove as effective for legal work as more expensive legal-specific Generative AI (e.g., CoCounsel, Protégé, Harvey, vLex).
  2. In early March, Canadian lawyer Robert Diab published an article in the CBA National describing how he compared Claude Cowork’s legal research ability with that of the far more expensive LexisNexis Protégé. He assigned a series of advanced legal research questions to both programs, setting Cowork loose on the CanLII database and asking Protégé to examine its own data. Protégé’s performance was spotty, missing some cases and misinterpreting others; Cowork was nearly flawless. “Using an agentic tool like [Cowork] with CanLII strikes me as the most accessible and powerful way to do legal research using AI,” Diab wrote. “Given the cost, it is a tool well worth using.”
  3. In mid-March, American corporate counsel Laura Jeffords Greenberg published an article on LinkedIn detailing her own comparison between Claude Cowork and Thomson Reuters’s CoCounsel on an employment law question. CoCounsel provided a summary and walkthrough with little elaboration, whereas “Claude told me the current rule, what’s changed in the last 14 months, what to watch for when drafting, and why it matters. It did the interpretive work. … In-house counsel asking a legal question needs a practical answer with citations for verification; not a summary of everything that’s ever been true.”

Now, we always need to be careful when assessing claims like this. Three data points is a microscopic sample size from which to draw conclusions about wider trends across the legal sector. Feel free to discount these claims to the extent your own skepticism warrants.

But there’s a difference between skepticism and outright denial. At the very least, these examples suggest we could be underestimating just how powerful a legal tool general-purpose AI can become when it’s been deeply customized, carefully instructed, and connected to authoritative public legal sources.

That possibility is reinforced by recent advances in Generative AI itself. Gen AI analyst Ethan Mollick reported in March that “AI systems are increasingly being used to build better AI systems, creating a feedback loop… If you make models that are good at coding and good at AI research, you can use them to build the next generation of models, speeding up the loop.”

It might turn out that general-purpose Gen AIs like Claude Cowork really can, with enough focused preparation and effort, match or even outperform far more expensive legal-specific Gen AIs in at least some settings. I’m not saying that for certain. Dramatic claims demand dramatic evidence, and three data points that crossed my desktop hardly prove that Cowork can beat Protégé or CoCounsel across the board.

But we need to pay close attention when credible users report that general-purpose models are outperforming expensive legal-specific tools on practical legal tasks. Because if that proves to be the case more broadly, the implications would be enormous. Legal data would still matter to any AI system hoping to compete in the legal market; but the absence of that data might no longer be disqualifying. And that alone is something few of us would have believed when GPT-4 arrived just three years ago.
