Column

Should We Restrict the Use of AI in Law School?

In a prior post for Slaw, I argued that law schools should make AI more central to the curriculum. We should teach how to use AI effectively rather than resist it or pretend it isn’t there. To do this, we need to take a different approach, which might entail permitting the use of AI on some assignments and exams.

In this post, I want to address a strong counter-argument: encouraging law students and young lawyers to use AI too much, too soon will prevent them from developing the skills they need to do their jobs effectively—or even to be any good at using AI itself.

The core of the argument is simple. If you have mastered something, using a machine can help you do it better. A good writer can turn out better work, faster, by writing with a computer.

But often, when automation takes over a task that humans are accustomed to doing manually, they suffer what is called “skill fade.” As Nicholas Carr explains: “When skilled pilots become so dependent on autopilot systems that they rarely practice manual flying […] they lose situational awareness, and their reactions slow. They get rusty.”

But if you lack a skill to begin with, automation prevents you from developing it altogether. In early 19th century Britain, Carr points out:

Skilled craftsmen were replaced by unskilled machine operators. The work sped up, but the only skill the machine operators developed was the skill of operating the machine, which in most cases was hardly any skill at all. Take away the machine, and the work stops.

He sees a similar trend unfolding in high schools and undergraduate programs, where many if not most students now use AI to generate their essays and term papers. By skipping to the end of the process, they miss out on the real learning to be gleaned from research and writing.

Is there a parallel here with law?

False analogy

It’s tempting to think there is.

If students avoid the hard work in law school, especially in first year, of reading cases and grappling with how to apply law to facts, they won’t gain a firm footing in the core areas of law. And without this, they can’t act effectively for a client, because they won’t be able to spot key issues or important subtleties.

But before the advent of ChatGPT in 2022, is this what students did in law school? Or is this how many of us wish we had spent our time in law school if we could go back and do it again?

The threat AI poses

One might concede that there were always shortcuts to getting through law school, like using other people’s cans or relying on concise overviews of law, but that AI presents a threat of a different order.

To use a can on an exam, you have to absorb its content and read the case summaries to know which ones to apply where. AI circumvents this. Load the can and the exam question into the model, and boom, you’ve got your answer.

It’s even worse for papers and presentations. Tools like OpenAI’s Deep Research can generate an entire paper or PowerPoint presentation, complete with sources. It does an even better job if you give it the primary materials. With a few key cases, AI can produce a paper on a doctrinal point in a few seconds.

How can we expect law students to learn anything if we give them free rein with AI?

The reality of the situation and the choice

I concede that using AI in this way—to pass off its work as your own—would hinder learning altogether. But we need to be realistic about what students were really doing in law school, what they were learning and how, before ChatGPT came along.

The unfortunate reality is that far too many students stop reading cases by January of first year; many of them prepare for finals by using someone else’s can; and over the course of three years in the program, students get hardly any feedback on exams or assignments.

Much of the work they do in law school is done last minute: cramming a few days before the exam or paper deadline. Much of it is forgotten a few days later.

The choice is not between an ideal picture of the perfect student reading every case assigned to them in law school and writing five practice exams in the month leading up to the final—or an AI zombie apocalypse, where everything handed in is coming straight from OpenAI.

The choice is to leave students to their own devices to try to figure out how to make effective use of AI—and hope they don’t misuse it—or to meet them where they’re at and try to help them foster good over bad uses of AI.

How students might use AI to help not hinder learning

A student who has stopped reading cases may well tune out altogether and hope they find a good can in the final week. But using AI to summarize cases might help them keep up with weekly reading by making it more manageable. They might ask AI for a concise explanation of a point of law, or pose other questions, using AI as a tutor.

AI can also help prepare for exams by making fact patterns and critiquing answers, or it can provide feedback on a paper draft. The possibilities are endless.

But isn’t AI famous for hallucinating? How can you counsel law students to rely on AI when every other week, courts across Canada are chastising lawyers for using it to prepare their submissions?

Yes, AI hallucinates. But the frontier models (Perplexity, GPT-4.5, Claude 4) have all come a long way since 2022. All of them do remarkably well at answering a discrete legal question in Canadian law. They’re not always correct in every detail; they may slip a made-up case or two into a list of otherwise accurate sources. But the fact is that they have now become consistently and stunningly good at quick overviews of discrete issues, drawing effectively on the wealth of good summaries on the web.

What we should aim to teach

It’s unrealistic to hope that most students come out of law school with a strong grounding in doctrinal law. That comes later, after a few years of practice and a lot of dedication.

But it is realistic to hope that students can build the skill of using AI to help them learn new law, navigate novel issues, and tackle the challenge of legal writing—without misusing it.

Hoping they’ll avoid AI and go through law school like it’s 2005 is neither realistic nor prudent. We faced similar challenges adapting to the internet at school, but we managed. We can do the same with AI.

Comments

  1. Great piece, Robert. I certainly agree regarding the potential for AI to erode critical thinking if overused or abused. That said, there are IMHO several additional (somewhat related and somewhat more positive) aspects of law student AI use that may be worth discussing:

    1. There is the very likely reality that being a good lawyer 5 years from now will require the use of the much more sophisticated and specialized AI we will have then. And 30 years from now, when today’s students are at the peak of their careers, that will almost certainly be true. Using future AIs skillfully will be a very important skill unto itself. So law students using AI, even crudely, is at least something that may help their ultimate legal literacy…

    2. More significantly, students getting involved with AI will start designing AI processes for the best results. This flows from point one and I will dare to predict that in the future designing AI law processes for individual cases or types of problems will be a prized LEGAL skill.

    3. Am pretty sure that when the typewriter came along, many lamented the loss of penmanship as a skill. Some probably even went so far as to suggest that penmanship style captured the soul of the letter writer. That may have been, and probably was, true. But technology moves on and we find other ways of showing our soul through those emerging technologies, often for the better and with more power and effect. Likewise students and lawyers will find other ways of showing their legal skills with AI and what lies beyond. And dare I say, the same cream will rise to the top either way IMHO. Maybe I should prompt ChatGPT to make me an image of Clarence Darrow (my hero) using ChatGPT. LOL.

    Hope this is a helpful prompt towards your next great column on the subject ;)

  2. Excellent article! Perhaps this can be the jolt that helps reshape the archaic law school model. AI has the potential to completely change how students learn the law. Instead of sticking to traditional cold calling and endless case readings, imagine learning through real interactive exercises that simulate real-life scenarios – like negotiating a deal, responding to a client, drafting pleadings/contracts, or figuring out how to approach a tricky evidentiary issue. With AI, students could actively practice applying the law in a way that feels grounded and relevant.