How Profs and Students Are Using AI in Law Schools Around the World

Law schools everywhere are confronting the same issue: how to use AI to help rather than hinder student learning.

In an earlier column, I speculated on ways we might help law students foster good over bad uses of AI. A paper published this summer by Dutch law professor Thibault Schrepel surveys the growing literature on experiments with AI in legal education. His overview provides a more concrete sense of what better uses of AI might entail.

These applications all have potential pitfalls, but these too can be harnessed as part of the learning process. To begin with the most obvious use:

1. Summarize and explain lengthy and convoluted cases in simple terms. 

AI summaries can be wrong in many ways. They can hallucinate facts, misstate ratios, or confuse dissenting and majority opinions. But some profs seize upon this by getting students to verify summaries for accuracy. In other cases, AI summaries provide a starting point for tackling dense material, allowing students to focus on higher-order tasks like evaluating the reasoning or thinking about how to apply a holding to new facts.

2. Come up with new fact patterns for students to test how a rule might operate. 

AI can generate a brief scenario or role-play a lawyer-client simulation to illustrate how a concept might be encountered in practice. Some profs encourage students to have AI critique their performance or advice.

3. Have AI play the role of Socratic tutor, asking follow-up questions when a student states a rule. 

This can be done by uploading a case and prompting a model to generate a Q&A session about it, acting as (a kind and encouraging) professor or articling principal.

4. Have AI produce a summary of an area of law—line of authority, doctrine, or principle—and have students critique its answer.

Is it accurate? What is it missing? How can it be improved?

5. Design a custom GPT trained on course materials to provide students with an interactive tool for questions.

While a student could email a prof for clarity about a point covered in class or feedback on a practice exam, a model drawing on course materials could provide an instant response and “effectively turn solitary study into an interactive session.”

6. Use AI to brainstorm arguments for mooting briefs or papers.

Many of the suggested arguments won’t be usable. But AI can help students overcome the fear of the blank page and get started, letting them move on to higher-order analysis sooner and more efficiently.

7. Have AI edit writing for style and clarity.

Legal writing is notoriously hard to teach and labour-intensive. AI is infinitely patient and can show its work by bolding suggested changes and explaining the rationale. It can show how a passage might be rewritten to be more formal or offer a high-level critique.

8. Give students an AI-generated first draft of a memo and have them revise and improve it.

This can help students refine their ability to explain rules, structure arguments, and cite law correctly. It also helps develop editing as a distinct and vital skill.

9. Draft an outline for a factum or paper and have AI critique it. 

AI can flag flaws and inconsistencies and suggest alternative structures. Some profs ask students to submit a log of their prompts and responses as part of the assignment, encouraging reflection on the process and the questions asked.

10. Use AI to assist with legal research by suggesting key cases or statutes on a given topic, and possible arguments.

Results will be hit or miss, but AI is improving at providing at least “an initial roadmap, akin to brainstorming with a reference librarian or senior colleague,” as Schrepel notes.

Broader concerns with greater reliance on AI

These applications all aim to use AI as a tool for learning rather than a substitute for it. But as Schrepel points out, there’s a major potential problem lurking beneath all of this: over-reliance on AI.

As students become more adept at using AI for research, outlines, and editing, the temptation to let AI do the work for you—to cross the line into “AI-giarism”—is strong. AI is already capable of drafting papers and exam answers that professors can’t detect. (Schrepel cites a study where law profs grading a criminal law exam in Australia failed to identify answers written entirely by AI.)

The challenge for law profs and principals is to foster awareness of when reliance becomes over-reliance. AI might appear capable of complex writing and research on its own, but it makes frequent and subtle errors. More crucially, it lacks the judgment to weigh competing considerations or assess credibility. It can overlook broader ethical issues hovering above a seemingly straightforward legal question.

But this, I think, is why the uses of AI outlined above matter. They treat AI as a starting point for critical engagement, not an endpoint.

Students who use AI to generate fact patterns get better at issue spotting. Verifying AI summaries develops their close reading skills. And critiquing AI-generated arguments can sharpen their analytical judgment.

The profession that students are heading into is one where AI will be ubiquitous. Our goal should not be to avoid AI but to teach students to recognize its limitations and to improve its output with their own expertise—to use AI effectively and responsibly.

Because however the profession may change with AI, human judgment is likely to remain central.
