From Anecdote to Evidence: Why Students’ Experiences With Generative AI Matter

For law students, generative AI is nearly impossible to avoid. Over the past few years, it has been embedded into many of the products commonly used for legal work (e.g., proprietary research platforms, Google, and Microsoft products). Whether welcomed or resisted, generative AI is now part of the legal information environment.

Many questions remain about how to prepare students to use generative AI during their legal education and in their future practice. While technological competence is not synonymous with generative AI, we know that the use of generative AI systems is a technical skill the profession anticipates. Anecdotally, much of the discussion I’ve engaged in about student use of generative AI in law is rooted in speculation rather than evidence. We speculate about student behaviour, levels of understanding, and risks of overreliance and misuse. Empirical evidence of the student experience could improve these discussions.

Law schools are actively trying to respond to the rapid integration of generative AI. Several law schools offered courses addressing technological competence and the integration of technology into legal practice prior to generative AI’s public launch in 2022, and many have developed them since. These courses, however, tend to be small, upper-year electives with limited enrolment. Meanwhile, generative AI is shaping how students approach core academic and legal tasks earlier in their education, including conducting legal research, drafting written assignments, and orienting themselves in unfamiliar areas of law. Discussions about the use of legal technologies employing generative AI therefore need to occur earlier in legal education and at a broader level.

From my own teaching perspective in Legal Research & Writing, this gap is increasingly difficult to ignore. I am a staunch advocate for a medium-agnostic approach to legal research and writing but also recognize there is an obligation to teach students how to use the technologies they will encounter in practice. My own classroom experiences, informal conversations, and engagement with content on the topic suggest that many students already incorporate generative AI into their research and writing processes. Much of the Canadian conversation about students’ use of generative AI in law continues to unfold in the same way: through classroom anecdotes, faculty discussions, conference panels, blog posts, and LinkedIn threads. These forums are immensely valuable, but imagine those discussions being supported by findings specific to the Canadian law student experience. This data could help teachers of law, law schools, and workplaces anchor their conversations instead of guessing which tools are being used and how, what is understood about their limitations and responsible use, and whether existing guidance is fit for purpose.

Contextualizing these discussions with empirical data matters for several reasons. Legal education is cumulative, and research and writing habits formed early may persist into practice. Students are currently navigating a wide range of expectations about use and disclosure across courses and employers. Attitudes toward generative AI also vary widely, from ethical refusal to enthusiastic experimentation, often alongside unspoken assumptions about the level of technological competence recent law school graduates should possess. Without evidence, policies and assessments risk responding to imagined behaviours rather than real ones. Empirical insight can help move the conversation beyond simplistic binaries of “AI good” or “AI bad,” or “use” versus “misuse,” toward a clearer understanding of how generative AI, as it is currently used, affects learning and critical thinking.

This is the motivation behind Beyond the Books, a national survey supported by the CBA Law for the Future Fund. The study examines Canadian law students’ and recent graduates’ use, preparedness, and perceptions of generative AI in legal education and early work experiences. The project aims to gather evidence that can support more effective teaching, clearer guidance for students, and realistic expectations across legal education and practice.

The survey, Beyond the Books: Law Students’ Use, Preparedness, and Perceptions of Generative AI, is open until 30 January 2026. The response so far has been fantastic, with students and recent graduates eager to share their perspectives.

If you are a Canadian JD student or a member of the 2024 graduating class, please consider participating to contribute to the conversation! Your voices are essential to any serious discussion of generative AI in legal education and work experiences.

If you teach or work with Canadian JD students or recent graduates, please consider sharing the survey with your students or new colleagues.

Link to survey: https://surveys.dal.ca/opinio/s?s=81977
