Another One on AI: Teaching Legal Citation With ChatGPT
Many important questions about the use of generative AI for legal research remain unanswered. Legal citation is the focus of this post for three primary reasons: 1) ChatGPT was trained on materials published before September 2021, and the formats of the main citations used by law students (case law, legislation, books, and articles) have not changed in that time, nor are they likely to in the future; 2) it provides a low-cost opportunity for students to interact with ChatGPT and better understand how it interprets prompts; and 3) legal citation is a skill typically taught through rote learning, and it does not require the same critical thinking as the broader legal research process. It is, in essence, following a system of defined rules, and algorithms excel at these types of tasks.
Since the first iteration of ChatGPT was launched in late fall 2022, the use cases for generative AI in the practice of law have been widely discussed. Six months later, it seems obvious that generative AI will have an impact on the landscape of legal research and writing in particular. AI features have been launched or announced as forthcoming on dominant platforms, such as Lexis+ AI, Westlaw Edge, and vLex Vincent AI. New platforms like Jurisage have homed in on the potential of AI as well.
However, students entering law school typically have no knowledge of platforms and tools designed specifically for legal research. They will have heard of ChatGPT and are likely to have interacted with it in some manner, whether academic or not. As I begin to look ahead to the fall semester (sorry, too soon?), it would be as naïve to ignore ChatGPT as it is to suggest that students should not use Google or Wikipedia. While the research process should not be conducted solely with ChatGPT, it could be a valuable supplemental tool for legal citation.
Despite promising advantages, generative AI is far from a perfect solution for legal citation. ChatGPT has been reasonably and responsibly criticized as unreliable for generating Bluebook citations due to “the lack of text formatting capabilities and the tendency to create incorrect or non-existent citations.” Both of these criticisms apply equally to the Canadian Guide to Uniform Legal Citation (the “McGill Guide”). However, the McGill Guide has fewer stylistic formatting requirements than the Bluebook, and ChatGPT is now programmed to be more cautious when asked legal research questions, after hallucinated citations were recently submitted to the US District Court in Manhattan.
Students are generally encouraged to make use of citation and reference management tools such as Mendeley, Zotero, and RefWorks, which do an adequate job formatting citations following APA, MLA, Chicago, etc. While these tools have the added value of assisting with the tedium of supras and ibids for footnotes, the McGill Guide has never been well-suited to these traditional algorithms. As the Queen’s Law librarians point out in their Legal Citation Management with Zotero LibGuide, citations for case law and legislation tend to be incorrect, presumably because abbreviations vary across provincial and federal legislatures and courts. ChatGPT will also be imperfect, but with well-informed prompts it will generate these abbreviations better than traditional algorithms do.
When asked, “What information do you require to format citations that adhere to the Canadian Guide to Uniform Legal Citation?” ChatGPT states that it requires the following source type information for it to generate a citation that adheres to the McGill Guide:
- Case law:
  - Case name (including v. for “versus” if applicable)
  - Volume number and abbreviated title of the law report series
  - Page number where the case begins
  - Year of the decision
  - Court name and jurisdiction
- Legislation (statutes, regulations, etc.):
  - Full title of the statute or regulation (italicized or underlined)
  - Jurisdiction (e.g., federal, provincial, etc.)
  - Year of the statute or regulation
  - Chapter or regulation number (if applicable)
  - Pinpoint citation (specific section, paragraph, etc.)
- Books and textbooks:
  - Author(s) or editor(s) name(s)
  - Title of the book (italicized or underlined)
  - Edition (if applicable)
  - Publisher name
  - Year of publication
  - Page number(s) (if applicable)
- Journal articles:
  - Author(s) name(s)
  - Title of the article (in quotation marks)
  - (Year) Volume number Journal name Page number
I have tested it with cases, legislation, books, and articles using the guidelines above, and the citations are satisfactory. I provided all of the necessary and correct information in two ways: first, in list format; second, as a citation with the information in the wrong order and with unnecessary additions such as periods, commas, and brackets. As long as the necessary information was provided, ChatGPT generated a revised citation that followed McGill Guide formatting. The only notable issues were the inability to format italics and errors in adding pinpoints when the information was provided in list format. These shortcomings may be the result of my own lack of ingenuity in designing the prompt.
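To illustrate the list-format approach described above, the element checklist for case law could be turned into a reusable prompt. Below is a minimal sketch in Python; the function and field names are my own invention, not anything from the McGill Guide or ChatGPT. It refuses to build a prompt when an element is missing, mirroring the point that the model can only format a citation correctly when all the necessary information is supplied.

```python
# Required elements for a case-law citation, per the checklist above.
# (Field names are hypothetical labels for this sketch.)
CASE_ELEMENTS = [
    "case name", "year", "volume", "reporter", "page", "court", "jurisdiction",
]

def build_mcgill_prompt(details: dict) -> str:
    """Assemble a list-format prompt asking for a McGill Guide case citation.

    Raises ValueError when a required element is absent, since the model
    cannot reliably invent missing information.
    """
    missing = [k for k in CASE_ELEMENTS if not details.get(k)]
    if missing:
        raise ValueError("Missing citation elements: " + ", ".join(missing))
    element_lines = [f"{k.title()}: {details[k]}" for k in CASE_ELEMENTS]
    return (
        "Format a case citation that adheres to the Canadian Guide to "
        "Uniform Legal Citation using only the following information:\n"
        + "\n".join(element_lines)
    )

# Example using a well-known Supreme Court of Canada decision.
prompt = build_mcgill_prompt({
    "case name": "R v Jordan",
    "year": "2016",
    "volume": "1",
    "reporter": "SCR",
    "page": "631",
    "court": "SCC",
    "jurisdiction": "Canada",
})
print(prompt)
```

The resulting text would then be pasted into ChatGPT (or sent through an API) as-is; the sketch deliberately stops short of calling the model, since the point of the exercise is that students assemble and verify the elements themselves.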
Introducing ChatGPT as a tool to assist with basic legal citation still requires that students identify all the necessary details required to generate a complete citation (who, what, when, where) while also testing their ability to develop prompts. Advanced knowledge and skills are essential for reviewing and assessing the generated legal citations. This emphasizes the value of foundational legal research skills and mimics how more complex legal research tasks should be approached using generative AI as well. AI can supplement, but not be a substitute for, legal research.
Legal citation could be an introduction to safely integrating and exploring the potential of generative AI tools for legal research. Engaging students in a low-cost exercise with ChatGPT also creates space for larger conversations about the potential pitfalls of generative AI in the legal research process. Particular emphasis should be placed on identifying issues with formatting, citation hallucinations, the materials ChatGPT and other models have been trained on, why complex international citations are less likely to be formatted correctly, and the risks of anthropomorphising technology.
Learning the basics of legal citation is an invaluable research skill, as citations communicate the essential components for locating a source, the authority of a source, and profession-specific knowledge. While ChatGPT should not be relied upon without human intervention, the rote teaching and learning of legal citation could reasonably be supplemented by it.
I’m still getting regular requests in my library for help finding fully FAKE CASES that were generated by Chat GPT. These requests come from a variety of patrons including lawyers and large law firms who troublingly don’t believe me when I explain that Chat GPT has misled them. I can appreciate how it might be useful in formulating a citation but I’m far more concerned about its unreliability and the faith users are putting in it.
I’m sorry you are experiencing this! I hear you and I completely agree. Blind faith may be the path of least resistance, but it’s not a path that is going to lead anywhere good.
To clarify, using ChatGPT to format – not produce – legal citations was suggested as a valuable, low-cost entry point for law students specifically because it identifies several difficulties (e.g. formatting, how differences in prompts can impact outcome, etc.) and clearly identifies the need for pre-existing knowledge on a topic.
A legal citation activity using ChatGPT would aim to teach transferable skills that can be applied as students expand their attempts to generate productive legal research using AI tools. It would, hopefully, result in law students/lawyers who do not blindly trust any citation that is given to them.
I agree with Em – when it comes to predictive AI, let’s worry about professional integrity and content first, and then citations. That being said, there is a new, 10th edition of the McGill Guide out this week. It does have some different citation formats from the previous edition. These obviously wouldn’t yet be picked up by ChatGPT. Contact your local law library for more info!
Also worth mentioning is the new Practice Direction released by the Manitoba KB last week which indicates that *any* court submission that has been drafted or created using AI must have a disclaimer indicating this. Tables of Authorities would definitely fall under this category.
Hi batgirl & Em,
Thanks for reading! I’m sorry you’re dealing with these issues. I had posted a reply to Em’s comment, but it hasn’t appeared so I will attempt to address both your concerns again.
I echo your sentiments that professional integrity is something we need to emphasize. We are at a point of significant change in legal research, and I think that addressing appropriate use of AI for legal research is incredibly important, as it would be naïve to assume that students (and lawyers and other legal professionals) aren’t already using it.
I was fortunate to be able to contribute feedback that was incorporated into the recently released 10th edition of the McGill Guide. My post explicitly states that it only applied to common citations (cases, legislation, books, and articles) that were unlikely to change with a new edition of the McGill Guide, as I was anticipating its imminent publication.
The post also states that this exercise was intended to teach foundational citation skills while opening up room for a discussion on the use of AI for legal research generally. It explicitly says that it is a low-cost entry point for introducing AI to law students specifically because it would require identifying the relevant elements to produce a citation and then require them to review and assess those AI-generated citations. Having students use AI in a manner akin to a reference or citation management tool like Zotero or RefWorks is different than having them use AI to conduct legal research, during which hallucinations may occur. I am suggesting the former, not the latter. This activity creates space for discussions on the larger issues of professional integrity that you both mention.
Finally, the Practice Direction released by the Court of King’s Bench of Manitoba does not provide a definition of AI. This will likely result in more questions for the court. There are many problems to address and it will take years to figure them out. In the meantime, addressing the complexities with law students early in their education should, hopefully, reduce over-reliance and misuse of AI in legal research.