Anticipating AI-Generated Law Journal Submissions
Updating Law Journal Submission Guidelines
I recently mentioned to local law journal editors that their Submission Guidelines may need to be updated to address the implications of AI.
Surveying the Options
First, I think it is important for journal editors to have on their radar that AI-generated journal submissions are a distinct possibility.
This will require editorial boards to have a proactive conversation about how to handle such submissions.
To my mind, there are a few options available. Here is some language that editors might consider:
Option 1: Submissions shall not contain any AI-generated content.
or
Option 2: Submissions should contain no more than ___% AI-generated content.
Such content should be thoroughly reviewed by the author to ensure accuracy and proper attribution of all source materials.
(Further details may need to be spelled out so that the author understands what is and is not an acceptable use. If using this option, I’d highly recommend adding an additional requirement, as follows:)
Add-on to Option 2 / Option 3
If submissions contain AI-generated content, provide, in a footnote for each AI-generated paragraph, the following information:
Name of AI | , version if available | . Prompt. | (Translation of prompt if applicable.) | (Developer | : host if different than developer | , date or date range of response). | URL if conversation publicly saved | (description of any additional prompts.)
Example:
ChatGPT, 3.5. Response to “why do citations of united nations documents include meeting information as well as a document number? It seems redundant.” (OpenAI, 12 January 2024). https://chat.openai.com/share/a41ec2d3-0362-4282-b15b-71654fb5302b (Further prompts to request sources of information provided in response.)
This citation format is borrowed from a draft of the Canadian Open Access Legal Citation Guide: Canadian Open Access Legal Citation Guide Group, Canadian Open Access Legal Citation Guide (CanLII, forthcoming 2024) (reproduced with permission).*
This final Option 3 could be used as a stand-alone guideline. It does not restrict AI usage in any way; it simply requires attribution.
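As an aside for editorial teams that want to assemble or check such footnotes mechanically, here is a minimal sketch in Python based on the template above. The function and its field names are my own illustration, not part of the COAL guide; the optional parameters mirror the pipe-separated optional segments shown earlier.

```python
# Illustrative only: a small helper that assembles an AI-content footnote
# following the template above. The function name and field names are my
# own; they are not drawn from the COAL guide.

def format_ai_footnote(
    name: str,            # name of AI, e.g. "ChatGPT"
    prompt: str,          # the prompt, including its own closing punctuation
    developer: str,       # e.g. "OpenAI"
    date: str,            # date or date range of response
    version: str = "",    # version, if available
    translation: str = "",      # translation of prompt, if applicable
    host: str = "",             # host, if different than the developer
    url: str = "",              # URL, if conversation publicly saved
    further_prompts: str = "",  # description of any additional prompts
) -> str:
    pieces = [f"{name}, {version}." if version else f"{name}."]
    pieces.append(f"Response to “{prompt}”")
    if translation:
        pieces.append(f"({translation}.)")
    developer_part = f"{developer}: {host}" if host else developer
    pieces.append(f"({developer_part}, {date}).")
    if url:
        pieces.append(url)
    if further_prompts:
        pieces.append(f"({further_prompts}.)")
    return " ".join(pieces)


# Reproduces the ChatGPT example above:
print(format_ai_footnote(
    name="ChatGPT",
    version="3.5",
    prompt=(
        "why do citations of united nations documents include meeting "
        "information as well as a document number? It seems redundant."
    ),
    developer="OpenAI",
    date="12 January 2024",
    url="https://chat.openai.com/share/a41ec2d3-0362-4282-b15b-71654fb5302b",
    further_prompts=(
        "Further prompts to request sources of information provided in response"
    ),
))
```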
Final Thoughts
For law librarians and others who work with authors and editors, it will be our new responsibility to point out areas where AI generation may have an impact.
To some readers, this may all seem inconsequential and bookish; however, I would point out that our courts frequently cite secondary material when finalizing their deliberations. Accordingly, a thorough discussion of the potential for generative AI to infiltrate legal scholarship needs to happen in all circles.
By the by, in reviewing the various journal submission guidelines, I also recommended (as appropriate):
1. Moving to the 10th edition of the McGill Guide as the citation standard (where an older edition was required).
2. Adopting permanent URLs (such as those generated by Perma.cc), as recommended by the McGill Guide.
3. Removing personally identifying information from referee comments in Word documents by using the “Inspect Document” option (File > Info > Check for Issues > Inspect Document).
If you have other ideas for how to approach these issues and other similar issues that may arise in our workplaces, please comment below!
*My thanks to James Bachmann, UBC Law Library, and the COAL Group for allowing us this sneak peek. The COAL citation guide contains dozens of further elaborations and examples of how to attribute AI-generated content.
I understand the concern here. But just like court rules requiring lawyers to disclose AI use, I think your specific recommendations are misguided.
It’s a bare minimum of academic integrity to expect authors not to use “dead-end” citations (I take it that was your friendly way of saying fake, i.e. fraudulent). A footnote tells the reader: “I’ve read this source. I believe it stands for the proposition I say it stands for.” No author should submit an article unless that’s true of every footnote. And it plainly can’t be true for hallucinated footnotes an author can’t possibly have verified.
Submitting an article with fake footnotes is a misrepresentation to anyone reading the paper. Journals shouldn’t hesitate to blacklist anyone who does it. If necessary to state the obvious, journals might consider spelling that out for anyone making a submission.
Beyond that, what does it matter whether someone uses generative AI to turn out an initial draft? If a tool like ChatGPT helps with phrasing, brainstorming, outlining, structure, or anything else unrelated to the academic substance of the paper — who cares? Even if an author DOES use generative AI for substantive research (a bad idea, given the limitations of current tools), what does it matter as long as they review every word, double-check every footnote, and endorse the accuracy of the final product?
Thanks for this, Dylan! Glad to have some feedback on this, and I’m wondering what others think.