Mastering AI Prompts for Legal Professionals: Practical Strategies and Tools
This submission is part of a column swap with the American Association of Law Libraries (AALL) bimonthly member magazine, AALL Spectrum. Published six times a year, AALL Spectrum is designed to further professional development and education within the legal information industry. Slaw and the AALL Spectrum board have agreed to hand-select several columns each year as part of this exchange.
Tips for drafting effective AI prompts.
On November 30, 2022, ChatGPT’s release marked a pivotal shift in the information world. Despite early explorations by legal research companies like Casetext, generative AI (GenAI) was unfamiliar to most. Fast forward to 2024, and a significant number of major legal information and legal services companies have integrated GenAI into their products. Legal professionals, from librarians to attorneys, now need proficiency in using GenAI. Yet, many ask, “How do I use it?”
No need to be intimidated anymore! This article demystifies AI prompt drafting, or “engineering,” and guides you through creating effective GenAI prompts. Planning before you type is crucial.
Pre-Planning: Know Your System and Its Limitations
Whether you are using one of the mainstream GenAI models, like ChatGPT or Claude, or a specialized GenAI product built into another system, like Lexis+ AI or Westlaw Precision AI, it is best to know your product. Questions you should ask include:
– What information is available to the GenAI model?
While an exact list of the documents used in training is unlikely to be provided, you can often find out what types of information and subject areas are included, either in the training data itself or through an alternative way of accessing data, such as Retrieval-Augmented Generation (RAG). Legal research GenAI providers will often publish this as part of their advertising material.
Once you know the general scope of the materials, you can better customize your prompt to the available data and anticipate your likely results. A lack of understanding can lead to irrelevant or incorrect responses. For instance, a group of Stanford researchers tested several legal research GenAI products. One of the products tested was Thomson Reuters’s Practical Law AI, which is limited to practitioner-oriented secondary source material. It does not cover topics such as constitutional law, nor does it contain primary law, such as Supreme Court opinions. The researchers asked Practical Law AI, “Why did Justice Ginsburg dissent in Obergefell?” The response was incorrect because the underlying data was not available. Asking a question of a GenAI product that does not have access to the material requested will not produce quality results; it will likely return information that is irrelevant or incorrect.
– Are there limitations on how I interact with the GenAI, such as prompt length or the number of follow-up interactions, that may affect the results?
You will need this information to properly draft your prompt within the limitations. Many of these limitations will be described in the vendor-provided support and instructional materials. For instance, ChatGPT allows you to draft unlimited follow-up prompts, but it slowly forgets the original question. To combat this issue, Lexis+ AI initially limited users to five follow-up interactions to preserve the original question as context for the responses. Practical Law AI’s support material clearly specifies that the product can only answer questions; it is not a drafting tool.
Figure 1: Sample prompt
Figure 2: Sample narrative prompt
– What type of prompting is your system using?
Currently, there are three types of prompting:
- Pure user-created prompting: an open prompt written by the user, with no other information requested by the system, e.g., Claude and Practical Law AI.
- Guided prompting: the user provides answers to specific questions posed by the system, e.g., LawDroid’s Copilot.
- Hybrid prompting: the system asks some questions and the user also supplies an open prompt, e.g., Lexis+ AI.
Knowing which type of prompt your system uses will help you plan your prompt.
– What happens to the information from my prompt?
Some systems use your information to further train their GenAI systems or to evaluate the prompt to make sure you are not violating their use policy. Any proprietary or confidential information is at risk and should not be included in a prompt in these situations. Check the terms of service to find the answer to this question.
– Who owns the information provided to me in the results and how can I use it?
This is a gray area that may be answered by one or more provisions in the terms of service. For instance, ChatGPT’s terms of use (January 31, 2024) state that you own the output, but that the same or similar output may be given to other users. Careful readers may see some issues with that statement. And before using the output in a class or a court, please check the classroom policy or local rules.
– How do I know the results are accurate?
Most of the GenAI models discussed in this article are large language models, which means they have been trained on a large amount of data. All of these models currently hallucinate, which is the polite industry term for making up information. If you want to learn more about hallucinations, Rebecca Fordon has published an excellent blog post on AILawLibrarians.com.
There are ways to mitigate hallucinations, such as the use of RAG, which points the GenAI model to specific information to incorporate into its output. To utilize RAG, vendors create a specially formatted database of additional information, such as primary and secondary legal resources. The GenAI product takes a prompt, runs a search in this database, and then uses those results to generate a response that is statistically likely based on the training data. The results from the search in the vendor database are often listed separately from the generative output, which helps researchers verify the output’s accuracy.
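For readers who like to see the moving parts, here is a minimal sketch of the RAG pattern in Python. The tiny in-memory document list, keyword matcher, model name, and prompt wording are illustrations only, not any vendor’s actual pipeline; the sketch assumes the OpenAI Python library and an API key set in the environment.

```python
# A minimal, illustrative sketch of the RAG pattern: retrieve first, then generate.
# The "database" here is two hard-coded snippets and the retrieval is naive keyword
# matching; commercial legal research products use far more sophisticated retrieval.
from openai import OpenAI  # pip install openai; expects OPENAI_API_KEY in the environment

documents = [
    "Obergefell v. Hodges (2015): the Supreme Court held that the Fourteenth "
    "Amendment requires states to license and recognize same-sex marriages.",
    "Secondary source note: the implied warranty of habitability in residential leases.",
]

def retrieve(question: str, docs: list[str]) -> list[str]:
    """Naive retrieval: return any document that shares a word with the question."""
    terms = set(question.lower().split())
    return [d for d in docs if terms & set(d.lower().split())]

def answer_with_rag(question: str) -> str:
    """Search the small database, then ask the model to answer from those results only."""
    context = "\n".join(retrieve(question, documents))
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": "Answer using only the provided sources and cite them."},
            {"role": "user", "content": f"Sources:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

print(answer_with_rag("What did the Supreme Court hold in Obergefell?"))
```

Commercial legal research products replace the keyword matcher with sophisticated retrieval over curated primary and secondary sources, but the overall flow (retrieve relevant material, then generate from it) is the same.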
Just as a law student’s response becomes more accurate and sophisticated the further they progress in their legal education, so too does the GenAI product’s output improve with more legal information. This improvement mitigates hallucinations. Mitigating hallucinations, however, is not the same as eliminating them. This was demonstrated on February 27, 2024, when Creighton University School of Law professor Paul McGreal shared a Lexis+ AI response that cited cases purportedly from 2025 and 2026 and mischaracterized existing cases.
Planning: What Is Needed to Create an Effective Prompt?
An effective prompt generally contains the following elements:
1. Tone of Output
The tone of your output will likely differ based on your planned use and the audience for the output. For instance, the tone of a legal pleading is generally much more formal and professional than that of a social media post. The tone selected will affect word choice, sentence structure, and even grammar and punctuation. The GenAI model will understand the difference between asking for a professional tone versus “explain it to me like I am five,” which in this author’s experience begins with “Hey, little buddy!” Most legal GenAI products default to a professional tone, which users may wish to modify in the prompt depending on their audience.
2. Role
GenAIs generally come into the conversation with a blank personality. Like humans, they adjust their responses if you tell them to play a role, such as prosecutor or defense attorney. Assigning a role helps provide the system with context for what you expect in the output.
Some of the mainstream systems have shown odd quirks in this area. For instance, ChatGPT provides better mathematical responses if you ask it to pretend that it is a Star Trek character. Depending on your output requirements, you can also ask the system to pretend to be a legal writing editor/professor; an expert foreign, comparative, and international law librarian; or any other title that would be an expert at creating the desired output.
3. Output Format
Common formats include answering a question with no particular format, drafting a legal memo, creating a social media post with pictures and hashtags, writing an email or letter, creating a pro and con list, condensing a bio to 100 words, and similar outputs. Be specific about what you are looking for and do not hesitate to add additional details, such as word length, citation format, or file type, to customize the output.
4. Purpose
While not strictly necessary to generate results, this context helps customize the output. For instance, explaining that a client is trying to intervene in another proceeding, such as the adoption of their child, gives further context to the GenAI model when generating arguments for the output.
5. Issue, Including Jurisdiction
This is necessary because GenAI is providing information based on what it predicts you want. The more information, the better the response. For instance, if you ask GenAI to complete this sentence, “The man breached the ____ ,” the sentence will dramatically change if you have provided context about a landlord/tenant dispute, which may result in “warranty” or “contract,” rather than medieval warfare, which may replace the blank with “castle” or “wall.”
A useful way to think about this is to compare a GenAI system to Sheldon Cooper of Big Bang Theory fame sitting in the HR office; he has all the knowledge one could hope for, but none of the human context. If you want a good result, you need to provide that context. You may also need to break complex or multi-issue questions up into separate prompts or follow-up interactions to your original prompt. Completely different issues are usually best handled with separate prompts, while complex issues could be handled with follow-up prompts. The results will be best if the GenAI can focus on one thing at a time. Some systems may force a separate prompt, such as Lexis+ AI originally requiring a separate prompt for additional jurisdictions.
If you are using a mainstream non-legal GenAI, please note that it is not trained on legal documents, so you are better off including the law you need in the prompt when drafting a document. The mainstream GenAI models are not legal research databases and are much more likely to hallucinate legal cases and statutes, as we have seen in Mata v. Avianca, Inc. (S.D.N.Y.), Park v. Kim (2d Cir.), and many other cases where attorneys submitted hallucinated case citations. If your GenAI output provides you with law and legal citations, please verify their existence and subject matter before citing them! See Figure 1 for a sample drafting prompt that incorporates the law.
6. Potential Refinements/Interactions
If you do not get the outcome that you hoped for in the first attempt or if you want to expand upon the response, you can modify your initial request in a follow-up interaction. For instance, you may need to refine the format. Alternatively, you may ask GenAI to serve as a devil’s advocate and provide counterarguments to those that it just provided.
Finalizing Your Prompt: Weaving It All Together
Most GenAI models do not have a set prompt format. You can structure it as separate elements, like the structure above, or you can weave it into a narrative format. See Figure 2 for a narrative legal memo prompt in which the different sections are highlighted by color and the jurisdiction is noted in a separate box.
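To make the structure concrete, here is a short Python sketch that weaves the elements above (role, tone, output format, purpose, and issue with jurisdiction) into a single narrative prompt. The variable names and wording are hypothetical examples and do not reflect any particular vendor’s template.

```python
# Illustrative only: assembling the prompt elements into one narrative prompt.
# Every value below is a hypothetical example; swap in your own facts and jurisdiction.
role = "an experienced Ohio landlord-tenant attorney"
tone = "formal and professional"
output_format = "a legal memo of no more than 800 words, with headings"
purpose = "advising a residential tenant who is considering withholding rent"
issue = (
    "whether a tenant may withhold rent when the landlord fails to repair "
    "a broken furnace, applying Ohio law"
)

prompt = (
    f"Act as {role}. Using a {tone} tone, draft {output_format} "
    f"for the purpose of {purpose}. Analyze the following issue: {issue}. "
    "List any statutes or cases you rely on separately so they can be verified."
)
print(prompt)
```

The same assembled text can be pasted into a GenAI chat box or, as in the RAG sketch earlier, sent through an API.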
If you would like an easy way to walk through the prompt drafting process, there are two downloadable, fillable worksheets available—one for legal GenAI systems and one for the mainstream GenAI models.
AI prompt engineering sounds like a complex process that should be restricted to specialists. In reality, legal researchers have been employing a similar process for years. You need to stop and think about what you are trying to create and what your specific needs are before you type in that tantalizing box. The maxim “garbage in, garbage out” applies just as much to prompts as it does to general search strings. If you stop and take a few minutes to plan, you will come away with much better results than if you throw the kitchen sink into the box and cross your fingers as you start the search/generation process.
______________________
– Jennifer L. Wondracek
Director of the Law Library & Professor of Legal Research & Writing
Capital University Law School
Columbus, Ohio