
Is It All About the Prompts? Experimenting With Gen AI to Develop Public Legal Information

I recently challenged myself to explore various Gen AI tools to improve my plain language skills and efficiency. As noted in my previous post, Exploring Gen AI Opportunities for Plain-Language Writing, the impetus for this challenge came in part from the encouragement by the Law Society of Saskatchewan, my home jurisdiction, for lawyers to engage in continuous learning about AI and its implications for legal practice.

My usual process for creating public legal content is as follows:

  1. Research: I research the topic to create a draft framework. If a legal process is being described, I outline the steps and identify any requirements to complete each one. I rely on existing credible websites or resources, which can include applicable legislation. The client may also provide internal documents.
  2. Validate: Subject matter experts (SMEs) are identified, and sessions are conducted with the SMEs to verify the accuracy of the draft framework. Inaccuracies are corrected and gaps are filled. I also dig into the common questions they hear from the public and what they wish the public knew about the topic/process.
  3. Refine: The draft content is refined and confirmed with the SMEs until they are satisfied the content is accurate. This can require several sessions with revisions in between.
  4. Audience: Once the content is accurate, it is evaluated for grade level and adjusted for the audience. For many of my projects, this is a grade 7 level. I then again confirm with the SMEs that accuracy wasn’t sacrificed for the sake of plain language.

Based on the guidance provided by law societies and the principles developed by the Centre for Public Legal Education Alberta (CPLEA) noted in my previous post, I understand that Gen AI can’t be blindly relied upon to generate content. A human should always be in the loop to review for accuracy. As noted in the Law Society of Saskatchewan Guidelines for the Use of Generative Artificial Intelligence in the Practice of Law:

Generative AI tools should be treated as equivalent to nonlawyer assistance, and their outputs should be reviewed for accuracy and conformity with the lawyer’s professional obligations.

However, I did experiment with Gen AI (ChatGPT-5 specifically) to determine if my process described above could be streamlined or eliminated altogether. The topic I selected was the Saskatchewan Assessment Process, that is, the process for challenging a lawyer's account. I selected this topic because credible online information already exists, making it easy to confirm accuracy.

As noted in the recently released guide, Using Generative Artificial Intelligence (GenAI) Tools to Obtain Legal Information, by the Saskatchewan Access to Legal Information project (SALI):

How a question is worded can impact the accuracy and quality of AI-generated information. Well-structured prompts (questions) may reduce, though not eliminate, the risks.

The SALI guide offers several tips to produce clear and specific prompts that guided my inquiry, such as defining the area of law and the jurisdiction. I initially used 10 different prompts, including several in an attempt to narrow down the information or fill in gaps. General prompts included:

  • How do I challenge a lawyer’s bill in Saskatchewan?
  • My lawyer’s bill is too high. What do I do in Saskatchewan?
  • How do I apply for assessment in Saskatchewan?
  • What do I do if I’m concerned about my lawyer’s bill in Saskatchewan?

More specific prompts included:

  • Please provide more information about the assessment process in Saskatchewan.
  • What are the steps involved and the cost to challenge a lawyer’s bill in Saskatchewan?
  • How do I apply to the court for an assessment in Saskatchewan?
  • Do I have to go to court to have my lawyer’s bill assessed in Saskatchewan?
  • What happens if I miss the 30-day deadline to apply for assessment in Saskatchewan?
  • Can you draft a late application or affidavit explaining my special circumstances for not applying for assessment within 30 days in Saskatchewan?

Overall, the content generated following each prompt was accurate but not complete, even when more specific prompts were used. The basic steps of the process were clearly outlined and sources, like the Law Society of Saskatchewan, were provided. This was a vast improvement over when I tested the first generation of ChatGPT on the same topic. At that time, ChatGPT generated a lovely but fictitious process that involved a fake Billing Review Committee at the Law Society of Saskatchewan rather than the correct process handled by the Registrar of the Court of King's Bench.

To ensure it wasn't my inability to properly instruct the tool that was resulting in incomplete information, I conducted additional research on how to construct a prompt. The Prompt Engineering guidance from the Queen's University Library offers the following parameters, using the acronym PROMPT:

Following PROMPT, I asked ChatGPT:

I am a client looking for steps to reduce a lawyer’s bill in Saskatchewan in clear, plain language.

This prompt did provide slightly more complete information, but gaps were still evident.

Based on my experiment and those of others like CPLEA, the content generated by ChatGPT cannot be the only step involved in developing a legal information resource for the public. The steps outlined above still need to be followed. However, it may assist in streamlining those steps. At this stage of my experimentation, I see benefit for Step 1: Research, particularly if good online content does not currently exist. With good prompts, a general framework can be generated through ChatGPT to be expanded upon with additional research and validated by SMEs for inaccuracies and gaps. It is a good starting point, not the end product.

As a next step, I will continue to experiment with ChatGPT to determine if Step 3: Refine and Step 4: Audience of my process can be streamlined.
