How AI Can (And Can’t) Enhance Practice Efficiency
I use AI to assist with social media posts, writing outlines, idea generation, initial research, image generation and more. AI acts as my administrative assistant, creating efficiencies in my inbox and reminding me of important tasks I need to complete. A few weeks ago, AI reminded me that I had a blog deadline and that I should get writing. Given the promise of AI, I decided to run a limited experiment in my holiday downtime.
I wanted to understand whether AI could help my practice management by creating an efficiency. That experiment utterly failed, so I conducted a root cause analysis to understand why. My conclusion was that the failure occurred because of one (or all) of these factors: the specific AI tool, the prompting, general AI limitations, or me.
The Productivity Promise
Stanford professor Jeremy Utley provides practical guidance on how AI can be leveraged to boost productivity. In a video called “Stanford’s Practical Guide to 10x Your AI Productivity,” he advises focusing on the pain points of your role and trying to find productivity gains in those areas.[1] For me, that is the administrative side of my practice. I find it tedious and would love to offload some of those tasks, but I also know that they’re necessary and important.
While I use the LSO practice management guidance, tips and precedents,[2] at the end of the year I try to set aside some time to review my internal policies and processes, just to check that they’re current and touch on trending issues or regulatory changes. When it comes to policies, procedures or other internal guidance, AI can be incredibly helpful to:
- Provide an outline of important laws, regulations or relevant issues to consider
- Help with brainstorming and guidance to get started
- Draft or re-draft paragraphs (or your entire corporate policy)
- Identify outdated information and provide a checklist of areas of focus
- Initiate research into required updates by providing helpful summaries, resources, and guidance
So, when I came across Professor Utley’s videos, I thought to myself, “10x my productivity and get rid of the administrative part of my role? Sign me up!”
The Problem and the Experiment
Problem: “Can AI create practice efficiencies by helping me to update my internal corporate policies?”
Sample Size: One.
AI Tools Used: One.
Comparative AI Tools Used: None.
Methodology: Draft a SMART prompt that provided AI with context and expectations for the task; upload the reference documents, which included an outdated corporate policy and information about my practice; refine the SMART prompt and re-run it (an illustrative prompt appears after this summary).
Results: AI spit out a checklist of common-sense things that I should look for, like “review all references and check for updates,” “update outdated information,” “check for changes,” “confirm URLs” and “confirm if this reflects current marketing channels or strategies in use.”
Next Steps: Ended AI chat.
Conclusion: Experiment failed.
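For readers unfamiliar with the approach, a SMART prompt borrows the familiar goal-setting framework (specific, measurable, achievable, relevant and time-bound). The following is a hypothetical reconstruction, not my exact wording, of what a prompt for this kind of task might look like:

“You are an administrative assistant supporting a sole practitioner in Ontario. Using the attached marketing policy and the attached description of my practice, identify every section of the policy that is outdated, flag any statement that may conflict with current regulatory guidance, and propose replacement wording for each flagged section. Return your answer as a table with three columns: section, issue, proposed revision. Complete the full review in this single response and list any assumptions you make.”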
Analyzing the Results (& Why It Failed)
Once I got the results and realized that my brief AI experiment had failed, I ended it and updated the policy page by page using my brain. Professor Utley warns us about this. He suggests, in simple terms, that many people ask AI a question and then, when a valuable answer isn’t forthcoming, give up on the tool.
In an article in the Guardian, author Sophie McBain contemplates whether AI is creating a “golden age of stupidity” and, to support this position, she relies on an example provided by Michael Gerlich, head of the Centre for Strategic Corporate Foresight and Sustainability at SBS Swiss Business School. Professor Gerlich found that students who used AI more frequently scored lower on critical thinking, and he illustrates the problem with a candle comparison:
“I always use the example: imagine a candle. Now, AI can help you improve the candle. It will be the brightest ever, burn the longest, be very cheap and amazing looking, but it will never develop to the lightbulb,” he says. To get from the candle to a lightbulb you need a human who is good at critical thinking, someone who might take a chaotic, unstructured, unpredictable approach to problem solving.[3]
According to Professor Gerlich, while AI can make us “cleverer and more creative,” the way that most of us use AI produces “bland, unimaginative, factually questionable work.” When companies use AI in their internal practices without offering AI training, the result may be that innovation is stifled. He suggests that these organizations “risk producing teams of passable candle-makers in a world that demands high-efficiency lightbulbs.” In my situation, it is possible that I was the untrained candle-maker, and my experiment was bound to fail because I didn’t know how to ask AI to conduct the administrative task.
I’ve seen countless examples of lawyers “vibe coding” and using AI to automate routine processes like completing forms, initial drafting, summarizing documents and synthesizing information from multiple sources. And I’ve seen (and used) some impressive legal practice tools. There is no doubt that AI creates efficiency and automation in legal practice, but not always. It is important to use the right tool and to acknowledge that AI does not lead to the best results in every situation. When it comes to practice policies or any other administrative efficiency, the contextual nuances that apply only to you and your organization can often be understood by AI only when it is effectively integrated into your workflow. Training AI and becoming familiar with what it can and cannot do takes time. Maybe I did not spend enough time working alongside the AI, teaching it what it needed to know to provide the best results in my experiment?
As we’re all learning, very rapidly, lawyers have to be continually wary of fabricated and miscited caselaw.[4] When it comes to the administrative side of a practice, the risks may be lower, but they do not disappear, and advice given by AI may not align with LSO Regulations, By-Laws and practice guidance. The reality is that editing and revising AI-generated content can be as time-consuming as drafting manually, and manual intervention is almost always needed, particularly for edits or rewrites. Instead of taking the content at face value, we must continually question whether the information is accurate or whether the AI has produced a hallucination. The quality and relevance of AI recommendations are not always reliable, which also calls organizational reliance on AI into question. It is possible that I was unrealistic about the amount of engagement that would be needed after AI produced its solution – manual intervention would have been required regardless, so perhaps the issue is that while I may have saved some time, the 10x goal was never achievable in the first place.
Concluding Thoughts
My AI experiment was a resounding failure, and though it ate up my time and energy (reminding me that I might need better hobbies), I still felt the failure was worth sharing because, ultimately, the use of AI is a series of ongoing learning experiments. It is natural to have high hopes for what technology can offer, but AI may not always be up to the task. It may not achieve the efficiency you were looking for, but choosing to experiment, even when facing potential failures, setbacks and wasted time, helps us remain part of the conversation rather than being left behind with the Luddites. My suggestion: use technology thoughtfully, rely on your judgment and continue experimenting with new approaches – you may learn much more from the process (and its failures) than from the outcome.
_____________________
[1] Jeremy Utley, “Stanford’s Practical Guide to 10x Your AI Productivity,” YouTube (EO Global) (25 August 2025), online: https://www.youtube.com/watch?v=yMOmmnjy3sE.
[2] See these LSO resources: Technology Resource Centre (https://lso.ca/lawyers/technology-resource-centre) and Practice Supports & Resources (https://lso.ca/lawyers/practice-supports-and-resources).
[3] Sophie McBain, “Are We Living in a Golden Age of Stupidity?” The Guardian (18 October 2025), online: https://www.theguardian.com/technology/2025/oct/18/are-we-living-in-a-golden-age-of-stupidity-technology.
[4] See, for example, Ko v. Li, 2025 ONSC 2766.





As I was reading your thoughtful post, I had the following thought:
We now have over 75 reported cases where lawyers or self-represented litigants have been admonished by the courts for submitting hallucinated cases. See my Guide to AI Regulation in Canada for the ongoing list: https://uwindsor-law.libguides.com/AI/Regulation#s-lg-box-16992223
But those are the situations where someone is “watching” what is going on. How many contracts and other agreements are being drafted behind the scenes where NO ONE is “watching” what is going on?
Presumably, those are going to hit the courts in 5–10 years’ time, when the parties realize that the terms were not drafted properly. Will anyone attribute those errors back to the proliferation of AI in legal practice? Somehow we need to train clients to demand that their lawyers disclose, in the text of a written document, that AI was used, so that the lawyers can be held accountable down the road as well.