AI’s Impact on the Legal Profession: Takeaways From Microsoft Research for Canadian Lawyers
Over the last few columns, I have focused primarily on the regulation side of my work in artificial intelligence (AI) risk and regulation. That focus has reflected, in part, my concern about the current regulatory patchwork surrounding generative AI in Canada and the very real dangers of unregulated implementation of AI into our daily lives. That discussion will continue at a later date, but for the next few articles I plan to shift the focus to the research and perspectives on the risk management side of the equation.
The risks associated with AI implementation are not hypothetical. Many readers will be very familiar with issues such as hallucinations, bias, and overconfidence in generated results. These risks are already manifest across sectors and, in my view, represent only the leading edge of the AI risk landscape. The coming risks of AI implementation will be acute, such as a physical injury caused by reliance on inaccurate medical output. They will be chronic, such as the slow erosion of professional judgment as routine tasks are increasingly handed over to automated systems. They will be individual, such as a client receiving incorrect legal advice based on flawed AI drafting. And they will be systemic, such as the embedding of discriminatory patterns into institutional decision-making.
The risks will also shape the future of our profession in significant ways. As I write this column during a brief summer vacation, I wanted to take a more relaxed approach to opening a discussion of which tasks and roles in a lawyer’s daily work may be most affected in the coming years. This month’s post is lighter in tone and deliberately informal in method. It offers a discussion-oriented exercise: a simple mapping of Canadian legal practice areas against a recent Microsoft Research study that examined where generative AI is already being used successfully in the workplace. The results are open to interpretation, and I may not entirely agree with all of them, but I found the process useful and thought-provoking, and I hope that you do as well.
Microsoft’s Research
In July 2025, Microsoft Research released what may be the most concrete, data-driven snapshot to date of how generative AI aligns with real work. Unlike much of the commentary in the AI space, which relies on surveys or speculative opinion, this study analyzed more than 200,000 anonymized interactions from U.S. users of Microsoft Copilot in 2024. Each of these conversations was classified using O*NET’s framework of “work activities,” and then used to develop an AI applicability score for a wide range of occupations.
The researchers used three key indicators to measure AI’s effectiveness for a given work activity:
- Coverage — how frequently a particular activity appeared in Copilot usage;
- Completion — how often Copilot appeared to complete the task successfully; and
- Scope — how broadly that activity contributes to the core functions of a given occupation.
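To make the three indicators concrete, here is a deliberately simplified sketch of how coverage, completion, and scope could be combined into a single applicability score for an occupation. This is my own illustration, not the study's actual formula: the indicator values, the equal weighting, and the `applicability_score` function are all invented for explanatory purposes.

```python
# Hypothetical illustration only; the study's actual methodology differs.
# Each work activity carries three indicators normalized to the 0-1 range,
# and the occupation's score is the average across its activities.

def applicability_score(activities):
    """activities: list of dicts with 'coverage', 'completion',
    and 'scope' values between 0 and 1."""
    if not activities:
        return 0.0
    per_activity = [
        (a["coverage"] + a["completion"] + a["scope"]) / 3
        for a in activities
    ]
    return sum(per_activity) / len(per_activity)

# Example: an occupation dominated by writing and summarizing tasks
writing_heavy = [
    {"coverage": 0.8, "completion": 0.7, "scope": 0.9},
    {"coverage": 0.6, "completion": 0.8, "scope": 0.5},
]
print(round(applicability_score(writing_heavy), 2))  # prints 0.72
```

The point of the sketch is simply that all three indicators must be high before an occupation scores well: frequent use (coverage) without successful completion, or success on tasks peripheral to the role (scope), drags the score down.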
The research can be found here: https://arxiv.org/pdf/2507.07935.
The research offers a view into where AI tools are already being used with meaningful effect. Not surprisingly, the systems align most strongly with knowledge-based and communication-heavy work, including information gathering, summarizing, writing, and explaining. In contrast, AI systems appear less effective in contexts that are heavily physical, highly interactive, or involve fine-grained interpersonal nuance.
At a more aggregated level, the highest-scoring occupational groups were in Sales; Computer & Mathematical; Office & Administrative Support; Community & Social Service; Arts/Media; Business & Financial Operations; and Education/Library. By comparison, legal roles sit toward the bottom of the ranking because legal work is hybrid: much of it is desk-based research and drafting, but a great deal involves live advocacy, client relations, and discretionary judgment that are harder to capture as discrete, automatable tasks. In this post, I use the study’s top 40 occupations as task analogs to assess legal sub-tasks, not to suggest that legal roles themselves are currently top 40.
While the research is grounded in U.S. data and organized around American occupational taxonomies, it provides an informative reference point for our own context. In the next section, I offer an informal Canadian adaptation of this work, with the express goal of fostering conversation about the evolving AI risk landscape in the legal profession. This exercise is not about predicting specific outcomes or prescribing staffing decisions.
A Very Simple (and Entirely Non-Scientific) Methodology
To create a Canadian counterpart to the Microsoft Research work, I’ve conducted a light-touch mapping exercise. The goal is not to produce a rigorous or replicable model of occupational displacement, but to stimulate dialogue about how generative AI tools may differently affect legal tasks and practice areas in Canada.
Step 1: The Legal Roles
The starting point was the Canadian Bar Association’s list of National Sections, which serves as a practical proxy for common areas of legal practice in Canada. I limited the scope to substantive law sections only, excluding career-stage, identity-based, or structural sections (such as “Young Lawyers” or “In-House Counsel”). The list can be found here: https://www.cba.org/Sections.
Step 2: The AI Impact Scores
The anchor data came from Microsoft’s recent research, which highlighted 40 occupations where generative AI tools are already being used with meaningful effect. These rankings reflected a blend of three factors, as described earlier: how frequently a task appeared in Copilot usage (coverage), how often Copilot seemed to complete it successfully (completion), and how central the task was to the occupation as a whole (scope).
Occupations with strong communication and drafting elements, such as writers, customer service representatives, and technical specialists, appeared near the top of the list. Others, like archivists and data scientists, fell more in the middle range. Together, these provided a useful set of analogs for mapping against legal practice areas.
Step 3: The Mapping
For each legal practice area, I selected three occupations from Microsoft’s top-ranked list that resembled common tasks in that area. To keep the comparison balanced, I looked at one example from each of three categories: drafting and editing, analysis and advisory work, and client or process-oriented roles. From there, I generated a simple ranking across the CBA sections. The underlying scoring exercise is deliberately basic and not predictive in any formal way; it is meant only to provide a rough sense of where AI tools might enhance, support, or (less commonly) replace tasks in Canadian legal practice.
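The mapping in Step 3 can be sketched in a few lines. Everything below is a placeholder: the section names are real CBA sections, but the analog scores are invented numbers standing in for the study's rankings, used only to show the straight-average-then-rank arithmetic.

```python
# Illustrative sketch of the Step 3 mapping; all scores below are
# invented placeholders, not figures from Microsoft's research.

proxy_scores = {
    "Labour & Employment": {       # three hypothetical analog occupations
        "drafting/editing": 0.78,
        "analysis/advisory": 0.70,
        "client/process": 0.74,
    },
    "Municipal Law": {
        "drafting/editing": 0.55,
        "analysis/advisory": 0.48,
        "client/process": 0.50,
    },
}

# Each section's score is the straight average of its three analogs.
section_scores = {
    section: sum(scores.values()) / len(scores)
    for section, scores in proxy_scores.items()
}

# Rank sections from most to least aligned with current AI use.
ranking = sorted(section_scores, key=section_scores.get, reverse=True)
print(ranking)  # prints ['Labour & Employment', 'Municipal Law']
```

Because the final score is a plain average of three hand-picked analogs, swapping any one analog shifts a section's rank, which is exactly the fragility acknowledged in the limitations below.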
Limitations and Cautions
Before turning to the results, a few important caveats are in order. This is not a study of AI reliability. It is a snapshot of where AI tools are being used successfully in live environments. Microsoft’s methodology focuses on applicability: tasks where AI has shown measurable traction in practice, not where it can or should be relied on without supervision in legal contexts.
Indeed, the researchers draw a clear line between user goals (where AI supports or accelerates a human task) and AI actions (where the model appears to carry out an activity independently). Our mapping carries forward that foundational caution: applicability does not equal sufficiency.
A few additional limitations deserve emphasis:
- The method is deliberately simple and therefore debatable. For each CBA section, we selected three proxy occupations from Microsoft’s Top 40 list (one in drafting/editing, one in analysis/advisory, one in client/process). The final score is a straight average, so different analogs would yield different scores. Selecting analogs from the Top 40 list also introduces an anchoring bias toward higher applicability; a broader comparator set could narrow the spread or shift the rankings. Accordingly, this exercise is meant to invite discussion, not to predict job outcomes or staffing models.
- This is U.S. usage data applied to Canadian legal work. The underlying data comes from U.S. users of Microsoft Copilot, linked to the O*NET occupational taxonomy. We assume that core legal work activities such as research, writing, form generation, and client communication are broadly comparable across borders.
- The AI frontier is moving rapidly. This research reflects the state of play in mid-2025. As underlying models evolve, and as platform integrations change, patterns of use will shift. This is a snapshot, not a forecast.
- The structure of the scoring favours communication-heavy tasks. Microsoft’s top occupational groups include Sales, Office/Admin, Arts/Media, and Business/Finance. These are areas characterized by written, repeatable, or structured communication. Legal work that mirrors those traits scores higher. Legal work grounded in live advocacy, interpersonal negotiation, or physically grounded procedures scores lower because it resists automation.
- Ethical and regulatory considerations are outside the scope of this ranking. This model does not consider legal privilege, confidentiality, model hallucination, professional responsibility, or evolving court directives on AI usage.
What the Mapping Suggests: Key Observations
With the methodology and caveats behind us, what does this mapping tell us about the potential near-term impact of generative AI on legal practice areas in Canada?
To keep things digestible, I’ve highlighted only the five areas that appear most aligned with current AI capabilities and the five that appear least aligned. The ranking reflects relative applicability, not disruption or risk per se, but it provides a useful prompt for reflection about where today’s tools may fit most naturally into legal work.
Top Five Practice Areas (Most Aligned with Current AI Capabilities)
- Labour & Employment
- Family Law
- Child & Youth Law
- Elder Law
- Dispute Resolution (ADR)
While I personally found the list to include some unexpected practice areas, what these areas share is a strong orientation toward drafting, advisory memos, and client-facing communication, often directed at non-specialist audiences. That mirrors the kinds of tasks Microsoft’s study found AI handling most effectively: gathering information, summarizing, writing, and explaining. These areas rise to the top not because they’re “easier,” but because they’re more structurally compatible with the strengths of generative AI today.
Bottom Five Practice Areas (Least Aligned with Current AI Capabilities)
- Municipal Law
- Taxation Law
- Real Property
- Pensions and Benefits
- Charities and Not-for-Profit
At the other end of the spectrum, these lower-ranked sections often involve form-driven but nuanced workflows, regulated interactions, or compliance environments where automation gains may be offset by interpretive complexity or jurisdiction-specific variations. They may also include tasks less frequently represented in Microsoft’s usage dataset, which limits AI applicability as currently measured.
Five trends emerge from this data:
- Drafting and advisory-heavy roles rise to the top. Where legal practice centres on written communication, particularly where lawyers are translating complex issues for non-lawyers, AI is more applicable. These are tasks that blend synthesis, explanation, and tone management: areas where large language models are increasingly active.
- Portal and form-based sub-tasks are “automation-ready.” Tasks that involve structured documentation and predictable procedural steps tend to align well with the kinds of work AI is already supporting. These sub-tasks often mirror roles characterized by process coordination and high-volume document handling. As a result, we are likely to see faster and more consistent AI uptake in these areas, even within practices that otherwise involve bespoke legal work.
- Litigation shows a bifurcated impact: assist high, automate low. Tasks like issue-spotting, legal research, and first-draft briefing are amenable to AI support. But live advocacy, evidentiary analysis, and credibility assessment remain deeply human and context-driven. Microsoft’s study observes a similar trend: AI is far more likely to assist than to perform in complex domains.
- Public law practices benefit from their communication burden. Areas like constitutional law, Aboriginal law, international law, and municipal law all carry a significant outward-facing role. Practitioners in these areas often draft position papers, engage stakeholders, or frame regulatory narratives. These functions sit squarely within the current AI comfort zone.
- Data-heavy work benefits from support, not substitution. Tax, competition, insurance, and environmental law benefit from AI’s ability to summarize, extract patterns, and compare documents. But the underlying analytical work remains legal-core.
Taken together, these observations align with Microsoft’s broader conclusion that the current AI frontier is built on knowledge and communication tasks, not abstract reasoning or discretionary judgment. In legal practice, that suggests augmentation more than automation, and risk exposure that varies not just by practice area, but by sub-task.
Key Takeaways: Reading Between the Lines
The purpose of this mapping exercise is not to sort winners from losers, nor to suggest which legal roles are “safe” or “at risk.” Instead, it offers a directional glimpse into how current-generation generative AI tools align with the structure of legal work and where that alignment may prompt us to think differently about risk, readiness, and regulation. A few final key reflections emerge:
- Applicability does not mean obsolescence.
A high score signals that AI tools are already being used to assist with similar tasks, not that the underlying legal function is disappearing. In practice, AI may draft the first version, but the lawyer still decides what is accurate, persuasive, and ethical.
- Sub-task sensitivity matters.
No practice area is uniformly automatable or uniformly insulated. The risk profile depends less on the practice label and more on the mix of tasks within it. Document drafting and issue-spotting may attract tools more quickly than oral advocacy or discretionary judgment. Firms may benefit from a granular understanding of task exposure, particularly when building internal policies or evaluating new tools.
- This is primarily a story of augmentation.
Most legal use cases today are about supporting the lawyer, not replacing them. This suggests a need to reframe how we talk about legal AI, not in binary terms of risk and safety, but in terms of professional responsibility in augmented environments. For regulators and educators, this raises important questions about training, supervision, and competence.
- Regulatory responses must evolve alongside tools.
As AI becomes more deeply integrated into mainstream legal workflows, traditional sources of professional guidance, such as ethical rules, court practice directions, and client engagement norms, will need to adapt. Risk management in this environment means moving beyond caution toward intentional design: building policies and protocols that align with core legal obligations.
- Structured, good-faith mapping exercises support professional dialogue.
The methodology here is intentionally simple and openly debatable. But even imperfect tools can help surface questions that need asking about issues such as risk exposure, professional identity, and how we govern emerging technologies in a complex service profession. If this post helps prompt those conversations, even if you strongly disagree with the methodology or conclusions presented, it has done its job.
Conclusion
This mapping exercise is, at its core, a way to think out loud about how generative AI is beginning to intersect with legal work. While the rankings themselves should not be over-interpreted, they highlight patterns that merit further discussion: the strong alignment between AI and communication-heavy legal tasks, the growing presence of AI in form-based processes, and the continued resistance of courtroom and advocacy work to automation.
What matters now is how we respond. AI is no longer an abstract future concern. It is already influencing how legal services are delivered, how clients interact with their counsel, and how legal professionals manage time, risk, and complexity. As these tools become more embedded in practice, the task ahead is not only to assess which tasks and practice areas are AI-compatible, but also to ensure that professional values, ethical obligations, and regulatory frameworks evolve alongside the technology.
Note: Generative AI was used in the preparation of this article.



