Column

AI and ADR Neutrals: When Should Its Use Be Disclosed? Three Emerging Approaches to Transparency in Mediation and Arbitration Practice

Artificial intelligence is quickly becoming part of everyday professional practice in dispute resolution. Yet while much of the discussion about AI in law focuses on lawyers using these tools, far less attention has been paid to their use by mediators and arbitrators.

Until recently, the issue received little attention within the ADR community itself, and most mediation and arbitration codes of conduct still say little or nothing about artificial intelligence.

This raises an important question: should mediators and arbitrators disclose when they use AI tools in a case?

The answer may shape emerging expectations about transparency in dispute resolution practice.

For ADR practitioners, the question is not whether these tools will be used, but how they can be used while preserving the principles that make dispute resolution processes credible: fairness, transparency, and trust in the neutral.

When AI Use May Arise in Practice

In Canada, where mediation and arbitration are widely used across commercial, employment, construction, and condominium disputes, these questions are becoming increasingly relevant.

Mediators and arbitrators may encounter AI-generated material before they ever decide to use these tools themselves. In mediation practice this can arise in simple ways, for example when a party submits a chronology, summary, or draft prepared with AI assistance.

As these tools become more widely used in legal practice, their presence in dispute resolution processes will become increasingly difficult to ignore.

Three Emerging Approaches

Discussion in mediation, arbitration, and legal ethics circles is beginning to coalesce around three broad approaches to disclosure.

Each reflects a different view of how technology should fit within the professional responsibilities of mediators and arbitrators. Despite these differences, they share a common concern: how to integrate new technology without undermining fairness, confidentiality, or trust in dispute resolution processes.

AI as a Professional Tool

One view emerging among practitioners—and currently the most common—is that disclosure is unnecessary when AI is used purely as a professional support tool rather than as a decision-making mechanism.

Under this approach, AI is treated much like other technologies professionals routinely use. Lawyers rely on research databases, document management systems, and proofreading software without disclosing their use to clients or opposing counsel. AI tools, in this view, represent another form of professional assistance.

Applied to ADR practice, mediators and arbitrators might use AI systems for tasks such as organizing issues, summarizing submissions, improving written communications, drafting procedural correspondence, or refining written materials.

So long as the neutral remains responsible for all analysis and decisions, the use of AI in these ways is comparable to other productivity tools. On that basis, disclosure may not be required.

Transparency and Party Confidence

A second perspective holds that some level of disclosure is good practice, even if not strictly required.

ADR processes depend heavily on party confidence in the neutrality of the mediator or arbitrator. Because AI remains a relatively new technology, clarity about its use may strengthen confidence in the process.

Under this approach, disclosure would not need to be extensive or technical. A neutral might simply acknowledge that secure technological tools—including AI systems—may be used for administrative or drafting assistance while retaining full responsibility for all professional judgments.

Such disclosure could appear in a mediation agreement, procedural order, or general statement of practice. The objective would not be to regulate AI use, but to avoid misunderstanding about the role technology plays in the neutral’s work.

When AI Influences Decision-Making

A third perspective supports disclosure when AI tools play a more substantive role in analysis or decision support.

Examples might include situations where AI systems are used to analyze large volumes of documents, identify evidentiary patterns, generate predictive assessments of case outcomes, or assist in drafting arbitral awards.

In these circumstances, the concern is not simply transparency but process fairness.

This issue is particularly significant in arbitration. Arbitrators issue binding decisions and are expected to exercise independent judgment. Several arbitration institutions have emphasized that while AI may assist with research or editing, the arbitrator must remain solely responsible for the reasoning and decision.

Where technology materially influences analysis or recommendations, parties may reasonably expect to be informed.

Emerging Guidance on AI in ADR

Outside the ADR field, courts and legal regulators in Canada have also begun addressing the use of artificial intelligence in legal practice. Several courts have issued guidance reminding counsel of their obligation to verify authorities and submissions prepared with the assistance of AI tools. Law societies have likewise emphasized duties of competence, supervision, and protection of confidential information when using AI in legal work.

Although discussion about AI in dispute resolution is expanding rapidly, formal guidance remains limited.

Several ADR organizations have begun addressing the issue, particularly in the context of arbitration. Guidance and discussion materials have been published by organizations such as the ADR Institute of Canada, the Chartered Institute of Arbitrators, the American Arbitration Association/ICDR, the Silicon Valley Arbitration & Mediation Center, and the International Bar Association, which has issued Guidelines on the Use of Generative Artificial Intelligence in Mediation. In Canada, professional discussion about the appropriate use of AI in ADR practice is continuing as institutions and practitioners consider how guidance may evolve.

Most of this guidance focuses on arbitration rather than mediation, reflecting the different roles neutrals play in those processes.

Although emerging guidance addresses responsible use more often than disclosure specifically, it consistently emphasizes transparency and professional accountability. Both the Chartered Institute of Arbitrators and the American Arbitration Association stress that arbitrators must remain fully responsible for their analysis and decisions even when technology assists with research, drafting, or document management.

Ethical Considerations

Although ADR-specific rules on AI remain limited, broader legal ethics guidance is beginning to converge around several common principles.

Professionals who use AI tools are expected to understand the capabilities and limitations of the technology, supervise its use appropriately, verify outputs for accuracy, protect confidential information, and maintain independent professional judgment.

AI systems can also produce inaccurate or “hallucinated” information, reinforcing the need for neutrals and counsel to verify outputs carefully before relying on them.

These obligations apply whether or not AI use is disclosed.

A Practical Way Forward

In practice, the distinction for most mediators and arbitrators may be relatively straightforward.

Using AI tools to edit writing, organize issues, or assist with administrative drafting is unlikely to affect the neutrality or integrity of the process. In such circumstances, disclosure may not be necessary.

More caution—and greater transparency—may be appropriate whenever technology begins to influence substantive analysis, recommendations, or decisions.

For mediators and arbitrators, the practical question may simply be this:

Would the parties expect you to disclose that these tools were used?

The use of artificial intelligence in dispute resolution practice is still in its early stages. As experience with these tools grows, mediators, arbitrators, and ADR institutions will play an important role in shaping the professional norms that guide their responsible use.

Technology will evolve quickly.

The principles that sustain trust in dispute resolution should not.
