Column

Some Thoughts on AI and Conflict Management

In June 2025, Yuval Noah Harari (Note 1) was interviewed by Poppy Harlow at the @WSJNews Leadership Institute on AI and Human Evolution. It is a fascinating discussion and well worth the 25 minutes it takes to watch.

While Harari acknowledges that AI has enormous positive potential, he also warns of some dangerous issues affecting many areas of human life.

Highlights of Harari’s insights:

  • AI is not just another tool. It is an agent capable of making decisions independently, generating new ideas, and learning without human input. A tool like the printing press cannot write a book by itself or decide which books to print.
  • It won’t necessarily follow the “rules”. Even if we take time to train AI to follow certain principles or values, it will still do things that surprise or even horrify us.
  • AI feeds on digital information about human behaviour. It copies behaviour, not the principles it has been taught. If it is taught not to lie but is exposed to examples of humans lying, it will imitate the lying.
  • There may be a long time-lag between the introduction of AI and the recognition of social, economic or political consequences.
  • Information-based fields are most at risk. Finance, for example, is an “ideal playground”. [Legal is also largely information-based]
  • Nothing is off limits. AI is already being designed to replace human leaders in text-based religions, drawing on vast bodies of text and commentary.
  • People are already consulting AI counsellors and relationship advisers. [Is legal advice different?]
  • AI will not solve our own human problems. Trust is collapsing all over the world and AI will not save us. Instead, we need to solve those problems first and then use AI to enhance collaboration.
  • There will not be one AI. There could be millions or billions of AIs, each with different characteristics, in constant competition.

It is a daunting picture.

What does this mean for those involved in conflict management, including legal professionals and mediators?

I am far from an expert in AI, but I am trying to learn and to identify blind spots. We are already witnessing how AI is changing the practice of law. AI can be useful for those seeking legal information or advice on their own. But because AI systems draw exclusively from digital information, they miss much of what matters in human interaction.

As Chris Corrigan observes:

… AI has been trained on the detritus that humans have left scattered around on the Internet. It has been raised on all the ways that we show up online. And although it has also been trained on great works of literature and the best of human thought… Harari also points out that the quantity of information in the world means that only a very, very tiny proportion of it is true…

AI has no way of knowing that when there are crises in a community, human beings often behave in very beautiful ways. Folks that are at each other’s throats online will be in each other’s lives in a deeply meaningful way, raising money, rebuilding things, looking after important details. There is no way that AI can witness these acts of human kindness or care at the scale with which it also processes the information record we have left online. It sees the way we treat each other in social media settings and can only surmise that human life is about that. It has no other information that proves otherwise.

The same is true in conflict involving legal matters. AI doesn’t see how a skilled mediator or lawyer can help people in conflict work through their differences in a healthy way. AI cannot witness the transformative moments in a mediation room when humans experience “aha moments”, feel their burdens lifted when truly heard, shed tears of relief, or share hugs of reconciliation.

I have so many questions, including:

  • If all AI has are allegations, accusations and demands in digital legal documents (and social media!), how can it guide someone to participate well in an out-of-court process?
  • Even if we try to educate one AI about the magic of developing face-to-face trust and connection as a key part of conflict transformation, will it still operate based on what it actually sees humans doing rather than what we tell it?
  • What about the billions of other AIs without that education, all competing against each other for our attention?

The underlying challenge

Harari suggests that we need to work out our human problems first (including our diminishing trust in one another) and not expect AI to save us. AI will only reflect (and possibly magnify) that dysfunction.

In that scenario, the work of legal and conflict resolution professionals is critical in supporting the human capacity to build trust, to foster understanding, respect, and dignity, and to resolve conflict. Every little bit helps.
__________

Note 1: Author of “Sapiens: A Brief History of Humankind” and most recently, “Nexus: A Brief History of Information Networks from the Stone Age to AI”. Thanks to Chris Corrigan for the hat tip about this interview.
