Litigation and arbitration are teeming with experts these days.
There are technical experts to explain what happened. Others to say whose fault it was. And another bunch to quantify the damages.
Almost every sizable case has at least one expert on the witness list. Well, never just one. Each side must have their own expert. And, of course, they never agree.
That’s the problem with experts. Recent studies have shown that people have a very hard time understanding what experts say and giving appropriate weight to conflicting expert opinions. Adjudicators are no different from anyone else.
Derek Koehler, a psychology professor at the University of Waterloo, recently wrote in The New York Times about experiments he conducted to assess how scientific debates are reported in the news media.
Media try to give “balance” to their reporting by including the views of dissenting experts. Often the weight of the expert opinion is very strongly on one side of an issue – for example, climate change or the safety of vaccines or certain foods. But the very fact of including the dissenting opinion may give a false impression of disagreement when there is really almost unanimous consensus on a particular point.
In one study Koehler conducted, participants were given a numerical summary of a range of expert opinion on various economic issues. On some issues, a large majority of experts agreed on a conclusion; on others there was more disagreement. (For example, on one issue 93 experts agreed, 2 disagreed and 5 were uncertain; while on another the split was 38/36/27.)
The study found that, when participants were given a written comment from an expert on each side of the question, in addition to the raw numbers, they had much more difficulty distinguishing between the high-consensus and low-consensus opinions. The participants gave much more weight to the dissenting opinions than the raw numbers warranted.
“This distorting influence affected not only the participants’ perception of the degree of consensus, but also their judgments of whether there was sufficient consensus to use it to guide public policy,” Koehler concluded.
What causes this response? According to Koehler:
One possibility is that when we are presented with comments from experts on either side of an issue, we produce a mental representation of the disagreement that takes the form of one person on either side, which somehow contaminates our impression of the distribution of opinions in the larger population of experts. Another possibility is that we may just have difficulty discounting the weight of a plausible argument, even when we know it comes from an expert whose opinion is held by only a small fraction of his or her peers. It’s also possible that the mere presence of conflict (in the form of contradictory expert comments) triggers a general sense of uncertainty in our minds, which in turn colors our perceptions of the accuracy of current expert understanding of an issue.
This is not only a problem for media reporting on public policy issues such as climate change or drug and food safety. It has profound implications for expert evidence in arbitration or litigation as well.
What is an adjudicator to think when faced with experts on either side who are well-qualified, articulate and credible in their opinions?
Piling on more experts doesn’t help: “I’ll see your PhD and raise you two…”
This simply adds to the cost for each of the parties and does nothing to resolve the problem.
Perhaps, in situations where there is a strong consensus of opinion on one side or the other, a party can present evidence on that consensus, but I think the situations where that would resolve a disputed issue are very rare.
Usually, the experts are being asked to apply their expertise to the facts of a particular case. So while there may be a consensus on the basic principles that apply, there is none on the final conclusion. There is just an opinion on either side – and each opinion depends on the assumptions, experience and analysis of that particular expert with respect to those specific facts.
Counsel may try to undermine the opposing expert by attacking the assumptions or methodology used to arrive at a conclusion. Maybe they can drag up some prior inconsistent research or statement by the expert. Or perhaps a series of hypotheticals can be put to the expert that causes him or her to hedge or qualify the opinion to some degree.
None of which really answers the question the adjudicator must face: Which expert is right?
So what does the decision-maker do?
Maybe some of the other evidence in the case is weak. Or perhaps there are credibility issues with other witnesses. So the decision-maker relies on that evidence to come to a conclusion on the critical issues. The expert evidence on one side or the other can then be thrown in to support the reasons for the decision that has been made.
If that’s what really happens, parties are spending an awful lot of money on experts for very little benefit.
But if the case truly turns on the experts, the adjudicator must be given better tools to weigh the conflicting expert evidence.
Counsel need to present the expert evidence in ways that allow adjudicators to better assess when an expert is basing an opinion on a set of facts or principles on which there is a strong consensus and when the expert is expressing a minority or dissenting opinion that is not widely supported.
Having said that, we must also keep in mind that there are cases where the contrarian opinion is the correct one. Many of the greatest scientists fought against the tide of accepted wisdom until their observations, experiments, arguments and theories were accepted and, in turn, became the consensus.
The psychological research shows that we are unduly influenced by a good story, even if it runs counter to the expert consensus on a particular issue. We have difficulty simply determining what that consensus is and what it means. We are also unduly influenced by experts whose opinions agree with our own perception or world view.
Adjudicators must always be aware of this and guard against their own biases in assessing expert evidence.