Thinking, Fast and Slow: Avoiding Errors of Legal Judgment
Daniel Kahneman’s new book, Thinking, Fast and Slow, synthesizes his life’s work as a psychologist. The book is about the systematic errors that limit human judgment.
The six-chapter section on overconfidence is particularly instructive for lawyers, both in helping clients make better decisions and in making better decisions themselves. It appears that excessive optimism and overconfidence are part of the human condition. In fact, an expert’s subjective degree of confidence in his or her predictions is no indication of how accurate those predictions will be.
Research has shown that, while computers are better than humans at solving problems involving logical intricacy, humans are better than computers at exercising judgment because computers don’t know what they don’t know. However, Kahneman points out that humans have a similar problem, which he calls “our almost unlimited ability to ignore our ignorance.” We choose not to know what we don’t know.
In a chapter called “The Outside View”, Kahneman describes two ways of forecasting: the inside view and the outside view. The inside view is how we normally forecast: we focus on our particular circumstances and personal impressions, and we limit our search for evidence to our own experience. We treat every situation as if it were unique, and we don’t seek out statistics on cases similar to our own. Kahneman calls this WYSIATI (what you see is all there is), and he says it often leads to delusional optimism.
This assumption of uniqueness reminds me of an idea that Daniel Gilbert expresses in his book, Stumbling on Happiness: the best way to predict our own happiness in a given situation is to see how others who are already in that situation feel about it. We are each not as unique as we think we are.
Lawyers often take the inside view. Says Kahneman,
I once asked my cousin, a distinguished lawyer, a question about a reference class: “What is the probability of the defendant winning in cases like this one?” His sharp answer that “every case is unique” was accompanied by a look that made it clear he found my question inappropriate and superficial.
The solution he proposes is to take the outside view: obtain statistics for your reference class, then adjust that baseline prediction using specific information about your case. Usually only slight adjustments to the baseline are required. So, for example, if you have a terminal form of cancer but are young and otherwise healthy, you may live somewhat longer than the base mortality rates would predict, but you are unlikely to be cured.
In the litigation example above, lawyers need to be asking themselves questions like, “How many of these cases succeed in court? How many settle, and for what amounts?” You would then adjust your prediction according to whether your case is stronger or weaker than similar claims.
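To make that two-step process concrete, here is a minimal sketch in Python. The numbers, the adjustment, and the claim categories are all invented for illustration; they come from neither the book nor any real data, and a real analysis would substitute actual reference-class statistics.

    # Hypothetical illustration of reference-class (outside view) forecasting.
    # All figures below are invented for illustration only.

    # Outside view: base rates for the reference class of similar claims.
    base_win_rate = 0.35        # share of similar claims that succeed at trial
    base_settlement = 250_000   # typical settlement amount for similar claims

    # Inside view: a modest adjustment for case-specific strengths.
    # Kahneman's point is that this adjustment should usually be small.
    case_strength_adjustment = 0.05   # our case looks slightly stronger than average

    predicted_win_rate = base_win_rate + case_strength_adjustment
    print(f"Predicted chance of success: {predicted_win_rate:.0%}")
    print(f"Anchor for settlement discussions: ${base_settlement:,}")

The design point is simply that the base rate does most of the work; the case-specific information only nudges the prediction.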
In another chapter, “Intuitions vs. Formulas”, Kahneman discusses how statistical algorithms that combine just a few ratings often outperform clinical predictions. Examples of the superior performance of algorithms include bank evaluation of credit risk and the evaluation of new businesses’ prospects of success. These algorithms are successful even when the combinations of features they use are not particularly complex. Checklists such as the Apgar score for the level of distress of newborn babies have been developed using these methods. Checklists have made their way from the construction industry into medicine (see Atul Gawande’s book, The Checklist Manifesto) and could likely be used more extensively in law than they have been so far.
Humans “try to be clever, think outside the box, and consider complex combinations of features in making their predictions.” The weakness of human experts is that they are context-dependent and therefore inconsistent in their predictions, while formulas always return the same answer when given the same input.
Kahneman has come to the counterintuitive conclusion that, in what he terms “low validity” environments, final decisions should be left to formulas.
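For a rough sense of what such a formula looks like, here is a sketch of an Apgar-style checklist score that combines a handful of simple ratings by plain addition. The factors, ratings, and scale are hypothetical and are mine, not Kahneman’s or any published legal scoring model.

    # Hypothetical Apgar-style checklist: rate each factor 0-2 and add them up.
    # The factors and ratings below are invented for illustration.

    def checklist_score(ratings):
        """Combine a few simple 0-2 ratings by plain addition."""
        return sum(ratings.values())

    case_ratings = {
        "strength of documentary evidence": 2,
        "credibility of key witnesses": 1,
        "clarity of the governing law": 1,
        "weakness of the opposing position": 0,
    }

    score = checklist_score(case_ratings)
    print(f"Checklist score: {score} out of {2 * len(case_ratings)}")

    # The value of the formula is consistency: the same inputs always
    # produce the same score, unlike an expert's case-by-case intuition.

Nothing about the combination is clever; its advantage is exactly the consistency described above.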
In a third chapter, “Expert Intuition: When Can We Trust It?”, Kahneman concludes that expert intuitions are likely to be skilled only where the environment is “sufficiently regular to be predictable” and there is an opportunity to learn it through prolonged practice. Some areas of legal practice offer this kind of predictable environment; others do not.
Darryl, this is a superb summary/review of this book! I am halfway through it and have found every chapter so full of insights relevant to my work as a consultant and to the legal industry in which I practice that I have to keep stopping to scribble down my thoughts and notes.
Thanks for sharing such clarity from your notes; very helpful indeed.
Kate
Thanks, Kate. I’m glad you liked the review. It’s a great book!
Regards,
Darryl