Column

Accuracy, Precision, and T-Shirts

How precise are the following statements?

  1. The Canadian public debt as of 15 December 2010 was $275,872,478,414.44 CDN.
  2. Canadian hourly-billing lawyers worked an average of 2043.96 hours last year.
  3. Toronto Blue Jays slugger Jose Bautista hit .26 in 2010.

One answer: They are each precise to two decimal places.

Another answer: They are precise to 14 figures, six figures, and two figures, respectively.

I hereby state that I looked up answers to all three items before writing this column. So which of them do you believe? 

Chances are, based on precision alone, you believe one of them. No one knows the debt down to the penny, and who ever heard of a hitter, let alone a breakout star, hitting .26? But it’s reasonable to believe a survey might show lawyers who track billable hours billing an average of 2043.96 hours last year.

Indeed, two of the three values are bogus. (I said I looked them up; I didn’t say I transcribed correct answers.)

But the bogosity adheres to the first two. Jose Bautista indeed got base hits in 26% of his official at-bats in 2010, or 0.26. However, we’re so used to seeing batting averages expressed as three digits of precision that the .26 number looks false. If I told you he hit .275, unless you’re absolutely up-to-date on Blue Jays stats, you’d undoubtedly have said that was right. But he didn’t hit .275, he hit .260, which is equal to .26. 

(For the record, the Canadian public debt was about twice the number listed on 15 December, and no one has any idea of the average number of hours without first deciding who’s included in the sample. What about the aging solo practitioner in Medicine Hat who still keeps two occasional clients? The lawyer just starting out in the small BC fishing community of Ucluelet? Count all the individual practitioners and it’s clear the number is significantly overstated.)

All of these numbers had various degrees of precision about them. Likely #1 was so precise that it made you question its accuracy – or at least it should have. If you thought about it, you’d have said the same about #2. And perhaps #3 seemed so lacking in precision that it, too, had to be bogus.

Precision and accuracy are very different animals.

What if I’d said instead:

  1. The Canadian public debt is half a trillion dollars.
  2. Canadian hourly-billing lawyers in the 50 largest firms billed 2000 hours last year on average. 
  3. Jose Bautista got one hit in every four at-bats.

They aren’t very precise, but all are reasonably accurate. (For baseball stats geeks, the difference between Bautista hitting .260 and .250 was about six total hits last year.) 
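That parenthetical arithmetic is easy to sanity-check. A minimal sketch, assuming roughly 569 official at-bats for Bautista in 2010 (a figure not given in the column, used here only for illustration):

```python
# Sanity-check the "about six hits" claim: how many hits does each
# batting average imply over a season's worth of at-bats?
at_bats = 569  # assumed at-bat total, for illustration only

hits_at_260 = round(0.260 * at_bats)  # hits implied by a .260 average
hits_at_250 = round(0.250 * at_bats)  # hits implied by a .250 average

print(hits_at_260 - hits_at_250)  # the gap works out to about six hits
```

The point survives even if the at-bat total is off by a few dozen: a ten-point swing in batting average is a handful of hits over a full season.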

We are attracted to precision in this age of reason, but precision alone is no guarantee of value. It’s possible to be highly precise but wildly inaccurate.

Yet in report after report in the legal world, we are bombarded with data whose precision may obscure questions of accuracy. Hourly billing data, for example, says very little about efficacy; I’ve seen cases where lawyers billed thousands of hours without delivering value because, for example, case strategy was misaligned with business goals. Likewise, is the best attorney the one who bills the most hours?

When looking at data, the first thing to do may be to strip all precision from the data and then test it for accuracy. 

That’s where the T-shirts come in.

T-shirts aren’t sized with great precision; rather, they’re marked small, medium, large, and extra-large, with occasional XS and XXL outliers. Apply “T-shirt sizing” when you first look at data. Is it about right? Does it pass the “sniff test”? 
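One way to make the sniff test concrete is to compare a reported figure against a rough expectation and label only the order of magnitude. This is a toy sketch, not a standard method; the thresholds and the function name are invented for illustration:

```python
# A toy "T-shirt sizing" check: ignore the reported precision and ask
# only whether the value is in the right ballpark relative to a rough
# expectation. Thresholds are arbitrary, chosen for illustration.
def t_shirt_size(value, expected):
    """Coarsely label how far a reported value sits from a rough expectation."""
    ratio = value / expected
    if 0.5 <= ratio <= 2:
        return "about right"
    if 0.1 <= ratio <= 10:
        return "off by a size or two"
    return "fails the sniff test"

# The column's debt example: reported ~$276 billion, but the true figure
# was about twice that (~$550 billion), so it lands just inside "about right".
print(t_shirt_size(275_872_478_414.44, 550e9))
```

Note that the precise-to-the-penny debt figure passes this coarse check only barely; the precision told you nothing about that.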

Data with high levels of precision start to feel like a math test. It’s generally accepted that not everyone in the legal world is a math whiz. However, you don’t have to be any sort of math star to think about precision and accuracy as separate, independent factors. Both can give you insight into the value of the number in question.

A quick assessment of accuracy – T-shirt sizing style – can tell you something about the likelihood that the data point is valid. A quick assessment of precision can tell you something about the person presenting the data point.

As for the Blue Jays… well, you don’t need a lot of math to know they’re markedly better than my hometown Seattle Mariners!

Comments

  1. Good article. You’ve described a problem that exists in an area of law that I sometimes think I know a bit about.

    The phrase that describes the level of certainty required for decisions in Canadian civil law is “the balance of probabilities”. (The Supreme Court of Canada said so, in F.H. v. McDougall, 2008 SCC 53: “There is only one standard of proof in a civil case and that is proof on a balance of probabilities.” Who am I to argue, right?)

    In non-statistical terminology, that concept is expressed as “more likely than not”.

    The former phrase, but not the latter, seems necessarily to point to a level of precision capable of being expressed in numbers, because probability is expressed in numbers. The latter does not necessarily bring a numerical expression to mind.

    However, regardless of who I am, compare this statement of the threshold of required certainty to the American-law phrasing of the same concept: “the preponderance of evidence”.

    Given the fun one can have with numbers, which form of the phrasing is the better? The one less likely to lead lawyers and judges down the path which should not have been taken?

    On the one hand, there is numerically focused phrasing such as “balance of probabilities” (so beloved by the BOPpers, or at least the SCC). It’s a phrase which, as some lawyers and judges know, can easily lead them into starring roles in the legal reality show “Dancing with Numbers” when statistical evidence is introduced by one side or the other in its attempt to convince the judge that the burden of proof has or has not been satisfied.

    Or, on the other hand (or the third hand, as a member of a fictional species might say), phrasing such as “more likely than not” and “the preponderance of evidence”?