The Friday Fillip: Surprise!

Like most of you, I’m sure, I get momentarily caught whenever I see the word “law” in something I’m reading. Much of the time it has nothing to do with our business and means to describe only some regularity, whether scientific, logical or folkloristic. That’s the way it was when I stumbled across “The Law of Unintended Consequences.” (See, e.g., the title of this piece in the Economist.) Curiously, this “law” almost never gets framed as such, and from what I can tell amounts to nothing more than the observation that we’re not too good at predicting the future, particularly when we’re trying to make the running. Still, it resonated with me, because we in our profession do know about the unintended consequences of laws — not quite the same thing. So I thought I’d rummage around in the general concept for a bit, looking for some light nourishment fit for a Friday.

“…and may cause death.”

Another, less high-flown phrase for unintended consequences is “side-effects,” something that everyone with a television set that gets US channels knows about, thanks to the pervasive advertising of drugs. At the end of these visual idylls shot through pastel filters the announcer invariably rushes through a long litany of things that can go wrong if you use the product that’s just been extolled. (In case you’ve somehow escaped these bizarre commercials, here’s a YouTube video compilation of some of the broadcast side-effects.) Oddly, perhaps, the FDA has come to the conclusion that because viewers don’t fully grasp the import of the side-effects recital, the best thing to do is to shorten the litany to a brief statement that there are risks:

Our hypothesis is that, relative to inclusion of the full major statement, providing limited risk information along with the disclosure about additional risks will promote improved consumer perception and understanding of serious and actionable drug risks . . . 

The full FDA document is here. Now, these are unintended consequences that we for the most part know about and are prepared, it would seem, to tolerate because the benefits outweigh the costs, given the assertion of the level of risk. But the real surprises come from unanticipated side effects.

Hubris — the -bris that doesn’t cut it.

I have to make clear that not all side-effects are deleterious. Some consequences unanticipated by the actor might be quite positive. One of the classic instances of this — and one of the first modern statements of a corollary of the “law” — might be Adam Smith’s observation that by seeking one’s own advantage in the market — by acting economically — one, all unknowing, promotes the interests of society generally. This is the invisible hand, or one understanding of it, that Smith wrote of. But we note more commonly what you might think of as the invisible spanner in the works. We are more upset when things go wrong than when they go right in unanticipated ways. And there are so many reasons for our getting it wrong that it seems to me silly to imagine that there’s anything as neat and tidy as a law about it. Instead, I think of it as the state of human nature to screw up. Just take the trite observation that everything is connected to everything else; complexity is the order of the day, and our hope that some things will stay steady while we whack on others becomes ever more foolish as the scope of our engineering, social or mechanical, increases. Ceteris is never paribus. Introduce rabbits to Australia — prohibit the sale of alcohol — drain your wetlands — and see what happens. And these examples throw off straightforwardly deleterious side effects. Some of our efforts — because the gods mock hubris, I suppose — produce perverse unintended side effects, one well-known recent example having given its name to a genre: the Streisand effect, where an attempt to suppress some information only serves to publicize it further. See, too, the Peltzman effect, in which improvements to safety apparatuses — bike helmets, seat belts, ABS — serve to let users increase their risky behaviour to levels that wouldn’t have been possible without the helpful devices. And it’s worth quoting from this refreshingly frank, very recent opinion piece by an economist in the New York Times:

Perhaps the biggest problem with maximizing a social welfare function like utility is practical: We economists often have only a basic understanding of how most policies work. The economy is complex, and economic science is still a primitive body of knowledge. Because unintended consequences are the norm, what seems like a utility-maximizing policy can often backfire.

Approximate futures

The thing about complexity and our ignorance is that they’re not as straight-line “cause and effect” as we might imagine. Some systems have emergent properties, states that are only knowable once they appear. I think of the stock market this way. Even more tricky than one of these “voting” or “summation” situations are those that flow from chaos theory. Weather might be the prime example of this sort of difficulty. One of the originators of chaos theory, Edward Lorenz, described chaos this way:

Chaos: When the present determines the future, but the approximate present does not approximately determine the future.

A reason for this failure of the approximate is the heightened sensitivity of some systems to their initial conditions: one tiny adjustment, and the whole system winds up in a place that couldn’t have been predicted — the famous butterfly effect whereby the mere flapping of a butterfly’s wing here can have powerful effects on the weather there (though the butterfly motif probably has more to do with the shape of something called a strange attractor than it does with any real-world connection between flapping and tornadoes). So it would appear we must learn to live with the approximate and its necessary failures, a cautionary tale if yet another one were needed for those who seek to change human behaviour via laws. Another caution might come from something that provides just the little fillip that the end of a Friday fillip requires: the side-effect effect.
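
To make Lorenz’s point concrete, here is a small numerical sketch of my own (not from the post, and not Lorenz’s weather model) in Python, using the logistic map, a standard textbook example of a chaotic system. Two starting values that agree to nine decimal places part company within a few dozen steps:

    # Illustrative only: the logistic map x -> r*x*(1-x) in its chaotic
    # regime (r = 4.0), iterated from two nearly identical starting points.
    r = 4.0
    x, y = 0.2, 0.2 + 1e-9   # the "approximate present": a one-in-a-billion difference

    for step in range(1, 51):
        x = r * x * (1 - x)
        y = r * y * (1 - y)
        if step % 10 == 0:
            # The gap grows roughly exponentially until the two futures are unrelated.
            print(f"step {step:2d}: x = {x:.6f}  y = {y:.6f}  gap = {abs(x - y):.2e}")

By around step forty the two trajectories have nothing to do with one another, even though the “presents” they started from were, for any practical purpose, the same.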

Intending bad

Back in 2003 philosopher Joshua Knobe published a very short report of an experiment he’d run in which he posited something called the side-effect effect, which has since become known also as the Knobe effect, “the most famous finding in experimental philosophy,” according to John Turri at the University of Waterloo. Knobe presented two fairly similar scenarios to his two groups of subjects. In each, an actor — the chairman of a board of a company, for instance — decides to implement a program that will be profitable for his company; he knows his program will also produce a side-effect but he doesn’t care whether the side-effect occurs or not. The philosophical question that Knobe asked was whether people would say — we would say — that the chairman brought about the side-effect intentionally. The surprising and consistent result is that test subjects attribute intentionality to the actor if the side-effect produces bad results and not if it produces good results. That is, blame is accorded but not praise. Apart from its being a quirk in our culture (or human nature?), this asymmetry has the power to surprise us — and for lawyers, who must attempt to manipulate the future for their clients, it might be a heads-up that when things break good or bad, even far to the left of the main plan of action spelled out in the retainer, client reaction will differ… irrationally.

Comments

  1. Fascinating thoughts for us law reform types… thanks as always for the stimulation, Simon.

    I suppose the Knobe effect you finish with might be more explicable for lawyers than for the ‘average’ person, since we’re used to thinking about foreseeability in terms of fault. We are likely to recast the question into whether the CEO would be liable for the side effects, and since he foresaw the bad one, then clearly yes. For the good one, liability issues don’t arise.

    The ‘average’ person in the study has the same kind of intuition about bad consequences but does not make the lawyer’s distinction between intentional torts and negligent torts (or in this case, probably reckless torts), so calls it all intention, especially if given only that choice by the researcher (rather than, say, a choice between ‘blameworthy’ and ‘praiseworthy’). There is no question (it seems to me) that moral culpability attaches, and we tend to attach moral qualities to actions where the actor had a choice – and it’s easy to think of a choice as an intention.

    P.S. ‘ceteris ARE never paribus…’ surely?

  2. You’re right, John: plural. Mea culpa. And is it the dreaded ablative absolute?

  3. now that you mention it … I think it is. One of the high points of my 1961-62 school year!
