
Chapter Six
THE NARRATIVE FALLACY

The cause of the because—How to split a brain—Effective methods of pointing at the ceiling—Dopamine will help you win—I will stop riding motorcycles (but not today)—Both empirical and psychologist? Since when?

ON THE CAUSES OF MY REJECTION OF CAUSES

During the fall of 2004, I attended a conference on aesthetics and science in Rome, perhaps the best possible location for such a meeting since aesthetics permeates everything there, down to one's personal behavior and tone of voice. At lunch, a prominent professor from a university in southern Italy greeted me with extreme enthusiasm. I had listened earlier that morning to his impassioned presentation; he was so charismatic, so convinced, and so convincing that, although I could not understand much of what he said, I found myself fully agreeing with everything. I could only make out a sentence here and there, since my knowledge of Italian worked better in cocktail parties than in intellectual and scholarly venues. At some point during his speech, he turned all red with anger—thus convincing me (and the audience) that he was definitely right.

He assailed me during lunch to congratulate me for showing the effects of those causal links that are more prevalent in the human mind than in reality. The conversation got so animated that we stood together near the buffet table, blocking the other delegates from getting close to the food. He was speaking accented French (with his hands), I was answering in primitive Italian (with my hands), and we were so vivacious that the other guests were afraid to interrupt a conversation of such importance and animation. He was emphatic about my previous book on randomness, a sort of angry trader's reaction against blindness to luck in life and in the markets, which had been published there under the musical title Giocati dal caso. I had been lucky to have a translator who knew almost more about the topic than I did, and the book found a small following among Italian intellectuals. “I am a huge fan of your ideas, but I feel slighted. These are truly mine too, and you wrote the book that I (almost) planned to write,” he said. “You are a lucky man; you presented in such a comprehensive way the effect of chance on society and the overestimation of cause and effect. You show how stupid we are to systematically try to explain skills.”

He stopped, then added, in a calmer tone: “But, mon cher ami, let me tell you quelque chose [uttered very slowly, with his thumb hitting his index and middle fingers]: had you grown up in a Protestant society where people are told that efforts are linked to rewards and individual responsibility is emphasized, you would never have seen the world in such a manner. You were able to see luck and separate cause and effect because of your Eastern Orthodox Mediterranean heritage.” He was using the French à cause. And he was so convincing that, for a minute, I agreed with his interpretation.

We like stories, we like to summarize, and we like to simplify, i.e., to reduce the dimension of matters. The first of the problems of human nature that we examine in this section, the one just illustrated above, is what I call the narrative fallacy. (It is actually a fraud, but, to be more polite, I will call it a fallacy.) The fallacy is associated with our vulnerability to overinterpretation and our predilection for compact stories over raw truths. It severely distorts our mental representation of the world; it is particularly acute when it comes to the rare event.

Notice how my thoughtful Italian fellow traveler shared my militancy against overinterpretation and against the overestimation of cause, yet was unable to see me and my work without a reason, a cause, tagged to both, as anything other than part of a story. He had to invent a cause. Furthermore, he was not aware of his having fallen into the causation trap, nor was I immediately aware of it myself.

The narrative fallacy addresses our limited ability to look at sequences of facts without weaving an explanation into them, or, equivalently, forcing a logical link, an arrow of relationship, upon them. Explanations bind facts together. They make them all the more easily remembered; they help them make more sense. Where this propensity can go wrong is when it increases our impression of understanding.

This chapter will cover, just like the preceding one, a single problem, but seemingly in different disciplines. The problem of narrativity, although extensively studied in one of its versions by psychologists, is not so “psychological”: something about the way disciplines are designed masks the point that it is more generally a problem of information. While narrativity comes from an ingrained biological need to reduce dimensionality, robots would be prone to the same process of reduction. Information wants to be reduced.

To help the reader locate himself: in studying the problem of induction in the previous chapter, we examined what could be inferred about the unseen, what lies outside our information set. Here, we look at the seen, what lies within the information set, and we examine the distortions in the act of processing it. There is plenty to say on this topic, but the angle I take concerns narrativity's simplification of the world around us and its effects on our perception of the Black Swan and wild uncertainty.

SPLITTING BRAINS

Ferreting out antilogics is an exhilarating activity. For a few months, you experience the titillating sensation that you've just entered a new world. After that, the novelty fades, and your thinking returns to business as usual. The world is dull again until you find another subject to be excited about (or manage to put another hotshot in a state of total rage).

For me, one such antilogic came with the discovery—thanks to the literature on cognition—that, counter to what everyone believes, not theorizing is an act—that theorizing can correspond to the absence of willed activity, the “default” option. It takes considerable effort to see facts (and remember them) while withholding judgment and resisting explanations. And this theorizing disease is rarely under our control: it is largely anatomical, part of our biology, so fighting it requires fighting one's own self. So the ancient skeptics’ precepts to withhold judgment go against our nature. Talk is cheap, a problem with advice-giving philosophy we will see in Chapter 13.

Try to be a true skeptic with respect to your interpretations and you will be worn out in no time. You will also be humiliated for resisting the urge to theorize. (There are tricks to achieving true skepticism; but you have to go through the back door rather than engage in a frontal attack on yourself.) Even from an anatomical perspective, it is impossible for our brain to see anything in raw form without some interpretation. We may not even always be conscious of it.

Post hoc rationalization. In an experiment, psychologists asked women to select from among twelve pairs of nylon stockings the ones they preferred. The researchers then asked the women their reasons for their choices. Texture, “feel,” and color featured among the selected reasons. All the pairs of stockings were, in fact, identical. The women supplied backfit, post hoc explanations. Does this suggest that we are better at explaining than at understanding? Let us see.

A series of famous experiments on split-brain patients gives us convincing physical—that is, biological—evidence of the automatic aspect of the act of interpretation. There appears to be a sense-making organ in us—though it may not be easy to zoom in on it with any precision. Let us see how it is detected.

Split-brain patients have no connection between the left and the right sides of their brains, which prevents information from being shared between the two cerebral hemispheres. These patients are jewels, rare and invaluable for researchers. You literally have two different persons, and you can communicate with each one of them separately; the differences between the two individuals give you some indication about the specialization of each of the hemispheres. This splitting is usually the result of surgery to remedy more serious conditions like severe epilepsy; no, scientists in Western countries (and most Eastern ones) are no longer allowed to cut human brains in half, even if it is for the pursuit of knowledge and wisdom.

Now, say that you induced such a person to perform an act—raise his finger, laugh, or grab a shovel—in order to ascertain how he ascribes a reason to his act (when in fact you know that there is no reason for it other than your inducing it). If you ask the right hemisphere, here isolated from the left side, to perform the action, then ask the other hemisphere for an explanation, the patient will invariably offer some interpretation: “I was pointing at the ceiling in order to …,” “I saw something interesting on the wall,” or, if you ask this author, I will offer my usual “because I am originally from the Greek Orthodox village of Amioun, northern Lebanon,” et cetera.

Now, if you do the opposite, namely instruct the isolated left hemisphere of a right-handed person to perform an act and ask the right hemisphere for the reasons, you will be plainly told, “I don't know.” Note that the left hemisphere is where language and deduction generally reside. I warn the reader hungry for “science” against attempts to build a neural map: all I’m trying to show is the biological basis of this tendency toward causality, not its precise location. There are reasons for us to be suspicious of these “right brain/left brain” distinctions and subsequent pop-science generalizations about personality. Indeed, the idea that the left brain controls language may not be so accurate: the left brain seems more precisely to be where pattern interpretation resides, and it may control language only insofar as language has a pattern-interpretation attribute. Another difference between the hemispheres is that the right brain deals with novelty. It tends to see the gestalt (the general, or the forest), in a parallel mode, while the left brain is concerned with the trees, in a serial mode.

To see an illustration of our biological dependence on a story, consider the following experiment. First, read this:

A BIRD IN THE
THE HAND IS WORTH
TWO IN THE BUSH

Do you see anything unusual? Try again. [1]

The Sydney-based brain scientist Alan Snyder (who has a Philadelphia accent) made the following discovery. If you inhibit the left hemisphere of a right-handed person (more technically, by directing low-frequency magnetic pulses into the left frontotemporal lobes), you lower his rate of error in reading the above caption. Our propensity to impose meaning and concepts blocks our awareness of the details making up the concept. However, if you zap people's left hemispheres, they become more realistic—they can draw better and with more verisimilitude. Their minds become better at seeing the objects themselves, cleared of theories, narratives, and prejudice.

Why is it hard to avoid interpretation? It is key that, as we saw with the vignette of the Italian scholar, brain functions often operate outside our awareness. You interpret pretty much as you perform other activities deemed automatic and outside your control, like breathing.

What makes nontheorizing cost you so much more energy than theorizing? First, there is the impenetrability of the activity. I said that much of it takes place outside of our awareness: if you don't know that you are making the inference, how can you stop yourself unless you stay in a continuous state of alert? And if you need to be continuously on the watch, doesn't that cause fatigue? Try it for an afternoon and see.

A Little More Dopamine

In addition to the story of the left-brain interpreter, we have more physiological evidence of our ingrained pattern seeking, thanks to our growing knowledge of the role of neurotransmitters, the chemicals that are assumed to transport signals between different parts of the brain. It appears that pattern perception increases along with the concentration in the brain of the chemical dopamine. Dopamine also regulates moods and supplies an internal reward system in the brain (not surprisingly, it is found in slightly higher concentrations in the left side of the brains of right-handed persons than on the right side). A higher concentration of dopamine appears to lower skepticism and result in greater vulnerability to pattern detection; an injection of L-dopa, a substance used to treat patients with Parkinson's disease, seems to increase such activity and lowers one's suspension of belief. The person becomes vulnerable to all manner of fads, such as astrology, superstitions, economics, and tarot-card reading.

Actually, as I am writing this, there is news of a pending lawsuit by a patient going after his doctor for more than $200,000—an amount he allegedly lost while gambling. The patient claims that the treatment of his Parkinson's disease caused him to go on wild betting sprees in casinos. It turns out that one of the side effects of L-dopa is that a small but significant minority of patients become compulsive gamblers. Since such gambling is associated with their seeing what they believe to be clear patterns in random numbers, this illustrates the relation between knowledge and randomness. It also shows that some aspects of what we call “knowledge” (and what I call narrative) are an ailment.

Once again, I warn the reader that I am not focusing on dopamine as the reason for our overinterpreting; rather, my point is that there is a physical and neural correlate to such operation and that our minds are largely victims of our physical embodiment. Our minds are like inmates, captive to our biology, unless we manage a cunning escape. It is the lack of our control of such inferences that I am stressing. Tomorrow, someone may discover another chemical or organic basis for our perception of patterns, or counter what I said about the left-brain interpreter by showing the role of a more complex structure; but it would not negate the idea that perception of causation has a biological foundation.

Andrey Nikolayevich's Rule

There is another, even deeper reason for our inclination to narrate, and it is not psychological. It has to do with the effect of order on information storage and retrieval in any system, and it's worth explaining here because of what I consider the central problems of probability and information theory.

The first problem is that information is costly to obtain.

The second problem is that information is also costly to store—like real estate in New York. The more orderly, less random, patterned, and narratized a series of words or symbols, the easier it is to store that series in one's mind or jot it down in a book so your grandchildren can read it someday.

Finally, information is costly to manipulate and retrieve.

With so many brain cells—one hundred billion (and counting)—the attic is quite large, so the difficulties probably do not arise from storage-capacity limitations, but may be just indexing problems. Your conscious, or working, memory, the one you are using to read these lines and make sense of their meaning, is considerably smaller than the attic. Consider that your working memory has difficulty holding a mere phone number longer than seven digits. Change metaphors slightly and imagine that your consciousness is a desk in the Library of Congress: no matter how many books the library holds, and makes available for retrieval, the size of your desk sets some processing limitations. Compression is vital to the performance of conscious work.

Consider a collection of words glued together to constitute a 500-page book. If the words are purely random, picked up from the dictionary in a totally unpredictable way, you will not be able to summarize, transfer, or reduce the dimensions of that book without losing something significant from it. You need 100,000 words to carry the exact message of a random 100,000 words with you on your next trip to Siberia. Now consider the opposite: a book filled with the repetition of the following sentence: “The chairman of [insert here your company name] is a lucky fellow who happened to be in the right place at the right time and claims credit for the company's success, without making a single allowance for luck,” running ten times per page for 500 pages. The entire book can be accurately compressed, as I have just done, into 34 words (out of 100,000); you could reproduce it with total fidelity out of such a kernel. By finding the pattern, the logic of the series, you no longer need to memorize it all. You just store the pattern. And, as we can see here, a pattern is obviously more compact than raw information. You looked into the book and found a rule. It is along these lines that the great probabilist Andrey Nikolayevich Kolmogorov defined the degree of randomness; it is called “Kolmogorov complexity.”
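The two-books thought experiment can be made concrete with a general-purpose compressor standing in, loosely, for Kolmogorov complexity (which is itself uncomputable). A minimal sketch, assuming Python's standard zlib as the compressor and invented sample texts:

```python
import random
import string
import zlib

random.seed(0)

# A "book" of random seven-letter words: no pattern, little to compress.
random_text = " ".join(
    "".join(random.choices(string.ascii_lowercase, k=7)) for _ in range(10_000)
)

# A "book" that repeats one sentence: a tiny rule generates the whole thing.
sentence = ("The chairman is a lucky fellow who happened to be "
            "in the right place at the right time. ")
patterned_text = sentence * (len(random_text) // len(sentence))

def ratio(text: str) -> float:
    """Compressed size divided by original size (lower = more pattern)."""
    raw = text.encode()
    return len(zlib.compress(raw, 9)) / len(raw)

print(f"random text:    {ratio(random_text):.3f}")    # stays high: no rule to find
print(f"patterned text: {ratio(patterned_text):.3f}") # collapses: store the kernel
```

The compressor does exactly what the passage describes: it finds the rule and stores the kernel, so the repeated-sentence book shrinks to a sliver while the random book barely shrinks at all.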

We, members of the human variety of primates, have a hunger for rules because we need to reduce the dimension of matters so they can get into our heads. Or, rather, sadly, so we can squeeze them into our heads. The more random information is, the greater the dimensionality, and thus the more difficult to summarize. The more you summarize, the more order you put in, the less randomness. Hence the same condition that makes us simplify pushes us to think that the world is less random than it actually is.

And the Black Swan is what we leave out of simplification.

Both the artistic and scientific enterprises are the product of our need to reduce dimensions and inflict some order on things. Think of the world around you, laden with trillions of details. Try to describe it and you will find yourself tempted to weave a thread into what you are saying. A novel, a story, a myth, or a tale, all have the same function: they spare us from the complexity of the world and shield us from its randomness. Myths impart order to the disorder of human perception and the perceived “chaos of human experience.”

Indeed, many severe psychological disorders accompany the feeling of loss of control of—being able to “make sense” of—one's environment.

Platonicity affects us here once again. The very same desire for order, interestingly, applies to scientific pursuits—it is just that, unlike art, the (stated) purpose of science is to get to the truth, not to give you a feeling of organization or make you feel better. We tend to use knowledge as therapy.

A Better Way to Die

To view the potency of narrative, consider the following statement: “The king died and the queen died.” Compare it to “The king died, and then the queen died of grief.” This exercise, presented by the novelist E. M. Forster, shows the distinction between mere succession of information and a plot. But notice the hitch here: although we added information to the second statement, we effectively reduced the dimension of the total. The second sentence is, in a way, much lighter to carry and easier to remember; we now have one single piece of information in place of two. As we can remember it with less effort, we can also sell it to others, that is, market it better as a packaged idea. This, in a nutshell, is the definition and function of a narrative.

To see how the narrative can lead to a mistake in the assessment of the odds, do the following experiment. Give someone a well-written detective story—say, an Agatha Christie novel with a handful of characters who can all be plausibly deemed guilty. Now question your subject about the probabilities of each character's being the murderer. Unless she writes down the percentages to keep an exact tally of them, they should add up to well over 100 percent (even well over 200 percent for a good novel). The better the detective writer, the higher that number.
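The detective-novel effect reduces to a few lines of arithmetic: the suspects are mutually exclusive, so coherent probabilities must sum to 1, and any surplus measures the narrative-induced overestimation. A toy illustration with invented suspects and invented numbers:

```python
# Hypothetical guilt estimates, elicited one suspect at a time; each feels
# plausible in isolation because each suspect comes with a good story.
estimates = {"colonel": 0.60, "widow": 0.55, "butler": 0.45, "doctor": 0.40}

total = sum(estimates.values())
print(f"elicited total: {total:.2f}")  # well over the coherent maximum of 1.00

# A coherent assessor would renormalize so the suspects sum to exactly 1.
coherent = {who: p / total for who, p in estimates.items()}
print({who: round(p, 3) for who, p in coherent.items()})
```

The gap between the elicited total and 1.00 is precisely what the passage predicts a good detective writer will inflate.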

REMEMBRANCE OF THINGS NOT QUITE PAST

Our tendency to perceive—to impose—narrativity and causality are symptoms of the same disease—dimension reduction. Moreover, like causality, narrativity has a chronological dimension and leads to the perception of the flow of time. Causality makes time flow in a single direction, and so does narrativity.

But memory and the arrow of time can get mixed up. Narrativity can viciously affect the remembrance of past events as follows: we will tend to more easily remember those facts from our past that fit a narrative, while we tend to neglect others that do not appear to play a causal role in that narrative. Consider that we recall events in our memory all the while knowing the answer of what happened subsequently. It is literally impossible to ignore posterior information when solving a problem. This simple inability to remember not the true sequence of events but a reconstructed one will make history appear in hindsight to be far more explainable than it actually was—or is.

Conventional wisdom holds that memory is like a serial recording device like a computer diskette. In reality, memory is dynamic—not static—like a paper on which new texts (or new versions of the same text) will be continuously recorded, thanks to the power of posterior information. (In a remarkable insight, the nineteenth-century Parisian poet Charles Baudelaire compared our memory to a palimpsest, a type of parchment on which old texts can be erased and new ones written over them.) Memory is more of a self-serving dynamic revision machine: you remember the last time you remembered the event and, without realizing it, change the story at every subsequent remembrance.

So we pull memories along causative lines, revising them involuntarily and unconsciously. We continuously renarrate past events in the light of what appears to make what we think of as logical sense after these events occur.

By a process called reverberation, a memory corresponds to the strengthening of connections from an increase of brain activity in a given sector of the brain—the more activity, the stronger the memory. While we believe that the memory is fixed, constant, and connected, all this is very far from truth. What makes sense according to information obtained subsequently will be remembered more vividly. We invent some of our memories—a sore point in courts of law since it has been shown that plenty of people have invented child-abuse stories by dint of listening to theories.

The Madman's Narrative

We have far too many possible ways to interpret past events for our own good.

Consider the behavior of paranoid people. I have had the privilege to work with colleagues who have hidden paranoid disorders that come to the surface on occasion. When the person is highly intelligent, he can astonish you with the most far-fetched, yet completely plausible interpretations of the most innocuous remark. If I say to them, “I am afraid that …,” in reference to an undesirable state of the world, they may interpret it literally, that I am experiencing actual fright, and it triggers an episode of fear on the part of the paranoid person. Someone hit with such a disorder can muster the most insignificant of details and construct an elaborate and coherent theory of why there is a conspiracy against him. And if you gather, say, ten paranoid people, all in the same state of episodic delusion, the ten of them will provide ten distinct, yet coherent, interpretations of events.

When I was about seven, my schoolteacher showed us a painting of an assembly of impecunious Frenchmen in the Middle Ages at a banquet held by one of their benefactors, some benevolent king, as I recall. They were holding the soup bowls to their lips. The schoolteacher asked me why they had their noses in the bowls and I answered, “Because they were not taught manners.” She replied, “Wrong. The reason is that they are hungry.” I felt stupid at not having thought of this, but I could not understand what made one explanation more likely than the other, or why we weren't both wrong (there was no, or little, silverware at the time, which seems the most likely explanation).

Beyond our perceptional distortions, there is a problem with logic itself. How can someone have no clue yet be able to hold a set of perfectly sound and coherent viewpoints that match the observations and abide by every single possible rule of logic? Consider that two people can hold incompatible beliefs based on the exact same data. Does this mean that there are possible families of explanations and that each of these can be equally perfect and sound? Certainly not. One may have a million ways to explain things, but the true explanation is unique, whether or not it is within our reach.

In a famous argument, the logician W. V. Quine showed that there exist families of logically consistent interpretations and theories that can match a given series of facts. Such insight should warn us that mere absence of nonsense may not be sufficient to make something true.

Quine's problem is related to his finding difficulty in translating statements between languages, simply because one could interpret any sentence in an infinity of ways. (Note here that someone splitting hairs could find a self-canceling aspect to Quine's own writing. I wonder how he expects us to understand this very point in a noninfinity of ways).

This does not mean that we cannot talk about causes; there are ways to escape the narrative fallacy. How? By making conjectures and running experiments, or as we will see in Part Two (alas), by making testable predictions. The psychology experiments I am discussing here do so: they select a population and run a test. The results should hold in Tennessee, in China, even in France.

Narrative and Therapy

If narrativity causes us to see past events as more predictable, more expected, and less random than they actually were, then we should be able to make it work for us as therapy against some of the stings of randomness.

Say some unpleasant event, such as a car accident for which you feel indirectly responsible, leaves you with a bad lingering aftertaste. You are tortured by the thought that you caused injuries to your passengers; you are continuously aware that you could have avoided the accident. Your mind keeps playing alternative scenarios branching out of a main tree: if you did not wake up three minutes later than usual, you would have avoided the car accident. It was not your intention to injure your passengers, yet your mind is inhabited with remorse and guilt. People in professions with high randomness (such as in the markets) can suffer more than their share of the toxic effect of look-back stings: I should have sold my portfolio at the top; I could have bought that stock years ago for pennies and I would now be driving a pink convertible; et cetera. If you are a professional, you can feel that you “made a mistake,” or, worse, that “mistakes were made,” when you failed to do the equivalent of buying the winning lottery ticket for your investors, and feel the need to apologize for your “reckless” investment strategy (that is, what seems reckless in retrospect).

How can you get rid of such a persistent throb? Don't try to willingly avoid thinking about it: this will almost surely backfire. A more appropriate solution is to make the event appear more unavoidable. Hey, it was bound to take place and it seems futile to agonize over it. How can you do so? Well, with a narrative. Patients who spend fifteen minutes every day writing an account of their daily troubles feel indeed better about what has befallen them. You feel less guilty for not having avoided certain events; you feel less responsible for it. Things appear as if they were bound to happen.

If you work in a randomness-laden profession, as we see, you are likely to suffer burnout effects from that constant second-guessing of your past actions in terms of what played out subsequently. Keeping a diary is the least you can do in these circumstances.

TO BE WRONG WITH INFINITE PRECISION

We harbor a crippling dislike for the abstract.

One day in December 2003, when Saddam Hussein was captured, Bloomberg News flashed the following headline at 13:01: U.S. TREASURIES RISE; HUSSEIN CAPTURE MAY NOT CURB TERRORISM.

Whenever there is a market move, the news media feel obligated to give the “reason.” Half an hour later, they had to issue a new headline. As these U.S. Treasury bonds fell in price (they fluctuate all day long, so there was nothing special about that), Bloomberg News had a new reason for the fall: Saddam's capture (the same Saddam). At 13:31 they issued the next bulletin: U.S. TREASURIES FALL; HUSSEIN CAPTURE BOOSTS ALLURE OF RISKY ASSETS.

So it was the same capture (the cause) explaining one event and its exact opposite. Clearly, this can't be; these two facts cannot be linked.

Do media journalists repair to the nurse's office every morning to get their daily dopamine injection so that they can narrate better? (Note the irony that the word dope, used to designate the illegal drugs athletes take to improve performance, has the same root as dopamine.)

It happens all the time: a cause is proposed to make you swallow the news and make matters more concrete. After a candidate's defeat in an election, you will be supplied with the “cause” of the voters’ disgruntlement. Any conceivable cause can do. The media, however, go to great lengths to make the process “thorough” with their armies of fact-checkers. It is as if they wanted to be wrong with infinite precision (instead of accepting being approximately right, like a fable writer).

Note that in the absence of any other information about a person you encounter, you tend to fall back on her nationality and background as a salient attribute (as the Italian scholar did with me). How do I know that this attribution to the background is bogus? I did my own empirical test by checking how many traders with my background who experienced the same war became skeptical empiricists, and found none out of twenty-six. This nationality business helps you make a great story and satisfies your hunger for ascription of causes. It seems to be the dump site where all explanations go until one can ferret out a more obvious one (such as, say, some evolutionary argument that “makes sense”). Indeed, people tend to fool themselves with their self-narrative of “national identity,” which, in a breakthrough paper in Science by sixty-five authors, was shown to be a total fiction. (“National traits” might be great for movies, they might help a lot with war, but they are Platonic notions that carry no empirical validity—yet, for example, both the English and the non-English erroneously believe in an English “national temperament.”) Empirically, sex, social class, and profession seem to be better predictors of someone's behavior than nationality (a male from Sweden resembles a male from Togo more than a female from Sweden; a philosopher from Peru resembles a philosopher from Scotland more than a janitor from Peru; and so on).

The problem of overcausation does not lie with the journalist, but with the public. Nobody would pay one dollar to buy a series of abstract statistics reminiscent of a boring college lecture. We want to be told stories, and there is nothing wrong with that—except that we should check more thoroughly whether the story provides consequential distortions of reality. Could it be that fiction reveals truth while nonfiction is a harbor for the liar? Could it be that fables and stories are closer to the truth than is the thoroughly fact-checked ABC News? Just consider that the newspapers try to get impeccable facts, but weave them into a narrative in such a way as to convey the impression of causality (and knowledge). There are fact-checkers, not intellect-checkers. Alas.

But there is no reason to single out journalists. Academics in narrative disciplines do the same thing, but dress it up in a formal language—we will catch up to them in Chapter 10 , on prediction.

Besides narrative and causality, journalists and public intellectuals of the sound-bite variety do not make the world simpler. Instead, they almost invariably make it look far more complicated than it is. The next time you are asked to discuss world events, plead ignorance, and give the arguments I offered in this chapter casting doubt on the visibility of the immediate cause. You will be told that “you overanalyze,” or that “you are too complicated.” All you will be saying is that you don't know!

Dispassionate Science

Now, if you think that science is an abstract subject free of sensationalism and distortions, I have some sobering news. Empirical researchers have found evidence that scientists too are vulnerable to narratives, emphasizing titles and “sexy” attention-grabbing punch lines over more substantive matters. They too are human, and their attention is drawn to sensational matters. The way to remedy this is through meta-analyses of scientific studies, in which an überresearcher peruses the entire literature, which includes the less-advertised articles, and produces a synthesis.

THE SENSATIONAL AND THE BLACK SWAN

Let us see how narrativity affects our understanding of the Black Swan. Narrative, as well as its associated mechanism of salience of the sensational fact, can mess up our projection of the odds. Take the following experiment conducted by Kahneman and Tversky, the pair introduced in the previous chapter: the subjects were forecasting professionals who were asked to imagine the following scenarios and estimate their odds.

  1. A massive flood somewhere in America in which more than a thousand people die.
  2. An earthquake in California, causing massive flooding, in which more than a thousand people die.

Respondents estimated the first event to be less likely than the second. An earthquake in California, however, is a readily imaginable cause, which greatly increases the mental availability—hence the assessed probability—of the flood scenario.

Likewise, if I asked you how many cases of lung cancer are likely to take place in the country, you would supply some number, say half a million. Now, if instead I asked you how many cases of lung cancer are likely to take place because of smoking, odds are that you would give me a much higher number (I would guess more than twice as high). Adding the because makes these matters far more plausible, and far more likely. Cancer from smoking seems more likely than cancer without a cause attached to it—an unspecified cause means no cause at all.

I return to the example of E. M. Forster's plot from earlier in this chapter, but seen from the standpoint of probability. Which of these two statements seems more likely?

Joey seemed happily married. He killed his wife.

Joey seemed happily married. He killed his wife to get her inheritance.

Clearly the second statement seems more likely at first blush, which is a pure mistake of logic, since the first, being broader, can accommodate more causes, such as he killed his wife because he went mad, because she cheated with both the postman and the ski instructor, because he entered a state of delusion and mistook her for a financial forecaster.
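The logic can be checked with a toy simulation. This is a hypothetical probability model of my own, not data from the Kahneman–Tversky experiment: every "killed his wife for the inheritance" event is, by construction, a subset of "killed his wife," so the narrower statement can never come out more frequent than the broader one, no matter what motive probabilities we assume.

```python
import random

random.seed(0)

# Illustrative assumptions, not real data: each simulated "Joey" who
# kills his wife does so for one of several equally likely motives.
MOTIVES = ["inheritance", "madness", "jealousy", "delusion"]

def simulate(n_trials=100_000, p_kill=0.01):
    kills = 0
    kills_for_inheritance = 0
    for _ in range(n_trials):
        if random.random() < p_kill:            # "He killed his wife."
            kills += 1
            if random.choice(MOTIVES) == "inheritance":
                kills_for_inheritance += 1      # "... to get her inheritance."
    return kills, kills_for_inheritance

kills, inheritance = simulate()
# The conjunction is a subset of the broader event, so it can never
# be more frequent—adding the "because" shrinks the probability.
assert inheritance <= kills
print(kills, inheritance)
```

Whatever motive list or probabilities you plug in, the count for the detailed story never exceeds the count for the plain one; the added detail only feels more likely.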

All this can lead to pathologies in our decision making. How?

Just imagine that, as shown by Paul Slovic and his collaborators, people are more likely to pay for terrorism insurance than for plain insurance (which covers, among other things, terrorism).

The Black Swans we imagine, discuss, and worry about do not resemble those likely to be Black Swans. We worry about the wrong “improbable” events, as we will see next.

Black Swan Blindness

The first question about the paradox of the perception of Black Swans is as follows: How is it that some Black Swans are overblown in our minds when the topic of this book is that we mainly neglect Black Swans?

The answer is that there are two varieties of rare events: a) the narrated Black Swans, those that are present in the current discourse and that you are likely to hear about on television, and b) those nobody talks about, since they escape models—those that you would feel ashamed discussing in public because they do not seem plausible. I can safely say that it is entirely compatible with human nature that the incidences of Black Swans would be overestimated in the first case, but severely underestimated in the second one.

Indeed, lottery buyers overestimate their chances of winning because they visualize such a potent payoff—in fact, they are so blind to the odds that they treat odds of one in a thousand and one in a million almost in the same way.

Much of the empirical research agrees with this pattern of overestimation and underestimation of Black Swans. Kahneman and Tversky initially showed that people overreact to low-probability outcomes when you discuss the event with them, when you make them aware of it. If you ask someone, “What is the probability of death from a plane crash?” for instance, they will raise it. However, Slovic and his colleagues found, in insurance patterns, neglect of these highly improbable events in people's insurance purchases. They call it the “preference for insuring against probable small losses”—at the expense of the less probable but larger impact ones.

Finally, after years of searching for empirical tests of our scorn of the abstract, I found researchers in Israel who ran the experiments I had been waiting for. Greg Barron and Ido Erev provide experimental evidence that agents underweigh small probabilities when they engage in sequential experiments in which they derive the probabilities themselves, when they are not supplied with the odds. If you draw from an urn with a very small number of red balls and a high number of black ones, and if you do not have a clue about the relative proportions, you are likely to underestimate the number of red balls. It is only when you are supplied with their frequency—say, by telling you that 3 percent of the balls are red—that you overestimate it in your betting decision.
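A rough sketch of why experience-based sampling hides small probabilities (my own illustration, not Barron and Erev's actual protocol): with 3 percent red balls, a short run of draws will most often contain no red ball at all, so an agent estimating the odds purely from experience will typically guess far too low—often zero.

```python
import random

random.seed(42)

P_RED = 0.03      # true proportion of red balls (the 3 percent example)
DRAWS = 20        # a short, experience-based sample
TRIALS = 10_000   # number of simulated agents

no_red_seen = 0
for _ in range(TRIALS):
    sample = [random.random() < P_RED for _ in range(DRAWS)]
    if not any(sample):
        no_red_seen += 1

# P(no red in 20 draws) = 0.97**20, a bit over one half: the majority
# of agents never see a single red ball in their own experience.
print(no_red_seen / TRIALS)
```

Most simulated agents observe zero red balls, even though the true probability is strictly positive—a crude picture of how rare events disappear from experiential learning.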

I’ve spent a lot of time wondering how we can be so myopic and short-termist yet survive in an environment that is not entirely from Mediocristan. One day, looking at the gray beard that makes me look ten years older than I am and thinking about the pleasure I derive from exhibiting it, I realized the following. Respect for elders in many societies might be a kind of compensation for our short-term memory. The word senate comes from senatus, “aged” in Latin; sheikh in Arabic means both a member of the ruling elite and “elder.” Elders are repositories of complicated inductive learning that includes information about rare events. Elders can scare us with stories—which is why we become overexcited when we think of a specific Black Swan. I was excited to find out that this also holds true in the animal kingdom: a paper in Science showed that elephant matriarchs play the role of superadvisers on rare events.

We learn from repetition—at the expense of events that have not happened before. Events that are nonrepeatable are ignored before their occurrence, and overestimated after (for a while). After a Black Swan, such as September 11, 2001, people expect it to recur when in fact the odds of that happening have arguably been lowered. We like to think about specific and known Black Swans when in fact the very nature of randomness lies in its abstraction. As I said in the Prologue, it is the wrong definition of a god.

The economist Hyman Minsky sees the cycles of risk taking in the economy as following a pattern: stability and absence of crises encourage risk taking, complacency, and lowered awareness of the possibility of problems. Then a crisis occurs, resulting in people being shell-shocked and scared of investing their resources. Strangely, both Minsky and his school, dubbed Post-Keynesian, and his opponents, the libertarian “Austrian” economists, have the same analysis, except that the first group recommends governmental intervention to smooth out the cycle, while the second believes that civil servants should not be trusted to deal with such matters. While both schools of thought seem to fight each other, they both emphasize fundamental uncertainty and stand outside the mainstream economic departments (though they have large followings among businessmen and nonacademics). No doubt this emphasis on fundamental uncertainty bothers the Platonifiers.

All the tests of probability I discussed in this section are important; they show how we are fooled by the rarity of Black Swans but not by the role they play in the aggregate, their impact. In a preliminary study, the psychologist Dan Goldstein and I subjected students at the London Business School to examples from two domains, Mediocristan and Extremistan. We selected height, weight, and Internet hits per website. The subjects were good at guessing the role of rare events in Mediocristan-style environments. But their intuitions failed when it came to variables outside Mediocristan, showing that we are effectively not skilled at intuitively gauging the impact of the improbable, such as the contribution of a blockbuster to total book sales. In one experiment they underestimated by thirty-three times the effect of a rare event.
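A hedged sketch of the contrast the experiment probes (my own illustration, with assumed distributions, not Goldstein's materials): in a Gaussian "Mediocristan" sample such as heights, the single largest observation is a sliver of the total; in a fat-tailed "Extremistan" sample, modeled here with a Pareto distribution, one observation can account for a sizable share of everything.

```python
import random

random.seed(1)

N = 100_000

# Mediocristan: heights, roughly Gaussian (illustrative parameters).
heights = [random.gauss(170, 10) for _ in range(N)]

# Extremistan: "book sales," modeled with a Pareto distribution
# (alpha = 1.1 is an assumption chosen to produce fat tails).
sales = [random.paretovariate(1.1) for _ in range(N)]

share_height = max(heights) / sum(heights)
share_sales = max(sales) / sum(sales)

print(f"top observation's share: heights {share_height:.6f}, sales {share_sales:.3f}")
```

The tallest person barely moves the total height; the single "blockbuster" draw dominates aggregate sales by orders of magnitude more—which is exactly the kind of impact our Mediocristan-trained intuitions fail to gauge.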

Next, let us see how this lack of understanding of abstract matters affects us.

The Pull of the Sensational

Indeed, abstract statistical information does not sway us as much as the anecdote—no matter how sophisticated the person. I will give a few instances.

The Italian Toddler. In the late 1970s, a toddler fell into a well in Italy. The rescue team could not pull him out of the hole and the child stayed at the bottom of the well, helplessly crying. Understandably, the whole of Italy was concerned with his fate; the entire country hung on the frequent news updates. The child's cries produced acute pains of guilt in the powerless rescuers and reporters. His picture was prominently displayed in magazines and newspapers, and you could hardly walk in the center of Milan without being reminded of his plight.

Meanwhile, the civil war was raging in Lebanon, with an occasional hiatus in the conflict. While in the midst of their mess, the Lebanese were also absorbed in the fate of that child. The Italian child. Five miles away, people were dying from the war, citizens were threatened with car bombs, but the fate of the Italian child ranked high among the interests of the population in the Christian quarter of Beirut. “Look how cute that poor thing is,” I was told. And the entire town expressed relief upon his eventual rescue.

As Stalin, who knew something about the business of mortality, supposedly said, “One death is a tragedy; a million is a statistic.” Statistics stay silent in us.

Terrorism kills, but the biggest killer remains the environment, responsible for close to 13 million deaths annually. But terrorism causes outrage, which makes us overestimate the likelihood of a potential terrorist attack—and react more violently to one when it happens. We feel the sting of man-made damage far more than that caused by nature.

Central Park . You are on a plane on your way to spend a long (bibulous) weekend in New York City. You are sitting next to an insurance salesman who, being a salesman, cannot stop talking. For him, not talking is the effortful activity. He tells you that his cousin (with whom he will celebrate the holidays) worked in a law office with someone whose brother-in-law's business partner's twin brother was mugged and killed in Central Park. Indeed, Central Park in glorious New York City. That was in 1989, if he remembers it well (the year is now 2007). The poor victim was only thirty-eight and had a wife and three children, one of whom had a birth defect and needed special care at Cornell Medical Center. Three children, one of whom needed special care, lost their father because of his foolish visit to Central Park.

Well, you are likely to avoid Central Park during your stay. You know you can get crime statistics from the Web or from any brochure, rather than anecdotal information from a verbally incontinent salesman. But you can't help it. For a while, the name Central Park will conjure up the image of that poor, undeserving man lying on the polluted grass. It will take a lot of statistical information to override your hesitation.

Motorcycle Riding . Likewise, the death of a relative in a motorcycle accident is far more likely to influence your attitude toward motorcycles than volumes of statistical analyses. You can effortlessly look up accident statistics on the Web, but they do not easily come to mind. Note that I ride my red Vespa around town, since no one in my immediate environment has recently suffered an accident—although I am aware of this problem in logic, I am incapable of acting on it.

Now, I do not disagree with those recommending the use of a narrative to get attention. Indeed, our consciousness may be linked to our ability to concoct some form of story about ourselves. It is just that narrative can be lethal when used in the wrong places.

THE SHORTCUTS

Next I will go beyond narrative to discuss the more general attributes of thinking and reasoning behind our crippling shallowness. These defects in reasoning have been cataloged and investigated by a powerful research tradition represented by a school called the Society of Judgment and Decision Making (the only academic and professional society of which I am a member, and proudly so; its gatherings are the only ones where I do not have tension in my shoulders or anger fits). It is associated with the school of research started by Daniel Kahneman, Amos Tversky, and their friends, such as Robyn Dawes and Paul Slovic. It is mostly composed of empirical psychologists and cognitive scientists whose methodology hews strictly to running very precise, controlled experiments (physics-style) on humans and making catalogs of how people react, with minimal theorizing. They look for regularities. Note that empirical psychologists use the bell curve to gauge errors in their testing methods, but as we will see more technically in Chapter 15, this is one of the rare adequate applications of the bell curve in social science, owing to the nature of the experiments. We have seen such types of experiments earlier in this chapter with the flood in California, and with the identification of the confirmation bias in Chapter 5. These researchers have mapped our activities into (roughly) a dual mode of thinking, which they separate as “System 1” and “System 2,” or the experiential and the cogitative. The distinction is straightforward.

System 1, the experiential one, is effortless, automatic, fast, opaque (we do not know that we are using it), parallel-processed, and can lend itself to errors. It is what we call “intuition,” and performs these quick acts of prowess that became popular under the name blink, after the title of Malcolm Gladwell's bestselling book. System 1 is highly emotional, precisely because it is quick. It produces shortcuts, called “heuristics,” that allow us to function rapidly and effectively. Dan Goldstein calls these heuristics “fast and frugal.” Others prefer to call them “quick and dirty.” Now, these shortcuts are certainly virtuous, since they are rapid, but, at times, they can lead us into some severe mistakes. This main idea generated an entire school of research called the heuristics and biases approach (heuristics corresponds to the study of shortcuts, biases to mistakes).

System 2, the cogitative one, is what we normally call thinking. It is what you use in a classroom, as it is effortful (even for Frenchmen), reasoned, slow, logical, serial, progressive, and self-aware (you can follow the steps in your reasoning). It makes fewer mistakes than the experiential system, and, since you know how you derived your result, you can retrace your steps and correct them in an adaptive manner.

Most of our mistakes in reasoning come from using System 1 when we are in fact thinking that we are using System 2. How? Since we react without thinking and introspection, the main property of System 1 is our lack of awareness of using it!

Recall the round-trip error, our tendency to confuse “no evidence of Black Swans” with “evidence of no Black Swans”; it shows System 1 at work. You have to make an effort (System 2) to override your first reaction. Clearly Mother Nature makes you use the fast System 1 to get out of trouble, so that you do not sit down and cogitate whether there is truly a tiger attacking you or if it is an optical illusion. You run immediately, before you become “conscious” of the presence of the tiger.

Emotions are assumed to be the weapon System 1 uses to direct us and force us to act quickly. They mediate risk avoidance far more effectively than our cognitive system does. Indeed, neurobiologists who have studied the emotional system show how it often reacts to the presence of danger long before we are consciously aware of it—we experience fear and start reacting a few milliseconds before we realize that we are facing a snake.

Much of the trouble with human nature resides in our inability to use much of System 2, or to use it in a prolonged way without having to take a long beach vacation. In addition, we often just forget to use it.

Beware the Brain

Note that neurobiologists make, roughly, a similar distinction to that between System 1 and System 2, except that they operate along anatomical lines. They distinguish between parts of the brain: the cortical part, which we are supposed to use for thinking and which distinguishes us from other animals, and the fast-reacting limbic brain, which is the center of emotions and which we share with other mammals.

As a skeptical empiricist, I do not want to be the turkey, so I do not want to focus solely on specific organs in the brain, since we do not observe brain functions very well. Some people try to identify what are called the neural correlates of, say, decision making, or more aggressively the neural “substrates” of, say, memory. The brain might be more complicated machinery than we think; its anatomy has fooled us repeatedly in the past. We can, however, assess regularities by running precise and thorough experiments on how people react under certain conditions, and keep a tally of what we see.

For an example that justifies skepticism about unconditional reliance on neurobiology, and vindicates the ideas of the empirical school of medicine to which Sextus belonged, let's consider the intelligence of birds. I kept reading in various texts that the cortex is where animals do their “thinking,” and that the creatures with the largest cortex have the highest intelligence—we humans have the largest cortex, followed by bank executives, dolphins, and our cousins the apes. Well, it turns out that some birds, such as parrots, have a high level of intelligence, equivalent to that of dolphins, but that the intelligence of birds correlates with the size of another part of the brain, called the hyperstriatum. So neurobiology with its attribute of “hard science” can sometimes (though not always) fool you into a Platonified, reductive statement. I am amazed that the “empirics,” skeptical about links between anatomy and function, had such insight—no wonder their school played a very small part in intellectual history. As a skeptical empiricist I prefer the experiments of empirical psychology to the theories-based MRI scans of neurobiologists, even if the former appear less “scientific” to the public.

How to Avert the Narrative Fallacy

I’ll conclude by saying that our misunderstanding of the Black Swan can be largely attributed to our using System 1, i.e., narratives, and the sensational—as well as the emotional—which imposes on us a wrong map of the likelihood of events. On a day-to-day basis, we are not introspective enough to realize that we understand what is going on a little less than warranted from a dispassionate observation of our experiences. We also tend to forget about the notion of Black Swans immediately after one occurs—since they are too abstract for us—focusing, rather, on the precise and vivid events that easily come to our minds. We do worry about Black Swans, just the wrong ones.

Let me bring Mediocristan into this. In Mediocristan, narratives seem to work—the past is likely to yield to our inquisition. But not in Extremistan, where you do not have repetition, and where you need to remain suspicious of the sneaky past and avoid the easy and obvious narrative.

Given that I have lived largely deprived of information, I’ve often felt that I inhabit a different planet from my peers, which can sometimes be extremely painful. It is as if a virus controlled their brains, preventing them from seeing things going forward—the Black Swan around the corner.

The way to avoid the ills of the narrative fallacy is to favor experimentation over storytelling, experience over history, and clinical knowledge over theories. Certainly the newspaper cannot perform an experiment, but it can choose one report over another—there is plenty of empirical research to present and interpret from—as I am doing in this book. Being empirical does not mean running a laboratory in one's basement: it is just a mind-set that favors a certain class of knowledge over others. I do not forbid myself from using the word cause , but the causes I discuss are either bold speculations (presented as such) or the result of experiments, not stories.

Another approach is to predict and keep a tally of the predictions.

Finally, there may be a way to use a narrative—but for a good purpose. Only a diamond can cut a diamond; we can use our ability to convince with a story that conveys the right message—what storytellers seem to do.

So far we have discussed two internal mechanisms behind our blindness to Black Swans, the confirmation bias and the narrative fallacy. The next chapters will look into an external mechanism: a defect in the way we receive and interpret recorded events, and a defect in the way we act on them.

