Tuesday, September 1, 2009

Why Do Economists Use a Bell Curve if it Doesn’t Apply?

Many economic theories, such as modern portfolio theory and Black-Scholes option pricing, are based on the premise that equity price distributions follow a Bell curve. In his book The (Mis)Behavior of Markets, Benoit Mandelbrot makes a strong case that the Bell curve gets it wrong. So, why do we still use theories based on the Bell curve?

A partial answer is that a great many phenomena, such as human height, do follow a Bell curve (also called the Gaussian or Normal distribution). In fact, Bell curves come up so often that it is natural to suspect they would apply to equity prices as well.

To see why Bell curves come up so often, let’s look at a simple example with dice. If we roll a single die many times, we expect each face to come up roughly the same number of times, so a frequency chart of the results would be mostly flat.

But, if we roll 10 dice many times, adding them up each time, the frequencies of the different possible totals from 10 to 60 would no longer be flat. I had my computer roll 10 virtual dice a million times. Here is a chart of how often each total showed up:

The result is a nice Bell curve with a peak at a total of 35 as we would expect. Whenever we add many independent random values together (as long as they aren’t too wild as I’ll explain later), the result tends to drift towards a Bell curve.
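For readers who want to try this at home, the simulation takes only a few lines of Python. This is a sketch, not necessarily the code behind the chart, and it uses 200,000 rolls rather than a full million to keep the run short:

```python
import random
from collections import Counter

# Roll 10 fair dice repeatedly and tally how often each total appears.
# (200,000 rolls here rather than the post's million, to keep it quick.)
ROLLS = 200_000
counts = Counter(
    sum(random.randint(1, 6) for _ in range(10)) for _ in range(ROLLS)
)

# The tallies form the familiar bell shape, peaking near a total of 35.
for total in range(10, 61, 5):
    print(f"{total:2d}: {counts[total]}")
```

Plotting the full range of tallies from 10 to 60 reproduces the Bell-curve shape.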

This isn’t a perfect Bell curve, though. If it were, the odds of getting a total less than 10 would be small but not zero (about two in a million). But the real probability of getting a total less than 10 using 10 dice is exactly zero, since 10 is the lowest possible total. The Bell curve models this situation well except for the most extreme events.
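That tail figure is easy to check with Python’s standard library. Under the Bell curve approximation, the total of 10 dice has mean 35 and standard deviation √(10 · 35/12) ≈ 5.4 (the variance of a single die is 35/12):

```python
from statistics import NormalDist

# Normal approximation to the total of 10 fair dice:
# mean = 10 * 3.5 = 35, variance = 10 * 35/12.
mean = 35.0
sigma = (10 * 35 / 12) ** 0.5  # about 5.4

approx = NormalDist(mean, sigma)
p_below_10 = approx.cdf(10)   # tail probability the Bell curve assigns
p_exact_10 = (1 / 6) ** 10    # true chance of the minimum total, 10

print(f"Bell curve P(total < 10): {p_below_10:.2e}")
print("True P(total < 10): 0 (10 is the smallest possible total)")
print(f"True P(total = 10): {p_exact_10:.2e}")
```

The Bell curve assigns a tiny but nonzero probability to an event that simply cannot happen.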

What happens if we make our dice “wilder” by replacing each 6 face with a 20? Here are the results after a million virtual rolls:

The beautiful Bell curve has been destroyed. However, we can get it back by increasing the number of dice to 100:

The frequency chart still has a few small bumps, but it is now very close to the familiar Bell curve. So, even wild dice can be tamed if we add up enough of them.
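For the curious, here is a sketch of the wild-dice experiment (with smaller sample sizes than the post’s million rolls, to keep it quick):

```python
import random
from collections import Counter

# 'Wild' dice: the 6 face is replaced by a 20.
FACES = (1, 2, 3, 4, 5, 20)

def roll_totals(n_dice, n_rolls):
    """Tally the totals of n_dice wild dice over n_rolls rolls."""
    return Counter(sum(random.choices(FACES, k=n_dice)) for _ in range(n_rolls))

random.seed(2009)
ten = roll_totals(10, 100_000)
hundred = roll_totals(100, 100_000)

# With 10 wild dice the histogram is lumpy: separate bumps for rolls
# containing zero, one, two, ... twenties, with dips in between
# (e.g. a dip near 38 between the bumps centred near 30 and 47).
print(ten[30], ten[38], ten[47])

# With 100 wild dice the bumps blend into a single bell shape,
# peaking near the mean total of 100 * 35/6, about 583.
peak = max(hundred, key=hundred.get)
print(peak)
```

The dips in the 10-dice tallies are the small bumps visible in the chart; with 100 dice they smooth away.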

This tendency for sums of many independent factors to drift towards a Bell curve explains why economists would assume that equity price changes follow one. A great many factors influence prices, and when we add them all together we might suspect that a Bell curve would pop out.

(For those more mathematically inclined, standard models assume that factors multiply together so that their logarithms are being added leading to a lognormal distribution rather than a normal distribution. This means that the logarithms of equity prices are following the Bell curve.)
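A tiny numerical illustration of that parenthetical (the 0.95–1.06 factor range is just an assumption for the sketch):

```python
import math
import random

# Each period, the price is multiplied by a random growth factor.
# (The 0.95-1.06 range is an arbitrary choice for illustration.)
random.seed(7)
factors = [random.uniform(0.95, 1.06) for _ in range(250)]

price_ratio = math.prod(factors)             # final price / starting price
log_sum = sum(math.log(f) for f in factors)

# The log of a product equals the sum of the logs, so when many
# independent factors multiply together, the *logarithm* of the price
# is a sum of many independent terms -- and sums drift towards a
# Bell curve, making the price itself lognormal.
print(price_ratio, math.exp(log_sum))
```

The two printed numbers agree, which is the whole point: multiplication of factors is addition of logarithms.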

However, Mandelbrot argues that the factors being added together are so wild that they cannot be tamed into a Bell curve.

A good example of a wild distribution starts with a blindfolded archer standing between two long parallel walls. The archer spins around and fires arrows in a random direction. We then measure how far along the wall the arrows land.

Most arrows won’t land too far away, but occasionally the archer will fire an arrow nearly parallel to the walls that will travel a long way. Even if we add up hundreds of arrow distances, there is still likely to be an arrow so far out that its distance dominates the total. Unlike the dice with a 20 instead of a 6, the arrow distances cannot be tamed into a Bell curve by adding up many of them.
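The archer’s distances follow what statisticians call the Cauchy distribution: taking the wall to be one unit from the archer (an assumed unit), the distance along the wall is the tangent of a uniformly random firing angle. A short sketch shows how its averages refuse to settle down:

```python
import math
import random

def arrow_distance():
    # Distance along a wall 1 unit away = tan(firing angle),
    # with the angle uniform in (-pi/2, pi/2): the classic
    # Cauchy distribution.
    angle = random.uniform(-math.pi / 2, math.pi / 2)
    return math.tan(angle)

random.seed(3)
# Average more and more arrow distances.  Unlike dice totals, the
# averages never settle down, because one extreme arrow can
# dominate any sample, no matter how large.
for n in (100, 10_000, 1_000_000):
    sample = [arrow_distance() for _ in range(n)]
    print(n, sum(sample) / n)
```

Run it a few times with different seeds: the averages jump around instead of homing in on a single value, which is exactly the failure of taming described above.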

These wilder distributions are much more likely to produce extreme events than a Bell curve, and Mandelbrot showed that real equity price changes are more consistent with wilder distributions.

All this means that standard theory will give incorrect answers to some questions. Unfortunately, it isn’t easy to know which questions will be answered incorrectly. One thing to look for is situations where extreme events would sink a financial plan, such as when an investor is highly leveraged.


  1. Great post. The math is beyond me, but I have some understanding of the implications.

    Having your computer do two million dice throws reminds me of Marvin the robot in "The Hitchhiker's Guide to the Galaxy" saying "Here I am, brain the size of a planet" (and you have me continuously rolling dice).

  2. Maybe the basic problem is the convenience of the model, which offers a handy starting point and manageability. The economists' argument seems to be like that for democracy - it isn't perfect but it's the best compared to available alternatives. But behavioural finance suggests that manias which create the extremes are the real "normal" - that the basic assumption that events are independent isn't true. So perhaps we do need a new model.

  3. The normal distribution is preferred, because it is mathematically convenient, and least squares optimization leads to maximum likelihood results. Using skewed distributions makes the math much more complicated. I haven't read Mandelbrot (yet), but Taleb refers to him in 'Fooled by Randomness' and 'The Black Swan'. Read both of those if you want to get a chuckle - there's no math in either...

  4. Anonymous: I've read both Taleb books you mention. Unfortunately, I read them before I read Mandelbrot's book. Taleb's ideas are largely based on Mandelbrot's, and I like Mandelbrot's book much better.

  5. There is no theoretical or empirical reason to believe that the data resulting from measurements are normally distributed. When economists assume the data are normally distributed they are making a mistake.

  6. @Terry: Actually, within a range, the data are close to normally distributed. However, as we get to more extreme events, a normal distribution doesn't work. So, a normal distribution model can be used to answer some questions reasonably well, but not others. The trick is understanding where it can be used. Unfortunately, there are those who deliberately misapply it for personal gain.