I'm a big fan of the CBS TV show Numb3rs, in which the FBI recruits a mathematics professor (and his colleagues) to help solve crimes. Every week (it usually airs on Friday evenings), there are interesting bits of math concepts and trivia that I find educational and informative.
Yesterday's episode (Oct. 21, 2006) had the following storyline: there is a series of freeway attacks in Los Angeles -- some involving rifle shots, others involving bricks or other objects thrown onto windshields. Are they random attacks, or is a particular individual (or group of individuals) deliberately carrying them out?
'Charlie Epps' -- the math professor who consults with the FBI -- initially concludes that the attacks are random (a reasonable conclusion, factoring in that this involves Los Angeles roads) and dismisses any concern that there might be a pattern to the attacks as the hopelessly untrained musings of the mathematically challenged. 'Megan Reeves' -- one of the seemingly mathematically challenged FBI agents -- comes to a rather different conclusion. She begins to wonder aloud whether the attacks are "too random" to actually be random.
To his credit, Prof. Epps eventually comes to agree with Agent Reeves. The process by which he initially dismisses challenges to his expert opinion and then comes to a humbler change of heart says a lot about how mathematicians, statisticians, and similar quantitative professionals can hold notions about randomness that are even more flawed than the common-sense ideas of the laymen the pros regularly (to be fair, usually correctly) deride. I also think the storyline is a great object lesson in the intellectually challenging nature of randomness.
Prof. Epps explains his initial conclusion of random attacks by drawing an analogy to the shuffle mode on iPods. According to the Epps character, shuffle mode uses an algorithm that randomizes the selection of songs your iPod delivers to your ears. Thus, Prof. Epps asserts, any 'pattern' you discern in the songs your iPod selects is illusory. (Both the show and this blog post will ignore the fact that such algorithms are pseudo-random at best, and not truly random, since algorithms are by definition deterministic.)
Agent Reeves objects by wondering whether the supposed non-pattern of attacks seems too random. She points out that, compared to one another, none of the attacks follow a similar m.o. (modus operandi); i.e., in terms of methodology, the attacks are never repeated or clustered. While that may sound like it bolsters the case for random attacks, it is actually suspicious because -- as the very beginning of the show (where Prof. Epps makes this point to a class on probability theory) points out -- a truly random series of events should have some clustering and/or repeats. In fact, one way people try to counterfeit randomness is by making events seem too evenly spread out from one another.
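You can see this clustering effect in a quick simulation. The sketch below (my own illustration, not anything from the show) compares the longest run of identical outcomes in 100 genuinely random coin flips against a "counterfeit" sequence that alternates too evenly -- real randomness produces surprisingly long streaks, while the fake never clusters at all:

```python
import random

random.seed(42)  # fixed seed so the demo is repeatable

def longest_run(flips):
    """Length of the longest run of identical consecutive outcomes."""
    best = cur = 1
    for prev, nxt in zip(flips, flips[1:]):
        cur = cur + 1 if nxt == prev else 1
        best = max(best, cur)
    return best

# 100 fair coin flips: genuine randomness clusters into long streaks.
flips = [random.choice("HT") for _ in range(100)]
print("longest run in 100 random flips:", longest_run(flips))

# A 'counterfeit' sequence spread out too evenly never clusters.
fake = list("HT" * 50)
print("longest run in the too-even fake:", longest_run(fake))  # always 1
```

In 100 fair flips, a streak of five or more heads (or tails) in a row is more likely than not -- exactly the kind of clustering a forger instinctively avoids.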
Prof. Epps realizes this error and corrects his iPod shuffle analogy: it turns out the iPod shuffle algorithm is not an ideal (pseudo-)randomizing algorithm because it does not repeat songs already played until every song on the playlist has been played (so, at best, it behaves like a well-shuffled deck in a hand of poker). He winds up conceding that Agent Reeves was right to think the attacks were indeed "too random" to be random.
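The difference between the two kinds of "random" is easy to demonstrate. In the sketch below (the playlist size and pick counts are my own arbitrary choices), truly independent picks from a 25-song playlist repeat a song within 10 picks most of the time -- the birthday-problem effect -- while a shuffle-without-repeats mode, like a well-shuffled deck, never does:

```python
import random

random.seed(0)

songs = [f"song_{i:02d}" for i in range(25)]

def iid_picks(n):
    """Truly random selection: each pick is independent, repeats allowed."""
    return [random.choice(songs) for _ in range(n)]

def shuffle_mode_picks(n):
    """iPod-style shuffle: play a shuffled copy straight through, with no
    repeats until the whole playlist is exhausted (a 'well-shuffled deck')."""
    deck = songs[:]
    random.shuffle(deck)
    return deck[:n]

trials = 10_000
iid_repeats = sum(len(set(iid_picks(10))) < 10 for _ in range(trials))
shuffle_repeats = sum(len(set(shuffle_mode_picks(10))) < 10 for _ in range(trials))

print(f"i.i.d. picks with a repeat:   {iid_repeats / trials:.0%}")   # roughly 84%
print(f"shuffle-mode picks repeating: {shuffle_repeats / trials:.0%}")  # always 0%
```

Listeners who complained that early shuffle modes "weren't random" because the same song came up twice had it exactly backwards -- the repeats were the signature of genuine randomness.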
One of the fallacies the Prof. Epps character committed is what Nassim Nicholas Taleb has called the "Ludic Fallacy" (to my knowledge, he first coined and defined this term in his foreword to Aaron Brown's The Poker Face of Wall Street). As I understand it, the Ludic Fallacy refers to the unfortunate habit of many quantitative professionals (statistics and economics professors, Wall Street traders, management consultants, government technocrats, etc.) of over-simplifying the nature of randomness and chance by drawing mistaken analogies to games of chance ('ludic' -- of or relating to games or play -- comes from the Latin ludus, meaning game or play).
One of the several reasons the Ludic Fallacy is a fallacy is that most games of chance have probability distributions that are too neatly defined and managed to accurately reflect the 'wildness' of randomness as we experience it in the real world. (Poker is the notable exception: as Nassim Taleb hints, it is more reflective of real-life "wild" randomness because it has major strategic and tactical components mashed up with the quasi-stochastic element of a shuffled deck.) For example, the predicted results of such games have nice, well-defined statistical 'moments' -- like the mean (or average) and variance (or standard deviation) -- that are useful in classrooms or on hedge fund prospectuses but may not be as meaningful in our day-to-day lives (as this blog, NNT, and others have pointed out in the past).
Prof. Epps' analogy to the iPod shuffle mode was a ludic fallacy, since shuffle mode is essentially the same problem as shuffling a deck of cards (an ideally shuffled playlist on an iPod is analogous to the 'well-shuffled deck' problem in blackjack, poker, and combinatorics). The well-meaning TV character drew a bad analogy to a game of chance -- shuffled music on an iPod -- to explain something too complicated to be shoe-horned into the iPod shuffle story. Sadly, it's not just TV characters who fall for this fallacy; statistics classes and investment sales pitches are filled with it.
I don't want to be misunderstood. I can't speak for Mr. Taleb, but I want to make it clear that drawing analogies to even the simplest games of chance (like flipping coins or throwing dice) can be -- and, in fact, often is -- a good way of explaining and exploring the nature of randomness and probability. What I object to is the indiscriminate use of these stories by storytellers who are not thoughtful or informed enough to understand both the benefits and the limits of using such games in 'philosophical experiments.'
My bottom-line is: It's okay to draw analogies to games of chance when dealing with randomness. Just be sure to think through the true nature -- both the potential and the limits -- of those games of chance and chance itself.
One final note ... there is another side to the coin of how even math pros make mistakes about randomness. I (and the TV show) have spent most of the time talking about the nature of the underlying (and usually unknown) probability function. The other side of the coin is the magnitude or impact of making the wrong prediction. As Charlie Epps' father (ably played by Judd Hirsch) pointed out to his son -- while the math 'genius' was telling story after story about card games, lotteries, and how all that makes it unlikely that dear old dad would get shot up on the freeways -- card games, lotteries, and dice games normally don't kill you.
That's the kind of lesson -- that the real-life consequences (financial, legal, reputational, etc.) of guessing wrong are often devastating -- that traders and investors should take to heart as well.