The latest issue of the NBER (National Bureau of Economic Research) Digest (June 2007) has an article -- TV Appearance and Electoral Success
-- that I found both fascinating and disturbing. In their paper, Thin-Slice Forecasts of Gubernatorial Elections
, Daniel Benjamin and Jesse Shapiro describe an experiment on a group of Harvard students to see whether the physical appearance of political candidates alone could correctly predict election outcomes. The subjects were shown 10-second clips of televised debates between gubernatorial candidates in several U.S. states. Some subjects saw the clips without sound, while others saw them with either muddled or full audio.
The researchers found that subjects who saw the silent videos were notably successful at predicting winners: 58% of their picks actually won their elections. Students who made their predictions after watching the videos with muddled or full sound did no better than random guessing (success rates of 48 to 52%). Furthermore, the purely visual forecasts were considerably more accurate than electoral predictions based on other measures such as per capita income, unemployment rates, and state fiscal health.
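A quick one-sided binomial test shows why 48-52% success rates read as coin-flipping while 58% stands out. This is a minimal back-of-the-envelope sketch, not the paper's methodology: the sample size of 100 predictions per condition is a hypothetical assumption, since the actual numbers aren't given here.

```python
from math import comb

def one_sided_binomial_pvalue(successes: int, trials: int, p: float = 0.5) -> float:
    """P(X >= successes) for X ~ Binomial(trials, p): the chance that
    pure guessing would do at least this well."""
    return sum(comb(trials, i) * p**i * (1 - p)**(trials - i)
               for i in range(successes, trials + 1))

# Hypothetical sample of 100 predictions per condition (assumption,
# not the paper's actual n).
n = 100
p_silent = one_sided_binomial_pvalue(58, n)  # 58% correct: silent video
p_sound = one_sided_binomial_pvalue(52, n)   # 52% correct: full audio

print(f"silent video (58/{n}): p = {p_silent:.3f}")
print(f"full audio   (52/{n}): p = {p_sound:.3f}")
```

The p-value for 52% correct is far larger, i.e., pure guessing would routinely produce that score, whereas 58% is much harder to attribute to chance.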
The results of this research are consistent with similar experiments carried out on subjects outside the U.S. to find out what factors people actually use when picking their political leaders. For example, two studies carried out in Europe (Romania and Finland) found that subjects who based their predictions on physical appearance tended to do better than those who relied on seemingly more 'politically correct' factors such as competence in economic and policy matters.
Contrary to expectations, adding policy information to the predictive mix seems to worsen
the chances of making correct predictions. The findings of this body of research may explain why 'experts', "who are highly informed about and attentive to policy matters, are often found to perform no better than chance in predicting elections" (frankly, experts often do worse than chance in making predictions, and this mis-predictive 'ability' extends to other fields as well).
These findings are consistent with the idea of 'thin-slicing', which was popularized by journalist Malcolm Gladwell
in his best-selling book, Blink
. Gladwell's contention is that rapid, intuitive decisions are often not much worse than -- and may often be superior to -- more deliberate decisions. Although I really admire Gladwell and, to some extent, agree with this idea, this is the part that I (and I'm sure others) find somewhat disturbing. As I recently wrote in a blog post on the Travelers' Dilemma
, those who make snap decisions tend to make emotional or random decisions rather than rational or strategic ones. Thus, while in a positivist sense 'thin-slicing' may be how decisions actually get made in political elections (and in other situations), in a normative sense this type of decision-making is a recipe for choosing incompetent and disastrous leaders.
Benjamin and Shapiro's research seems to contradict econometric models of elections by researchers like Ray Fair
, who found that economic and other policy factors can play significant roles in electoral predictions. Having said that, all three economists agree that incumbency is the factor with the most predictive power in forecasting electoral results. (Campaign spending is another factor with great predictive power.)
The research discussed in the NBER Digest both complements and contradicts research carried out by Philip Tetlock, a political scientist at U.C. Berkeley. Prof. Tetlock -- most recently in his book, Expert Political Judgment: How Good Is It? How Can We Know?
-- eloquently debunks the predictive 'abilities' of experts. Tetlock's research is consistent with and complementary to Benjamin and Shapiro's findings in that all three would agree that experts are bad at making predictions. It is at odds with the reasoning laid out in the Digest, however, in that Tetlock, as Isaiah Berlin would have put it, believes that 'foxes' -- those who are intellectually curious and have wide-ranging interests (i.e., know at least a little about a lot of things) -- tend to make better predictions than 'hedgehogs' -- those who are focused on a few matters (i.e., know (a little? a lot?) about a few things, usually one thing). Benjamin and Shapiro seem to suggest that such a distinction may not matter at all -- that is, it really doesn't matter whether or not an expert is Berlin's intellectual fox; in fact, narrow-minded hedgehogs -- so long as they are focused on the right 'few' things (such as visual cues of personal appeal in this case) -- may make better predictions than Tetlock's foxes.
Perhaps the most fascinating aspect of Benjamin and Shapiro's findings -- beyond how susceptible the electorate is to making snap decisions based on 'looks' -- is the complexity of how people thin-slice during elections. Although the experimental subjects did predict better with purely visual cues (without sound or policy information), these visual predictions do not seem to be simply correlated with obvious measures of physical attractiveness.
Instead, some intangible quality -- call it 'charisma' -- that can somehow be picked up visually (and perhaps gets muddied once we start considering the substance of what the candidates have to say) seems to be the key to the predictive success of those who based their decisions on silent videos. Thus a candidate's personal charisma -- as vague a concept as that may seem -- is superior in predictive power to more concrete factors like policy positions and competence.
As Benjamin and Shapiro aptly conclude, "Adding policy information to the video clips by turning on the sound tends, if anything, to worsen participants' accuracy, suggesting that naïveté may be an asset in some forecasting tasks."
Labels: cognitive science, elections, forecasting, political science, predicting, psychology