Traveler's Dilemma: 'Irrational' Game Theory
The Traveler's Dilemma (TD) involves the following scenario: two players each independently choose a figure (usually couched in terms of some monetary amount or price) without conferring or coordinating with each other. If both players choose the same amount, each is paid that amount (hence, as in the Prisoner's Dilemma (PD), there are meta-incentives to 'cooperate'). But if they choose different amounts, the player who chose the lower amount receives that amount plus some reward/bonus, while the player who chose the higher amount receives the lower amount minus a penalty (usually the same size as the reward). Thus, as in the better-known PD, the TD's payoff structure gives each player a myopic incentive to 'defect' and try to undercut the other.
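The payoff rule described above can be sketched in a few lines of code. This is an illustrative sketch, not any canonical formulation: the function name, the claim range, and the default bonus/penalty of 2 are all my own assumptions for the example.

```python
# Hypothetical sketch of the Traveler's Dilemma payoff rule.
# The bonus/penalty value of 2 is an illustrative assumption.

def td_payoffs(a, b, bonus=2):
    """Return (payoff_a, payoff_b) for claims a and b.

    Matching claims pay in full. Otherwise the lower claim sets
    the base: the low claimant gets the base plus the bonus, the
    high claimant gets the base minus the same amount as a penalty.
    """
    if a == b:
        return (a, a)
    low = min(a, b)
    if a < b:
        return (low + bonus, low - bonus)
    return (low - bonus, low + bonus)

print(td_payoffs(100, 100))  # (100, 100): cooperation pays both in full
print(td_payoffs(99, 100))   # (101, 97): undercutting by one pays off
```

Note how the second call captures the dilemma: shaving one unit off a matched claim raises your own payoff while lowering the other player's.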
Under the standard assumptions of 'rationality' (I put it in quotes here because I find economists' and game theorists' definition of rationality too straitjacketed and narrow) and the logic of backward induction (a solution concept in game theory), the players in TD should defect and choose the lowest number possible (similar to the result in PD). Of course, this outcome is to the detriment of both players, since each would have received a higher payoff had they 'cooperated' and chosen a high amount.
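The undercutting logic behind that result can be made concrete with a short sketch. Assuming an illustrative claim range of 2 to 100 and a bonus/penalty of 2 (my numbers, not the article's), each player's best reply is to claim one less than the opponent, and the chain of best replies bottoms out at the lowest allowed claim:

```python
# Sketch of the backward-induction/undercutting logic in TD.
# The range [2, 100] and bonus of 2 are illustrative assumptions.

def best_reply(other_claim, low=2, high=100, bonus=2):
    """Claim that maximizes my payoff against a fixed opposing claim."""
    def payoff(mine):
        if mine == other_claim:
            return mine
        base = min(mine, other_claim)
        return base + bonus if mine < other_claim else base - bonus
    return max(range(low, high + 1), key=payoff)

claim = 100
while best_reply(claim) != claim:
    claim = best_reply(claim)  # each round, undercut by one
print(claim)  # the chain of best replies ends at the floor, 2
```

Against a claim of 100 the best reply is 99, against 99 it is 98, and so on; only at the minimum claim is each player's claim a best reply to the other's, which is the 'rational' equilibrium the empirical results contradict.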
Prof. Basu argues -- providing ample experimental/empirical evidence and common-sense observations -- that, in reality, the theoretical result of TD does not hold. In practice, most players who participate in TD-inspired game-theoretic experiments tend to choose figures at the high end of the range of options. These empirical results fly in the face of the standard assumptions of 'rationality' in game theory and economics (although, as Prof. Basu helpfully points out, they are consistent with a sort of meta-rationality).
The article makes some other interesting observations. One of them is the result of an experiment carried out by the economist and game theorist Ariel Rubinstein. Rubinstein found that people who based their decisions in Traveler's Dilemma-style experimental games on strategic reasoning or more formal 'rationality' took the longest to respond to prompts. Conversely, those who made decisions based on spontaneous 'emotional' or untrained intuitive responses, or who made "random" (i.e., inexplicable or perhaps even crazy) choices, tended to take the least time to decide.
Another interesting question raised in the Scientific American article is whether the economist's/game theorist's concept of rationality needs to be modified and expanded. Prof. Basu argues that the standard notions of rationality are inadequate to explain the empirical tests of the Traveler's Dilemma as well as the Prisoner's Dilemma (especially iterated PD). The problem the article zeroes in on is the assumption that rationality is "common knowledge" in the philosophical, formal-logic sense of that term (i.e., each player in TD knows that the other will act 'rationally', knows that the other knows this, and so on). Prof. Basu argues that people may not always conform to this standard assumption, especially in repeated games such as iterated (or repeated) Prisoner's Dilemma.
As the article points out, perhaps these observations from empirical tests of TD and PD will lead to a rethinking of game-theoretic logic. Perhaps there are "meta-rational" considerations that lead people (without direct collusion or communication) to choose mutually beneficial options. That possibility should give us some reason to hope in humanity at large.