5. A Million for Sure:
The Allais Paradox
The question:
Decision 1: Which of the following two situations do you, dear reader, prefer?
(a) $1,000,000 for sure
or
(b) 10% chance of receiving $5,000,000,
89% chance of getting $1,000,000, and a
1% chance of getting nothing.
Think about it before you go on to the next decision.
Decision 2: Which of the two following lotteries do you prefer?
(c) 11% chance of getting $1,000,000, and
89% chance of getting nothing.
or
(d) 10% chance of getting $5,000,000, and
90% chance of getting nothing.
What did you come up with?
If you are like most people, you probably preferred (a) over (b) in the first decision, i.e., the one million dollars for sure rather than a chance of gaining five million combined with a risk of gaining nothing at all. Then, in the second decision, you probably preferred the slightly lower probability of obtaining a substantially higher prize, i.e., (d) over (c).
The paradox:
Surprise, surprise: the choices preferred by most people – (a) over (b), and (d) over (c) – contradict each other. How so? Let us rewrite situation (a) as lottery (a’), noting that ‘for sure’ is the same as an 89% chance plus an 11% chance. Hence, the decision is between
(a’) 11% chance of getting $1,000,000
89% chance of getting $1,000,000
and
(b) 10% chance of receiving $5,000,000,
89% chance of getting $1,000,000
1% chance of getting nothing.
Since the ‘89% chance of getting $1,000,000’ is common to both situations, this part of the lottery is irrelevant and should be ignored. Hence, Decision 1 boils down to
(a”) 11% chance of getting $1,000,000,
or
(b”) 10% chance of receiving $5,000,000,
1% chance of getting nothing.
If you, like most decision makers, preferred (a) to (b) above, then you should now also prefer (a”) to (b”). So far so good; now let’s turn to (c) versus (d).
We rewrite Decision 2 as:
(c) 11% chance of getting $1,000,000, and
89% chance of getting nothing.
or
(d’) 10% chance of getting $5,000,000,
89% chance of getting nothing, and a
1% chance of getting nothing.
This time, it is the ‘89% chance of getting nothing’ that is common to both lotteries and can therefore be ignored. Hence Decision 2 reduces to:
(c”) 11% chance of getting $1,000,000,
or
(d”) 10% chance of getting $5,000,000, and
1% chance of getting nothing.
Most people prefer (d”) to (c”).
Now note that (a”) is identical to (c”), and (b”) is identical to (d”). So, to be consistent, people should either go for (a) and (c), or for (b) and (d). Nevertheless, experiments show – and maybe your own answers did, too – that very many people prefer (a”) and (d”). What a paradox! By adding ‘an 89% chance of getting $1,000,000’ to (a”) and (b”), people prefer (a) over (b). But by adding ‘an 89% chance of getting nothing’ to (c”) and (d”), they prefer (d) over (c).[1]
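To see why no expected utility maximizer could answer this way, note that the comparison between (a) and (b) and the comparison between (c) and (d) reduce to one and the same inequality, whatever utility function is assumed. The short Python sketch below – an illustration added here, not part of Allais’s original argument – checks this for a few example utility functions:

```python
# A minimal sketch: under expected utility theory, the choice between (a) and (b)
# and the choice between (c) and (d) reduce to the same comparison, whatever
# utility function u is assumed.

def expected_utility(lottery, u):
    """Expected utility of a lottery given as a list of (probability, prize) pairs."""
    return sum(p * u(prize) for p, prize in lottery)

# The four Allais lotteries (prizes in dollars).
a = [(1.00, 1_000_000)]
b = [(0.10, 5_000_000), (0.89, 1_000_000), (0.01, 0)]
c = [(0.11, 1_000_000), (0.89, 0)]
d = [(0.10, 5_000_000), (0.90, 0)]

# A few illustrative utility functions; the concave ones model risk aversion.
utilities = {
    "linear":       lambda x: x,
    "square root":  lambda x: x ** 0.5,
    "very concave": lambda x: x ** 0.05,
}

for name, u in utilities.items():
    prefers_a = expected_utility(a, u) > expected_utility(b, u)
    prefers_c = expected_utility(c, u) > expected_utility(d, u)
    # The two comparisons always agree, because both differences equal
    # 0.11*u(1,000,000) - 0.10*u(5,000,000) - 0.01*u(0).
    print(f"{name:12}  prefers (a) over (b): {prefers_a},  prefers (c) over (d): {prefers_c}")
```

Whichever function is plugged in, the two answers agree – the ‘very concave’ example prefers (a) and (c), the other two prefer (b) and (d) – so the popular pattern of (a) over (b) together with (d) over (c) is simply unavailable to an expected utility maximizer.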
History:
In 1947, John von Neumann and Oskar Morgenstern published the second edition of Theory of Games and Economic Behavior, including a proof that people possess utility functions which determine their decisions. (See the chapter on the Independence of Irrelevant Alternatives.) According to their model, rational people should maximize the mathematical expectation of their utility.
The French economist Maurice Allais (1911–2010) did not accept this. He claimed that the model neglected the fact that human decision makers were…human. They do not perform complicated mathematical computations before making decisions, he argued; the crucial element must be psychological.
Allais first came to international attention with a conference that he organized in Paris in 1952. He did not hide his disdain for the ‘American School’, and in the run-up to the conference he devised a concise test designed to prove the American School wrong. It consisted of the two questions posed at the outset of this chapter.
Over lunch at the conference, Allais presented the University of Chicago statistician Leonard Savage with the two questions. Savage was a staunch proponent of the von Neumann-Morgenstern model. As knowledgeable about rational decision-making as anybody in the world, the statistician considered the situations … and promptly fell into the trap: he chose (a) and (d).
After Allais pointed out his ‘irrationality’, Savage was deeply disturbed; he had violated his own theory! Allais, on the other hand, was jubilant. His lunchtime quiz had thrown the American School’s mathematical model of how to make decisions under uncertainty into irreparable turmoil.
Awarded the Nobel Prize for economics in 1988, Allais died just half a year shy of his hundredth birthday.[2]
Dénouement:
If you read the chapter on the Independence of Irrelevant Alternatives (Chapter ///), you probably already know where the problem lies. ‘An 89% chance of getting $1,000,000’ in Decision 1, and ‘an 89% chance of getting nothing’ in Decision 2, are ‘irrelevant alternatives’. Hence, rational decision makers should ignore these alternatives. That is why von Neumann and Morgenstern, and also Savage and others, had stipulated that anybody who claims to act ‘rationally’ must adhere to the notorious axiom of the ‘Independence of Irrelevant Alternatives’.
Allais challenged the stipulation. In one fell swoop, he showed that people, being fallible humans, violate the axiom: Human decisions are not independent of irrelevant alternatives.
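For readers who like to see it in symbols, here is a sketch of the axiom in standard lottery notation (the labels A, B and C, and the mixing weight α, are introduced only for this illustration):

```latex
A \succsim B
\quad\Longleftrightarrow\quad
\alpha A + (1-\alpha)\,C \;\succsim\; \alpha B + (1-\alpha)\,C ,
\qquad 0 < \alpha \le 1 .
```

Allais’s lotteries are exactly such mixtures with α = 0.11: writing A for ‘$1,000,000 for sure’ and B for the lottery paying $5,000,000 with probability 10/11 and nothing with probability 1/11, we have (a) = 0.11 A + 0.89 [$1,000,000], (b) = 0.11 B + 0.89 [$1,000,000], (c) = 0.11 A + 0.89 [nothing] and (d) = 0.11 B + 0.89 [nothing]. The axiom therefore forces both decisions to track the single comparison between A and B – which is precisely what the popular answers fail to do.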
Technical Supplement:
Allais explained the ‘paradox’ by the apparent preference for security in the neighbourhood of certainty, a profound psychological, if ‘irrational’, reality. Since there was no room for that lack of rationality in the theory of expected utility, something had to give. “If so many people accept this axiom without giving it much thought,” Allais wrote, “it is because they do not realize all the implications, some of which – far from being rational – may turn out to be quite irrational in certain psychological situations.” The rational man, behaving under risk according to expected utility theory, simply does not exist.
[1] Another way to recognize the paradox is to compute expected payouts. The expected value of lottery (a) is $1,000,000, while it is $1,390,000 for (b); the expected value of (c) is $110,000 and of (d) it is $500,000. So, going by expected payout alone, (b) should be preferred over (a), and (d) should be preferred over (c). In the first gamble, the less risky choice is preferred over the higher expected payout, while in the second gamble the higher expected payout is preferred over the less risky choice.
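Spelled out, the expected payouts are computed as:

```latex
\begin{aligned}
E(a) &= 1.00 \times \$1{,}000{,}000 = \$1{,}000{,}000,\\
E(b) &= 0.10 \times \$5{,}000{,}000 + 0.89 \times \$1{,}000{,}000 + 0.01 \times \$0 = \$1{,}390{,}000,\\
E(c) &= 0.11 \times \$1{,}000{,}000 = \$110{,}000,\\
E(d) &= 0.10 \times \$5{,}000{,}000 = \$500{,}000.
\end{aligned}
```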
[2] The professor of economics did not have a one-track mind, however. He also performed experimental work in physics that questioned some aspects of Einstein’s theory of relativity, and whose results have still not been fully explained.