
59. One Box or Both Boxes?

Newcomb’s Paradox

 

The Question:

There are two boxes in front of you. One is made of glass and contains $1,000. The other is made of wood, and you cannot see what is inside: it contains either $1,000,000 or nothing at all, depending on what Algol decided. You may take either both boxes or the wooden box alone.

Who’s Algol? Algol is an algorithm that has been observing your online behavior for years and has learned all about you. It can foresee exactly what you will do. In particular, it knows whether you will act with restraint or out of greed.

If Algol foresees that you are greedy and will take both boxes, it will place nothing into the wooden box. If Algol foresees that you are restrained and will take only the wooden box, it will place $1,000,000 into it. Immediately after that, Algol self-destructs. Pouf!

Now it’s your turn. What do you do?

Obviously, you will restrain yourself and take only the wooden box. Since Algol foresaw this, it had placed $1,000,000 into the wooden box before self-destructing. Had you been greedy and taken both boxes, i.e., not only the wooden box but also the glass box that contains the $1,000, Algol would have foreseen that and placed $0 into the wooden box before self-destructing. Thus, you would have ended up with the meager $1,000 of the glass box.

Correct?

 

The Paradox:

Surprisingly, not everybody agrees. As reasonable as the argument for taking only the wooden box sounds, a very good argument can be made for taking both boxes. Here it is:

Before self-destructing, Algol placed either $0 or $1,000,000 into the wooden box. The deed is done, Algol is gone, and no matter what you decide to do, the wooden box is either full or empty. It contains whatever it contains.  

Now, no matter what Algol has done, you would be $1,000 better off by taking both boxes: if Algol had foreseen that you would take both boxes, it would have placed nothing into the wooden box, and you would have ended up with only the $1,000 of the glass box. If, on the other hand, Algol had predicted that you would take only the wooden box, it would have placed $1,000,000 into the box before self-destructing. (The fact that you later decide to prove Algol’s prediction wrong changes nothing with respect to what’s in the box.) So whatever Algol has done, by taking both boxes you gain an additional $1,000: either $1,000 versus nothing, or $1,001,000 versus $1,000,000. Therefore, you should take both boxes!
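To make the two-boxer’s bookkeeping explicit, here is a minimal sketch (in Python, purely illustrative; the dollar amounts are those of the text) that lists the payout for every combination of Algol’s prediction and your choice, and checks that, whatever the prediction, taking both boxes yields exactly $1,000 more.

```python
# Payouts in Newcomb's Paradox, using the amounts given in the text.
GLASS = 1_000          # visible in the glass box
MILLION = 1_000_000    # what Algol may place into the wooden box

def payout(prediction: str, choice: str) -> int:
    """Payout given Algol's prediction and your actual choice."""
    wooden = MILLION if prediction == "wooden box only" else 0
    return wooden + (GLASS if choice == "both boxes" else 0)

for prediction in ("wooden box only", "both boxes"):
    one = payout(prediction, "wooden box only")
    both = payout(prediction, "both boxes")
    print(f"Algol predicted {prediction!r}: "
          f"one box ${one:,}, both boxes ${both:,}, difference ${both - one:,}")
# In both rows the difference is $1,000 -- the dominance argument.
```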

So, we see that there is an argument to be made for taking only the wooden box and an equally good argument for taking both boxes. A paradox!

 

Background:

The paradox was published by the philosopher Robert Nozick in 1969.[1] He was not the originator, however; that honor belongs to the physicist William Newcomb of the University of California’s Lawrence Livermore Laboratory. The paradox gained notoriety after Martin Gardner discussed it in an article in Scientific American in 1973.

Since then, Newcomb’s Paradox, as it was henceforth known, has become one of the most hotly debated paradoxes in philosophy and decision theory.

 

Dénouement:

Believers in Algol’s foresight will opt for the wooden box. After all, it is akin to a self-fulfilling prophecy. Skeptics, on the other hand, will opt for both boxes. True, Algol foresaw that you would take only the wooden box, and it acted according to its prediction. But you are your own master; you can dupe Algol and change your mind at any time.[2] Okay, so Algol’s prediction turned out to be wrong. But your decision, made after Algol did its deed, does not affect what’s in the box.

Who’s right? “To almost everyone it is perfectly clear and obvious what should be done”, Nozick wrote. “The difficulty is that these people seem to divide almost evenly on the problem, with large numbers thinking that the opposing half is just being silly.”

So, about half the people go by the ‘expected payout principle’: since the probability of Algol predicting and doing the ‘right’ thing is very high, one may expect a higher payout when taking only what’s in the wooden box.[3]

But about half the people go by the ‘dominance principle’: no matter what Algol did, taking both boxes yields $1,000 more than taking only the wooden box.
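The two camps can also be compared numerically. The sketch below (Python; the 90 percent accuracy is an assumed figure for illustration, the text gives no specific number) evaluates the expected payouts of footnote [3].

```python
# Expected payouts under the 'expected payout principle' (illustrative sketch).
# ACCURACY is an assumed probability that Algol predicts correctly;
# the text gives no specific figure.
ACCURACY = 0.9

p = ACCURACY        # probability Algol is right if you take only the wooden box
q = 1 - ACCURACY    # probability Algol is wrong if you take both boxes

ev_wooden_only = p * 1_000_000 + (1 - p) * 0
ev_both_boxes = q * 1_001_000 + (1 - q) * 1_000

print(f"Expected payout, wooden box only: ${ev_wooden_only:,.0f}")   # $900,000
print(f"Expected payout, both boxes:      ${ev_both_boxes:,.0f}")    # $101,000
```

Under this symmetric-accuracy assumption, the one-box expectation exceeds the two-box expectation as soon as Algol is right more than roughly 50.05 percent of the time, which is why the expected-payout camp finds its answer obvious; the dominance comparison, by contrast, does not depend on any probability at all.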

In answer to the question of who’s right: the jury is still out…and not likely to come back with a verdict.

 

Technical supplement:

Note that Algol is not omnipotent and not even omniscient. It is just a very good prognosticator and acts according to its prognosis. However, it can occasionally be wrong.

***

An interesting question is whether a person of faith should pray before opening the wooden box. The Jewish answer to that question is a resounding No. The Talmud rules that one may not recite a prayer in vain. For this reason, one may not pray for something that has already occurred. For example, if someone hears of a tragedy that has occurred in a place where he has relatives, he should not pray that this tragedy did not affect any of his family. Hence, one must not pray that the wooden box contains a million dollars either.

***

Newcomb’s Paradox bears some resemblance to the Monty Hall Problem (see Chapter ////) in that a decision is made, or changed, after the result has already been determined.

 

© George Szpiro, 2019

 

 

[1] Nozick, Robert (1969) “Newcomb’s Problem and Two Principles of Choice”, in N. Rescher et al. (eds.), Essays in Honor of Carl G. Hempel, Springer Verlag.

[2] This is what the philosophy of existentialism is about: individuals are responsible, conscious beings, acting independently.

[3] If you take only the wooden box, the expected payout is p·1,000,000 + (1-p)·0, where p is the probability that Algol predicted correctly. If you take both boxes, the expected payout is q·1,001,000 + (1-q)·1,000, where q is the probability that Algol predicted incorrectly. Since p is close to one (Algol is usually right) and q is close to zero (Algol is rarely wrong), the expected payout is higher when one takes only the wooden box. (See: Bar-Hillel, Maya and Avishai Margalit (1972) “Newcomb's Paradox Revisited”, The British Journal for the Philosophy of Science, Vol. 23, No. 4, pp. 295-304.)
