The ‘paradox’ lies in the solution to this question. After choosing a box at random and withdrawing one coin at random, if that happens to be a gold coin, what is the probability that the next coin drawn from the same box will also be a gold coin?

In the discussion below, I’ll call the boxes #1, #2, and #3 in the order from left to right in this diagram, and use “coins” instead of “balls.”

The answer. People calculated this in various ways, some using Bayesian statistics and others, like me, a simple intuitive multiplication. But the two most common answers were 1/2 (50%) and 2/3 (67%). The latter is correct. Let me explain how I thought it through:

Once you’ve drawn a gold coin, you know you’ve chosen from either the first or second box above. The third box, containing only silver coins, becomes irrelevant.

You’ve thus drawn a gold coin from one of the first two boxes. Between them they hold three gold coins, two of which are in box #1, so if you picked a gold coin on your first draw, the probability that you chose from box #1 is 2/3. The probability that you chose it from box #2 is 1/3.

If you chose from box #1, the probability that you will then draw a second gold coin is 1 (100%).

If you chose from box #2, the probability that you will then draw a second gold coin is 0 (0%) for there are no gold coins left.

Thus, the overall probability that if you got a gold coin on the first pick then you will get another one if drawing from the same box is (2/3 X 1) + (1/3 X 0), or 2/3 (probability 67%).

If you don’t believe it, first check the Wikipedia explanation. It also explains why people think the probability is 50%: they fail to comprehend that it’s more likely, if you drew a gold coin on the first go, that the box you drew from is #1 rather than #2.

Then, if you still don’t believe it, try it yourself using either two or three boxes (you don’t really need three); that is, you can do an empirical test, though the explanation above should suffice. (You can just use two types of coins, with one box holding a matching pair and the other a mixed pair.) You will find that once you’ve drawn a gold coin on your first pick, the chance that you will draw another from the same box is 2/3. In other words, you’ll see that outcome 67% of the time. Remember, we are talking about outcomes over a number of replicates, not a single try! You’d be safe betting against those who erroneously said 50%.
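For anyone who would rather simulate than shuffle real coins, here is a minimal sketch of that empirical test in Python (my choice of language and names; the puzzle itself doesn't specify any): pick a box at random, draw a coin at random, and count second golds only among the trials whose first draw was gold.

```python
import random

def estimate(n_trials, seed=0):
    """Estimate P(second coin is gold | first coin is gold) by simulation."""
    rng = random.Random(seed)
    boxes = [("g", "g"), ("g", "s"), ("s", "s")]
    first_gold = second_gold = 0
    for _ in range(n_trials):
        box = list(rng.choice(boxes))   # pick a box at random
        rng.shuffle(box)                # random draw order within it
        if box[0] == "g":               # keep only trials whose first draw is gold
            first_gold += 1
            if box[1] == "g":           # second coin from the same box
                second_gold += 1
    return second_gold / first_gold

print(estimate(100_000))  # close to 2/3 over many trials
```

The key point the code makes explicit: trials where the first coin is silver are simply thrown away, not counted as failures.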

If you think Wikipedia is wrong and you’re right, good luck with correcting them!


The key is to realize that the separation of balls into boxes is irrelevant to the probability of drawing gold EXCEPT that drawing the gold allows you to eliminate two silver coins. There are only three coins remaining in the boxes you could have, and two of them are gold.

I don’t think the separation into boxes is irrelevant. Your analysis works in this case, but consider the following: box 1 has 5 gold coins; box 2 has 2 silver and 3 gold.

Let’s follow your procedure. You select a gold coin. There remain 9 coins, 7 gold and 2 silver. Thus you would say that the chance of getting a gold coin on the second draw is 7/9. However, the correct answer is 13/16.

If you drew a gold coin the first time, the probability is 5/8 that you got it from the first box and 3/8 that you got it from the second box.

If you drew from the first box, it is a certainty that the second coin is also gold. However, if you drew from the second box, there are still two gold coins left giving you a probability of 1/2 of getting a gold coin. The total probability is therefore 5/8 x 1 + 3/8 x 1/2 = 13/16.
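That 13/16 can be checked with exact arithmetic. A small Python sketch of the Bayes bookkeeping for this variant (the variable names are mine, not the commenter's):

```python
from fractions import Fraction as F

# Variant: box A holds 5 gold coins; box B holds 3 gold and 2 silver.
p_gold_A = F(1)      # P(first draw is gold | box A)
p_gold_B = F(3, 5)   # P(first draw is gold | box B)

# Posterior over boxes after drawing gold (boxes are chosen 50/50).
p_A = F(1, 2) * p_gold_A
p_B = F(1, 2) * p_gold_B
total = p_A + p_B
p_A, p_B = p_A / total, p_B / total        # 5/8 and 3/8

# Second draw from the same box: box A has 4 gold of 4 left,
# box B has 2 gold of 4 left.
p_second = p_A * F(4, 4) + p_B * F(2, 4)
print(p_second)  # 13/16
```

Using `fractions.Fraction` keeps every step exact, so the 5/8 and 3/8 posteriors are visible in the intermediate values.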

There are actually two paradoxical elements here. The first is why not 50%, which is easy enough to dispel. The other is why not the fraction of gold coins to total overall, which you don’t see in the original Bertrand puzzle. Wendell’s example @22 illustrates the latter one clearly.

What I love about this (and the Monty Hall problem) is how the scenario of the problem obscures the information provided.

In this case (for me, anyway), saying ‘a gold ball’ makes it harder to think of the balls in the first box as being separate things with their own probabilities — the 50% answer is plausible because you think of ‘box 1’ and ‘box 2’ (one wins and one does not) and not ‘balls 1, 2, 3’ (two win and one does not).

I like to think that there is an evolutionary angle here. That so many people today get this kind of problem wrong suggests that our ancestors did not often encounter versions of it in their environments. If handling Monty Hall situations well had increased fitness, our brains’ problem-solving machinery might have made you and me more likely to “get it”.

Your last sentence is correct. The successful outcome (choosing a second gold) is completely dependent on initially choosing the right box.

At the time of choosing, you don’t know the contents of the specific box you have chosen, so your chosen box could have either two golds or one gold; since you are twice as likely to pick gold from the two-gold box as from the one-gold box, and total probability always equals one, there is a 2/3 probability you have chosen the right box.

If you selected a silver coin, you’re done, finished, kaput. If you selected a gold coin, that means you’re in Box 1 or Box 2. If you stuck your mitt in Box 1, you had double the chance of selecting a gold coin that you would’ve had had you stuck it in Box 2. Therefore, the odds are 2-to-1 that the box you picked the gold coin out of was numero uno (which is to say, two out of three times that’s where the first gold coin would’ve come from).

If you have to switch boxes, the chances are reduced, and moreover the third box with the two silvers comes back into play.
If the coin came from the double-gold box (a 2/3 chance), you have to switch to either the g-s or the s-s box, so your chances are 2/3 × 1/4 = 1/6 there.
If your coin came from the g-s box (1/3 chance), your chance of striking gold is 1/2 (you switch to either g-g or s-s), so 1/3 × 1/2 = 1/6 there too. Overall, switching wins 1/6 + 1/6 = 1/3 of the time.
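That 1/6 + 1/6 bookkeeping can be verified by enumerating the six equally likely (box, first coin) pairs. This is an illustrative Python sketch; the "switch to one of the other two boxes uniformly, then draw one of its coins at random" model is my reading of the comment:

```python
from fractions import Fraction as F

boxes = {"gg": ("g", "g"), "gs": ("g", "s"), "ss": ("s", "s")}

win = total = F(0)
for name, coins in boxes.items():
    for first in coins:
        if first != "g":
            continue              # condition on the first draw being gold
        p_draw = F(1, 6)          # each (box, coin) pair is equally likely
        total += p_draw
        # Switch: pick one of the other two boxes at random (1/2 each),
        # then draw one of its two coins at random (1/2 each).
        for other in (b for b in boxes if b != name):
            for coin in boxes[other]:
                if coin == "g":
                    win += p_draw * F(1, 2) * F(1, 2)

print(win / total)  # 1/3
```

The division by `total` is the conditioning step: probabilities are renormalized over only those draws whose first coin was gold.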

As you stated in the earlier post about this problem, it is related to the Monty Hall paradox, and one person who was wrong about that – and so stubborn that he was not convinced until he had set up a computer simulation and seen with his own eyes that switching wins two out of three times – was no less a person than Paul Erdős. I was able to come up with the right answer very quickly because this sort of thing interests me and I have a good memory.

I did try testing it empirically. I set up an Excel spreadsheet starting with a randomly ordered sequence of the numbers from 1 to 10,000, created for me by Random.org. I then extracted the last digit from each of those numbers and counted how many of each digit appeared; it was 1,000 for each digit, as expected. I then eliminated the rows containing a 9 and created a formula for each of the remaining 9,000 rows which calculated a box number (1, 2, or 3) based on that last digit being in the range 0-2, 3-5, or 6-8. I then deleted 200 rows from a randomly chosen region more or less in the middle of the set of rows and counted how many rows were < 3 (meaning the 3rd box was not chosen). That worked out to 5,804. Then I counted how many of those < 3 rows had a 1, meaning the box with two golds. That count was 2,900, pretty damn close to 50/50.

The thing is, once you know you didn’t choose box 3, it is the same as just choosing randomly between two boxes. What am I doing wrong here? Would a truly random sequence of the numbers 1 to 3 give different results somehow? I would think that the above procedure would achieve that anyway.

If the rule was that you could choose to switch boxes after you got the one gold, then there would be an analogy with Monty Hall, but the way the puzzle is defined, you can’t. You made one choice. You already know that it wasn’t box 3, so given that, how is the situation you now face for your 2nd draw any different from just starting with 2 boxes in the first place, one with no gold and one with two gold?

You don’t need to do a statistical test with random choices.
Because of the way the first ball was selected, the remaining coin in your box is twice as likely to be gold as silver, therefore 2/3.
It’s the first selection (the prior in Bayesian stats), not the last one, that skews the odds.

An alternate explanation is that right from the beginning the chances are 2/3 that you drew from a box with 2 coins the same.
So whatever color you found, there is a 2/3 chance that the other one is the same color.

Hmmm. OK, I think I’ve got it now. I need to number all the balls and figure out the probability of getting ball 1, 2, or 3 vs. 4, 5, or 6, where 1-3 are gold, spread across box 1 and half of box 2, while 4-6 are silver, spread across the other half of box 2 and box 3. Then figure out the probability of picking ball 3 vs. picking ball 1 or 2, each of which has a 1/6 probability of being picked. Thinking of it that way does give 2/3 as the answer, since there are two balls in box 1 which meet the criterion vs. just one in box 2.

You have a 2/3 chance of being in box 1. If that’s the case, you have a 100% win with “stick” and a 50% win with “switch.”

You have a 1/3 chance of being in box 2. If that’s the case, you have a 0% chance of a win with “stick” and a 100% chance of a win with “switch.”

Rearranging those…

Chance of win via stick: 2/3*1+1/3*0 = 2/3

Chance of win via switch: 2/3*0.5+1/3*1 = 2/3

The reason the odds of winning don’t add up to 100% could be due to either (a) my math error, or (b) the elimination of box 3 in round 1. Monty, after all, didn’t have two prizes behind each door, just one.

I wonder if the impulse (if that’s the right word) to answer 1/2 would be so strong if there were a hundred — or a thousand, or a million — gold coins in the first box. Would there still be as strong a tendency to see this as an equal choice between boxes 1 and 2?

I’m sure you’ll get lots of replies. How many said 2/3, how many 1/2? I still don’t get it, I’ll admit. Seems to me that the first choice is over and done, and you know you’re not in box 3. So you are either in 1 or 2. If 2, you won’t get a second gold. If 1, you will. Chances are 50-50. Sure, you had a 50-50 chance of getting silver if in 2, but you didn’t, that’s given, over, done with, and no longer relevant. 50-50 (and I’m 100% stubborn). 😉

The reason Bose produced accurate results was that, since photons are indistinguishable from each other, one cannot treat two photons of equal energy as two distinct, identifiable photons. By analogy, if in an alternate universe coins behaved like photons and other bosons, the probability of producing two heads would indeed be one-third, and so would the probability of getting one head and one tail (which equals one-half for conventional, classical, distinguishable coins). Bose’s “error” leads to what is now called Bose–Einstein statistics.
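A quick enumeration makes the classical-versus-bosonic counts concrete. This is an illustrative Python sketch of my own, not something from the comment:

```python
from fractions import Fraction as F
from itertools import product

# Classical (distinguishable) coins: four ordered outcomes, each 1/4.
classical = {}
for outcome in product("HT", repeat=2):
    key = "".join(sorted(outcome))        # HT and TH collapse to one label
    classical[key] = classical.get(key, F(0)) + F(1, 4)

# Bose-Einstein (indistinguishable) coins: the three unordered
# occupation states are taken as equally likely.
bose = {s: F(1, 3) for s in ("HH", "HT", "TT")}

print(classical["HT"], bose["HH"])  # 1/2 1/3
```

The only difference between the two counts is whether HT and TH are one outcome or two, which is exactly the box-paradox lesson about counting equally likely cases correctly.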

This might help. Suppose there were 100 gold coins in box 1, and in box 2, 99 silver coins and one gold coin. You pick out a gold coin. Which box did you get it from?
You can be almost certain (100/101) that you got it out of box 1. Therefore it is almost certain that your second pick will be gold.

Number the gold balls 1, 2, and 3. There is equal probability – 1/6 – of choosing each at the outset.
If you picked #1, the next pick will be gold.
If you picked #2, the next pick will be gold.
If you picked #3, the next pick will be silver.
2/3.

Nope, if you picked box 2 your next coin will be silver. You are confusing the issue here.
The point is that if you have pulled a gold coin, there is a 2/3 chance it came from the double-gold box (so the second coin is gold) and a 1/3 chance it came from the gold-silver box (hence the second coin is silver).

“the overall probability that if you got a gold coin on the first pick then you will get another one if drawing from the same box is (2/3 X 1) + (1/3 X 0), or 2/3 (probability 67%)”

This ^^^ and the “event space” explanations the other day are, I think, very concise, but they also show – one in a mathematical way, the other in a logic-flavored way – how the step at which you pick, or which boxes are involved, or anything else isn’t relevant. I really like having every angle covered in these things – thanks z____ and whoever also wrote what PCC(E) wrote.

…. Car Talk has (had) a weekly Puzzler; perhaps a weekly Puzzler on WEIT would be real fun!

After a night and a day to cogitate, I agree with you and the wiki. I came to this conclusion by approaching the problem from the opposite side. I have two boxes, one with a gold coin in it. I have three coins on the table to fill the boxes: two gold and one silver. If I were to pick a coin at random to put into the box containing the gold coin, I would have two chances in three of picking a gold coin; therefore the odds are 2 in 3, or 2/3.
Thanks for the brainwork.

It’s only paradoxical because of the way it’s phrased, which appears to de-couple the original event. If re-stated: “Given this scenario, if you at first select a gold ball, what are the chances it came from the box with two gold balls?” then the final answer is readily apparent.

Maybe another way to look at it is to take the multiple-universe view. There are six universes, in each of which a different ball is chosen. You already know you are not in the three where you chose silver, leaving three universes in which you chose gold. One of those is the universe in which you chose gold from the {gold, silver} box; the other two are universes in which you chose one of the two balls in the {gold, gold} box. Counting up the universes in which you chose gold leaves you with a 1/3 probability of having done so from the {gold, silver} box and a 2/3 probability of having done so from the {gold, gold} box.
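That universe count is easy to tabulate. Here is a small Python sketch, with labels of my own choosing:

```python
from fractions import Fraction as F

# One "universe" per (box, drawn ball) pair; all six are equally likely.
universes = [
    ("gg", "g"), ("gg", "g"),   # the two gold balls in the {gold, gold} box
    ("gs", "g"), ("gs", "s"),   # the two balls in the {gold, silver} box
    ("ss", "s"), ("ss", "s"),   # the two silver balls in the {silver, silver} box
]

# Discard the universes where a silver ball was drawn, then ask how many
# of the survivors are in the {gold, gold} box.
gold_draws = [box for box, ball in universes if ball == "g"]
p_gg = F(gold_draws.count("gg"), len(gold_draws))
print(p_gg)  # 2/3
```

Three universes survive the conditioning, and two of them live in the {gold, gold} box, which is the whole argument in two lines of list-filtering.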