In the puzzle, I clicked on the car instead to avoid goat links. However, the car had a huge ugly rusted gaping hole in the back bumper, dripping oily sludge. It was horrible! I'll never look at cars the same way again. The humanity!

In my experience (and I have a fair bit of exposure to and experience with medical psychology) psychology is only good when the practitioners ignore their trade and just act like friends to their patients. That has nothing to do with the fact that they are psychologists, and more to do with the fact that they are good people. The world needs more good people, not psychologists.

Wait... You don't have any psychological problems (other than replying to my post ^.^) You've never had sessions - you only ran into this psychologist because she rents the space next to where you work. And you definitely haven't given her any money.

Makes me think of probably the most famous psychologist, Dr. Phil. He got hugely popular pretty much just telling people what they needed to hear. Nothing he says is profound or thought-provoking. He basically just tells you the way it is. Most people aren't really able to do that, so I applaud him for doing it. However, I don't think that he's really doing much that anybody else couldn't do.

You glossed over the part where most people don't know how to deal with problems that a psychologist is trained to handle. There's something to be said for the education.

After all, I have plenty of friends, and I'm in complete contact with my family, but they have no idea how to help me get through a bout of depression in anything approaching a concrete manner. Just being there isn't enough.

And I noticed further down where you market your experience with psychology. I'd just like to remind you, your personal evidence isn't any sort of justification for such sweeping statements.

I'd also like to remind you that your concept of "good people" seems a little skewed to me. I think you need to dwell a bit on how to remove so much of your personal bias from your opinions on general topics. You have no basis for positing that the world is shy of good people, because you only know a vanishingly small fraction of them.

Your description of psychologists sounds very much like psychoanalysts to me - a kind of psychologist that, to me, ranks possibly lower than a Scientologist (and slightly above a cockroach) when it comes to solving people's problems.

Fortunately, there are some other kinds of psychologists that actually do stuff that works. I'll discuss a trifle about them below. Before that, though:

Any psychologist has a couple of things going for them, even without the "working method of psychotherapy" part. Going to a psychologist will make a patient regularly think about their problems, and will make them feel that they are engaged in a process of dealing with those problems, and this seems to lead to change. It also makes the person deal with the problems in contact with a stranger, which makes for a more neutral setting than with a friend or family member. With a friend or family member, the relationship from other contexts will very often intrude.

So, any psychotherapy will usually have *some* effect, though it may be very restricted, and for some kinds of problems it does not work at all. There are some forms that have more effect, chief among them behavioral therapy (with most research having gone into the cognitive behavioral version of this, but with very little evidence that the cognitive part adds effectiveness). This is mostly "common sense" put into a system. Some examples: If a person is depressed and sitting at home, make them go out and do stuff, starting with small enough stuff that they're able to do it ("Behavioral Activation"). If the person is afraid, have them go through the fear in small enough parts that they can handle it, exposing them to situations they are afraid of and letting them learn that they can be safe there, waiting until the fear dies down. If they have OCD, expose them to the situation that makes their obsessive response come forth, and prevent/delay the response ("Exposure and Response Prevention").

The good thing is that the psychologist knows that this common sense works, and can put the weight of both experience and theory behind the words to make the person feel that it can work.

Most psychotherapy works better without drugs; drugs interfere with the learning process.

A little knowledge is a dangerous thing - I refer to your post where you say you gleaned this information by discussing this with a counsellor in the next office, not one with whom you've actually worked. Not exactly a basis for the sweeping generalisations you make. I went to a counsellor for 4 years and it did me good. I agreed goals with my counsellor, not exactly something you'd do with a friend. Our relationship was very productive.

It was a weird feeling, recounting my life to her, and in the process relearn

2) The issue seems easy enough to settle empirically, given a few monkeys and a bag of M&Ms, besides the fact that it seems to have been empirically settled decades ago anyway.

One would think, but as it turns out, there are too many complexities. You see, you have to consider the socio-economic background of the monkeys, their upbringing, and their inherent biases to figure out if they like green, blue or red M&M's best. You see, the monkeys have an inherent bias toward green, but only if they have been captured from the wild (where presumably green would be comforting, the color of trees and whatnot). And of course there is the political bias associated with red and blue, so it depends on the monkey's political leanings. These are especially hard to sort out, as monkeys tend to just throw feces at the other side at every opportunity, so while you can easily separate the two groups, you can rarely tell which is which. It's difficult to determine if they like to eat blue M&M's because they themselves are blue (or feel blue, as depressed monkeys have a significant bias toward the blue M&M's) or because they are red, as it were, and feel like eating the blue ones to get back at the other side.

As someone who majored in psychology, worked in two labs, and read countless psychology papers, I can tell you that 99% of psychologists avoid math when possible, and the other 10% try to use it but make obvious errors.

To the psychology researcher, it's more about getting the "story" right than actually quantifying anything.

I had a somewhat heated discussion with someone who called herself a psychologist but hadn't studied statistics. To my thinking, statistics is central to psychology being called a science. Without statistics you're trading in conjecture and anecdote. When I said psychology without stats isn't science, it didn't go down too well.

I had a somewhat heated discussion with someone who called herself a psychologist but hadn't studied statistics. To my thinking, statistics is central to psychology being called a science. Without statistics you're trading in conjecture and anecdote. When I said psychology without stats isn't science, it didn't go down too well.

When I took my degree (double major: CS and Psych) all psychology undergrads were required to take courses in statistics and scientific methodology. I find it hard to believe that s

The percentages were a joke. But I do mean in all seriousness to suggest that psychology researchers are often averse to math and tolerate math errors in papers. Psychology is often only quantitative insofar as there are certain numerical rituals associated with null hypothesis significance testing that researchers must use to be accepted by other researchers.

I haven't found this in my experience but then it might depend upon what kind of circles you move in. If it's the "lower end" of the scale in terms of ability, then yes I would agree but the same probably goes for any subject. Otherwise, I would disagree. I majored in psych and did a PhD in the topic. My external examiner for my viva was a Cambridge statistician and ex-math olympian (Alan Dix if you're curious) who focused his interests in human-computer interaction I think because of the challenges he got

I watched 21 and I am pretty sure that they didn't botch the Monty Hall problem. It seemed weird that it would be in a senior-level math course at a top-notch engineering school, but the way they described it was mathematically correct.

Suppose Monty Hall gives you a choice of two envelopes. Each envelope contains a check, and one of them is written for TWICE the amount of the other. So you pick an envelope.

Now, Monty gives you the chance to switch envelopes. (Assume Monty always gives you the chance to switch.) Logically, since your envelope contains X, the other envelope can contain either 0.5X or 2X, each with 50% probability... So the expected value of switching envelopes is 0.5(0.5X) + 0.5(2X), or 1.25X. So, you should switch.

But here's the tricky part: Monty now gives you the chance to switch back! Since your new envelope contains Y, then by the same logic as above, the expected value of switching back is 1.25Y... So you should switch back. Right?

Clearly, something is wrong with this chain of thinking. Can you figure out what it is?

Yes, but you're missing the REAL puzzle, which is that even the FIRST switch (with the calculated 1.25x expected return) doesn't gain any information. By symmetry, the expected return on the initial switch MUST be exactly 1.0x, yet the simple math says 1.25x. Where does the math/logic go wrong?
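If words don't convince, a quick Monte Carlo sketch (plain Python, amounts hypothetical) shows that over many games the first pick and the switched envelope pay exactly the same on average, and the 1.25X figure never materializes:

```python
import random

def two_envelopes(trials=100_000, seed=1):
    """One envelope holds x, the other 2x; compare keeping vs switching."""
    rng = random.Random(seed)
    keep_total = switch_total = 0.0
    for _ in range(trials):
        x = rng.uniform(1, 100)        # hypothetical smaller amount
        envelopes = [x, 2 * x]
        rng.shuffle(envelopes)
        keep_total += envelopes[0]     # payoff if we keep our first pick
        switch_total += envelopes[1]   # payoff if we always switch
    return keep_total / trials, switch_total / trials

keep, switch = two_envelopes()
print(keep, switch)   # the two averages agree to within sampling noise
```

The flaw in the 1.25X argument is treating "the other envelope holds 0.5X or 2X with equal probability" as if it held for a fixed X: once X is your envelope's actual content, the two cases correspond to different totals in play, and the conditional probabilities are no longer an X-independent 50/50.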

I find it even funnier that it is an economist that is saying it. Admittedly some economists are really mathematicians that have wandered in to try to bring some professionalism to a bunch of fortune tellers, but in general economists get a bad reputation every time there is an attempt to assert that economics is a science. Years ago when I had the misfortune to do an engineering economics subject, I was astounded to find that the university-level economics text we were using had one version of the compound interest formula for every variable - it was assumed that economics students could not do introductory algebra.
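The point about the textbook can be made concrete: one formula plus introductory algebra covers every variable (a sketch; the function names are mine):

```python
import math

# One formula: A = P * (1 + r) ** n. Every other "version" is just algebra.
def amount(P, r, n):    return P * (1 + r) ** n
def principal(A, r, n): return A / (1 + r) ** n
def rate(A, P, n):      return (A / P) ** (1.0 / n) - 1
def periods(A, P, r):   return math.log(A / P) / math.log(1 + r)

A = amount(1000, 0.05, 10)
print(round(principal(A, 0.05, 10), 6))  # recovers 1000.0
print(round(rate(A, 1000, 10), 6))       # recovers 0.05
print(round(periods(A, 1000, 0.05), 6))  # recovers 10.0
```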

but in general economists get a bad reputation every time there is an attempt to assert that economics is a science.

True. But not all schools of economics try to make themselves a science. It's a difference in methodology. The Austrian School is a notable example, because they specifically reject scientific positivism. The Neoclassicists are obsessed with deriving mathematical formulas, and the Monetarists are obsessed with scientific predictability.

I used to laugh at economists when they claimed to do science too. Then one of my friends at uni showed me the notes from their math course. As a physicist I like to think I can handle a few equations, but they do some serious math. After that, I kept quiet. Keep picking on the psychologists, it's safer.

My experience at the University of Edinburgh ("a good uni") was that psychologists really don't know math. I spent ~6 months being subjected to lectures on statistical theory about chi-squared and normal distribution that frankly didn't make any sense: "Why do we add +1 here?" "Because it works"

Seriously.

At the end of the course we were given a summary lecture that (shock horror, ladies fainting at the back) gave us a FORMULA that explained the whole point of what we'd been taught. I wasn't the only person who, at this point, suddenly realised wtf they had been blabbering on about for the past 2 months... and more to the point, how much crap they'd been talking. Psychologists were taking formulae based on reason and using them to support conjecture. That's not inflammatory, it's fact.
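For the curious, the chi-squared goodness-of-fit statistic those lectures danced around is a one-liner once written down (toy counts, not data from the course):

```python
# Chi-squared goodness-of-fit: sum of (observed - expected)^2 / expected.
observed = [18, 22, 20, 40]   # hypothetical category counts
expected = [25, 25, 25, 25]   # counts predicted by the null hypothesis

chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
print(round(chi2, 2))   # 12.32, compared against a chi-squared table with 3 df
```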

The psychologists were claiming that if you choose X over Y then you are more likely to choose Z over Y because your *choice* causes bias against Y. (This fits the observed data).

The new suggestion is that if you choose X over Y then you are more likely to choose Z over Y because the choice indicates prior bias against Y. The important part being that this holds even if the bias against Y is so small that it is hard to detect. The only thing required is that there is a fixed "preferred order" of the three.

At least, that's what I understand from the article. Given the field, I also understand that I am most probably wrong:)

There are several problems with all of this. The original experiment does not appear to have any control group, it is unclear if the population sampled was genuinely random, the size of group tested seems to have been extremely small for a meaningful statistical study, and (perhaps most important of all), it assumes that mammalian vision is uniform greyscale AND that the candy was monochromatic.

(That last pair of points are important. Monkeys do not see all colours with equal clarity. Neither do humans, which is why monitors actually have more real-estate set aside for blue than for anything else. Complicating things, colours are usually the product of mixing. They are not "pure". We don't know what the monkeys saw, therefore cannot tell if their decision was influenced by their ability to even see the treats.)

Personally, I have developed a skepticism of such observational science. Too many possible explanations, yes, but more importantly too little experimentation to eliminate alternatives. If an explanation is put forward and then acted upon, especially in an area like psychology where those being acted upon are likely vulnerable groups, it's important to make sure the explanation is likely to be correct. Likely to be possible isn't good enough.

What would I suggest? Well, in the 1950s through to the last few years, options have been limited. These days, though, you can take fMRIs, MRIs and CAT scanners into the field. During the Chernobyl accident, it was fairly standard procedure for MRIs on trucks to be used to scan farm animals for contamination. See the brain in action as it makes the choices. See when the choice is made and which neural pathways were involved. Much better than speculating about what's going on. If you want more data, scientists decoded the optic fibre transmissions of cats ten years ago, or thereabouts. We can literally see if that plays a part in the decision.

You still end up doing statistics, sure, but with far more numbers that have far more meaning behind them and far less room for interpretation.

If monkeys were unable to reliably distinguish red, blue and green M&Ms, then they would have no systematic preference for one colour over another, and the experiment would not have found statistically significant evidence for such a preference (whatever its cause). However the experiments did find the monkeys have preferences about which colours they like. You could equally well run the experiment with three types of treat - say peanuts, brazil nuts and pecan nuts - as long as individual monkeys have p

Not to brag, but I have very acute taste buds. So much so, that when I was in high school, I would put M&Ms in my mouth with my eyes closed and be able to tell which color it was with nearly 100% accuracy.

Uhhhh.... you DO know that Britain invented the MRI? That the MRI was invented in 1973 but Chernobyl went up in 1986? As for the rest of your claims, they sound more like sour grapes (Britain has more high-ranking Universities than any country other than the United States). If you're more interested in trolling than querying, you're doing a good job of it. As for "proof", since you're probably not going to consider the fact that I was a research assistant at the University of Manchester for the inorganic bi

Presumably one could use NMR (MRI) to look for certain isotopes produced in a reactor explosion such as Cesium-137 or tritium. Getting the spectra out would be very easy for tritium, but an absolute bitch for cesium (1/2 vs 3 1/2). But I am still not sure why you'd do it that way rather than using a much cheaper scintillation detector (for example).

I agree it would be interesting to have some links, so I hope GP isn't just talking out of his ass.

If anyone wants some interesting stuff to read about irrational choices that humans have near-pathologies in that they constantly make them across the board, read some Tversky and Kahneman, or Robyn Dawes. Here's an example about a Tversky and Kahneman experiment characterised by Dawes in Rational Choice in an Uncertain World: "...Tversky and Kahneman offered each subject a bet. They would roll a fair die with four green (G) and two red (R) faces, and the subject made a choice between betting that the seque

Marilyn vos Savant explained the problem in Parade magazine, and a whole bunch of math professors wrote in to tell her that she was wrong... turns out it's kind of a bad idea to play "gotcha" with someone who has an IQ of 228.

I read one of Marilyn Vos Savant's books, and in it she listed 9 as a prime...

But there's a more-than-50% chance that 9 is prime!

I test primeness by dividing the test-number by all integers, from 2 through the test-number's square root, looking for a zero remainder. So, first, I divided 9 by 2. I worked on this for a while, and ended up with a nonzero remainder. So far, 9 looks prime, and I've already tested half of the potential divisors! In fact, there's just one more potential divisor to try: the number 3. I'm almost done, and everything rides on this final calculation. There's a lot of uncertainty here.

What are the chances that 9 is just going to happen to be divisible by the very last potential divisor that I try? I'll grant you that the chances are non-zero; there really are some composite numbers out there. But the chances aren't one, either. For example, when I was testing 17 for primeness, the last potential divisor I tried was 4, and it didn't work. This last calculation could go either way.

So here we are, having tested half of the possible divisors, and so far 9 is looking prime and there's just one more divisor to test against. So, I ask you: do you want to bet 9's primeness/compositeness on this last calculation? I'll make it easier for you: I tell you right now, that 9 is just like 17, in that it is not divisible by 4. And then, I'll even give you an option: we can finish the calculation by dividing 9 by 3, or you can change your candidate divisor to 5, now that you know 4 doesn't work. Well.. what'll it be?
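Joking aside, here is the trial-division test the bit describes, written straight (a minimal sketch). Each candidate divisor is a certainty, not a coin flip:

```python
def is_prime(n):
    """Trial division by every integer from 2 up to floor(sqrt(n))."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False   # a divisor exists, so n is composite
        d += 1
    return True

print(is_prime(9))    # False: 3 divides 9, no suspense involved
print(is_prime(17))   # True
```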

Marilyn vos Savant explained the problem in Parade magazine, and a whole bunch of math professors wrote in to tell her that she was wrong... turns out it's kind of a bad idea to play "gotcha" with someone who has an IQ of 228.

You obviously haven't read her absolutely idiotic book about Fermat's Last Theorem.

The problem can be easily misunderstood.
If it is a known rule of the game that after we choose a door, a door with a goat is opened, then it always pays to change our choice: as TFA indicates, it raises our odds to 2/3.

If, however, opening a goat door is the host's choice, then we are entering a poker-like situation. For example, if the host only chooses to reveal a goat when we choose correctly, then changing our choice will cause us to lose every time! And in general, for each strategy that a host might employ, there is an optimal counter-strategy.

In the latter scenario, it may be our goal simply to preserve our initial odds. If so, it pays to toss a coin on the second choice. This way, quite regardless of the host's strategy, we will have our odds at 1/3 or above.
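The difference between the two scenarios is easy to simulate (a sketch; "adversarial" here means the host reveals a goat, and offers the switch, only when the first pick is the car):

```python
import random

def win_rate(strategy, adversarial, trials=100_000, seed=0):
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        car, pick = rng.randrange(3), rng.randrange(3)
        # Rule-bound host always opens a goat door and offers the switch;
        # the adversarial host does so only when the player picked the car.
        offered = (pick == car) if adversarial else True
        if offered and strategy == "switch":
            shown = next(d for d in range(3) if d != pick and d != car)
            pick = next(d for d in range(3) if d != pick and d != shown)
        wins += (pick == car)
    return wins / trials

print(win_rate("switch", adversarial=False))  # ≈ 0.667
print(win_rate("stay",   adversarial=False))  # ≈ 0.333
print(win_rate("switch", adversarial=True))   # 0.0: switching always loses here
print(win_rate("stay",   adversarial=True))   # ≈ 0.333
```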

Marilyn vos Savant explained the problem in Parade magazine, and a whole bunch of math professors wrote in to tell her that she was wrong

In that case the mathematicians were correct. Vos Savant left out a key criterion when explaining the problem -- that Monty Hall knew what was behind each door and always chose to open one containing the booby prize. That gives the game a memory and gives the player an advantage in the second part. If Monty just chooses randomly, as Vos Savant's version implied, the mathematicians would be correct.

Suppose you're on a game show, and you're given the choice of three doors. Behind one door is a car, behind the others, goats. You pick a door, say #1, and the host, who knows what's behind the doors, opens another door, say #3, which has a goat. He says to you, "Do you want to pick door #2?" Is it to your advantage to switch your choice of doors?

Indeed. There's considerable evidence in favor of reductions in cognitive dissonance as a motivating psychological force from other types of studies and other disciplines. For instance, in my field of political science, the evidence is pretty overwhelming that citizens systematically misperceive candidates' positions to make them more similar to the citizens' own preferences. Voters often engage in "projection," believing that candidates' they prefer hold positions like the voters' own, even when those aren't the positions the candidates actually hold. The opposite process also occurs, where voters believe that candidates they dislike hold positions those voters dislike regardless of the candidates' true preferences. My own dissertation research on voters for the British Liberal Party in the 1960's and 1970's also confirmed these hypotheses.

The relationship is obviously bi-directional. Determining the direction of causality is thus a difficult matter, and one that preoccupied folks in my discipline for quite some time. One method is to use an "instrumental variables" approach (see any advanced econometrics text for details), but perhaps a more accessible answer comes from my own research.

The Liberal Party was often seen as "between" the Conservative and Labour monoliths. I focused my attention on the preferences of voters who switched from one of the major parties to the Liberals between elections. (We have "panel" surveys where the same people are interviewed over time, which helps to eliminate problems of misperceived past voting behavior.) Now it turns out that voters who switched to the Liberals usually saw them as taking positions in opposition to the party from which the switchers came. Sometimes those views were, in fact, contrary to espoused Liberal positions. For instance, on the question of entry into the European Economic Community, the forerunner of today's EC, former Conservative voters who supported entry were more likely to switch to the Liberals, while former Labour voters who opposed entry made the same switch. This pattern recurred across a number of issues. The most parsimonious explanation is that voters who disagreed with their normal party for whatever reasons were more likely to defect to the Liberals, using them as an instrument to express displeasure regardless of the Liberals' true position. (In the case of the EC the Liberals were consistently pro-Market; the other parties tended to waver.) Voting Liberal was "easier" than moving all the way over to the opposing major party. That meant that voters would tend to "project" their own views onto the Liberals rather than being persuaded to support the Liberals because of agreement with that party's positions.

Most of the traditional literature on American voting behavior focuses on the role of "party identification" as a primary determinant of issue opinions rather than the other way round. Voters often seem not to tote up the various stances of parties and candidates as a method of determining which party to support. Many people have Democratic or Republican partisanships because of family and social factors. People "inherit" partisanships from their parents or adapt to conform to the social roles they adopt in adulthood. These prior partisan dispositions then color their interpretations of events and campaign issues.

Let me tell you a story about my grandmother. She emigrated from Ireland in the late 19th century and lived outside Boston for the rest of her life. Despite the fact that most Irish Catholics living around Boston voted Democrat in her lifetime, she was a stolid Republican for the entire time I knew her. Her Republicanism wasn't based on support for that party's positions; it originated in the 1928 Presidential election when the Catholic (and "Wet") Al Smith ran as the Democratic candidate. Smith lost that year because anti-Catholic "Drys" in the Southern states defected to the Republicans. My grandmother felt that the Democrats failed to work hard enough for Smith because of his Catholicism, and so she started voting Republican. She was unfazed by the rather substantial evidence that showed that the Democrats in this period supported policy positions much closer to her own views. By the way, after Kennedy was shot in 1963 she claimed she had voted for JFK in the 1960 election, but we all knew she'd voted for Nixon.

The problem with blogs is that they are inevitably the whining and yapping of dogs that don't know any better. The worthless opinion that you link to fails to explain the original experiment correctly before weighing in. While it doesn't add anything of value, I guess it lets you slur the reputation of Dr Chen, which is what you apparently wanted to do. Chen didn't try to prove that the experiment was definitely flawed - he showed that their own reasoning for why it was correct was not valid. That is there w

I pick door 1, Monty shows me what's behind door 3 - a goat. Door 1 might have a goat or a car, door 2 might have a goat or a car. Sounds like 50/50 to me - I don't see the benefit of changing my choice. I don't have any evidence of a goat or car behind 1 or 2. I picked 1, and without evidence, I don't see how changing my choice will make it better.

I don't think this has anything to do with cognitive dissonance at all. It's a question of probability. There were 3 - my odds of success were 1 out of 3. Monty shows me that one of them is bad, so now my odds are 1 out of 2. In any particular Monty event, the odds will always be 50/50. If you ALWAYS pick door 1, and if Monty ALWAYS shows you door (not 1) is a goat, then your odds will always be 50/50, assuming the assignment of the car or goat to door 1 or 2 is always truly random and fair.

When you choose one door out of three, and one of those three was pre-chosen randomly to be "the winner", your chance of having picked the right door is 1/3. At least one of the other two doors is not the winner, so the fact that Monty can show you that one is not the winner doesn't change your chance of having chosen the winner.

HOWEVER, now your chance is the same (1/3), but the chance of either the door you chose or the remaining closed door being the winner is 100%. Therefore the chance that the remaining door is the winner is 2/3. Switch doors to double your chances.

I have a BS in math (not statistically oriented, but I had the normal discrete math sequence) and I still had to think about this a lot before I switched answers from the wrong one to the right one:-)

One thing that I think needs to be pointed out, however, is that for the odds to increase from 1/3 to 2/3, the player must know for sure that the host will *always* uncover a goat after the player's first choice irrespective of initial choice of goat vs. car. If the host's decision to uncover or not to uncover a goat is related to the player's initial choice, one can't say anything about the new odds.

Here is the easiest explanation to follow that I've heard. Extend the game to 1000 doors. One door has a car, 999 have nothing.

You pick a door. Monty opens 998 of the other doors showing nothing. Which door would you pick? He essentially gave away the location of the car - you only had a 0.1% chance of winning, but he eliminated 998 incorrect choices. The chance of the car being behind the last remaining door is 99.9%. This way it actually is somewhat intuitive.
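This generalization is also the fastest thing to simulate, because once Monty has opened every other goat door, switching wins exactly when the first pick was wrong (a quick sketch):

```python
import random

def switch_win_rate(doors, trials=100_000, seed=3):
    """n-door Monty Hall: host opens doors-2 goat doors, player switches."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        car = rng.randrange(doors)
        pick = rng.randrange(doors)
        wins += (pick != car)   # switching wins iff the first pick missed
    return wins / trials

print(switch_win_rate(3))     # ≈ 0.667
print(switch_win_rate(1000))  # ≈ 0.999
```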

Look at it this way.
Your original odds were 1/3. Monty has a 2/3 chance of having the right one. Monty's odds of having the right one are greater than your odds of having the right one, so statistically you should switch.
Look at it by way of cards (in the article).
You need to pick the ace of hearts. Monty will then go through the deck and pick the ace of hearts or a random card. He will then show you the other 50 "goats" and ask if you want to trade. You have a 1/52 chance of picking it. Monty then h

Monty's choice of a door to open is not random -- he has to pick a door that doesn't have a car. Say you pick door 1. Here are the three equally-likely possibilities:

If 1 has the car, he can pick either door. If you switch, you lose. Prob 1/3
If 2 has the car, Monty *has* to open 3. If you switch, you get the car. Prob 1/3
If 3 has the car, Monty *has* to open 2. If you switch, you get the car. Prob 1/3

Thus, there's a 2/3 chance of getting the car when you switch.

The other way to think about this is that Monty is revealing no information about *your* door when he opens one of the other two. Thus, the probability that your door has the car must be 1/3 both before and after Monty opens one of the other doors. Since there's only one closed door left, the car is behind it with prob = 2/3.
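The three-case argument can also be checked exhaustively rather than by sampling (assuming the player picks door 0; door labels are arbitrary):

```python
from fractions import Fraction

# Enumerate every equally likely car position; the player picks door 0.
switch_wins = Fraction(0)
for car in range(3):
    # Host opens a goat door that is neither the pick nor the car.
    # (When car == 0 the host may open either goat door; the switch
    # loses either way, so enumerating one choice suffices.)
    shown = next(d for d in range(3) if d != 0 and d != car)
    switched = next(d for d in range(3) if d != 0 and d != shown)
    if switched == car:
        switch_wins += Fraction(1, 3)

print(switch_wins)   # 2/3
```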

What you're missing is that Monty might have shown you the goat behind door 2, instead of 3. The fact that he didn't tells you something, and the consequence of that knowledge is that door 2 is a better choice than door 1.

I have no idea what it has to do with cognitive dissonance. But the reason it's better to switch is the following:

You have a 1/3rd probability of choosing the car initially and a 2/3rds probability of choosing the goat. If you do not switch, you have a 1/3rd probability of having the car. After one of the other doors has been revealed to be a goat, however, the following is true of switching: if you originally picked the car, you will get a goat; if you originally picked a goat, you will get the car. Since you had a

Because your initial probability of picking the car isn't 50/50, it's 2:1 against the car. You choose from 3 doors, remember, not 2. So initially the probability is 1/3rd that you've chosen the car, 2/3rds that the car is behind one of the doors you haven't chosen. Then Monty opens one of the doors you haven't chosen. He's constrained to open a door with a goat behind it, but the fact that he's opened a door doesn't change the initial probabilities. So the probabilities remain 1/3rd that you've chosen the c

Your first choice has a one in three chance of being wrong. Your second choice has a 50/50 chance of being wrong. Your first choice has a greater chance of being wrong, therefore, you should change it.

It has nothing to do with cognitive dissonance. The cognitive dissonance experiment has been shown to contain a similar type of error, that is all. I don't think you really read the article.

Try thinking of the Monty Hall problem with 1000 doors. Your initial pick of 1 door has a 1/1000 chance of being correct. Monty then opens 998 of the other 999 doors to show that the prize is not there.
Should you switch to the other remaining door when asked or not? (You should: the other door has probability 999/1000 of being the one with the prize)
The thing you are missing in your analysis is the extra information gained when Monty opens the other doors.

My wife and step-son asked me to clarify this probability after getting home from watching "21".

I realized that the door analogy wasn't working, as it didn't help them visualize 'possession of the odds'.

Instead I explained it as follows:

We're going to play the game with 10 boxes - 9 boxes are empty and 1 box contains a prize.

My wife is asked to pick a box and she is handed the box that she chose.

Then my step-son is handed the other 9 boxes.

I then ask both my wife and step-son what each one's odds of having the prize are. They agree on:

Wife : 1 in 10 (or 10%) chance of having the prize
Step-Son : 9 in 10 (or 90%) chance of having the prize

At this point I explain the physical-ness of my step-son 'holding the odds' - it is clear to both that he is in possession of 90% of the odds.

I ask my wife, at this moment, with her holding 1 box and he holding 9 boxes, if she would like to switch possession and trade her 1 box for his 9

She of course says 'heck yeah!'

They both have an 'aha!' moment and I don't really have to go any further, but I did for completeness.

I make a statement that my step-son's 90% is evenly distributed across the boxes he possesses - currently 9 of them.

Now I start opening my step-son's boxes, one at a time - boxes guaranteed NOT to contain the prize.

After opening one of the 9 boxes, leaving my step-son with 8 boxes, I point out that he is still in possession of 90% of the odds, but now those odds are distributed between the 8 remaining boxes.

Then I remove one more box, with the same explanation, and they see the pattern - the odds stay the same, and are still in my step-son's possession, but are distributed among ever fewer boxes.

Finally both my wife and step-son are each holding one box.

I bring back the fact that my step-son is still in possession of 90% of the odds, but that entire 90% is wrapped up in that one single box.

With a final closing - that they were patient enough to listen to, since they asked me to explain after all - I point out to my wife that, since she was willing to trade 1 box for 9 boxes earlier, she must certainly be willing (if not eager) to trade her 1 box for my step-son's 1 box.

They really connected the dots pretty fast once I placed the prize in a box and had them each holding the boxes - Putting a physical location to the odds.
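The box version can also be put in code. A quick sketch (the 10-box setup mirrors the story above; names are illustrative): whichever side holds the group of nine boxes holds the prize 90% of the time, and opening empty boxes from that group never moves the prize across.

```python
import random

def box_game(trials=100_000):
    """One prize among 10 boxes: one player keeps a single box,
    the other holds the remaining nine."""
    single_side = 0
    for _ in range(trials):
        prize = random.randrange(10)
        picked = random.randrange(10)
        single_side += (picked == prize)  # otherwise the nine-box side has it
    return single_side / trials

p = box_game()
print(p, 1 - p)  # roughly 0.1 and 0.9
```

Opening guaranteed-empty boxes is never simulated because it has no effect: possession of the odds is fixed the moment the boxes are handed out.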

Since she gave her [correct] answer [to the Monty Hall Problem], Ms. vos Savant estimates she has received 10,000 letters, the great majority disagreeing with her. The most vehement criticism has come from mathematicians and scientists, who have alternated between gloating at her ("You are the goat!") and lamenting the nation's innumeracy.

Since some math PhDs got it wrong too, isn't it a bit disingenuous to claim it's the psychologists who are the issue, as the article title states?

Some researchers involved in psychology (social behaviour etc.) came to high schools and drew up the friendship graph of each class. (Maybe school works differently where you live; we had classes of 30-40 students attending exactly the same lectures.)

They assumed friendship to be mutual (if not, then it was not considered friendship). One clever cookie made the observation that almost always there is a group of 6 students who are all friends with each other (a clique), or alternatively a group of 4 students who do not like each other.

There were excited discussions among the researchers about what social forces cause one of the above situations to occur so reliably.

They were somewhat disillusioned when our math teacher explained Ramsey's theorem to them. Since R(6, 4) is between 35 and 41, one can indeed expect either a friendship or a hateship clique to appear with quite high probability... (This does not mean that properties of the friendship graph are not worth examining, but one needs to know the math to do it properly.)
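R(6, 4) is far too large to verify by brute force, but the smallest nontrivial Ramsey number, R(3, 3) = 6, can be checked directly: every 2-coloring of the 15 edges of K6 contains a monochromatic triangle, while K5 admits a coloring (the pentagon vs. the pentagram) with none. A sketch:

```python
from itertools import combinations, product

def has_mono_triangle(coloring, n):
    """coloring maps each edge (i, j), i < j, to 0 (friends) or 1 (strangers)."""
    return any(
        coloring[(a, b)] == coloring[(a, c)] == coloring[(b, c)]
        for a, b, c in combinations(range(n), 3)
    )

edges6 = list(combinations(range(6), 2))  # the 15 edges of K6
# Every 2-coloring of K6 forces a monochromatic triangle: R(3,3) <= 6.
assert all(
    has_mono_triangle(dict(zip(edges6, colors)), 6)
    for colors in product((0, 1), repeat=len(edges6))
)

# K5 admits a triangle-free coloring (a 5-cycle and its complement,
# which is also a 5-cycle), so 6 is tight.
pentagon = {(i, j): 0 if (j - i) % 5 in (1, 4) else 1
            for i, j in combinations(range(5), 2)}
assert not has_mono_triangle(pentagon, 5)
```

Only 2^15 = 32,768 colorings need checking, so this runs in well under a second; the same exhaustive approach is hopeless for R(6, 4), which is exactly why the bounds there are still loose.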

I started questioning this article before the end of the first sentence. An economist, calling a psychologist "wrong" about math? One should remember what happens when you put 50 economists in a room - you get 100 opinions, one for each hand. I recognize that the author of the article may be correct; I just couldn't help commenting on the first sentence.

If you are sick on a Friday or Monday, they assume you are 'taking a long weekend', even though 2 of the 5 work days are a Monday or a Friday: 40% of random sick days would fall on one of them anyway - more for a 4-day work week.

TFA has been adequately refuted, so I'll forgo more on that. And despite the inflammatory nature of the title and the claims here, it is unfortunately all too correct, all too often.

I've been told by "superiors" to perform certain analyses because "everyone does", and they gave me references which supposedly showed these were proper. When I looked these up, the authors not only made no claims supporting their necessity, but both stated that the researcher should know enough about what they're doing to know which analyses to perform. I took my instructions to the statistics consultant for our department, and without showing him the references he made the same claims as both authors, contradicting the rationale given by those who gave me the instructions. I've seen many cases of psychologists performing statistical analyses based on their knowledge of how to use SPSS et al., rather than any fundamental grasp of the math required by the design. Perhaps the most egregious error is their faith in fMRI analyses via statistical probability mapping, when the correction factor required by the 10^4 to 10^5 simultaneous t-tests means that any one result within the traditional collective p < .05 significance level must have an individual p value in the 10^-6 to 10^-9 range. That's a hell of a requirement for a single test, and very unlikely to actually exist. "Figure the odds" applies, and they don't seem to grasp that they don't grasp it.

On the other hand, some of us can apply such analyses as tensor calculus and Gabor transforms to dendritic electrical fields, showing where each of those is correct and where each fails, and can correctly apply nonlinear, N-dimensional statistical testing of time/frequency maps produced by continuous wavelet transforms. But of those of us who can do these things, I know of none who learned of them, much less how, within the confines of a psychology department. (Well, except for the Gabor stuff, as used and taught by Karl Pribram, that being the only case I know of.)

"Everything I Needed To Know I Learned At The Santa Fe Institute". No, not everything, but that'd make a hell of a book.

There's no question that your story about a researcher with no clue what s/he was doing is repeated often in psychology, and probably in other fields as well. However, your example from fMRI speaks to complete ignorance of the field, and I'd like to force you to defend it. Thousands of fMRI experiments have been carried out, and this standard for significance is often met. When you say "very unlikely to actually exist," I can't imagine what you're thinking, since this statement is so easily falsified.

So you're saying that somehow I magically choose Door 1 twice as often as any other door? The door that just happens to have the car behind it? That's not right. Each of "my" initial choices must be of equal probability for this analysis to make any sense.

You have two (equally probable) possibilities under case "I choose 1" -- "Host chooses 2" and "Host chooses 3". The probability of "I choose 1" is a third, so each of the two possibilities has a probability of one sixth.

But each of the other cases -- I choose 2 or 3 -- has only one possibility. So since the probability of choosing each of the cases is a third, each of those possibilities has a probability of one third.

This one has been debated over and over, and is a classic example of lies, damned lies, and statistics.
The fallacy lies in stating that before Monty opens the door and shows the goat, your chance of picking the car is 1/3.
It is NOT, because as Monty will always pick a door with a goat behind it, your choices are always going to be two... one with a goat, and one with a car, because the one Monty opens is taken out of the equation - in fact it was never IN the equation in the first place. You only ever had two options... one with a goat and one with a car.
Thus your chance of picking the door with the car is 1/2; it was 1/2 at the start, and it is STILL 1/2 after Monty opened his door. The odds do not change "in your favour", because they simply do not change AT ALL. Ergo, there is no advantage or disadvantage in changing doors.

A careful mathematical analysis of the problem proves that you're wrong. There are many computer simulations of the problem that show that you're wrong. The only thing you have going for you is your intuition, and your intuition is wrong.
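For anyone who'd rather run one of those simulations than trust the analysis, here's a minimal Python sketch of the standard game (assuming, as the classical problem does, that Monty always opens a goat door you didn't pick and always offers the switch):

```python
import random

def monty_hall(switch, trials=100_000):
    """Play the classic 3-door game many times; return the win rate."""
    wins = 0
    for _ in range(trials):
        doors = [0, 1, 2]
        car = random.choice(doors)
        pick = random.choice(doors)
        # Monty opens a door that is neither your pick nor the car.
        opened = random.choice([d for d in doors if d != pick and d != car])
        if switch:
            # Move to the one remaining unopened door.
            pick = next(d for d in doors if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(monty_hall(switch=True))   # approaches 2/3
print(monty_hall(switch=False))  # approaches 1/3
```

If the 1/2-vs-1/2 argument were right, both numbers would hover near 0.5; they don't.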

"It is NOT, because as Monty will always pick a door with a goat behind it, your choices are always going to be two"

Your argument *only* works if Monty opens a door *before* you pick. *And*, you get to pick *twice* - first time from three doors, second time from two doors. You pick, from a choice of three, giving Monty a choice of two. Your argument is based on the reverse: Monty being able to pick from three doors, and you only getting two.

Do you see it now? You 'lock' a door, precluding Monty from choosing it.

Remember, since you have first pick, your chances of getting a goat are 2/3. Meaning you most likely picked a goat. Meaning when Monty reveals a goat, the remaining door is most likely a car.

X ∨ Y means EITHER X or Y occurs. P(X ∨ Y) = P(X) + P(Y) if X and Y are mutually exclusive (this is a probability theory axiom). All four of our outcomes are mutually exclusive (they CANNOT occur at the same time).

That's equivalent to providing a table with all possible outcomes of a roll of two dice (2, 3, 4, 5, 6, 7, 8, 9, 10, 11, and 12) and saying that they are all equally likely just because each outcome has one entry in the table, except what you have done is the logical inverse.
The example of the dice is combining multiple outcomes and pretending they are one - you are taking one possibility and branching it on a variable that has no effect on your outcome: the door that Monty picks if you picked the car to start with.
If you pick the car to begin with, the number of the door that Monty picks has no effect on your outcome. To be more precise, the number of the door that Monty picks NEVER affects your outcome. If you want to keep the Monty column, you should replace the numbers with the word GOAT and then get rid of all of the duplicate entries, and the table will then represent the probabilities correctly.

Redraw the entire truth table with branches instead of separate rows for each possible outcome. Drawn this way, there are three starting points (CGG, GCG, GGC) and 12 outcomes.

For each starting point, write "1/3" above it. That is the probability of it occurring, since each is equally likely. Step down each node, and for each one, multiply the previous denominator by the total number of branches that could have been taken at the last node. So, for example, you'd have written 1/3 above CGG, and for each of the three branches coming from it (door 1, door 2, door 3), you'd have 1/9 above it. You'll soon see that in the "Monty" column, when he has no choice about what door he could have picked, you'll have a 1/9 above the node, but when he could choose from two doors, there will be a 1/18 over each. (This assumes that his choice of the two doors is random. If it isn't, it doesn't matter, because the probabilities above each choice will sum to 1/9, even if they aren't equal.)

Proceed down each branch this way to the end, but don't branch on choosing "switch" or "don't switch." Since we want to see the results if we had picked either "switch" or "don't switch" in every possible situation, just write down the results as if you had picked "switch." We'll logically NOT the results later to simulate picking "don't switch".

When you finish the last column, you'll see that not every outcome has the same probability of occurring. Some of the outcomes will have probabilities of 1/9, and there will be outcomes that have probabilities of 1/18, because there was an extra decision branch involved in Monty picking the door.

Finally, sum up the probabilities of each outcome. "Win" will be 2/3, and "lose" will be 1/3. Obviously, if we logically NOT all the results to represent picking "don't switch" each time, the results invert, so "lose" has 2/3 probability and "win" has 1/3 probability.
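The weighted branch tree above can be enumerated in code rather than drawn by hand. A sketch using exact fractions (the 1/9 and 1/18 leaf weights fall out of the loop automatically, and with three doors "switch loses" is exactly "stay wins"):

```python
from fractions import Fraction

def outcomes():
    """Enumerate the branch tree with exact weights: car placement
    (1/3 each), the player's pick (1/3 each), then Monty's forced
    or 50/50 choice among the goat doors he may open."""
    for car in range(3):
        for pick in range(3):
            goats = [d for d in range(3) if d != pick and d != car]
            for opened in goats:
                weight = Fraction(1, 9) / len(goats)  # 1/9 or 1/18 per leaf
                yield weight, pick == car  # True when staying wins

p_stay_win = sum(w for w, stay_wins in outcomes() if stay_wins)
p_switch_win = sum(w for w, stay_wins in outcomes() if not stay_wins)
print(p_stay_win, p_switch_win)  # 1/3 and 2/3 exactly
```

Because `Fraction` keeps the arithmetic exact, the 2/3 and 1/3 results are proven by exhaustion rather than approximated.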

Branching on each decision fixes the "problem" of a truth table like yours making it look like each outcome is equally probable.

No, the four possibilities here are not equally likely. If the initial pick is random, then the probability that case 1 occurs is 1/3, the probability of case 2 is 1/3, and the probability that EITHER case 3 or case 4 occurs is 1/3.

Start with the initial case: you choose from 3 doors, 1 of which has a car and 2 of which have goats behind them. Now, suppose Monty just opens all the doors on the spot, revealing whether you won or not. What's the probability that you chose the car? 1/3rd. It has to be, only 1 door out of three had the car.

Next step, you make the same choice. Monty opens a door but doesn't give you the option of changing your selection. Now, what's the probability of your winning? You made the same choice from the same three doors, so it is still 1/3 - Monty opening a door changed nothing about your original pick.

However, as a sometime game show contestant, I know you have to take into account one fact that is left out of the classical form of this problem.

WHEN YOU ARE ON A GAME SHOW, YOU ONLY GET ONE ATTEMPT!

Hehe, oh, and there's a bigger fact that is left out of the classical form of this problem, one that was revealed when someone asked Monty Hall himself what he thought of the eponymous probability problem.

The problem assumes that Hall always offers you the choice to switch, but this was not the case! He did not necessarily have to give you the choice (which kinda makes that part of the game show boring), and in fact said that he mostly only offered the choice to switch when the player had chosen the correct door, in order to lure them away from it!

So in the math problem version, switching is the best choice (by a factor of 2 even), while in the real-world version, staying was the better choice (by some unknown factor, but maybe a lot more than 2 depending on how evil Mr. Hall was).
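How much worse switching gets depends entirely on the host's offer policy, which the classical problem never specifies. Here's a sketch with hypothetical offer probabilities (the 0.9 and 0.1 values are made up for illustration, not Hall's actual behavior):

```python
import random

def evil_monty(p_offer_if_right=0.9, p_offer_if_wrong=0.1, trials=100_000):
    """An 'evil' host who is more likely to offer the switch when the
    contestant has already picked the car. Returns the win rate of a
    contestant who switches whenever offered."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)
        pick = random.randrange(3)
        right = (pick == car)
        offered = random.random() < (p_offer_if_right if right
                                     else p_offer_if_wrong)
        if offered:
            opened = random.choice([d for d in range(3)
                                    if d not in (pick, car)])
            pick = next(d for d in range(3) if d not in (pick, opened))
        wins += (pick == car)
    return wins / trials

print(evil_monty())      # the eager switcher wins only about 10% of the time
print(evil_monty(0, 0))  # no offers ever: staying wins about 1/3
```

With these made-up numbers the always-switcher wins 0.1 * 1/3 + 0.1 * 2/3 = 10% of the time, while a stubborn stayer keeps the baseline 1/3 - so against this host, switching on demand is roughly three times worse, just as the comment suggests.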