Bayesed and confused

So the Occupational Health department at MIT Medical has a doctor who specializes in animal encounters. I thought that was pretty cool. The official word from her is that the risk is low enough (given no visible symptoms of rabies in the raccoon, none in the dog, and no broken skin) that I shouldn't bother getting the shots.

Now let's say:

- I'd pay $2000 not to have the side effects I've seen described

- my life is worth an estimated $10M1

- P(raccoon was rabid | observed behavior) = 6% (apparently the behavior is perfectly normal, so this is just the average incidence of rabies in raccoons4)

- P(rabies | raccoon was rabid) = 0.1% (no broken skin, as far as I can tell)

Expected losses given treatment = 0.5 * $2k = $1000

Expected losses given no treatment = 0.001 * 0.06 * $10M = $600

Of course, this is close enough to be extremely sensitive to P(rabies | raccoon was rabid), which I don't actually know. I washed up and I don't think I even touched the creature, but if I missed a scratch, I'm in trouble. Calibrated confidence makes me somewhat iffy about the 0.1% estimate; if it's 5% instead, we're looking at $30k instead of $600, and I should berate the doctor until she gives me a shot.
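The arithmetic above, plus the break-even point, fits in a few lines of Python (a sketch using my estimates from the text; the break-even calculation is just solving for where the two expected losses are equal):

```python
# Back-of-envelope expected-loss comparison, using the numbers from the post.
COST_SIDE_EFFECTS = 2_000     # $ I'd pay to avoid the shots' side effects
P_SIDE_EFFECTS = 0.5          # assumed chance of experiencing them
VALUE_OF_LIFE = 10_000_000    # $10M (see footnote 1)
P_RABID = 0.06                # P(raccoon was rabid | observed behavior)

def loss_no_treatment(p_transmit):
    """Expected loss of skipping the shots, given P(rabies | raccoon was rabid)."""
    return p_transmit * P_RABID * VALUE_OF_LIFE

loss_treatment = P_SIDE_EFFECTS * COST_SIDE_EFFECTS  # $1000

print(loss_no_treatment(0.001))  # about $600: cheaper to skip the shots
print(loss_no_treatment(0.05))   # about $30k: get the shots

# Break-even transmission probability: where the two expected losses cross.
break_even = loss_treatment / (P_RABID * VALUE_OF_LIFE)
print(break_even)  # about 0.17% -- uncomfortably close to my 0.1% guess
```

So the decision flips if P(rabies | raccoon was rabid) is above roughly 0.17%, which is why the 0.1% estimate is doing so much work.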

But my estimate above also leaves out my social values, on which I effectively place a very high dollar value. One of these values might be phrased as "trust the experts or become an expert", and another as "don't whine". Wah, math is hard.

1It's tempting to say one's life is worth $∞; and indeed, if I had $X, and I had to pay $X to stay alive, I probably would, no matter how large $X was. But there's got to be a maximum dollar amount that comes into play when you're talking about a differential chance of death rather than a certainty: you can rationalize crossing the street, going snowmobiling, or whatever, because there's something that's "worth it"; and "it" is (probability of death) * (value of your life). In wrongful death lawsuits and engineering calculations, this is normally set to an actuarial estimate of your expected lifetime earnings.

4To justify the use of the name "Bayes", here is an application of Bayes' Rule: P(rabid | aggressive) = P(aggressive | rabid) P(rabid) / P(aggressive) = 0.06 * P(aggressive | rabid) / P(aggressive); my supposition that P(aggressive | rabid) ~= P(aggressive) doesn't actually seem very sensible when written out like that. If the ratio is even as high as 2, the decision flips the other way.
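The sensitivity to that likelihood ratio can be sketched directly (a hypothetical check, reusing the dollar figures from the main text):

```python
# Footnote 4's sensitivity check: the posterior P(rabid | aggressive)
# scales linearly with the likelihood ratio P(aggressive | rabid) / P(aggressive).
P_RABID_BASE = 0.06        # base rate of rabies in raccoons
P_TRANSMIT = 0.001         # assumed P(rabies | raccoon was rabid)
VALUE_OF_LIFE = 10_000_000
LOSS_TREATMENT = 1_000     # 0.5 * $2k from the main text

def posterior(likelihood_ratio):
    # Bayes' rule: P(rabid | aggressive) = LR * P(rabid)
    return likelihood_ratio * P_RABID_BASE

for lr in (1.0, 1.5, 2.0):
    loss_skip = P_TRANSMIT * posterior(lr) * VALUE_OF_LIFE
    verdict = "skip shots" if loss_skip < LOSS_TREATMENT else "get shots"
    print(f"LR={lr}: expected loss without treatment = ${loss_skip:.0f} -> {verdict}")
```

At LR = 1 the expected loss without treatment stays around $600 (skip the shots); at LR = 2 it climbs to around $1200, crossing the $1000 treatment cost, and the decision flips.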