Posted
by
samzenpus on Thursday April 04, 2013 @01:48AM
from the protect-ya-neck dept.

netbuzz writes "The questioner on Quora asks: 'When is the difference between 99% accuracy and 99.9% accuracy very important?' And the most popular answer provided cites an example familiar to all of you: service level agreements. However, the most entertaining reply comes from a computer science and mathematics student at the University of Texas, Alex Suchman. Here's his answer: 'When it can stop a Zombie Apocalypse.'"

Yes, but the other key item is the incidence of "false positives" and "false negatives". Both of these depend heavily on the prevalence of the disease in the general population in the first place. See the concept of sensitivity and specificity [wikipedia.org] for more details.
But the upshot is: a test that is 99% accurate (for both true positives and true negatives), at the zombie incidence rate shown, would have

a probability that a positive test result is a true positive of only 1/6 ≈ 16.7%,

whereas a test that is 99.9% accurate would have

a probability that a positive test result is a true positive of 2/3 ≈ 66.7%,

for the incidence of Zombies (Mad Human disease) given in that student's example.
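The arithmetic behind those figures is just Bayes' theorem. A minimal sketch, assuming (since the student's exact number isn't quoted here) a zombie incidence of 1 in 500, which is consistent with the 1/6 and 2/3 results above, and assuming the test's sensitivity and specificity are both equal to its stated "accuracy":

```python
def positive_predictive_value(prevalence, accuracy):
    """P(actually a zombie | test says zombie), assuming sensitivity
    and specificity both equal `accuracy`."""
    true_pos = prevalence * accuracy          # zombie, correctly flagged
    false_pos = (1 - prevalence) * (1 - accuracy)  # healthy, wrongly flagged
    return true_pos / (true_pos + false_pos)

prevalence = 1 / 500  # assumed incidence of Mad Human disease

print(positive_predictive_value(prevalence, 0.99))   # ~0.166, about 1/6
print(positive_predictive_value(prevalence, 0.999))  # ~0.667, about 2/3
```

Because healthy people vastly outnumber zombies, even a small false-positive rate swamps the true positives; cutting that rate tenfold is what moves the answer from 1/6 to 2/3.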

Alternatively, you could stop trying to be the arbiter of what is good and worthy and just indulge in the media you do enjoy. I'm very sorry* if you feel marginalised by those who have an interest in all things undead and shambling, but no-one's actually forcing you to watch The Walking Dead or Jersey Shore.

Just because people bite other people doesn't make them zombies. If they're not undead, they're not zombies.

You can't write a story about a world where some weird virus makes people want to bite each other's necks and drink their blood and say it's about vampires. It's about a weird virus that makes people want to bite each other's necks and drink their blood.

Actually, the Challenger disaster hinged on a different failure in statistics. Originally the SRB segments were mated with 2 O-rings. Inspection of the SRBs after launch revealed the O-rings were failing at a higher than expected rate. So to mitigate the risk they redesigned the system and... added a 3rd O-ring. The reasoning was that if a single O-ring had a (say) 1% chance of failure, then two would have a 0.01^2 = 0.01% chance of failure, and three would have a 0.01^3 = 0.0001% chance of failure.

Unfortunately, that reasoning only works when the failures are independent events. If a single event (like cold weather) can cause the failure of one O-ring, it can also cause the failure of the other O-rings, so that failure mode is not independent. And your chance of all three O-rings failing is closer to 1% instead of 0.0001%.

Same thing happened at the Fukushima nuclear plant. They had something like a dozen diesel generators under the theory that even if a few failed to start, it was highly unlikely that all would fail to start. They completely missed the possibility that a single common event could cause all the generators to fail the same way.