Posted
by
timothy
on Tuesday September 23, 2008 @11:02AM
from the when-dowsing-meets-voight-kampff dept.

holy_calamity writes "New Scientist reports that the Department of Homeland Security recently tested something called Future Attribute Screening Technologies (FAST) — a battery of sensors that determine whether someone is a security threat from a distance. Sensors look at facial expressions, body heat and can measure pulse and breathing rate from a distance. In trials using 140 volunteers those told to act suspicious were detected with 'about 78% accuracy on mal-intent detection, and 80% on deception,' says a DHS spokesman."

Why yes, yes there is. It can randomly spurt out false positives, subjecting people to random stops and questioning. It can still miss the real terrorists who are doing their damnedest to look normal and unthreatening. It can further the "show us your papers" society we've been building and seem so enamored of. It can supply the mindless thugs at security checkpoints an ironclad "the machine says so" excuse to hassle harried, irritated travelers. It can further the "security theatre" in all aspects of everyday life. In short, it can do nothing positive.

He'll still show signs of stress, though. Just because you think it's right to get into a fight doesn't mean that the adrenaline doesn't start pumping.

The real problem with this is that the number of wrongdoers is small while the pool for false positives is huge. If 5% of people have some intent that should be picked up by this, then at the quoted 78% detection rate about 4% of all people screened will be correctly flagged. At that rate, they'd have to have a false-positive rate of less than 5% just to reach the point where half the people it says have ill intent actually do. What are the chances that it's going to have a false-positive rate less than 5%?

And that's assuming that 1/20 people have some intent that would need to be picked up by this, while the actual rate is almost certainly smaller. Millions of people fly on airplanes every year, yet every year only a handful try something stupid. This is security theater at its finest.
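A quick sanity check on that arithmetic, as a Python sketch (the 5% base rate is the comment's assumption; 78% is the detection rate quoted in the summary):

```python
# At what false-positive rate does the detector reach 50% precision,
# i.e. half the people it flags actually have ill intent? By Bayes' rule:
#   precision = sens*p / (sens*p + fpr*(1 - p))
# Setting precision = 0.5 and solving gives fpr = sens*p / (1 - p).
base_rate = 0.05      # assumed fraction of travellers with ill intent
sensitivity = 0.78    # "mal-intent detection" rate quoted by DHS

max_fpr = sensitivity * base_rate / (1 - base_rate)
print(f"False-positive rate must be below {max_fpr:.1%} for 50% precision")
```

So the "less than 5%" figure is about right: anything above roughly 4.1% false positives and most of the people the machine flags are innocent.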

You've hit that on the head. About 200,000 people go through Chicago O'Hare every day, and that's just a single (though large) airport. And so far, zero terrorist attacks have been launched out of O'Hare. The odds that a person this machine flags is innocent are ridiculously high, even if it has high specificity.

Also, aside from the raw statistics of the thing, there's another compounding factor that makes this even more useless*: it's rather simple for terrorists to game the system with dry runs.

Terrorist organizations already tend to use people not on our radar for attacks, so if they get pulled out of line on a dry-run, we won't have anything on them and it'll look like yet another false positive. Our young jihadi goes through the line with a bunch of his buddies, and everyone who gets pulled out of line doesn't go through the next time. Once you've discovered the group of people who aren't detected by the terrorist detector/profilers/crystal ball, the hot run can proceed with little fear of getting caught.

* For the stated goal, of course, not the goal of Security Theater for which a magical terrorist detector is great.

And then the DHLS agents can all be Commander Braxtons, zipping through the timelines, arresting or aborting people, or arresting and coitus-interrupting the would-be parents, all stating, like the Vidal Sassoon commercial (and THEY'LL tell two friends, and so on, and so on, and so on...):

"I am Commander Braxton, of the DHLS Timeship Aeon. You are being arrested for crimes you WILL commit...", or,

"I am Commander Braxton, of the DHLS TimePos ICOS. You sex act is being disrupted to delay or prevent arrival of

What kind of teenagers STEAL a dead elk from a bunch of guys with guns, no less? I mean, an elk weighs what, 800 lbs? These are some well-prepared kids if they can run off with fresh kills like that. Were they waiting in the woods in camo or something?

Actually, the real purpose is to pull out those kids who are nervous about leaving home for the first time going to college or something. That way they can scare them into not turning into one of those dirty liberal elitist intellectuals that would dare question the authority of the system.

Because nothing turns a kid into a conservative like a bad run-in with the cops, right?

Actually, better yet, don't tell them it's a dry run ahead of time. Have them go through security to be inside by a specific time. Then call them, and say "It's a go" or "Nevermind, enjoy your trip."
After a couple of "Nevermind" runs and not getting pulled over, you should know who to send...

If the terrorists know it's a dry run, then their responses will be different -- among other things, if they are caught, there will be no evidence against them and they will have plausible deniability.

Still, I can't see this as having a low false positive rate.

- Guy goes home to his beloved but too-oft left alone wife; he's nervous over the obvious.
- Gal had too much to drink last night and woke up with someone... unusual. Worried about a few things that could really change her life.
- [insert various nervousness-inducing mental conditions here] sufferer forgot to take his/her medicine.
- First-time flier.

It's about as accurate as a lie detector. You know, because we all know lie detectors are so perfect. It's not like people know how to game them or anything. The "you can't hide your true intentions; your body will know" part is a 100% fallacy and guaranteed to not be accurate.

Paradox of the False Positive

Statisticians speak of something called the Paradox of the False Positive. Here's how that works: imagine that you've got a disease that strikes one in a million people, and a test for the disease that's 99% accurate. You administer the test to a million people, and it will be positive for around 10,000 of them - because for every hundred people, it will be wrong once (that's what 99% accurate means). Yet, statistically, we know that there's only one infected person in the entire sample. That means that your "99% accurate" test is wrong 9,999 times out of 10,000!

Terrorism is a lot less common than one in a million and automated "tests" for terrorism - data-mined conclusions drawn from transactions, Oyster cards, bank transfers, travel schedules, etc - are a lot less accurate than 99%. That means practically every person who is branded a terrorist by our data-mining efforts is innocent.

In other words, in the effort to find the terrorist needles in our haystacks, we're just making much bigger haystacks.
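The one-in-a-million example works out as follows (a sketch; the 99% figure is taken as both sensitivity and specificity, as the quoted passage does):

```python
population = 1_000_000
prevalence = 1 / 1_000_000   # one in a million has the disease
accuracy = 0.99              # the test gives the right answer 99% of the time

true_positives = population * prevalence * accuracy               # ~1 real case found
false_positives = population * (1 - prevalence) * (1 - accuracy)  # ~10,000 healthy people flagged
precision = true_positives / (true_positives + false_positives)
print(f"{false_positives:,.0f} false positives for {true_positives:.2f} real cases")
print(f"A positive result is correct only {precision:.3%} of the time")
```

Roughly 10,000 healthy people get flagged alongside the single real case, so a positive result is almost always wrong.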

You don't get to understand the statistics of rare events by intuition. It's something that has to be learned, through formal and informal instruction. If there's one thing the government and our educational institutions could do to keep us safer, it's this: teach us how statistics works. They should drill it into us with the same vigor with which they approached convincing us that property values would rise forever, make it the subject of reality TV shows and infuse every corner of our news and politics with it. Without an adequate grasp of these concepts, no one can ever tell for sure if he or she is safe.

That means that your "99% accurate" test is wrong 9,999 times out of 10,000! [snip] If there's one thing the government and our educational institutions could do to keep us safer, it's this: teach us how statistics works

They can start by teaching Cory Doctorow how to count. The hypothetical test is wrong 9,999 times out of 1,000,000. Assuming, of course, that the test only produces false positives, and not also false negatives. That's what 99% accurate means.

Out of the 10,000 people indicated as having the disease, only one did. If the purpose of the test is to find those with the disease, then it's wrong 9,999 times out of 10,000 when it reports someone has it.

Our lovely machine that is currently 78% accurate on 'mal-intent' (sic) detection is going to incorrectly tag 22 people out of every 100 as having mal-intent. With the gp's quoted figure of 200,000 people traveling through O'Hare every day, that means potentially 44,000 people a day incorrectly tagged as terrorists. Not one of them actually a terrorist, just someone caught as a false positive.

One airport. One day. 44,000 people whose lives have just been screwed over in some manner. And no guarantee that the one terrorist who might show up in every billion people is going to be caught by the machine.
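Checking the arithmetic (200,000 daily travellers is the earlier poster's figure; treating "78% accurate" as a flat 22% false-positive rate is this comment's own reading, not something the article establishes):

```python
travellers_per_day = 200_000   # rough daily traffic at O'Hare, per the earlier comment
false_positive_rate = 0.22     # 1 - 0.78, reading "78% accurate" as the comment does

flagged_per_day = travellers_per_day * false_positive_rate
print(f"{flagged_per_day:,.0f} innocent people flagged per day at one airport")
```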

If the attackers knew it was a dry run, then they would not exhibit the signs of stress that the machine detects, therefore all would test negative.

If the attackers did NOT know it was a dry run, then they must also carry attack devices with them through the screening process, and be at risk of detection of the devices or by an observant screener or secondary screening.

Plus, they must either carry out the attack, making their future use moot, or have the attack called off at the last minute.

Orwell got lots of easy stuff right (people like authority...a call for a leader starts with a desire to follow), but he missed the boat on just how easy it has become (and is becoming!) to use computers to not merely threaten to monitor anybody at any time, but to monitor everybody all the time.

but he missed the boat on just how easy it has become (and is becoming!) to use computers to not merely threaten to monitor anybody at any time, but to monitor everybody all the time.

Given that he published it in 1949, he can be forgiven for not foreseeing modern computers.

In terms of showing how pervasive and evil a surveillance society can be, he's still highly relevant.

Pointing out just how eerie something like an automated "future crimes" concept is, is hardly just sarcastic bitching -- I'm betting an awful lot of people read that summary and thought "holy crap!!", I sure as hell did. Because the sheer idea of being detained or hassled because some computer suggested you might be stressed is nuts. It's scary to think this could give them any grounds to act on more than a very cursory level -- I mean, talk about your unreasonable searches. Telling people they need to get the rubber glove treatment because some computer program identified them as stressed is lunacy.

Time was when one would have thought it impossible for the USA to degenerate into a place where this would be happening. Now, it's hard to think of how one would stop it. Spending billions of dollars to make all of the scary stuff in Orwell come true is frightening to some of us.

FWIW, I'm glad she exists. We need more voices that are not afraid to point out that the Friedman meme -- that laissez-faire capitalism spreads human freedom -- may not be accurate. It was heresy up until just a few years ago to question that popular opinion. Anything that upsets the true believers is fine with me (btw, I'm not a fan of Klein's).

Oh he saw it perfectly. Orwell's protagonist was caught by a complicit Human agent, the shopkeeper. Orwell's message wasn't about fearing machines and their overwatching, but fearing the culture that their use necessarily created. Who watches the watchers? Who prevents abuse?

The biggest problem with this is that between 78% and 80% of people told to act suspiciously can fool the system into believing they intend to commit a crime. Logically, those same people should be able to act in the opposite fashion and fool the system into believing they don't. I mean, really, what do they think the logic of their analysis represents?

Apparently, excuses for legal pre-emptive arrests of unsavoury people are the new focus, much like the no-fly lists. A list of politically undesirable people who will be arrested, searched, interrogated, and transferred to a prison facility while their identities are confirmed (which I am sure will take no longer than 24 to 48 hours). All this will be done at a range of designated choke points, like train and subway stations and maybe even toll booths.

Adjust your political alignment or you will find you, your family, and your friends subject to random humiliations, violent arrests, searches including sexual groping, and destruction of private property; of course, you will be released, and it will all be done with a masquerade of legality. I believe some journalists have already experienced exactly this type of pre-emptive arrest at the RNC convention; I don't believe they were particularly impressed with the concept.

Great. So now every time I return from a business trip to Thailand where I had relations with young men of questionable age, and I call my wife from the customs line the machine will catch my guilty face and my increased heart rate from trying to pass a lie off to her. And I'll be stuck in the airport for a good six hours under arrest.

There isn't necessarily a "fight" scenario. The individuals could very easily train themselves into regarding their acts as no different from a postman delivering a parcel. Assuming, of course, the person IS the one with the hostile intent - if the system is remotely effective, groups could be expected to migrate to unwitting "volunteers". Of course, such systems may be jammable, depending on how they work. It doesn't matter if vulnerabilities appear to be theoretical - organizations that are willing to, well, burn money (literally at extreme temperatures and pressures) are likely to find exploits because a populace deluded into thinking they are safe would logically be easier to manipulate and control by fear.

It's easy to move heat around, so a simple thermal camera can be tricked into thinking a person looks normal. But that only works if the camera is simple: the heat has to go somewhere, so some point will read much hotter than expected, and any software designed to check for absurd anomalies would flag such a point as impossible.

Facial expressions would logically require a course at an acting school or a few minutes with a bottle of latex and a blow-drier to create a fake facial skin. Criminals would not require the skill of Hollywood. They would only need to fool automatic face recognition and facial expression recognition software. At worst, they'd also need to fool low-res, low frame-rate CCTV operators at range. Most LARP groups have experience at producing very realistic face masks. Learning from them would produce someone who could (if they wanted to) be totally secure against CCTV systems. Many ethnic profilers could logically be fooled with similar methods.

As for false positives -- anyone who is ill will show higher-than-normal heat, as will anyone who has gone jogging or exercising. Anyone caught in a hot car on snarled-up roads will be hot and show an angry, hostile expression. Many in New England are permanently in a state of anger. So, in all probability, 90% of all city-dwellers and New Englanders will be classed as potential terrorists. Of course, I've always been somewhat suspicious of Philadelphia cheese, but that seems to be taking the complaint a bit too far.

but using this to help narrow who to watch would be what this should be used for.

I can't disagree more strongly. When the flood of false positives starts coming in, they'll quickly start dismissing them. As another poster pointed out, Chicago O'Hare alone has 200,000 people go through it every day; when several thousand of them are flagged as suspicious, you can bet that security will stop caring pretty quickly.

approach to fighting terrorism. Your idea will not work. Here is why it won't work. (One or more of the following may apply to your particular idea, and it may have other flaws which used to vary from state to state before a bad federal law was passed.)

( ) Terrorists can easily play the system to go unnoticed
( ) Too many legitimate travellers would be affected
( ) No one will be able to find the guy or collect the money
( ) It is defenseless against brute force attacks
( ) It will stop terrorism for two weeks and then we'll be stuck with it
( ) Travellers will not put up with it
( ) Airlines will not put up with it
( ) The FBI will not put up with it
( ) Requires too much cooperation from terrorists
( ) Requires immediate total cooperation from everybody at once
( ) Many airlines cannot afford to lose business or alienate potential customers
( ) Terrorists don't care about collateral damage
( ) Anyone could anonymously destroy anyone else's career or business

( ) Ideas similar to yours are easy to come up with, yet none have ever been shown practical
( ) Any scheme based on opt-out is unacceptable
( ) Toothpaste should not be the subject of legislation
( ) Blacklists suck
( ) Whitelists suck
( ) We should be able to talk about bombs without being detained
( ) Countermeasures should not involve wire fraud or credit card fraud
( ) Countermeasures should not involve sabotage of public networks
( ) Countermeasures must work if phased in gradually
( ) Your first bag should be free
( ) Why should we have to trust you and your henchmen?
( ) Incompatibility with open source or open source licenses
( ) Feel-good measures do nothing to solve the problem
( ) Temporary/one-time visas are cumbersome
( ) I don't want the government reading my email
( ) Killing them that way is not slow and painful enough

Furthermore, this is what I think about you:

( ) Sorry dude, but I don't think it would work.
( ) This is a stupid idea, and you're a stupid person for suggesting it.
( ) Nice try, assh0le! I'm going to find out where you live and burn your house down!

Absolutely untrue. Suicide bombers fail as often as they do (in Israel, Iraq, Sri Lanka,...) because they're usually bug-eyed, sweating, twitching, and frequently high. Highly trained operatives might be reliably calm, but the run-of-the-mill terrorist is usually pretty obvious, although they can still often kill people before someone can stop them.

Particularly, a religious fanatic will be in a state of peace and righteousness-filled euphoria because he is finally "fulfilling his destiny" in life and just hours away from being rewarded by his God for being a faithful "Holy Warrior".

I've got to disagree there. I don't want to praise the machine - This thing is nuts. And I agree that, just before detonation, a fanatic may experience a sense of euphoric peace. But, when going through security, it's a toss up between beautiful martyrdom and failure resulting in a good long stretch in Guantanamo Bay being questioned unmercifully by the infidels. A good lot of training may help them deal with that stress. And their faith may provide them with confidence that their gods wouldn't allow t

It can randomly spurt out false positives, subjecting people to random stops and questioning. It can still miss the real terrorists who are doing their damnedest to look normal and unthreatening.

Sheesh! I've never seen a bunch of geeks so opposed to developing an immature technology before! Perhaps a toning down of the pessimism would be in order, and perhaps we may see some improvements in our understanding of human behaviour, and the programs built to understand it.

Sheesh! I've never seen a bunch of geeks so opposed to developing an immature technology before! Perhaps a toning down of the pessimism would be in order, and perhaps we may see some improvements in our understanding of human behaviour, and the programs built to understand it.

It's not that they oppose the development of the technology. It's that they're fed up with privacy invasions and random harassment and see this device as a means of propagating both. Even if this thing threw up 50% correct red-flags, you'd see objections.

Sheesh! I've never seen a bunch of geeks so opposed to developing an immature technology before! Perhaps a toning down of the pessimism would be in order, and perhaps we may see some improvements in our understanding of human behaviour, and the programs built to understand it.

It isn't the idea of developing an immature technology that upsets people. It is our well-justified fear of the government deploying immature technology. I'd rather not be subjected to a public beta-test of a thoughtcrime detector.

So this device was 80% successful at picking up suspicious activity from PEOPLE WHO WERE ASKED TO LOOK SUSPICIOUS.

Wow, amazing! Something any police officer who has served a couple of years would be able to do with 100% (or nearly so) accuracy.

What is missing is an assay of how many people it would flag if they were told to behave as if they were SCARED. You know... scared of being flagged for behaving abnormally, strip-searched, tortured, and never seeing their families again. Something tells me that the rate of false positives on this machine will overshadow the rate of false negatives by a very large margin.

Even known terrorist groups are now using "non-traditional" people as attackers, so either positive (i.e., "you look like a terrorist") or negative ("you don't look like a terrorist") profiling will cause too many false positives and negatives.

Second, it wouldn't be surprising to see people who aren't part of the "traditional" terrorist groups performing acts of terror for reasons unrelated to the political goals of groups like al-Qaida. In the US, it might be one of the "militias", while in Germany it mig

All we've got is a device which can spot normal people trying to be visibly "suspicious".

You are correct. From TFA:

Some subjects were told to act shifty, be evasive, deceptive and hostile. And many were detected.

It is absolutely ridiculous to think that they have produced any kind of test results that would indicate a functioning system. This is government and business at its absolute worst.

Not only is DHS trying their damnedest to become big brother, they are doing it in the most incompetent way possible.

This tech will never, ever work. All it can measure is physiological attributes. Correlation is not causation. Just because some percentage of people who are intending to commit a crime have certain physiological characteristics does not mean that anyone with those characteristics is a 'pre-criminal' and should be questioned. I weep for the future.

And even if, in some far-flung scenario, it did become functional it would still be illegal. It is invasion of privacy. Our thoughts and intentions are private. They mean nothing until we act on them. Human thought is vast and unlimited, part of our nature is boiling down the infinite array of ideas we have into action in the physical world where there are consequences. Everyone has the right to think whatever they want. When they act on it, then that action enters the territory of having (potentially bad) consequences.

What this evolves into is thought control and that is the end of liberty.

If I recall correctly, the last time I traveled to the USA, I had to fill out a form stating that the intent of my travel was not to kill the US president. People who create such forms would probably fund research on a "suspicious person detector".

Testing on my new device starts tomorrow. It has a remarkable 98% accuracy in identifying people told to dress completely in purple and sing "I Love You, You Love Me". Even at a distance. As long as the terrorists play along (and who wouldn't?), we'll win this war on terror in no time. And even if they don't, think of all the Barney impersonators we'll get off the streets. It's an everybody-wins scenario.

Yes, it does sound idiotic. My reaction was: ROFLcopter at the idea that you can successfully "tell people to act suspicious". Um, if it were possible for people to notice and control the aspects of themselves that make them look suspicious, others wouldn't be suspicious of those aspects in the first place!

Think about it: people become suspicious of others based on criteria X, Y, Z because meeting X, Y, Z reveals a higher probability of intent to cause harm. But anybody trying to cause harm will suppress any *controllable* sign that they are trying to cause harm before it's too late to stop them. So the only remaining criteria people use in determining whether they'll be suspicious of someone are those that are very difficult if not impossible to control. As a bad example: someone will only look around to see if he's being watched (which looks suspicious) if he's about to do something objectionable (like picking a lock). But he can't suppress that, because then he takes the chance of someone noticing him picking the lock.

A better test would be to set up a scenario like a line at the airport where the screeners have to keep out dangerous items. Then, have a few of the participants try to smuggle items through, and get a huge reward if they succeed, while the screeners get the reward if smugglers don't succeed. Then, put a time limit on, so the screeners have to be judicious about who they check, so they only check the most suspicious. Oh, and make it double-blind as much as possible. Then, the people trying to smuggle will have the same incentive structure that real smugglers have, and thus will give off all the real-world signs of planning something objectionable.

What do those 78% and 80% mean, you ask? Let's look at The Fine Article:

Some subjects were told to act shifty, be evasive, deceptive and hostile. And many were detected.

Answer: it's a bad acting detector.

Seriously, a better test would be to ask test subjects to do something relevant such as, say, defeat the detector (duh!). If the subject fails, something unpleasant yet harmless happens: a device emits a startling noise and perhaps belches some smelly smoke. Imagine a grown-up version of the game Operation [youtube.com] (I hate that game). Better yet, have the subject carry the device on their person. The nature of the device would be demonstrated to the subject beforehand, just as a domestic animal is allowed to experience the shock from an electric fence to establish the proper respect for the deterrent.

The summary talks about the subjects being told to act suspicious. So, does someone who is told to act suspicious look any different from someone who is actually planning something nasty? I suppose it is difficult to find subjects who are unaware they are being observed, and yet also intent on doing something bad. Nevertheless, I'd hypothesize there might be significant, observable differences between the two groups.

You will always get these sorts of results with forced actions. If I made a happiness detector (via facial expressions), and told half of the group to smile, and the other half not to, I bet it would pick that up. Now, what if half the group were given personal responsibility toy, and the other half were given a cuddly teddy bear? I bet it wouldn't be accurate anymore...

A better test would be to give the group water bottles. Most of the group are given real water in bottles. A few of the group are give

Wouldn't "suspicious" also be highly subjective? Many times that's more reflective on the prejudices of the observer. So let's take a programmer who's been up all night trying to solve a problem. He's disheveled, unshaven, and probably unkempt. He's deep in thought and in his own world. He starts talking to himself about the problem. Is he suspicious?

Wouldn't "suspicious" also be highly subjective? Many times that's more reflective on the prejudices of the observer. So let's take a programmer who's been up all night trying to solve a problem. He's disheveled, unshaven, and probably unkempt. He's deep in thought and in his own world. He starts talking to himself about the problem. Is he suspicious?

Is he sitting on a park bench? Snot running down his nose, greasy fingers smearing shabby clothes?

In other words, 22% of the time it is wrong. Saying it's right 78% of the time is pure and simple market speak.

The interesting thing about this is that if people started to intentionally act suspicious, the numbers would become fudged and mostly meaningless. One way this could be accomplished is by standing around handing out complimentary eye patches and telling people it is Act Like a Pirate Day.

Most AIDS tests are 99%+ accurate at telling you that a person with HIV actually has HIV. They're also 99% accurate at saying a person who doesn't have HIV doesn't have HIV. It's the combination of those two facts plus "very few people in the general population have HIV" which makes mass one-time AIDS screenings a bad idea -- you successfully pull out the one guy in 100 who has HIV, then you throw in one false-positive bystander, and you end up adding 99% accurate + 99% accurate to get 50% accurate.

There are a heck of a lot less terrorists than 1% of the flying public.

There is a countermeasure, of course -- you use the magic machine not as a definitive test but as a screening mechanism. Know why we aggressively screen high-risk groups for AIDS? Because they're high risk -- if 1 out of every 4 screenees is known to be positive (not hard to reach with some populations), then the 99%/99% math adds up to better than 95%. Better news. (You then independently run a second test before you tell anyone they're positive. Just like you wouldn't immediately shoot anybody the machine said is a terrorist -- you'd just escalate the search, like subjecting them to a patdown or asking for permission to search their bags or what have you.)

So you could use the magic machine to, say, eliminate 75%, 90%, 99%, whatever of the search space before you go on to whatever your next level of screening is -- the whole flying rigamarole, for example. Concentrate the same amount of resources on searching 20 people a plane instead of 400. Less hassle for the vast majority of passengers, and less cursory examinations for the few who are searched.
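The prevalence effect described above can be sketched in a few lines of Python (the 99%/99% test and the 1% and 25% prevalence figures are the comment's numbers; "precision" here is the chance that a flagged person is a true positive):

```python
def precision(prevalence: float, sensitivity: float, specificity: float) -> float:
    """Probability that a person flagged by the test is a true positive (Bayes' rule)."""
    tp = prevalence * sensitivity
    fp = (1 - prevalence) * (1 - specificity)
    return tp / (tp + fp)

# General population at 1% prevalence: a 99%/99% test is a coin flip.
print(f"{precision(0.01, 0.99, 0.99):.1%}")   # 50.0%
# High-risk group at 25% prevalence: the same test is suddenly trustworthy.
print(f"{precision(0.25, 0.99, 0.99):.1%}")   # 97.1%
```

The test itself never changes; only the population it is pointed at does, which is exactly why screening-then-confirming works where mass one-shot testing fails.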

The quick here will notice that this is exactly the mechanism racial profiling works by -- we know a priori that the 3-year-old black kid and the 68-year-old white grandmother are not holding a bomb, ergo we move on to the 20-year-old Saudi who is merely extraordinarily unlikely to be holding a bomb. That would also let you lop a huge section of the search space off the top.

The difference between the magic machine and racial profiling is that racial profiling is politically radioactive, but the magic machine might be perceived as neutral. Whether you consider that a good or a bad thing is up to you. Hypothetically assuming that the machine achieves, oh, 80% negative readings for true negatives, many people might consider it an awfully nice thing to have 80% of the plane not have to take off their shoes or get pat down -- they could possibly get screened as non-invasively as having to answer two of those silly, routine questions.

(Of course, regardless of what we do, people will claim we're racially profiling. But that is a different issue.)

If you ever decide to do something as stupid as build an automatic terrorism detector, here's a math lesson you need to learn first. It's called "the paradox of the false positive," and it's a doozy.

Say you have a new disease, called Super-AIDS. Only one in a million people gets Super-AIDS. You develop a test for Super-AIDS that's 99 percent accurate. I mean, 99 percent of the time, it gives the correct result -- true if the subject is infected, and false if the subject is healthy. You give the test to a million people.

One in a million people have Super-AIDS. One in a hundred people that you test will generate a "false positive" -- the test will say he has Super-AIDS even though he doesn't. That's what "99 percent accurate" means: one percent wrong.

What's one percent of one million?

1,000,000/100 = 10,000

One in a million people has Super-AIDS. If you test a million random people, you'll probably only find one case of real Super-AIDS. But your test won't identify one person as having Super-AIDS. It will identify 10,000 people as having it.
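As a quick sanity check on the worked example above (a sketch only, using the post's own made-up Super-AIDS numbers):

```python
# One-in-a-million disease, a test that is wrong 1% of the time
# ("99 percent accurate"), given to a million random people.
population = 1_000_000
prevalence = 1 / 1_000_000
false_positive_rate = 0.01

true_cases = round(population * prevalence)            # the one real case
false_alarms = round(population * false_positive_rate) # healthy people flagged

print(true_cases, false_alarms)  # 1 10000
```

Ten thousand flags for one real case: the test's error rate dwarfs the rarity of the thing it is looking for.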

That's the paradox of the false positive. When you try to find something really rare, your test's accuracy has to match the rarity of the thing you're looking for. If you're trying to point at a single pixel on your screen, a sharp pencil is a good pointer: the pencil-tip is a lot smaller (more accurate) than the pixels. But a pencil-tip is no good at pointing at a single atom in your screen. For that, you need a pointer -- a test -- that's one atom wide or less at the tip.

This is the paradox of the false positive, and here's how it applies to terrorism:

Terrorists are really rare. In a city of twenty million like New York, there might be one or two terrorists. Maybe ten of them at the outside. 10/20,000,000 = 0.00005 percent. One twenty-thousandth of a percent.

That's pretty rare all right. Now, say you've got some software that can sift through all the bank-records, or toll-pass records, or public transit records, or phone-call records in the city and catch terrorists 99 percent of the time.

In a pool of twenty million people, a 99 percent accurate test will identify two hundred thousand people as being terrorists. But only ten of them are terrorists. To catch ten bad guys, you have to haul in and investigate two hundred thousand innocent people.
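The city-wide version of the same arithmetic, sketched with the post's own round numbers (10 terrorists in 20 million is an illustration, not a statistic):

```python
# 20 million people, ~10 terrorists, and sifting software with a 1%
# false-positive rate. Assume, generously, that all 10 real terrorists
# are caught.
city = 20_000_000
terrorists = 10
fp_rate = 0.01

flagged_innocent = round((city - terrorists) * fp_rate)  # ~200,000
flagged_total = terrorists + flagged_innocent

print(flagged_innocent)                 # 200000
print(terrorists / flagged_total)       # ~1 in 20,000 flags is a real hit
```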

In other news today, Homeland Security has detained the entire Chili Cook-off Carnival event after their new FAST software registered positive hits on EVERYTHING there, including some domesticated animals and a squirrel with three legs.

"In trials using 140 volunteers those told to act suspicious were detected with 'about 78% accuracy on mal-intent detection, and 80% on deception,' says a DHS spokesman."

None of that matters -- what's important is the false-positive rate, i.e., the proportion of people with no malicious intent who get flagged. If it's as high as 1%, the system will be pretty much unworkable.

So if I'm running and about to lie to my trainer or doctor about how far I ran today, my pulse rate, breathing rate, and body temperature are up. I'm thinking about deceiving someone. So I guess that means it's now a crime to lie to your trainer according to the DHS?

I was just about to finish up my patent application for a device that could accurately detect a human pretending to be a monkey 80% of the time when a human test subject is asked in advance to pretend to be a monkey.

Just an FYI: the accuracy number doesn't directly tell you the false-negative or false-positive rates. It's a measure not just of how many true positives the test gets (that's the sensitivity) but also of how many true negatives (that's the specificity), in that the test should both identify the "suspicious" correctly and correctly identify the non-"suspicious".

You can't go from the accuracy directly back to the specificity and sensitivity, since it combines several measurements. The result, though, is highly dependent on the prevalence of "suspicious" people in the test -- that is, how often the thing you're trying to detect actually occurs.

I'm willing to bet that the prevalence they used in their testing is way, way higher than it would be in real life (probably 1/4 to 1/2 of the test subjects were "suspicious", while in real life the odds of a random person in an airport being a terrorist are more like 1 in a million on a bad day). That skews the accuracy measurement toward detecting the suspicious and understates the importance of correctly clearing the non-suspicious. The problem is that when you're dealing with something very rare, even if your specificity is very high, the odds that someone pulled out of line because the machine flagged them is in fact innocent are extremely high (over a 99% chance unless this machine is -very- specific) -- and if the test methodology doesn't weight specificity heavily, it will be even worse.
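The prevalence effect described above can be sketched numerically. Treating the quoted 78%/80% figures as sensitivity and specificity is an assumption for illustration -- the article only reports them as "accuracy" numbers:

```python
# Overall accuracy as a prevalence-weighted mix of sensitivity (true
# positives) and specificity (true negatives).
def accuracy(prevalence, sensitivity, specificity):
    return sensitivity * prevalence + specificity * (1 - prevalence)

sens, spec = 0.78, 0.80  # assumed roles for the DHS-quoted figures

half_suspicious = accuracy(0.5, sens, spec)   # lab-style trial, 50% "suspicious"
realistic = accuracy(1e-6, sens, spec)        # airport-style prevalence

# Positive predictive value at realistic prevalence: the chance that a
# flagged person actually has ill intent.
ppv_real = (sens * 1e-6) / (sens * 1e-6 + (1 - spec) * (1 - 1e-6))

print(half_suspicious, realistic, ppv_real)
```

The headline accuracy barely moves (roughly 0.79 vs 0.80), but the probability that a flag means anything collapses to a few in a million -- which is the commenter's point about test methodology.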

The device relies on the assumption that the physiology of people up to no good may be different than normal people.

And that may be true.

However, this'll be much more useful somewhere like an embassy or checkpoint than in an airport. In a sea of potentially hostile people, it's harder to pick out the ones who may actually do something. In a sea of basically docile people, it should be relatively simple to visually pick the nervous ones.

Awesome, now we have a great tool to accuse people with. How can anything with an accuracy of 78% be worth using? On a grading scale it's a C+. How many innocent people (22%) will be caught up in this mess? If the government is trying to create a rebellion by the people, then this is a perfect method.

How about hiring intelligent guards? Or people with common sense?

If we spent 10% of what we spend on this kind of crap on actually solving the real problems we face, we might actually get somewhere. But as long as we live in this ultra-paranoid world full of invisible terrorists, we'll never get the chance to overcome the real problems. What a shame and what a waste.

Sociopolitical fear is a strategy to push the population to the political right.

The old saw about a conservative being a liberal who's been mugged holds true; all you have to do is mug their minds and they'll cave in.

It's a sleight of mind in risk assessment: the real risks are automobiles, heart disease (i.e. a botched food system), botched health care, botched education, natural disasters, and crime/poverty. Well, everyday accidents too, but that's just natural selection. Terrorism barely registers as a risk by comparison.

When I am bored (standing in an endless lineup, waiting for a delayed flight, etc) I often look at my surroundings. I used to install video equipment, so I look at the installed video monitors and cameras.
Is noticing security cameras (and the quality of their installation) in an area suspicious?

I am a model railroader. Is it suspicious that I take pictures of trains and their environment so that I can build more accurate models?

I studied architecture for a time.
Is it suspicious that I spend a lot of time looking at (and sometimes photographing) interesting buildings?

Am I acting suspiciously when I notice a guard of some sort watching me do the above, and find myself curious as to how he might react to my perfectly harmless activities in these highly paranoid times?

Nothing. Just like there's nothing to stop the TSA from arresting someone with a phobia of flying (or crowded airports, or fascism...) on the grounds that they "look nervous". You didn't seriously think this had anything to do with catching terrorists, did you?