Tag: identification

With the Department of Homeland Security constantly spinning out new projects and programs (plus re-branded old ones) to investigate you, me, and the kitchen sink, it’s sometimes hard to keep up. But I was intrigued by a report that behavior detection officers are getting another look from the Transportation Security Administration. Behavior detection is the unproven and so far highly unsuccessful program (Rittgers, Harper) premised on the idea that telltale cues can reliably and cost-effectively indicate intent to do harm at airports.

But there’s a new behavior detection program already underway. Or is it interrogation?

Due to a bottleneck at the magnetometers in one concourse of the San Francisco airport (no strip-search machines!), I recently had the chance to briefly interview a Transportation Security Administration agent about a new security technique he was implementing. As each passenger reached him, he would begin to examine the traveler’s documentation and simultaneously ask the person’s last name. He confirmed to me that the purpose was to detect people who did not immediately, easily, and accurately respond. In thousands of interactions, he would quickly and naturally learn to detect obfuscation on the part of anyone carrying an ID that does not have the last name they usually use.

As a way of helping to confirm identity, it’s a straightforward and sensible technique. Almost everyone knows his or her last name, and quickly and easily repeats it. The average TSA agent with some level of experience will fluently detect people who do not quickly and easily repeat the name on the identity card they carry. The examination is done quickly. This epistemetric check (of a “something-you-know” identifier—see my book, Identity Crisis) occurs during the brief time that the documents are already getting visual examination.

Some people will not repeat their name consistent with custom, of course. The hard of hearing, speakers of foreign languages, people who are very nervous, people who have speech or other communication impediments, and another group of sufferers—recently married women—may exhibit “suspicious” failure to recite their recently changed surnames. Some of these anomalies TSA agents will quickly and easily dismiss as non-suspicious. Others they won’t, and in marginal cases they might use non-suspicious indicia like ethnicity or rudeness to adjudge someone “suspicious.”

The question of whether these false positives are a problem depends on the sanction that attaches to suspicion. If a stutterer runs a gauntlet at the airport each time he or she fails to rattle off a name, the cost of the technique grows compared to the value of catching … not the small number of people who travel on false identification, but the extremely small number of people who travel on false identification so as to menace air transportation.
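The arithmetic behind that worry is the classic base-rate problem: when the condition being screened for is vanishingly rare, even a fairly accurate check produces almost nothing but false positives. A quick sketch, using invented numbers (none of these figures come from the TSA):

```python
# Base-rate sketch with hypothetical numbers -- all three inputs are invented.
base_rate = 1e-7       # fraction of travelers using false ID to menace aviation
sensitivity = 0.9      # chance the name-check flags a genuine impostor
false_alarm = 0.01     # chance it flags an innocent traveler (stutterers, new surnames)

# Bayes' rule: probability that a flagged traveler is actually a threat.
p_flagged = sensitivity * base_rate + false_alarm * (1 - base_rate)
p_threat_given_flag = sensitivity * base_rate / p_flagged

print(f"{p_threat_given_flag:.6%} of flagged travelers are real threats")
```

With these assumed inputs, far fewer than one flagged traveler in ten thousand is a genuine threat; nearly everyone stopped is a false positive, so the technique’s cost falls almost entirely on innocent travelers.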

We used this and closely related techniques, such as asking a person’s address or the DMV office where a license was issued, at the bar where I worked in college. It did pretty well at ferreting out people carrying their older friends’ IDs. Part of the reason it worked well is that our expert doormen could quickly escalate to further inquiry, dismissing their own suspicions or denying entry to the bar very quickly. The cost of getting it wrong was to deny a person entry to the bar and sometimes possession of a license. These are relatively small costs to college students, unlike the many hours in time-costs to a traveler wrongly held up at the airport. According to my interview, suspicion generated this way at the airport requires a call to a supervisor, but I did not learn whether secondary search is standard procedure or whether cases are handled some other way.

TSA agents are not doormen at bars, of course, and the subjects they are examining are not college kids out to get their drink on. These are government agents examining citizens, residents, and visitors to the United States as they travel for business and pleasure, often at high cost in dollars and time. The stakes are higher, and when the government uses a security technique like this, a layer of constitutional considerations joins the practical issues and security analysis.

I see three major legal issues with this new technique: Fourth Amendment search and seizure, the Fifth Amendment right against self-incrimination, and Due Process. When questioning joins an ID check at the airport, it’s a deepening of a search that is already constitutionally suspect. The Fifth Amendment issues are interesting because travelers are being asked to confess through their demeanor whether they are lying or telling the truth. That would seem to cross a Fifth Amendment line: the rule against forced self-incrimination. The Due Process issues are serious and fairly straightforward. When a TSA screener makes his or her judgment that a person is not responding consistent with custom and is therefore “suspicious,” these judgment calls allow the screeners to import their prejudices. Record-keeping about suspicion generated using this technique should determine whether administration of this epistemetric check violates constitutional due process in its application.

In its constant effort to ferret out terrorist attacks on air transportation, the TSA is mustering all its imagination. Its programs raise scores of risk management issues, they create constitutional problems, and they are a challenge to our tradition of constitutionally limited government. The threat that a person will use false identification to access a plane, defeating an otherwise working watch-list system, to execute some attack is utterly small. At what cost in dollars and American values do we attack that tiny threat?

The founding problem is the impetuous placement of federal government agents in the role of securing domestic passenger aviation. There are areas where government is integral to securing airports, airlines, and all the rest of the country—foreign intelligence and developing leads about criminal plots, for example—but the day-to-day responsibility for securing infrastructure like airports and airplanes should be the responsibility of its owners.

If the TSA were to go away, air security measures might be similar in many respects, but they would be conducted by organizations that must keep travelers happy and safe for their living. The TSA hasn’t anything like private airports’ and airlines’ incentives to balance security with convenience, privacy, cost-savings, and all the other dimensions of a satisfactory travel experience. Asking people their names at airport security checkpoints is an interesting technique, and not an ineffective one, but it should probably be scrapped because it provides so little security at a relatively great cost.

I’ve emphasized in the past that a national ID requirement—for travel, for work, whatever the case—would exclude the indigent from rungs on the ladder.

If you don’t know the story of the homeless man whose golden radio voice got him a second chance, you should. But, as the New York Daily News reports, his long-awaited reunion with his mother has been delayed while he proves his identity so he can fly.

A land of freedom doesn’t put paperwork requirements between a man on the rebound and a long-awaited reunion with his mother.

This month at Cato Unbound, political scientist James C. Scott joins us in a discussion of his landmark book Seeing Like a State. His lead essay “The Trouble with the View from Above” gets readers up to speed and reviews some of the key themes of the book. Here’s an excerpt:

State naming practices and local, customary naming practices are strikingly different. Each set of practices is designed to make the human and physical landscape legible, by sharply identifying a unique individual, a household, or a singular geographic feature. Yet they are each devised by very distinct agents for whom the purposes of identification are radically different. Purely local, customary practices, as we shall see, achieve a level of precision and clarity—often with impressive economy—perfectly suited to the needs of knowledgeable locals. State naming practices are, by contrast, constructed to guide an official “stranger” in unambiguously identifying persons and places, not just in a single locality, but in many localities using standardized administrative techniques.

To follow the progress of state-making is, among other things, to trace the elaboration and application of novel systems which name and classify places, roads, people, and, above all, property. These state projects of legibility overlay, and often supersede, local practices. Where local practices persist, they are typically relevant to a narrower and narrower range of interaction within the confines of a face-to-face community.

Local knowledge both empowers and constrains – it allows and/or encourages some social practices, while making others more difficult. The progress of state power, meanwhile, depends on systematized, uniform knowledge of a wide area, with a loss of local particularity and the knowledge that goes with it. Seeing like a state has costs, in other words.

Over the next couple of weeks, we’ll be joined by discussants Donald Boudreaux, Brad DeLong, and Timothy Lee, each of whom will have a chance to ask Scott about his work, discuss its significance, and relate it to their own thinking about states, markets, and societies.

I have always regarded standard-setting organizations as serious players who take care to keep the work of establishing uniformity in products and protocols slightly boring. But a press release from the American National Standards Institute (ANSI) may cause me to reassess.

[T]he Intelligence Reform and Terrorism Prevention Act of 2004 (IRTPA) and the REAL ID Act of 2005 require verification of identity prior to the issuance of birth certificates and driver’s licenses / ID cards, respectively. However, the IRTPA regulations have not yet been released even in draft form and the REAL ID regulations do not provide practical guidance on how to corroborate a claim of identity under different circumstances.

Folks, REAL ID repealed the identity security provisions in the Intelligence Reform and Terrorism Prevention Act. (It’s a good bet that regulations for a repealed law aren’t going to move out of draft form for a very long time, eh?) And REAL ID does not require verification of identity prior to issuance of birth certificates. What could that even mean?! “Hey you—little baby—let me see some ID before I issue you your birth certificate.”

The release repeats the tired mantra that 9/11 terrorists got U.S. identity documents—“some by fraud.” The 9/11 Commission dedicated three-quarters of a page to its identity recommendations—out of 400 substantive pages—and neither the commission nor anyone since has shown how denying people U.S. identity documents would prevent terrorism.

Are there needs for identity standards? Of course. And there are a lot of projects in a lot of places working on that. If an organization doesn’t know the law, and doesn’t know how the subject matter it’s dealing with functions in society, I don’t know how it could possibly be relied on to set appropriate standards.

ANSI should take a look at this subgroup and see if its work is actually competent. Judging by this press release, it’s not.

Last night I spoke at “The Little Idea,” a mini-lecture series launched in New York by Ari Melber of The Nation and now starting up here in D.C., on the incredibly civilized premise that, instead of some interminable panel that culminates in a series of audience monologues-disguised-as-questions, it’s much more appealing to have a speaker give a ten-minute spiel, sort of as a prompt for discussion, and then chat with the crowd over drinks.

I’d sketched out a rather longer version of my remarks in advance just to make sure I had my main ideas clear, and so I’ll post them here, as a sort of preview of a rather longer and more formal paper on 21st century surveillance and privacy that I’m working on. Since ten-minute talks don’t accommodate footnotes very well, I should note that I’m drawing for a lot of these ideas on the excellent work of legal scholars Lawrence Lessig and Daniel Solove (relevant papers at the links). Anyway, the expanded version of my talk after the jump:

Since this is supposed to be an event where the drinking is at least as important as the talking, I want to begin with a story about booze—the story of a guy named Roy Olmstead. Back in the days of Prohibition, Roy Olmstead was the youngest lieutenant on the Seattle police force. He spent a lot of his time busting liquor bootleggers, and in the course of his duties, he had two epiphanies. First, the local rum runners were disorganized—they needed a smart kingpin who’d run the operation like a business. Second, and more importantly, he realized liquor smuggling paid a lot better than police work.

So Roy Olmstead decided to change careers, and it turned out he was a natural. Within a few years he had remarried to a British debutante, bought a big white mansion, and even ran his own radio station—which he used to signal his ships, smuggling hooch down from Canada, via coded messages hidden in broadcasts of children’s bedtime stories. He did retain enough of his old ethos, though, that he forbade his men from carrying guns. The local press called him the Bootleg King of Puget Sound, and his parties were the hottest ticket in town.

Roy’s success did not go unnoticed, of course, and soon enough the feds were after him using their own clever high-tech method: wiretapping. It was so new that they didn’t think they needed to get a court warrant to listen in on phone conversations, and so when the hammer came down, Roy Olmstead challenged those wiretaps in a case that went all the way to the Supreme Court—Olmstead v. U.S.

The court had to decide whether these warrantless wiretaps had violated the Fourth Amendment “right of the people to be secure in their persons, houses, papers, and effects against unreasonable searches and seizures.” But when the court looked at how a “search” had traditionally been defined, they saw that it was tied to the common law tort of trespass. Originally, that was supposed to be your remedy if you thought your rights had been violated, and a warrant was a kind of shield against a trespass lawsuit. So the majority didn’t see any problem: “There was no search,” they wrote, “there was no seizure.” Because a search was when the cops came on to your property, and a seizure was when they took your stuff. This was no more a search than if the police had walked by on the sidewalk and seen Roy unpacking a crate of whiskey through his living room window: It was just another kind of non-invasive observation.

So Olmstead went to jail, and came out a dedicated evangelist for Christian Science. It wasn’t until the year after Olmstead died, in 1967, that the Court finally changed its mind in a case called Katz v. U.S.: No, they said, the Fourth Amendment protects people and not places, and so instead of looking at property we’re going to look at your reasonable expectation of privacy, and on that understanding, wiretaps are a problem after all.

So that’s a little history lesson—great, so what? Well, we’re having our own debate about surveillance as Congress considers not just reauthorization of some expiring Patriot Act powers, but also reform of the larger post-9/11 surveillance state, including last year’s incredibly broad amendments to the Foreign Intelligence Surveillance Act. And I see legislators and pundits repeating two related types of mistakes—and these are really conceptual mistakes, not legal mistakes—that we can now, with the benefit of hindsight, more easily recognize in the logic of Olmstead: One is a mistake about technology; the other is a mistake about the value of privacy.

First, the technology mistake. The property rule they used in Olmstead was founded on an assumption about the technological constraints on observation. The goal of the Fourth Amendment was to preserve a certain kind of balance between individual autonomy and state power. The mechanism for achieving that goal was a rule that established a particular trigger or tripwire that would, in a sense, activate the courts when that boundary was crossed in order to maintain the balance. Establishing trespass as the trigger made sense when the sphere of intimate communication was coextensive with the boundaries of your private property. But when technology decoupled those two things, keeping the rule the same no longer preserved the balance, the underlying goal, in the same way, because suddenly you could gather information that once required trespass without hitting that property tripwire.

The second and less obvious error has to do with a conception of the value of privacy, and a corresponding idea of what a privacy harm looks like. You could call the Olmstead court’s theory “Privacy as Seclusion,” where the paradigmatic violation is the jackboot busting down your door and disturbing the peace of your home. Wiretapping didn’t look like that, and so in one sense it was less intrusive—invisible, even. In another sense, it was more intrusive because it was invisible: Police could listen to your private conversations for months at a time, with you none the wiser. The Katz court finally understood this; you could call their theory Privacy as Secrecy, where the harm is not intrusion but disclosure.

But there’s an even less obvious potential harm here. If police didn’t need a warrant, everyone who made a phone call would know that the government could listen in whenever it felt like it. Wiretapping is expensive and labor-intensive enough that, realistically, police can only gather information about a few people at a time. But if further technological change were to remove that constraint, then the knowledge of the permanent possibility of surveillance starts having subtle effects on people’s behavior—if you’ve seen the movie The Lives of Others you can see an extreme case of an ecology of constant suspicion—and that persists whether or not you’re actually under surveillance. To put it in terms familiar to Washingtonians: Imagine if your conversations had to be “on the record” all the time. Borrowing from Michel Foucault, we can say the privacy harm here is not (primarily) invasion or disclosure but discipline. This idea is even embedded in our language: When we say we want to control and discipline these police powers, we talk about the need for over-sight and super-vision, which are etymologically basically the same word as sur-veillance.

Move one more level from the individual and concrete to the abstract and social harms, and you’ve got the problem (or at least the mixed blessing) of what I’ll call legibility. The idea here is that the longer term possibilities of state control—the kinds of power that are even conceivable—are determined in the modern world by the kind and quantity of information the modern state has, not about discrete individuals, but about populations. So again, to reach back a few decades, the idea that maybe it would be convenient to round up all the Americans of Japanese ancestry—or some other group—and put them in internment camps is just not even on the conceptual menu unless you have a preexisting informational capacity to rapidly filter and locate your population that way.

Now, when we talk about our First Amendment right to free speech, we understand it has a certain dual character: That there’s an individual right grounded in the equal dignity of free citizens that’s violated whenever I’m prohibited from expressing my views. But also a common or collective good that is an important structural precondition of democracy. As a citizen subject to democratic laws, I have a vested interest in the freedom of political discourse whether or not I personally want to say–or even listen to–controversial speech. Looking at the incredible scope of documented intelligence abuses from the 60s and 70s, we can add that I have an interest in knowing whether government officials are trying to silence or intimidate inconvenient journalists, activists, or even legislators. Censorship and arrest are blunt tactics I can see and protest; blackmail or a calculated leak that brings public disgrace are not so obvious. As legal scholar Bill Stuntz has argued, the Founders understood the structural value of the Fourth Amendment as a complement to the First, because it is very hard to make it a crime to pray the wrong way or to discuss radical politics if the police can’t arbitrarily see what people are doing or writing in their homes.

Now consider how we think about our own contemporary innovations in search technology. The marketing copy claims PATRIOT and its offspring “update” investigative powers for the information age—but what we’re trying to do is stretch our traditional rules and oversight mechanisms to accommodate search tools as radically novel now as wiretapping was in the 20s. On the traditional model, you want information about a target’s communications and conduct, so you ask a judge to approve a method of surveillance, using standards that depend on how intrusive the method is and how secret and sensitive the information is. Constrained by legal rulings from a very different technological environment, this model assumes that information held by third parties—like your phone or banking or credit card information—gets very little protection, since it’s not really “secret” anymore. And the sensitivity of all that information is evaluated in isolation, not in terms of the story that might emerge from linking together all the traces we now inevitably leave in the datasphere every day.

The new surveillance typically seeks to observe information about conduct and communications in order to identify targets. That may mean using voiceprint analysis to pull matches for a particular target’s voice or a sufficiently unusual regional dialect in a certain area. It may mean content analysis to flag e-mails or voice conversations containing known terrorist code phrases. It may mean social graph analysis to reidentify targets who have changed venues by their calling patterns. If you and a bunch of your friends on Facebook all decide to use fake names when you sign up for Twitter, I can still reidentify you, given sufficient computing power and strong algorithms, by mapping the shape of the connections between you—a kind of social fingerprinting. It can involve predictive analysis based on powerful electronic “classifiers” that extract subtle patterns of travel or communication or purchases common to past terrorists in order to write their own algorithms for detecting potential ones.
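To make the social-fingerprinting idea concrete, here is a toy sketch of my own (invented names, deliberately tiny graph): match pseudonymous accounts to known identities using nothing but each node’s degree and its neighbors’ degrees. Real reidentification research uses far more powerful graph-matching algorithms, but the principle is the same.

```python
# Toy "social fingerprinting": link pseudonyms to known identities by the
# shape of their connections alone. All names and graphs are invented.
from collections import defaultdict

def degree_signature(edges):
    """Map each node to a crude structural fingerprint:
    its own degree plus the sorted degrees of its neighbors."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    return {n: (len(nbrs), tuple(sorted(len(adj[m]) for m in nbrs)))
            for n, nbrs in adj.items()}

# The same friendship structure under real names and under pseudonyms.
facebook = [("alice", "bob"), ("alice", "carol"), ("bob", "carol"), ("carol", "dave")]
twitter  = [("x1", "x2"), ("x1", "x3"), ("x2", "x3"), ("x3", "x4")]

fb_sig, tw_sig = degree_signature(facebook), degree_signature(twitter)

# Link a pseudonym to every real identity whose fingerprint matches.
matches = {p: [n for n, s in fb_sig.items() if s == sig]
           for p, sig in tw_sig.items()}
print(matches)  # x3/x4 match carol/dave uniquely; x1 and x2 remain ambiguous
```

Even on this four-person graph, two of the four pseudonyms are pinned down uniquely; with richer fingerprints (timestamps, message volumes, mutual friends), ambiguity collapses quickly as the graph grows.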

Bracket for the moment whether we think some or all of these methods are wise. It should be crystal clear that a method of oversight designed for up front review and authorization of target-based surveillance is going to be totally inadequate as a safeguard for these new methods. It will either forbid them completely or be absent from the parts of the process where the dangers to privacy exist. In practice what we’ve done is shift the burden of privacy protection to so-called “minimization” procedures that are meant to archive or at least anonymize data about innocent people. But those procedures have themselves been rendered obsolete by technologies of retrieval and reidentification: No sufficiently large data set is truly anonymous.

And realize the size of the data sets we’re talking about. The FBI’s Investigative Data Warehouse holds at least 1.5 billion records, a number that is growing fast, from an array of private and government sector sources—some presumably obtained using National Security Letters and Patriot 215 orders, some by other means. Those NSLs are issued by the tens of thousands each year, mostly for information about Americans. As of 2006, we know “some intelligence sources”—probably NSA’s—were growing at a rate of 4 petabytes (that’s 4 million gigabytes) each month. Within about five years, NSA’s archive is expected to be measured in yottabytes—if you want to picture one yottabyte, take the sum total of all data on the Internet—every web page, audio file, and video—and multiply it by 2,000. At that point they will have to make up a new word for the next largest unit of data. As J. Edgar Hoover understood all too well, just having that information is a form of power. He wasn’t the most feared man in Washington for decades because he necessarily had something on everyone—though he had a lot—but because he had so much that you really couldn’t be sure what he had on you.
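For what it’s worth, the storage arithmetic can be sanity-checked directly (assuming decimal SI units, which is my assumption, not necessarily the sources’):

```python
# Checking the unit conversions above; decimal SI prefixes assumed.
GB, PB, YB = 10**9, 10**15, 10**24

assert 4 * PB == 4_000_000 * GB   # 4 petabytes is indeed 4 million gigabytes

# At a constant 4 PB/month, accumulating a single yottabyte would take
# tens of millions of years -- so a yottabyte-scale archive within five
# years implies the collection rate itself is expected to grow enormously.
years_to_one_yottabyte = YB / (4 * PB * 12)
print(f"about {years_to_one_yottabyte:.0e} years at 4 PB/month")
```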

There is, to be sure, a lot to be said against the expansion of surveillance powers over the past eight years from a more conventional civil liberties perspective. But we also need to be aware that if we’re not attuned to the way new technologies may avoid our old tripwires, if we only think of privacy in terms of certain familiar, paradigmatic violations—the boot in the door—then like the Olmstead court, we may render ourselves blind to equally serious threats that don’t fit our mental picture of a privacy harm.

If we’re going to avoid this, we need to attune ourselves to the ways modern surveillance is qualitatively different from past search tools, even if words like “wiretap” and “subpoena” remain the same. And we’re going to need to stop thinking only in terms of isolated violations of individual rights and begin considering the systemic and structural effects of the architectures of surveillance we’re constructing.

Constitutional rules often comport with common sense. The Fourth Amendment’s search and seizure clause — so burdensome to law enforcement, some argue — requires officials to look for evidence of crime where they think they’ll find it and not elsewhere. Common sense.

So it is with an Indiana Court of Appeals ruling that the state’s voter ID law violates the equal protection clause of the state’s constitution. The law requires in-person voters to show ID, but makes no attempt to verify the identities of absentee voters. The U.S. Supreme Court upheld the law against a recent challenge, but the Indiana court struck it down based on a broader protection in the state constitution’s equal protection clause.

Think what you will on the legal merits. (I generally appreciate courts breathing independent life into their state constitutions.) What is interesting here is that the result is imbued with constitutional common sense.

Requiring ID at polling stations would have a marginal effect on vote fraud because it makes it harder to impersonate a voter or manufacture a vote-qualified identity. But the risk of in-person voter fraud is very low compared to absentee ballot fraud, which the Indiana law did not touch. The Indiana voter ID law was tantamount to caulking windows to keep out the cold but leaving the front door open. Because of the disproportionate effect on different classes of voters, the court struck it down.

Voter fraud will continue to be a hot issue, and states should continue to tune the balances they strike between voter access and vote integrity. My concern is that the issue might boil over and produce national ID proposals, as we have seen in the past.

Congress took a major step forward on the PASS ID secure identification legislation.

There was a markup of PASS ID in the Homeland Security and Governmental Affairs Committee. It’s a step – not sure how major.

PASS ID is critical national security legislation

People who have studied identity-based security know that knowing people’s identities doesn’t secure against serious threats, so this is an exaggeration.

that will break a long-standing stalemate with state governments

Thirteen states have barred themselves by law from implementing REAL ID, the national ID law. DHS hopes that changing the name and offering them money will change their minds.

that has prevented the implementation of a critical 9/11 recommendation to establish national standards for driver’s licenses.

The 9/11 Commission devoted three-quarters of a page to identity security – out of 400+ substantive pages. That’s more of a throwaway recommendation or afterthought. False identification wasn’t a modus operandi in the 9/11 attacks, and the 9/11 Commission didn’t explain how identity would defeat future attacks. (Also, using “critical” twice in the same sentence is a stylistic no-no.)

No, it said “travel documents are as important as weapons.” It was talking about passports and visas, not drivers’ licenses. Oh – and it was exaggerating.

but progress has stalled towards securing identification documents under the top-down, proscriptive approach of the REAL ID Act

True, rather than following top-down prescription, states have set their own policies to increase driver’s license security. It’s not necessarily needed, but if they want to they can, and they don’t need federal conscription of their DMVs to do it.

– an approach that has led thirteen states to enact legislation prohibiting compliance with the Act.

“… which is why we’re trying to get it passed again with a different name!”

Rather than a continuing stalemate with the states,

Non-compliant states stared Secretary Chertoff down when he threatened to disrupt their residents’ air travel, and they can do the same to Secretary Napolitano.