Eventually, it will work. You'll be able to wear a camera that will automatically recognize someone walking towards you, and an earpiece that will relay who that person is and maybe something about him. None of the technologies required to make this work are hard; it's just a matter of getting the error rate down low enough for it to be a useful system. And there have been a number of recent research results and news stories that illustrate what this new world might look like.

The police want this sort of system. MORIS is an iris-scanning technology that several police forces in the U.S. are using. The next step is the face-scanning glasses that the Brazilian police claim they will be wearing at the 2014 World Cup.

A small camera fitted to the glasses can capture 400 facial images per second and send them to a central computer database storing up to 13 million faces.

The system can compare biometric data at 46,000 points on a face and will immediately signal any matches to known criminals or people wanted by police.

In the future, this sort of thing won't be limited to the police. Facebook has recently embarked on a major photo tagging project, and already has the largest collection of identified photographs in the world outside of a government. Researchers at Carnegie Mellon University have combined the public part of that database with a camera and face-recognition software to identify students on campus. (The paper fully describing their work is under review and not online yet, but slides describing the results can be found here.)

Of course, there are false positives -- as there are with any system like this. That's not a big deal if the application is a billboard with face-recognition serving different ads depending on the gender and age -- and eventually the identity -- of the person looking at it, but it is more problematic if the application is a legal one.
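Why false positives matter so much here is a base-rate question: when the people you're looking for are rare, even an accurate matcher generates mostly false alarms. Here's a minimal sketch of the arithmetic; the error rate and population figures are hypothetical, not taken from any real system.

```python
# Base-rate sketch: even an accurate face-matcher produces mostly
# false alarms when true matches are rare. All numbers hypothetical.

def expected_alarms(population, targets, true_positive_rate, false_positive_rate):
    """Return (true alarms, false alarms) for one pass over the population."""
    true_alarms = targets * true_positive_rate
    false_alarms = (population - targets) * false_positive_rate
    return true_alarms, false_alarms

# 100,000 faces scanned, 10 actually-wanted people, 99% hit rate,
# 0.1% false-positive rate.
tp, fp = expected_alarms(100_000, 10, 0.99, 0.001)
print(tp, fp)          # ~9.9 true alarms vs. ~100 false ones
print(fp / (tp + fp))  # ~0.91: roughly nine out of ten alarms flag innocents
```

With those (made-up) numbers, about nine out of ten people flagged are guilty of nothing more than looking like someone else -- exactly the failure mode in the driver's-license story below.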

In Boston, someone erroneously had his driver's license revoked:

It turned out Gass was flagged because he looks like another driver, not because his image was being used to create a fake identity. His driving privileges were returned but, he alleges in a lawsuit, only after 10 days of bureaucratic wrangling to prove he is who he says he is.

And apparently, he has company. Last year, the facial recognition system picked out more than 1,000 cases that resulted in State Police investigations, officials say. And some of those people are guilty of nothing more than looking like someone else. Not all go through the long process that Gass says he endured, but each must visit the Registry with proof of their identity.

[...]

At least 34 states are using such systems. They help authorities verify a person's claimed identity and track down people who have multiple licenses under different aliases, such as underage people wanting to buy alcohol, people with previous license suspensions, and people with criminal records trying to evade the law.

The problem is less with the system, and more with the guilty-until-proven-innocent way in which the system is used.

Kaprielian said the Registry gives drivers enough time to respond to the suspension letters and that it is the individual's "burden" to clear up any confusion. She added that protecting the public far outweighs any inconvenience Gass or anyone else might experience.

"A driver's license is not a matter of civil rights. It's not a right. It's a privilege," she said. "Yes, it is an inconvenience [to have to clear your name], but lots of people have their identities stolen, and that's an inconvenience, too."

Related, there's a system embedded in a pair of glasses that automatically analyzes and relays micro-facial expressions. The goal is to help autistic people who have trouble reading emotions, but you could easily imagine this sort of thing becoming common. And what happens when we start relying on these computerized systems and ignoring our own intuition?

Google detects malware in its search data, and alerts users. There's a lot that Google sees as a result of its unique and prominent position in the Internet. Some of it is going to be stuff they never considered. And while they use a lot of it to make money, it's good of them to give this one back to the Internet users.
http://googleonlinesecurity.blogspot.com/2011/07/...

Smuggling drugs in unwitting people's car trunks.
This attack works because 1) there's a database of keys available to lots of people, and 2) both the SENTRI system and the victims are predictable.
http://www.npr.org/2011/07/21/138548294/...

I second Matt's recommendation of Susan Landau's book "Surveillance or Security: The Risks Posed by New Wiretapping Technologies" (MIT Press, 2011). It's an excellent discussion of the security and politics of wiretapping.
http://www.amazon.com/exec/obidos/ASIN/0262015307/...

ShareMeNot is a Firefox add-on for preventing tracking from third-party buttons (like the Facebook "Like" button or the Google "+1" button) until the user actually chooses to interact with them. That is, ShareMeNot doesn't disable/remove these buttons completely. Rather, it allows them to render on the page, but prevents the cookies from being sent until the user actually clicks on them, at which point ShareMeNot releases the cookies and the user gets the desired behavior (i.e., they can Like or +1 the page).
http://sharemenot.cs.washington.edu/
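The per-request decision ShareMeNot makes can be sketched in a few lines. This is an illustrative simulation, not the add-on's actual code -- the real extension hooks Firefox's HTTP request pipeline, and the domain list and function names here are hypothetical.

```python
# Sketch of ShareMeNot's logic: let tracked buttons load, but strip
# identifying cookies until the user actually clicks one.
# All names hypothetical; the real add-on works inside Firefox.

TRACKED_DOMAINS = {"facebook.com", "plusone.google.com"}

def filter_request(url_domain, headers, user_clicked):
    """Return the headers that should actually be sent for this request."""
    if url_domain in TRACKED_DOMAINS and not user_clicked:
        # The button still renders, but without a cookie the tracker
        # can't tie this page view to a logged-in identity.
        return {k: v for k, v in headers.items() if k.lower() != "cookie"}
    return headers

headers = {"Cookie": "c_user=12345", "User-Agent": "Firefox"}
print(filter_request("facebook.com", headers, user_clicked=False))
# -> {'User-Agent': 'Firefox'}  (cookie stripped; button loads anonymously)
print(filter_request("facebook.com", headers, user_clicked=True))
# -> cookie included, so the Like/+1 actually works
```

The design point is that blocking the cookie, not the button, preserves the page's appearance and function while denying the tracker the identity linkage.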

Attacking embedded systems in prison doors.
http://m.wired.com/threatlevel/2011/07/...
This seems like a minor risk today; Stuxnet was a military-grade effort, and beyond the reach of your typical criminal organization. But that will change, as people study and learn from the reverse-engineered Stuxnet code and as hacking PLCs becomes more common. As we move from mechanical, or even electro-mechanical, systems to digital systems, and as we network those digital systems, this sort of vulnerability is only going to become more common.

The article is in the context of the big Facebook lawsuit, but the part about identifying people by their writing style is interesting.
http://www.nytimes.com/2011/07/24/opinion/sunday/...
It seems reasonable that we have a linguistic fingerprint, although 1) there are far fewer of them than finger fingerprints, and 2) they're easier to fake. It's probably not much of a stretch to take software that "identifies bundles of linguistic features, hundreds in all" and use the data to automatically modify my writing to look like someone else's.
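To make the "bundles of linguistic features" idea concrete, here's a toy fingerprint: a handful of function-word frequencies plus sentence and word lengths. The real systems use hundreds of features; this sketch is mine, not the software the article describes, and the feature choices are illustrative only.

```python
# Toy stylometric fingerprint: reduce a text to a small bundle of
# linguistic features. Real systems use hundreds; these are illustrative.
import re
from collections import Counter

FUNCTION_WORDS = ["the", "of", "and", "to", "that", "it", "is", "but"]

def style_fingerprint(text):
    words = re.findall(r"[a-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    counts = Counter(words)
    total = len(words) or 1
    features = {w: counts[w] / total for w in FUNCTION_WORDS}
    features["avg_sentence_len"] = total / max(len(sentences), 1)
    features["avg_word_len"] = sum(map(len, words)) / total
    return features

def distance(a, b):
    """Simple L1 distance between two fingerprints; smaller = more similar."""
    return sum(abs(a[k] - b[k]) for k in a)
```

Note how fakeable this is: nothing stops a rewriting tool from nudging a text's function-word ratios toward someone else's fingerprint, which is exactly the countermeasure suggested above.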

Two items on hacking lotteries. The first is about someone who figured out how to spot winners in a scratch-off tic-tac-toe style game, and a daily draw style game where the expected payout can exceed the ticket price. The second is about someone who has won the lottery four times, with speculation that she had advance knowledge of where and when certain jackpot-winning scratch-off tickets would be sold.
http://www.wired.com/wiredscience/2011/07/...
http://www.scribd.com/doc/60495831/...

Freakonomics asks: "Why has there been such a spike in hacking recently? Or is it merely a function of us paying closer attention and of institutions being more open about reporting security breaches?"

They posted five answers, including mine:

The apparent recent hacking epidemic is more a function of news reporting than an actual epidemic. Like shark attacks or school violence, natural fluctuations in data become press epidemics, as more reporters write about more events, and more people read about them. Just because the average person reads more articles about more events doesn't mean that there are more events -- just more articles.

Hacking for fun -- like LulzSec -- has been around for decades. It's where hacking started, before criminals discovered the Internet in the 1990s. Criminal hacking for profit -- like the Citibank hack -- has been around for over a decade. International espionage existed for millennia before the Internet, and has never taken a holiday.

The past several months have brought us a string of newsworthy hacking incidents. First there was the hacking group Anonymous, and its hacktivism attacks as a response to the pressure to interdict contributions to Julian Assange's legal defense fund and the torture of Bradley Manning. Then there was the probably espionage-related attack against RSA, Inc. and its authentication token -- made more newsworthy because of the bungling of the disclosure by the company -- and the subsequent attack against Lockheed Martin. And finally, there were the very public attacks against Sony, which became the company to attack simply because everyone else was attacking it, and the public hacktivism by LulzSec.

None of this is new. None of this is unprecedented. To a security professional, most of it isn't even interesting. And while national intelligence organizations and some criminal groups are organized, hacker groups like Anonymous and LulzSec are much more informal. Despite the impression we get from movies, there is no organization. There's no membership, there are no dues, there is no initiation. It's just a bunch of guys. You too can join Anonymous -- just hack something, and claim you're a member. That's probably what the members of Anonymous arrested in Turkey were: 32 people who just decided to use that name.

It's not that things are getting worse; it's that things were always this bad. To a lot of security professionals, the value of some of these groups is to graphically illustrate what we've been saying for years: organizations need to beef up their security against a wide variety of threats. But the recent news epidemic also illustrates how safe the Internet is: news articles are the only contact most of us have had with any of these attacks.

Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <http://www.schneier.com/crypto-gram.html>. Back issues are also available at that URL.

Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.

CRYPTO-GRAM is written by Bruce Schneier. Schneier is the author of the best sellers "Schneier on Security," "Beyond Fear," "Secrets and Lies," and "Applied Cryptography," and an inventor of the Blowfish, Twofish, Threefish, Helix, Phelix, and Skein algorithms. He is the Chief Security Technology Officer of BT BCSG, and is on the Board of Directors of the Electronic Privacy Information Center (EPIC). He is a frequent writer and lecturer on security topics. See <http://www.schneier.com>.

Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of BT.