The incident, which illustrates the kind of damage many privacy-law advocates have long feared, is spurring legislators to take a new look at data privacy initiatives that died in the last session of Congress.

ChoicePoint, based in Atlanta, disclosed earlier this month that scammers accessed information on more than 145,000 consumers, including Social Security numbers and credit histories.

In a separate incident, thieves stole some of Science Applications International Corp.’s computers, which contained lists of SAIC shareholders, including their addresses, phone numbers, stock holdings and Social Security numbers.

Following requests from minority leadership last week, Sen. Arlen Specter, R-Pa., chairman of the Senate Judiciary Committee, said he would hold a hearing on the ChoicePoint incident.

Two of the Senate's leading champions of privacy rights, Patrick Leahy, D-Vt., and Dianne Feinstein, D-Calif., called for an investigation.

The Anti-Phishing Working Group now includes pharming on its web site. However, so far it has not changed its name to the “Anti-Phishing and Pharming Working Group” – which is definitely a good thing. Anyway, the site says “Pharming uses the same kind of spoofed sites, but uses malware/spyware to redirect users from real websites to the fraudulent sites (typically DNS hijacking). By hijacking the trusted brands of well-known banks, online retailers and credit card companies, phishers are able to convince recipients to respond to them.” I think it would be wise to add attacks on DNS itself to this definition.

The Group has posted its report for December 2004, and I give it FIVE STARS. It goes way beyond counting incidents and into analysing trends. Email phishing had a 24% month-to-month growth rate since August (how's that for a CAGR?). The number of brands attacked grew as well (expanding into new markets, too).

“The number of reported hijacked brands grew again to 55, including nine brands first reported this month, eight of them financial institutions. This brings the total number of brands that have reportedly been hijacked to 131 since the APWG began examining phishing trends and reporting findings in November of 2003.”

There is an analysis of hijacked brands by industry sector, as well as a sobering chart pointing to the international dimensions of the problem.

The report includes an examination of a sample malware attack – a significant contribution in helping people understand the attacks to which an identity system will be subjected.

One of the main goals of a unifying identity system for the Internet is to mitigate these attacks. But let's be clear. If it succeeds at this it will become the new prime target of Internet crime. It must be designed from the start to withstand such attacks, using technology flexible enough to evolve faster than that of the attackers.

Put another way, it can be neither an “expedient hack” nor an unchangeable monolith. I've just begun to understand how the metasystem characteristics we have been discussing relate to achieving the flexibility needed by a component which is under continuous and escalating attack. This, in turn, testifies to the wide applicability of the fifth and sixth laws of identity.

Sutter, Sutter County — Bowing to objections from some angry parents, the Brittan School District's board has decided to temporarily halt its practice of making students wear identification badges with tiny transmitters that tell teachers when pupils are in class.

InCom, a company in Sutter, had been testing a system designed to ease teachers’ attendance-taking by using radio signals beamed from identification badges worn by seventh- and eighth-graders.

The company said Tuesday at a school board meeting that it was ending the test, though the system had been turned off since a board meeting on Feb. 8 at which several parents, backed by the American Civil Liberties Union, said the badges violated their children's privacy rights.

“I'm disappointed we didn't have an opportunity to go through with this test,” said board member Russ Takata. “Anything to make a classroom or teacher more efficient needs to be looked at.”

InCom said it deleted the data collected from its testing, which began in January. (Read more here.)

“InCom is being flooded with e-mail messages and calls from schools administrators across the country that are interested in testing the product, but InCom has no new trials scheduled at this time.”

“Most of the schools that are contacting us with requests for pilots have already issued picture IDs to students, so that part of the program won't be a problem… InCom will recommend to any school it works with in the future that parents be made aware of the use of RFID before the pilot begins.”

“Making people aware” is better than just putting the tags around the kids’ necks. But it would be better to fully understand the Law of Control. And it would be really great if they turned their attention to the opportunities that would open up by embracing unidirectional identifiers for an application like this one, where public identities are not suitable (fourth law).

Further to the curse just discussed, Mark Wahl points us to last year's NIST Knowledge-based Authentication Symposium web site. Several presentations take on the issues of authenticating to an infrequently-used service with potentially public information.

It's happened to all of us: We sign up for some online account, choose a difficult-to-remember and hard-to-guess password, and are then presented with a “secret question” to answer. Twenty years ago, there was just one secret question: “What's your mother's maiden name?” Today, there are more: “What street did you grow up on?” “What's the name of your first pet?” “What's your favorite color?” And so on.

The point of all these questions is the same: a backup password. If you forget your password, the secret question can verify your identity so you can choose another password or have the site e-mail your current password to you. It's a great idea from a customer service perspective — a user is less likely to forget his first pet's name than some random password — but terrible for security. The answer to the secret question is much easier to guess than a good password, and the information is much more public. (I'll bet the name of my family's first pet is in some database somewhere.) And even worse, everybody seems to use the same series of secret questions.

The result is that the normal security protocol (passwords) falls back to a much less secure protocol (secret questions). And the security of the entire system suffers.

What can one do? My usual technique is to type a completely random answer — I madly slap at my keyboard for a few seconds — and then forget about it. This ensures that some attacker can't bypass my password and try to guess the answer to my secret question, but is pretty unpleasant if I forget my password. The one time this happened to me, I had to call the company to get my password and question reset. (Honestly, I don't remember how I authenticated myself to the customer service rep at the other end of the phone line.)
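The keyboard-slap technique can be done a little more rigorously with a cryptographically secure random generator, which gives the “answer” a known amount of entropy instead of whatever patterns mad typing happens to produce. A minimal sketch in Python (the function name is my own, not any real API):

```python
import secrets
import string

def random_secret_answer(length: int = 32) -> str:
    """Generate an unguessable 'answer' for a secret question.

    Same idea as slapping the keyboard, but drawn from a CSPRNG:
    roughly 6 bits of entropy per character with this alphabet.
    """
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

# Paste the result into the 'secret question' form, then forget it.
print(random_secret_answer())
```

As in the essay, the point is that the backup channel should be no weaker than the password itself; an answer like this one can never be looked up in a database somewhere.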

Which is maybe what should have happened in the first place. I like to think that if I forget my password, it should be really hard to gain access to my account. I want it to be so hard that an attacker can't possibly do it. I know this is a customer service issue, but it's a security issue too. And if the password is controlling access to something important — like my bank account — then the bypass mechanism should be harder, not easier.

Passwords have reached the end of their useful life. Today, they only work for low-security applications. The secret question is just one manifestation of that fact.

“If your point is that better identity management would prevent phishing and other end-user identity theft attacks, I agree. However most of the techniques described in the article point to the need for better security, such as firewalls, virus protection, and software updates, not the need for better identity management. The only way identity management would solve this problem is if you had to identify yourself in some secure way before you were able to use the internet, perhaps a global 802.1x network. I think that's still a little way off. :)”

I had said that Keefe's contention that the machines of unsuspecting consumers are being hijacked by sinister forces:

“… speaks directly to the urgency of the need for an identity system for the Internet: an identity system that people fully understand and are willing to buy into because it is designed in accordance with the laws of identity.”

Now I agree that fixing these problems requires better “firewalls, virus protection and software updates”. But what software is safe, and what isn't? Isn't identity required here – identity mechanisms that are understandable (i.e. in keeping with the sixth law, where the three foot channel between the computer and the individual's brain is a reliable one)? And exactly who should be allowed in through firewalls? So, solving this problem goes beyond ascertaining the identity of the computer user. It involves knowing the identity of organizations, and of the products they produce. It also includes various important intersections.

Multiple Intersecting Identities

As a user, for example, I should be able to access my contact list. Since I use Outlook for mail, Outlook should be able to access my contact list when I am using it. But some attachment I download through Outlook shouldn't be able to access it.

There are many identities that need to work together in a harmonious system if we want to nail this scenario – my identity as a user of a computer, Microsoft's identity as a supplier of the software I use, Outlook's identity as a specific Microsoft product, the identity of my Contact List, and that of some policy which hooks them all together. And we need the right ways to “reify” these identities so they are easily understood.
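To make the scenario concrete, here is a toy model of that policy: access to a resource is granted only when the user identity, the supplier identity and the product identity are all authorized together. Every name in it is illustrative – this is a sketch of the idea, not any real access-control API:

```python
from typing import NamedTuple

class Principal(NamedTuple):
    """The intersecting identities behind a single access request."""
    user: str       # identity of the person at the keyboard
    supplier: str   # identity of the software vendor
    product: str    # identity of the specific program

# Policy hooking the identities together: my contact list may be
# read by me, but only through Outlook.
POLICY = {
    "contact_list": {Principal("kim", "Microsoft", "Outlook")},
}

def can_access(principal: Principal, resource: str) -> bool:
    return principal in POLICY.get(resource, set())

outlook = Principal("kim", "Microsoft", "Outlook")
attachment = Principal("kim", "Unknown", "downloaded_attachment.exe")

print(can_access(outlook, "contact_list"))     # True
print(can_access(attachment, "contact_list"))  # False
```

Note that the attachment is refused even though the *user* identity is the same in both requests – it is the combination of identities, not any one of them, that the policy reifies.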

Specific is Good

The idea of having some “secure identity” before gaining access to the network won't in itself keep sinister forces at bay (identities can be stolen and purchased). The best way to protect a resource is by making it necessary to have not only “some identity”, but a very specific identity. Then the only way for a sinister actor to obtain access to the resource is to procure one of the very specific identities which are able to access the resource. Doing this requires knowing what the specific set of identities is. The combined effect is a very high barrier.

Extrapolating a bit further, we need to get to the point where the only way you can get to resources on internet machines is to have the very specific identities which open those very specific resources. This approach, combined with the security measures you talk about, is the only road to progress on these problems.

What stands in our way?

Outside of the enterprise, current identity systems are too hard to deploy. They are too hard to understand. And too hard to use. The different systems exist in silos, making everything harder still, and the number of silos is likely to increase. Many people feel the only way to get anything done quickly is turn protection off – maybe with the intent of turning it on later… But if you forget, there is no way to know what you've left undone or who can access what.

All of this needs to be fixed. At the center of everything is the construction of a unifying and easily used identity system.

Whispers of Probing Mind points out that the Brittan School District may be the first in California to use RFID tags for children, but not in the US or around the world:

November 18, 2004: Suburban Houston school district is tagging 28,000 students with RFID-equipped ID badges that are read when children get on and off school buses. The children's locations are automatically sent wirelessly to police and school administrators. School officials say the $180,000 system was enthusiastically supported by parents as a school safety measure. We're guessing the kids haven't yet hired ACLU or EFF lawyers.

In Japan, schoolkids were tagged with RFID chips on a larger scale.

July 12 2004: The rights and wrongs of RFID-chipping human beings have been debated since the tracking tags reached the technological mainstream. Now, school authorities in the Japanese city of Osaka have decided the benefits outweigh the disadvantages and will be chipping children in one primary school.

The tags will be read by readers installed in school gates and other key locations to track the kids’ movements. The chips will be put onto kids’ schoolbags, name tags or clothing in one Wakayama prefecture school. Denmark's Legoland introduced a similar scheme last month to stop young children going astray.

Again, from my point of view there are two issues here – consent (law 1) and omnidirectionality (law 4).

James Kobelius argues that by sending your child to a school you consent to the way that school is run, and that informing parents about use of RFID is basically a formality. This argument touches on the relationship between societies, individuals and their children's schools – issues which are far beyond the scope of this blog. My point here is simply that one way or another, consent is required, or there will be a ruckus which undermines the success of the system. For the system to succeed, consent should be as clear as possible. In the California incident, a number of parents did not feel they had given their consent, so the consent was not clear. I am very curious to see whether the system will recover from this.

Given my interests, I generalize from this whole experience: When trying to build a successful system of identity for the Internet, let's all agree to make this kind of dynamic a thing of the past by ensuring that above all, the users of the system are in firm control of it.

In terms of omnidirectionality, I very much suspect that the children in all these cases wear their tags home. And that the tags are omnidirectional, capable of being energized by any compatible reader employed by any stranger. I believe we need to nip this in the bud. If children are to be tagged, the devices employed should refuse to respond except to readers run by parties known and approved by their parents. The identity of the party monitoring the children is at least as important as the identity of the children. We are capable of building such systems for use in protecting our children, and don't have to fall back to technologies suitable for boxes of cereal.
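One way a tag could authenticate the reader before answering is a simple challenge-response: the tag issues a fresh nonce, and only a reader holding a key that the parents have provisioned can compute the correct response. The sketch below is hypothetical – the names and key-provisioning story are mine, and real RFID hardware would need a lightweight cipher rather than full SHA-256 – but it shows the shape of a tag that stays silent for strangers:

```python
import hashlib
import hmac
import secrets

# Keys for readers the parents have approved (provisioning is assumed).
APPROVED_READER_KEYS = {
    "school_gate": b"key-provisioned-by-parents",
}

def tag_challenge() -> bytes:
    """Tag emits a fresh nonce each time it is interrogated."""
    return secrets.token_bytes(16)

def reader_response(key: bytes, challenge: bytes) -> bytes:
    """An approved reader proves key possession over the challenge."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def tag_accepts(reader_id: str, challenge: bytes, response: bytes) -> bool:
    key = APPROVED_READER_KEYS.get(reader_id)
    if key is None:
        return False  # unknown reader: the tag stays silent
    return hmac.compare_digest(response, reader_response(key, challenge))

c = tag_challenge()
print(tag_accepts("school_gate", c,
                  reader_response(b"key-provisioned-by-parents", c)))  # True
```

The point of the fresh nonce is that a stranger who records one exchange cannot replay it later; the point of the key table is exactly the one above – the monitoring party must identify itself before the child's tag identifies anything at all.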

A reader of yesterday's piece on bodynets suggested checking out this Vodafone site, which is a must-see for the identity aficionado. It's superbly put together, although at one point I got trapped in Vincenzo's incredibly messy bedroom as he played, if you can believe this, a Mediterranean version of “This is my dog” to a Mitch Miller-like bouncing ball reborn on a foldable organic screen. But many of the scenarios are very concrete and believable.

This world is lush with communicators sensing your digital ID and adjusting all aspects of your environment in cahoots with your visual bracelet, a kind of wrappable cellphone that filters incoming events on your behalf. It is Eric Norlin's polycomm scenario gone Hollywood, with privacy issues galore. All in all, a great accomplishment.

Of course we have a lot of work to do in figuring out the implications of the laws of identity for these scenarios. I wonder if Vodafone has a paper on these issues?

Funny what you find on the net! While reading through some links related to wearable computer research I came across this great page with some thoughts by Ana Viseu about “bodynets” and identity. Besides the fact that I really like the look of the web site, I like this train of thought:

Identity, loosely defined as the way we see and present ourselves, is not static. On the contrary, identity is primarily established in social interaction. This interaction consists, in its most basic form, of an exchange of information. In this information exchange individuals define the images of themselves and of others. This interaction can be mediated – through a technology, for example – and it can involve entities of all sorts, e.g., an institution or a technology. I am investigating this interaction through the study of bodynets.

Bodynets can be thought of as new bridges or interfaces between the individual and the environment. My working definition of a bodynet is: A body networked for (potentially) continuous communication with the environment (humans or computers) through at least one wearable device – a computer worn on the body that is always on, ready and accessible. This working definition excludes implants, genetic alterations, dedicated devices and all other devices that are portable but not wearable, such as cell phones, smart cards or PDAs.

Besides the matters related to identity, bodynets also raise serious issues concerning privacy, which in turn feed back on identity changes. Bodynets are composed of digital technologies, which inherently possess tracking capabilities; this has major privacy implications.

If you like this, continue reading … there is a lot of additional material. Whenever I see the University of Toronto, I have to guess that Steve Mann is involved. These are all important directions to look at.

There is a thought-provoking piece by Patrick Radden Keefe in the Village Voice about the “darknet”. If that's a new term for anyone, Keefe says that:

“In 2002 four Microsoft engineers published a paper in which they coined the term the “darknet.” This was essentially an extensive and opaque Internet black market, ‘not a separate physical network but an application and protocol layer riding on existing networks…'”

Keefe goes on to look at the relation between the darknet and terrorism:

“The dark regions of the Internet have allowed Al Qaeda to reconstitute itself as a virtual terrorist group, one that is beginning, through its masterful distribution of propaganda, to resemble not so much an organization as a movement, and one that has used America's accelerated rate of technological growth to its own advantage…

“Bin Laden associates employ cutting-edge steganography, which involves implanting a text message into a single image or letter on a website. “

He argues government agencies are ill-prepared to deal with these threats, that Internet users are unwitting accomplices, and that Internet technology, which promised so much “good”, is a two-edged sword:

“If American forces are unaccustomed to pursuing adversaries through the caves of Afghanistan or the streets of Baghdad, they will have even more trouble tracking Al Qaeda online, because Internet technology favors the fugitive criminal and the migrant threat, and because terrorists know how to turn the new digital divide to their advantage. In this evasive game they have at their disposal a most unusual accomplice: unwitting Americans with personal computers and Internet connections…

“What's… unsettling is that American computer users may assist in this growth phase for Al Qaeda.”

The article keeps coming back to the idea that to escape detection, terrorists hijack legitimate resources left vulnerable because people don't understand how to protect them. And this, of course, speaks directly to the urgency of the need for an identity system for the Internet: an identity system that people fully understand and are willing to buy into because it is designed in accordance with the laws of identity.

Just for the record, while the concept is key, I'm not a fan of the word “darknet”. I think we can do better than that. The dark-light dichotomy is too last-century.