Consult Hyperion's thought-leadership blog on the bridge between physical identity and virtual identity: "Digital Identity" has now been integrated into the "Tomorrow's Transactions" blog at http://www.chyp.com/media/blog/

About The Blog

Debate at the intersection of business, technology and culture in the world of digital identity, both commercial and government. A blog born from the Digital Identity Forum in London and sponsored by Consult Hyperion.

[Dave Birch] I went to an enjoyable dinner (under the Chatham House Rule) organised by DEMOS (a think tank that published a paper on privacy called "Putting People First" a couple of years ago) to discuss some issues around identity and privacy, particularly in the context of social networking. A couple of people raised the point that more privacy is not, by itself, necessarily a social or an individual benefit. The "Privacy Taliban" should recognise economic activity as a social good, essentially.

It’s a controversial topic, but important, since hasty legislation could have dire consequences for the survival of newspapers.

Indeed, and I was keen to press the point about helping content industries to reshape, rather than preserve, their business models, a point on which there seemed to be fairly wide agreement. One area where there wasn't, and where my opinions were regarded as odd, was choice. I said that it was obvious to me that giving people choices about how much information they disclosed online (and to whom) was a practical way forward.

It turns out that the people who most benefit from the ability to set their own software preferences are well-educated, IT-savvy professionals with money; the people who suffer are the poorer and less educated users. So making privacy an individual option basically takes privacy away from the poor.

This is surely correct. Now, I accept that the coming generation see privacy in a different way, and may have different norms, but we don't let them have a choice about whether to wear seatbelts or build houses that aren't to code, even though we acknowledge their perspectives.

Digital immigrants tend to think about privacy as the ability to conceal information from others. Digital natives instead share information within certain contexts, and with granular privacy controls on that information.

One topic that was raised was that the trawling of social networks by machines can take facts that are by themselves not particularly sensitive and match them together to obtain information that is sensitive. This is a topic discussed here before, and I don't want to rehash it, but it is interesting to delve into the commercial side of this. I think, because I'm optimistic about technology, that it ought to be possible for the "system" to mine data about me and offer me useful and relevant commercial relationships without knowing who I am. And I don't mean just knocking the name off.

You just don't know what data will be "sensitive" in the future. One consequence is the trail of unintended consequences that litters the privacy roadmap. An especially interesting trail to follow is the use of video rental records in the US. Older readers may recall the story of Robert Bork's confirmation hearings, where his video rental choices were exposed to the world. The natural result was that Congress leapt into action and passed draconian protections for such records: you can now be fined $2,500 per record for giving unauthorised access to them. Fast forward a couple of decades, and the online DVD rental company Netflix released "anonymised" customer rental records as part of a competition to create a better recommendation engine. Now they are being sued.

The lawsuit claims that Netflix's information disclosure was illegal under the Video Privacy Protection Act (VPPA), which was itself passed in response to a bizarre video rental data leak. In the late 1980s, during the infamous Robert Bork Supreme Court confirmation process, a reporter from Washington's City Paper simply went down to a local video store, asked the manager for Bork's rental history, and was given a photocopied set of records... which he then wrote about.

The problem is that simply deleting names from records does not anonymise them at all: you need cryptography to do that.

The contest offered a data file with a few million movie rentals in it. Names were not attached, but within weeks researchers had found a way to use an external data source to decode an individual's viewing history with surprising accuracy.
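The researchers' approach can be sketched in a few lines. This is a toy illustration with invented data, a drastically simplified version of the linkage technique: score each "anonymised" record by how many of its ratings overlap with ratings the target has posted publicly elsewhere (the researchers used IMDb reviews), and the best-scoring record is almost certainly the target.

```python
def best_match(anonymised, public_profile):
    """Return the id of the anonymised record whose ratings overlap
    most with a publicly known profile of the same person."""
    return max(anonymised, key=lambda rid: len(anonymised[rid] & public_profile))

# "Anonymised" release: names stripped, but (movie, rating) pairs intact.
anonymised = {
    "user_1842": {("Vertigo", 5), ("Alien", 4), ("Heat", 3)},
    "user_2077": {("Vertigo", 2), ("Amelie", 5), ("Brazil", 4)},
}

# A handful of ratings the target posted publicly under their real name.
public_profile = {("Vertigo", 5), ("Heat", 3)}

print(best_match(anonymised, public_profile))  # → user_1842
```

With real data the matching must tolerate fuzzy dates and ratings, but the principle is the same: a few innocuous facts, combined, single out one person.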

This may seem like an amusing but not especially significant story about finding out who has been watching DVDs featuring lesbians, but it actually illustrates (rather neatly) a more general point about the tension between individual privacy and data sharing, as well as an underlying problem: the failure of our "common sense" (i.e., pre-industrial) ideas about privacy in a modern context. Most people would think that taking the names off is adequate (in fact, I was in a meeting about this in the context of medical records for research purposes a few weeks ago), but that's because they don't understand the subject in sufficient detail. That wouldn't matter if the subject were quantum physics, where the average MP or civil servant might happily defer to a physicist -- not that they would be able to tell whether he or she is actually a physicist, but that's another issue -- but in the case of identity, common sense is not merely wrong but dangerous.

Now, conversely, if someone were able to make mathematical protection of privacy part of their commercial proposition, that ought to change the game.

This is a potentially win-win situation: if data is anonymised at the source and is under the control of the customer, the customer will trust the provider who anonymises their data (and in turn protects them).

Microsoft were present at the DEMOS dinner, coincidentally the day after they made the very significant announcement about precisely the technology that might make this kind of commercial development a reality.

To encourage broad community evaluation and input, Microsoft announced it is providing core portions of the U-Prove intellectual property under the Open Specification Promise, as well as releasing open source software development kits in C# and Java editions. Charney encouraged the industry, developers and IT professionals to develop identity solutions that help protect individual privacy.

It's taken many years for this "zero-knowledge proof" technology (which is not unique to Microsoft -- IBM in Zurich have been developing something similar) to make it to the marketplace -- I first invited Stefan Brands to present to the Digital Identity Forum back in 2003 -- but I remain convinced that it contains the seeds of the new identity infrastructure that we need to make the connected world work properly. I notice that in advanced nations, it is being taken seriously.
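To give a flavour of what "zero-knowledge proof" means here, the sketch below runs a Schnorr identification protocol, the classic textbook example of the idea: the prover convinces the verifier that they know a secret x satisfying y = g^x (mod p) without revealing x itself. The parameters are deliberately tiny and purely illustrative; U-Prove and IBM's work build far richer selective-disclosure credentials on related mathematics, and this is not Microsoft's actual protocol.

```python
import secrets

# Tiny illustrative parameters: p = 2q + 1 with p, q prime, and g a
# generator of the order-q subgroup. Real systems use very large groups.
p, q, g = 2039, 1019, 4

x = secrets.randbelow(q - 1) + 1   # prover's secret
y = pow(g, x, p)                   # public key: y = g^x mod p

# Prover commits, verifier challenges, prover responds.
r = secrets.randbelow(q)           # one-time random nonce
t = pow(g, r, p)                   # commitment: t = g^r mod p
c = secrets.randbelow(q)           # verifier's random challenge
s = (r + c * x) % q                # response; reveals nothing about x on its own

# Verifier checks g^s == t * y^c (mod p) and learns only that the
# prover knows x, not what x is.
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("proof verified")
```

The check works because g^s = g^(r + cx) = g^r * (g^x)^c = t * y^c (mod p); credential systems extend this so that you can prove statements like "I am over 18" without disclosing anything else.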

Microsoft is working with the Fraunhofer Institute for Open Communication Systems in Berlin on an interoperability prototype project integrating Microsoft's U-Prove technology and Active Directory services with the German government's future electronic identity card system.

Look, we don't know what is going to happen to our data in the future so it makes sense to anonymise as much of it as possible. The more places we have to prove our identity, the more places our identity can be stolen from.

These opinions are my own (I think) and are presented solely in my capacity as an interested member of the general public [posted with ecto]
