Category: Ethics

According to this piece in Digital Trends, LinkedIn has “opted” 100 million of us into sharing private information within advertisements. This includes posting our names and photos as advertisers’ helpers.

“When a LinkedIn user views a third-party advertisement on the social network, they will see user profile pictures and names of connections if that connection has recommended or followed a brand. Any time that a user follows a brand, they unwittingly become a cheerleader for the company or organization if it advertises through LinkedIn.”

And if that doesn't surprise you, how about this:

“In order to opt out of social advertising, the LinkedIn user has to take four steps to escape third-party advertisements:

“Hover over the user name in the top right-hand corner of any LinkedIn page and click ‘Settings’. On the Settings page, click ‘Account’. On the Account tab, click ‘Manage Social Advertising’. Uncheck the box next to ‘LinkedIn may use my name, photo in social advertising’ and click the save button.”

What a mistake.

I know there are many who think that if Facebook can take the huddled masses to the cleaners, why shouldn't everyone?

It seems obvious that the overwhelming majority of people who participate in Facebook are still a few years away from understanding and reacting to what they have got themselves into.

But LinkedIn's membership is a lot more savvy about the implications of being on the site – and why they are sharing information there. Much of their participation has to do with future opportunities, and everyone is sensitive about the need to control and predict how they will be evaluated later in their careers. Until yesterday I for one had been convinced that LinkedIn was smart enough to understand this.

But apparently not. And I think it will turn out that many of the professionals who until now have been happy to participate will choke on the potential abuse of their professional information and reputation – and LinkedIn's disregard for their trust.

My conclusion? LinkedIn has just thrown down the gauntlet and challenged us, as a community of professionals, to come up with safe and democratic ways to network.

This much is obvious: we need a network that respects the rights of the people in it. LinkedIn just lost my vote.

Skud at Geek Feminism Blog has created a wiki documenting work she and her colleagues are doing to “draft a comprehensive list” of those who would be harmed by a policy banning pseudonymity and requiring “real names”.

The result is impressive. The rigour Skud and colleagues have applied to their quest has produced an information payload that is both illuminating and touching.

Those of us working on identity technology have to internalize the lessons here. Over-identification is ALWAYS wrong. But beyond that, there are people who are especially vulnerable to it. They have to be treated as first class citizens with clear rights and we need to figure out how to protect them. This goes beyond what we conventionally think of as privacy concerns (although perhaps it sheds light on the true nature of what privacy is – I'm still learning).

Often people argue in favor of “Real Names” in order to achieve accountability. The fact is that technology offers us other ways to achieve accountability. By leveraging the properties of minimal disclosure technology, we can allow people to remain anonymous and yet bar them from given environments if their behavior gets sufficiently anti-social.
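The mechanism can be sketched in a few lines. This toy Python class is my own illustration, not any deployed system: it shows a service that can bar an anti-social participant while knowing nothing about them but a random handle. Real minimal-disclosure schemes add cryptography so a banned user cannot simply re-register, but the accountability principle is the same.

```python
import secrets

class PseudonymousForum:
    """Toy sketch: accountability via revocable pseudonyms, no legal names."""

    def __init__(self):
        self.banned = set()

    def register(self) -> str:
        # The service learns only a random handle - never a legal name.
        return secrets.token_hex(16)

    def ban(self, handle: str) -> None:
        # Anti-social behaviour is punished by revoking the handle,
        # which is everything the service ever knew about the person.
        self.banned.add(handle)

    def may_post(self, handle: str) -> bool:
        return handle not in self.banned
```

The point is that the service's sanction attaches to the pseudonym, not to a “real name” – identification is not a prerequisite for accountability.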

But enough editorializing. Here's Skud's intro. Just remember that in this case the real enlightenment is in the details, not the summary.

This page lists groups of people who are disadvantaged by any policy which bans Pseudonymity and requires so-called “Real names” (more properly, legal names).

This is an attempt to create a comprehensive list of groups of people who are affected by such policies.

The cost to these people can be vast, including:

harassment, both online and offline

discrimination in employment, provision of services, etc.

actual physical danger of bullying, hate crime, etc.

arrest, imprisonment, or execution in some jurisdictions

economic harm such as job loss, loss of professional reputation, etc.

social costs of not being able to interact with friends and colleagues

possible (temporary) loss of access to their data if their account is suspended or terminated

The groups of people who use pseudonyms, or want to use pseudonyms, are not a small minority (some of the classes of people who can benefit from pseudonyms constitute up to 50% of the total population, and many of the others are classes of people that almost everyone knows). However, their needs are often ignored by the relatively privileged designers and policy-makers who want people to use their real/legal names.

Wait a minute. Just got a note from the I Can't Stop Editorializing Department: the very wiki page that brings us Skud's analysis contains a Facebook “Like” button. It might be worth removing it given that Facebook requires “Real Names”, and then transmits the URL of any page with a “Like” button to Facebook so it can be associated with the user's “Real Name” – whether or not they click on the button or are logged into Facebook.

Pangloss sent me reeling recently with her statement that “in the wake of the amazing News of the World revelations, there does seem to be some public interest in a quick note on why there is (some) controversy around whether hacking messages in someone's voicemail is a crime.”

What? Outside Britain I imagine most of us have simply assumed that breaking into people's voicemails MUST be illegal. So Pangloss's excellent summary of the situation – I share just enough to reveal the issues – is a suitable slap in the face of our naivete:

The first relevant provision is RIPA (the Regulation of Investigatory Powers Act 2000), which provides that interception of communications without the consent of both ends of the communication, or some other provision such as a police warrant, is criminal in principle. The complications arise from s 2(2) which provides that:

“….a person intercepts a communication in the course of its transmission by
means of a telecommunication system if, and only if … (he makes) …some or all of the contents of the communication available, while being transmitted, to a person other than the sender or intended recipient of the communication”. [my itals]

Section 2(4) states that an “interception of a communication” has also to be “in the course of its transmission” by any public or private telecommunications system. [my itals]

The argument that seems to have been made to the DPP, Keir Starmer, in October 2010 by QC David Perry is that voicemail has already been transmitted and is therefore no longer “in the course of its transmission”, so a RIPA s 1 interception offence would not stand up. The DPP stressed in a letter to the Guardian in March 2011 that this interpretation was (a) specific to the cases of Goodman and Mulcaire (yes, the same Goodman who's just been re-arrested and indeed went to jail) and (b) not conclusive, as a court would have to rule on it.

We do not know the exact terms of the advice from counsel as (according to advice given to the HC in November 2009) it was delivered in oral form only. There are two possible interpretations even of what we know. One is that messages left on voicemail are “in transmission” till read. Another is that even when they are stored on the voicemail server unread, they have completed transmission, and thus accessing them would not be “interception”.

Very few people, I think, would view the latter interpretation as plausible, but the former seems to have carried weight with the prosecution authorities. In the case of Milly Dowler, if (as seems likely) voicemails were hacked after she was already deceased, there may have been messages unread, and so a prosecution under RIPA would be appropriate without worrying about the advice from counsel. In many other cases, e.g. those involving celebrities, hacking may have been of already-listened-to voicemails. What is the law there?

When does a message to voicemail cease to be “in the course of transmission”? Chris Pounder pointed out in April 2011 that we also have to look at s 2(7) of RIPA which says

“(7) For the purposes of this section the times while a communication is being transmitted by means of a telecommunication system shall be taken to include any time when the system by means of which the communication is being, or has been, transmitted is used for storing it in a manner that enables the intended recipient to collect it or otherwise to have access to it.”

A common-sense interpretation of this, it seems to me (and to Chris Pounder), would be that messages stored on voicemail are deemed to remain “in the course of transmission”, and hence capable of generating a criminal offence when hacked – because they are being stored on the system for later access (which might include re-listening to already-played messages).

This seems rather thoroughly to contradict the well-known interpretation offered during the HL debates over RIPA by L Bassam, that the analogy for transmission of a voice message or email was a letter being delivered to a house. There, transmission ended when the letter hit the doormat.

Fascinating issues. And that's just the beginning. For the full story, continue here.

In Europe there has been a lot of discussion about “the Right to be Forgotten” (see, for example, Le droit à l’oubli sur Internet). The notion is that after some time, information should simply fade away (counteracting digital eternity).

Whatever words we use, the right, if recognized, would be a far-reaching game-changer – and as I wrote here, represent a “cure as important as the introduction of antibiotics was in the world of medicine”.

Against this backdrop, the following report by Ciaran Giles of the Associated Press gives us much to think about. It appears Google is fighting head-on against the “Right to be Forgotten”. It seems willing to take on any individual or government who dares to challenge the immutable right of its database and algorithms to define you through something that has been written – forever, and whether it's true or not.

MADRID – Their ranks include a plastic surgeon, a prison guard and a high school principal. All are Spanish, but have little else in common except this: They want old Internet references about them that pop up in Google searches wiped away.

In a case that Google Inc. and privacy experts call a first of its kind, Spain's Data Protection Agency has ordered the search engine giant to remove links to material on about 90 people. The information was published years or even decades ago but is available to anyone via simple searches.

Scores of Spaniards lay claim to a “Right to be Forgotten” because public information once hard to get is now so easy to find on the Internet. Google has decided to challenge the orders and has appealed five cases so far this year to the National Court.

Some of the information is embarrassing, some seems downright banal. A few cases involve lawsuits that found life online through news reports, but whose dismissals were ignored by media and never appeared on the Internet. Others concern administrative decisions published in official regional gazettes.

In all cases, the plaintiffs petitioned the agency individually to get information about them taken down.

And while Spain is backing the individuals suing to get links taken down, experts say a victory for the plaintiffs could create a troubling precedent by restricting access to public information.

The issue isn't a new one for Google, whose search engine has become a widely used tool for learning about the backgrounds of potential mates, neighbors and co-workers. What it shows can affect romantic relationships, friendships and careers.

For that reason, Google regularly receives pleas asking that it remove links to embarrassing information from its search index or at least ensure the material is buried in the back pages of its results. The company, based in Mountain View, Calif., almost always refuses in order to preserve the integrity of its index.

A final decision on Spain's case could take months or even years because appeals can be made to higher courts. Still, the ongoing fight in Spain is likely to gain more prominence because the European Commission this year is expected to craft controversial legislation to give people more power to delete personal information they previously posted online.

“This is just the beginning, this right to be forgotten, but it's going to be much more important in the future,” said Artemi Rallo, director of the Spanish Data Protection Agency. “Google is just 15 years old, the Internet is barely a generation old and they are beginning to detect problems that affect privacy. More and more people are going to see things on the Internet that they don't want to be there.”

Many details about the Spaniards taking on Google via the government are shrouded in secrecy to protect the privacy of the plaintiffs. But the case of plastic surgeon Hugo Guidotti vividly illustrates the debate.

In Google searches, the first link that pops up is his clinic, complete with pictures of a bare-breasted woman and a muscular man as evidence of what plastic surgery can do for clients. But the second link takes readers to a 1991 story in Spain's leading El Pais newspaper about a woman who sued him for the equivalent of €5 million for a breast job that she said went bad.

By the way, if it really is true that nothing should ever interfere with the automated pronouncements of the search engine – even truth – does that mean robots have the right to pronounce any libel they want, even though we don't?

If you have kept up with the excellent Wall Street Journal series on smartphone apps that inappropriately collect and release location information, you won't be surprised at their latest chapter: Federal Prosecutors are now investigating information-sharing practices of mobile applications, and a Grand Jury is already issuing subpoenas. The Journal says, in part:

Federal prosecutors in New Jersey are investigating whether numerous smartphone applications illegally obtained or transmitted information about their users without proper disclosures, according to a person familiar with the matter…

The criminal investigation is examining whether the app makers fully described to users the types of data they collected and why they needed the information—such as a user's location or a unique identifier for the phone—the person familiar with the matter said. Collecting information about a user without proper notice or authorization could violate a federal computer-fraud law…

Online music service Pandora Media Inc. said Monday it received a subpoena related to a federal grand-jury investigation of information-sharing practices by smartphone applications…

… 56 transmitted the phone's unique device identifier to other companies without users’ awareness or consent. Forty-seven apps transmitted the phone's location in some way. Five sent a user's age, gender and other personal details to outsiders. At the time they were tested, 45 apps didn't provide privacy policies on their websites or inside the apps.

In Pandora's case, both the Android and iPhone versions of its app transmitted information about a user's age, gender, and location, as well as unique identifiers for the phone, to various advertising networks. Pandora gathers the age and gender information when a user registers for the service.

Legal experts said the probe is significant because it involves potentially criminal charges that could be applicable to numerous companies. Federal criminal probes of companies for online privacy violations are rare…

The probe centers on whether app makers violated the Computer Fraud and Abuse Act, said the person familiar with the matter. That law, crafted to help prosecute hackers, covers information stored on computers. It could be used to argue that app makers “hacked” into users’ cellphones.

The elephant in the room is Apple's own approach to location information, which should certainly be subject to investigation as well. The user is never presented with a dialog in which Apple's use of location information is explained and permission is obtained. Instead, the user's agreement is gained surreptitiously, hidden away on page 37 of a 45-page policy that Apple users must accept in order to use… iTunes. Why iTunes requires location information is never explained. The policy simply states that the user's device identifier and location are non-personal information and that Apple “may collect, use, transfer, and disclose non-personal information for any purpose”.

Any purpose?

Is it reasonable that companies like Apple can proclaim that device identifiers and location are non-personal and then do whatever they want with them? Informed opinion seems not to agree with them. The International Working Group on Data Protection in Telecommunications, for example, asserted precisely the opposite as early as 2004. Membership of the Group included “representatives from Data Protection Authorities and other bodies of national public administrations, international organisations and scientists from all over the world.”

More empirically, I demonstrated in Non-Personal information, like where you live that the combination of device identifier and location is in very many cases (including my own) personally identifying. This is especially true in North America where many of us live in single-family dwellings.

[BTW, I have not deeply investigated the approach to sharing of location information taken by other smartphone providers – perhaps others can shed light on this.]

Germans woke up yesterday to a headline story on Das Erste's TV Morning Show announcing a spiffy new Internet service – Google indoors.

A spokesman said Google was extending its Street View offering so Internet users could finally see inside people's homes. Indeed, Google indoors personnel were already knocking on doors, patiently explaining that if people had not already gone through the opt-out process, they had “opted in”…

… so the technicians needed to get on with their work:

Google's deep concern about people's privacy had led it to introduce features such as automated blurring of faces…

… and the business model of the scheme was devilishly simple: the contents of people's houses served as product placements charged to advertisers, with 1/10 of a cent per automatically recognized brand name going to the residents themselves. As shown below, people can choose to obfuscate products worth more than 5,000 euros if concerned about attracting thieves – an example of the advanced privacy options and levels the service makes possible.

Check out the video. Navigation features within houses are amazing! From the amount of effort and wit put into it by a major TV show, I'd wager that even if Google's troubles with Germany around Street View are over, its problems with Germans around privacy may not be.

Frankly, Das Erste (meaning “The First”) has to be congratulated on one of the best-crafted April Fools you will have witnessed. I don't have the command of the German language or politics (!) to understand all the subtleties, but friends say the piece is teeming with irony. And given Eric Schmidt's policy of getting as close to “creepy” as possible, who wouldn't find the video at least partly believable?

Netflix, the web's top video-rental service, has been accused of violating US privacy laws in five separate lawsuits filed during the past two months, records show.

Each of the five plaintiffs alleges that Netflix hangs onto customer information, such as credit card numbers and rental histories, long after subscribers cancel their membership. They claim this violates the Video Privacy Protection Act (VPPA).

Netflix declined to comment.

In a four-page suit filed on Friday, Michael Sevy, a former Netflix subscriber who lives in Michigan, accuses Netflix of violating the VPPA by “collecting, storing and maintaining for an indefinite period of time, the video rental histories of every customer that has ever rented a DVD from Netflix”. Netflix also retains information that “identifies the customer as having requested or obtained specific video materials or services”, according to Sevy's suit.

In a complaint filed 22 February, plaintiff Jason Bernal, a resident of Texas, claimed “Netflix has assumed the role of Big Brother and trampled the privacy rights of its former customers”.

Jeff Milans from Virginia filed the first of the five suits on 26 January. One of his attorneys, Bill Gray, told ZDNet Australia‘s sister site CNET yesterday that the way he knows Netflix is preserving information belonging to customers who have left the company is from Netflix emails. According to Gray, in messages to former subscribers, Netflix writes something similar to “We'd love to have you come back. We've retained all of your video choices”.

Gray said that Netflix uses the customer data to market the rental service, but this is done while risking its customers’ privacy. Someone's choice in rental movies could prove embarrassing, according to Gray, and should hackers ever get access to Netflix's database, that information could be made publicly available.

“We want Netflix to operate in compliance of the law and delete all of this information,” Gray said.

All the plaintiffs filed their complaints in US District Court for the Northern District of California. Each has asked the court for class action status. [More here].

In Europe there has been a lot of discussion about “the Right to be Forgotten” (see, for example, Le droit à l’oubli sur Internet). The notion is that after some time, information should simply fade away (counteracting digital eternity). The Right to be Forgotten has to be one of the most important digital rights – not only for social networks, but for the Internet as a whole.

The authors of the Social Network Users’ Bill of Rights have called some variant of this the “Right to Withdraw”. Whatever words we use, the Right is a far-reaching game-changer – a cure as important as the introduction of antibiotics was in the world of medicine.

I say “cure” because it helps heal problems that shouldn't have been created in the first place.

For example, Netflix does not need to – and should not – associate our rental patterns with our natural identities (e.g. with us as recognizable citizens). Nor should any other company that operates in the digital world.

Instead, following the precepts of minimal disclosure, the patterns should simply be associated with entities who have accounts and the right to rent movies. The details of billing should not be linked to the details of ordering (this is possible using the new privacy-enhancing technologies). From our point of view as consumers of these services, there is no reason the linking should be visible to anyone but ourselves.
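One simple building block for this kind of unlinkability is deriving a separate pseudonym for each context from a secret held only by the user. The sketch below is my own illustration (the function name and scheme are hypothetical, not any deployed product's): the ordering system and the billing system each see a stable identifier, but neither can link its records to the other's without the user-held secret.

```python
import hashlib
import hmac

def context_pseudonym(master_secret: bytes, context: str) -> str:
    """Derive a stable, per-context identifier from a user-held secret.

    Each service sees only its own pseudonym. Because HMAC is one-way
    and keyed, the services cannot link their pseudonyms to each other
    or recover the secret - only the user can connect the contexts.
    """
    return hmac.new(master_secret, context.encode(), hashlib.sha256).hexdigest()

secret = b"user-held master secret"
ordering_id = context_pseudonym(secret, "rental-history")
billing_id = context_pseudonym(secret, "billing")
assert ordering_id != billing_id  # the two records cannot be joined
```

Full minimal-disclosure systems go much further (unlinkable credentials, selective disclosure of attributes), but even this simple derivation shows that stable accounts do not require a single linkable identity across services.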

All this requires a wee bit of a paradigm shift, you will say. And you're right. Until that happens, we don't have a lot of alternatives other than the Right to be Forgotten. Especially, as described in the law suits above, when we have “chosen to withdraw.”

So it’s in general overwhelmingly positive: five rights are unanimous, and another eight at 89% or higher. The one exception: the right to self-define, currently at about 65%. As I said in a comment on the earlier thread, this right is vital for people like whistleblowers, domestic violence victims, political dissidents, closeted LGBTQs. I wonder whether the large minority of people who don’t think it matters are thinking about it from those perspectives.

The voting on individual rights is still light. Right 12 clearly stands out as one which needs discussion.

I expect most people just take a quick look at the bill as a whole, say “Yeah, that makes sense” and move on. The “pro” and “against” pages at Facebook ran about 500 to 1 in favor of the Bill when I looked a few days ago. In this sense the Bill is certainly right on track.

But the individual rights need to be examined very carefully by at least some of us. I'll return to Jon's comments on right 12 when I can make some time to set out my ideas.

Last week I gave a presentation at PII 2010 in Seattle where I tried to summarize what I had learned from my recent work on WiFi location services and identity. During the question period an audience member asked me to return to the slide where I recounted how I had first encountered Apple’s new location tracking policy:

My questioner was clearly a bit irritated with me. Didn't I realize that the “unique device identifier” was just a GUID – a purely random number? It wasn't a MAC address. It was not personally identifying.

The question really perplexed me, since I had just shown a slide demonstrating how if you go to this well-known web site (for example) and enter a location you find out who lives there (I used myself as an example, and by the way, “whitepages” releases this information even though I have had an unlisted number…).

I pointed out the obvious: if Apple releases your location and a GUID to a third party on multiple occasions, one location will soon stand out as being your residence… Then presto, if the third party looks up the address in a “Reverse Address” search engine, the “random” GUID identifies you personally forever more. The notion that location information tied to random identifiers is not personally identifiable information is total hogwash.
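The inference is trivial to implement. A minimal sketch, assuming a third party has accumulated a list of (latitude, longitude) reports tied to one “anonymous” device GUID (the function and coordinates are hypothetical):

```python
from collections import Counter

def likely_residence(pings):
    """Given (lat, lon) reports tied to a single device GUID, return the
    most frequently reported location - which for a personal phone is
    typically the owner's home.

    Coordinates are rounded to three decimal places (roughly 100 m) so
    that repeated visits to the same place cluster into one bucket.
    """
    rounded = [(round(lat, 3), round(lon, 3)) for lat, lon in pings]
    return Counter(rounded).most_common(1)[0][0]

# 40 reports from home, 5 from elsewhere: home dominates immediately.
pings = [(47.6201, -122.3493)] * 40 + [(47.6097, -122.3331)] * 5
home = likely_residence(pings)
```

One reverse-address lookup on the modal location then attaches a name to the GUID, and every past and future report under that GUID becomes personally identifying.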

My questioner then asked, “Is your problem that Apple’s privacy policy is so clear? Do you prefer companies who don’t publish a privacy policy at all, but rather just take your information without telling you?” A chorus of groans seemed to answer his question to everyone’s satisfaction. But I personally found the question thought provoking. I assume corporations publish privacy policies – even those as duplicitous as Apple’s – because they have to. I need to learn more about why.

[Meanwhile, if you’re wondering how I could possibly post my own residential address on my blog, it turns out I’ve moved and it is no longer my address. Beyond that, the initial “A” in the listing above has nothing to do with my real name – it’s just a mechanism I use to track who has given out my personal information.]

Kim Cameron is an expert in digital identity and privacy, so when his iPhone recently prompted him to read and accept Apple's revised terms and conditions before downloading a new app, he was perhaps more inclined than the rest of us to read the entire privacy policy — all 45 pages of tiny text on his mobile screen.

It's important to note that apart from writing his own blog on identity issues — where he told this story — Cameron is Microsoft's chief identity architect and one of its distinguished engineers. So he's not a disinterested industry observer in the broader sense. But he does have extensive expertise.

And he is publicly acknowledging his use of an iPhone, after all, which should earn him at least a few points for neutrality…

At this point I'll butt in and editorialize a little. I'd like to amplify on Todd's point for the benefit of readers who don't know me very well: I'm not critical of Street View WiFi because I am anti-Google. I'm not against anyone who does good technology. My critique stems from my work as a computer scientist specializing in identity, not as a person playing a role in a particular company. In short, Google's Street View WiFi is bad technology, and if the company persists in it, it will be one of the identity catastrophes of our time.

When I figured out the Laws of Identity and understood that Microsoft had broken them, I was just as hard on Microsoft as I am on Google today. In fact, someone recently pointed out the following reference in Wikipedia's article on Microsoft's Passport:

“A prominent critic was Kim Cameron, the author of the Laws of Identity, who questioned Microsoft Passport in its violations of those laws. He has since become Microsoft's Chief Identity Architect and helped address those violations in the design of the Windows Live ID identity meta-system. As a consequence, Windows Live ID is not positioned as the single sign-on service for all web commerce, but as one choice of many among identity systems.”

I hope this has earned me some right to comment on the current abuse of personal device identifiers by Google and Apple – which, if their FAQs and privacy policies represent what is actually going on, is at least as significant as the problems I discussed long ago with Passport.

But back to Todd:

At any rate, as Cameron explained on his IdentityBlog over the weekend, his epic mobile reading adventure uncovered something troubling on Page 37 of Apple's revised privacy policy, under the heading of “Collection and Use of Non-Personal Information.” Here's an excerpt from Apple's policy, Cameron's emphasis in bold.

We also collect non-personal information — data in a form that does not permit direct association with any specific individual. We may collect, use, transfer, and disclose non-personal information for any purpose. The following are some examples of non-personal information that we collect and how we may use it:

We may collect information such as occupation, language, zip code, area code, unique device identifier, location, and the time zone where an Apple product is used so that we can better understand customer behavior and improve our products, services, and advertising.

Here's what Cameron had to say about that.

Maintaining that a personal device fingerprint has “no direct association with any specific individual” is unbelievably specious in 2010 — and even more ludicrous than it used to be now that Google and others have collected the information to build giant centralized databases linking phone MAC addresses to house addresses. And — big surprise — my iPhone, at least, came bundled with Google’s location service.

The irony here is a bit fantastic. I was, after all, using an “iPhone”. I assume Apple’s lawyers are aware there is an ‘I’ in the word “iPhone”. We’re not talking here about a piece of shared communal property that might be picked up by anyone in the village. An iPhone is carried around by its owner. If a link is established between the owner’s natural identity and the device (as Google’s databases have done), its “unique device identifier” becomes a digital fingerprint for the person using it.

The distinction is important because it speaks to how far the companies could go in linking together a specific device with a specific person in a particular location.

Google's FAQ, for the record, says its location-based services (such as Google Maps for Mobile) figure out the location of a device when that device “sends a request to the Google location server with a list of MAC addresses which are currently visible to the device” — not distinguishing between MAC addresses from phones or computers and those from wireless routers.

Here's what Cameron said when I asked about that topic via email.

I have suggested that the author ask Google whether it will therefore correct its FAQ, since the portion of the FAQ on “how the system works” continues to say it behaves in the way I described. If Google does correct its FAQ, it is likely that data protection authorities will ask Google to demonstrate that its shipped software behaves in the way described in the correction.

I would of course feel better about things if Google’s FAQ is changed to say something like, “The user’s device sends a request to the Google location server with the list of MAC addresses found in Beacon Frames announcing a Network Access Point SSID and excluding the addresses of end user devices.”

However, I would still worry that the commercially irresistible feature of tracking end user devices could be turned on at any second by Google or others. Is that to be prevented? If so, how?

So a statement from Google that its FAQ was incorrect would be good news – and I would welcome it – but not the end of the problem for the industry as a whole.

In any event, the basic question about Apple is whether its new privacy policy is ultimately correct in saying that the company is only collecting “data in a form that does not permit direct association with any specific individual” — if that data includes such information as the phone's unique device identifier and location.

Cameron isn't the only one raising questions.

The Consumerist blog picked up on this issue last week, citing a separate portion of the revised privacy policy that says Apple and its partners and licensees “may collect, use, and share precise location data, including the real-time geographic location of your Apple computer or device.” The policy adds, “This location data is collected anonymously in a form that does not personally identify you and is used by Apple and our partners and licensees to provide and improve location-based products and services.”

The Consumerist called the language “creepy” and said it didn't find Apple's assurances about the lack of personal identification particularly comforting. Cameron, in a follow-up post, agreed with that sentiment.

“Though Apple states that the data is anonymous and does not enable the personal identification of users, they are left with little choice but to agree if they want to continue buying from iTunes,” Hypebot wrote.

We've left messages with Apple and Google to comment on any of this, and we'll update this post depending on the response.

And for the record, there is an option to email the Apple privacy policy from the phone to a computer for reading, and it's also available here, so you don't necessarily need to duplicate Cameron's feat by reading it all on your phone.