Category: Linkage

If you are interested in social networks, don't miss the slick video about Max Schrems’ David-and-Goliath struggle with Facebook over the way it treats his personal information. Click on the red “CC” in the lower right-hand corner to see the English subtitles.

Max is a 24-year-old law student from Vienna with a flair for the interview and plenty of smarts about both technology and legal issues. In Europe there is a requirement that entities holding data about individuals make it available to them on request. That's how Max ended up with a personalized CD from Facebook that he printed out on a stack of paper more than a thousand pages thick (see image below). Analysing it, he came to the conclusion that Facebook is engineered to break many of the requirements of European data protection. He argues that the record Facebook provided catches the company in flagrante delicto.

The logical next step was a series of 22 lucid and well-reasoned complaints that he submitted to the Irish Data Protection Commissioner (Facebook states that European users have a relationship with the Irish Facebook subsidiary). This was followed by another perfectly executed move: setting up a web site called Europe versus Facebook that does everything right in terms of using web technology to mount a campaign against a commercial enterprise that depends on its public relations to succeed.

Europe versus Facebook, which seems eventually to have become an organization, then opened its own YouTube channel. As part of the documentation, they publicised the procedure Max used to get his personal CD. Somehow this recipe found its way to reddit, where it ended up on a couple of top-ten lists. So many people applied for their own CDs that Facebook had to send out an email indicating it was unable to comply with the requirement that it provide the information within a 40-day period.

As if that weren't enough, there's more. As Max studied what had been revealed to him, he noticed that important information was missing and asked for the rest of it. The response ratchets the battle up one more notch:

Dear Mr. Schrems:

We refer to our previous correspondence and in particular your subject access request dated July 11, 2011 (the Request).

To date, we have disclosed all personal data to which you are entitled pursuant to Section 4 of the Irish Data Protection Acts 1988 and 2003 (the Acts).

Please note that certain categories of personal data are exempted from subject access requests.
Pursuant to Section 4(9) of the Acts, personal data which is impossible to furnish or which can only be furnished after disproportionate effort is exempt from the scope of a subject access request. We have not furnished personal data which cannot be extracted from our platform in the absence of disproportionate effort.

Section 4(12) of the Acts carves out an exception to subject access requests where the disclosures in response would adversely affect trade secrets or intellectual property. We have not provided any information to you which is a trade secret or intellectual property of Facebook Ireland Limited or its licensors.

Please be aware that we have complied with your subject access request, and that we are not required to comply with any future similar requests, unless, in our opinion, a reasonable period of time has elapsed.

For example, as I wrote here (and Max describes here), Facebook's “Like” button collects information every time an Internet user views a page containing the button, and a Facebook cookie associates that page with all the other pages with “Like” buttons visited by the user in the last 3 months.

If you use Facebook, records of all these visits are linked, through cookies, to your Facebook profile – even if you never click the “Like” button. These long lists of pages visited, tied in Facebook's systems to your “Real Name identity”, were not included on Max's CD.
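The mechanism described above can be sketched as a toy model: every page embedding the widget triggers a request that carries the user's cookie and the embedding page's URL, and the service keeps a rolling three-month history per cookie. Everything here – the class, method names, and retention details – is an illustrative assumption, not Facebook's actual implementation.

```python
from datetime import datetime, timedelta

RETENTION = timedelta(days=90)  # the roughly three-month window described above

class BeaconLog:
    """Toy model of a third-party widget logging page visits per user."""

    def __init__(self):
        self.visits = {}  # cookie_id -> list of (timestamp, page_url)

    def record(self, cookie_id, page_url, now=None):
        # Each page embedding the widget fires a request carrying the
        # user's cookie and the page's URL (effectively the Referer).
        now = now or datetime.utcnow()
        log = self.visits.setdefault(cookie_id, [])
        log.append((now, page_url))
        # Keep only entries inside the retention window.
        self.visits[cookie_id] = [(t, u) for t, u in log if now - t <= RETENTION]

    def history(self, cookie_id):
        """All pages this cookie (i.e. this person) visited recently."""
        return [u for _, u in self.visits.get(cookie_id, [])]
```

The point of the sketch is that the user does nothing: merely rendering the page is enough for `record` to run, and the resulting history is keyed by an identifier that the service can tie to a profile.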

Is Facebook prepared to argue that it need not reveal this stored information about your personal data because doing so would adversely affect its “intellectual property”?

It will be absolutely amazing to watch how this issue plays out, and see just what someone with Max's media talent is able to do with the answers once they become public.

The result may well impact the whole industry for a long time to come.

Meanwhile, students of these matters would do well to look at Max's many complaints:

Excessive processing of Data.
Facebook is hosting enormous amounts of personal data and it is processing all data for its own purposes. It seems Facebook is a prime example of illegal “excessive processing”.

Like Button.
The Like Button is creating extended user data that can be used to track users all over the internet. There is no legitimate purpose for the creation of the data. Users have not consented to the use.

Obligations as Processor.
Facebook has certain obligations as a provider of a “cloud service” (e.g. not using third party data for its own purposes or only processing data when instructed to do so by the user).

Regular readers will have come across (or participated in shaping) some of my work over the last year as I looked at the different ways that device identity and personal identity collide in mobile location technology.

Unfortunately, the deeper problem was also immensely harder to grasp, since it required both technical knowledge of networked devices and a willingness to consider totally unpredicted ways of using (or misusing) information.

A few months ago I ran into Dr. Ann Cavoukian, the Privacy Commissioner of Ontario, who was working on the same issues. We decided to collaborate on a very in-depth look at both the technology and policy implications, aiming to produce a document that could be understood by those in the policy community and still serve as a call to the technical community to deal appropriately with the identity issues, seeking what Ann calls “win-win” solutions that favor both privacy and innovation.

Ann's team deserves all the credit for the thorough literature research and clear exposition. Ann expertly describes the policy issues and urges us as technologists to adopt Privacy By Design principles for our work. I appreciate having had the opportunity to collaborate with such an innovative group. Their efforts give me confidence that even difficult technical issues with social implications can be debated and decided by the people they affect.

In Europe there has been a lot of discussion about “the Right to be Forgotten” (see, for example, Le droit à l’oubli sur Internet). The notion is that after some time, information should simply fade away (counteracting digital eternity).

Whatever words we use, the right, if recognized, would be a far-reaching game-changer – and as I wrote here, represent a “cure as important as the introduction of antibiotics was in the world of medicine”.

Against this backdrop, the following report by Ciaran Giles of the Associated Press gives us much to think about. It appears Google is fighting head-on against the “Right to be Forgotten”. It seems willing to take on any individual or government who dares to challenge the immutable right of its database and algorithms to define you through something that has been written – forever, and whether it's true or not.

MADRID – Their ranks include a plastic surgeon, a prison guard and a high school principal. All are Spanish, but have little else in common except this: They want old Internet references about them that pop up in Google searches wiped away.

In a case that Google Inc. and privacy experts call a first of its kind, Spain's Data Protection Agency has ordered the search engine giant to remove links to material on about 90 people. The information was published years or even decades ago but is available to anyone via simple searches.

Scores of Spaniards lay claim to a “Right to be Forgotten” because public information once hard to get is now so easy to find on the Internet. Google has decided to challenge the orders and has appealed five cases so far this year to the National Court.

Some of the information is embarrassing, some seems downright banal. A few cases involve lawsuits that found life online through news reports, but whose dismissals were ignored by media and never appeared on the Internet. Others concern administrative decisions published in official regional gazettes.

In all cases, the plaintiffs petitioned the agency individually to get information about them taken down.

And while Spain is backing the individuals suing to get links taken down, experts say a victory for the plaintiffs could create a troubling precedent by restricting access to public information.

The issue isn't a new one for Google, whose search engine has become a widely used tool for learning about the backgrounds of potential mates, neighbors and co-workers. What it shows can affect romantic relationships, friendships and careers.

For that reason, Google regularly receives pleas asking that it remove links to embarrassing information from its search index or at least ensure the material is buried in the back pages of its results. The company, based in Mountain View, Calif., almost always refuses in order to preserve the integrity of its index.

A final decision on Spain's case could take months or even years because appeals can be made to higher courts. Still, the ongoing fight in Spain is likely to gain more prominence because the European Commission this year is expected to craft controversial legislation to give people more power to delete personal information they previously posted online.

“This is just the beginning, this right to be forgotten, but it's going to be much more important in the future,” said Artemi Rallo, director of the Spanish Data Protection Agency. “Google is just 15 years old, the Internet is barely a generation old and they are beginning to detect problems that affect privacy. More and more people are going to see things on the Internet that they don't want to be there.”

Many details about the Spaniards taking on Google via the government are shrouded in secrecy to protect the privacy of the plaintiffs. But the case of plastic surgeon Hugo Guidotti vividly illustrates the debate.

In Google searches, the first link that pops up is his clinic, complete with pictures of a bare-breasted woman and a muscular man as evidence of what plastic surgery can do for clients. But the second link takes readers to a 1991 story in Spain's leading El Pais newspaper about a woman who sued him for the equivalent of €5 million for a breast job that she said went bad.

By the way, if it really is true that nothing should ever interfere with the automated pronouncements of the search engine – not even truth – does that mean robots have the right to pronounce any libel they want, even though we don't?

Germans woke up yesterday to a headline story on Das Erste's TV Morning Show announcing a spiffy new Internet service – Google indoors.

A spokesman said Google was extending its Street View offering so Internet users could finally see inside peoples’ homes. Indeed, Google indoors personnel were already knocking on doors, patiently explaining that if people had not already gone through the opt-out process, they had “opted in”…

… so the technicians needed to get on with their work:

Google's deep concern about peoples’ privacy had led it to introduce features such as automated blurring of faces…

… and the business model of the scheme was devilishly simple: the contents of peoples’ houses served as product placements charged to advertisers, with 1/10 of a cent per automatically recognized brand name going to the residents themselves. As shown below, people can choose to obfuscate products worth more than 5,000 Euros if concerned about attracting thieves – an example of the advanced privacy options and levels the service makes possible.

Check out the video. Navigation features within houses are amazing! From the amount of effort and wit put into it by a major TV show, I'd wager that even if Google's troubles with Germany around Street View are over, its problems with Germans around privacy may not be.

Frankly, Das Erste (meaning “The First”) has to be congratulated on one of the best-crafted April Fools you will have witnessed. I don't have the command of the German language or politics (!) to understand all the subtleties, but friends say the piece is teeming with irony. And given Eric Schmidt's policy of getting as close to “creepy” as possible, who wouldn't find the video at least partly believable?

Britain's Home Office has posted a remarkable video, showing Immigration Minister Damian Green methodically pulverizing the disk drives that once held the centralized database that was to be connected to the British ID Cards introduced by Tony Blair.

“What we're doing today is CRUSHING the final remnants of the national identity card scheme – the disks and hard drives that held the information on the national identity register have been wiped and they're crushed and reduced to bits of metal so everyone can be absolutely sure that the identity scheme is absolutely dead and buried.

“This whole experiment of trying to collect huge amounts of private information on everyone in this country – and collecting on the central database – is no more, and it's a first step towards a wider agenda of freedom. We're publishing the protection of freedoms bill as well, and what this shows is that we want to rebalance the security and freedom of the citizen. We think that previously we have not had enough emphasis on peoples’ individual freedom and privacy, and we're determined to restore the proper balance on that.”

Readers of Identityblog will recall that the British scheme was exceptional in breaking so many of the Laws of Identity at once. It flouted the first law – User control and Consent – since citizen participation was mandatory. It broke the second – Minimal Disclosure for a Constrained Use – since it followed the premise that as much information as possible should be assembled in a central location for whatever uses might arise… The third law of Justifiable Parties was not addressed given the centralized architecture of the system, in which all departments would have made queries and posted updates to the same database and access could have been extended at the flick of a wrist. And the fourth law of “Directed Identity” was a clear non-goal, since the whole idea was to use a single identifier to unify all possible information.

Over time opposition to the scheme began to grow and became widespread, even though the Blair and Brown governments claimed their polls showed majority support. Many well-known technologists and privacy advocates attempted to convince them to consider privacy enhancing technologies and architectures that would be less vulnerable to security and privacy meltdown – but without success. Beyond the scheme's many technical deficiencies, the social fracturing it created eventually assured its irrelevance as a foundational element for the digital future.

Many say the scheme was an important issue in the last British election. It certainly appears the change in government has left the ID card scheme in the dust, with politicians of all stripes eager to distance themselves from it. Damian Green, who worked in television and understands it, does a masterful job of showing what his views are. His video, posted by the Home Office, seems iconic.

All in all, the fate of the British ID Card and centralized database scheme is exactly what was predicted by the Laws of Identity.

In the past, a user's Facebook advertising would eventually be impacted by what's on her wall and in her stream, but this was a gradual shift based on out-of-band analysis and categorization.

Now, at least for participants in this test, it will become crystal clear that Facebook is looking at and listening to your activities; making assumptions about who you are and what you want; and using those assumptions to change how you are treated.

This month — and for the first time — Facebook started to mine real-time conversations to target ads. The delivery model is being tested by only 1% of Facebook users worldwide. On Facebook, that's a focus group 6 million people strong.

The closest Facebook has come to real-time advertising has been with its most recent ad offering, known as sponsored stories, which repost users’ brand interactions as an ad on the side bar. But for the 6 million users involved in this test, any utterance will become fodder for real-time targeted ads.

For example: Users who update their status with “Mmm, I could go for some pizza tonight,” could get an ad or a coupon from Domino's, Papa John's or Pizza Hut.

To be clear, Facebook has been delivering targeted ads based on wall posts and status updates for some time, but never on a real-time basis. In general, users’ posts and updates are collected in an aggregate format, adding them to target audiences based on the data collected over time. Keywords are a small part of that equation, but Facebook says sometimes keywords aren't even used. The company said delivering ads based on user conversations is a complex algorithm continuously perfected and changed. The real aim of this test is to figure out if those kinds of ads can be served at split-second speed, as soon as the user makes a statement that is a match for an ad in the system.
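The keyword piece of such a system can be illustrated with a deliberately naive sketch. The inventory, the advertiser names and the matching rule below are assumptions for illustration only; as noted above, the real algorithm is a complex, continuously tuned system in which keywords are only a small part.

```python
# Hypothetical ad inventory: trigger keyword -> advertisers who bid on it.
AD_INVENTORY = {
    "pizza": ["Domino's", "Papa John's", "Pizza Hut"],
    "marathon": ["RunningShoeCo"],
}

def match_ads(status_update):
    """Return ads whose trigger keyword appears in a status update.

    A real system would do far more (intent detection, auctions,
    frequency capping); this shows only the split-second keyword match.
    """
    # Normalize: lowercase and strip trailing punctuation from each word.
    words = {w.strip(".,!?").lower() for w in status_update.split()}
    ads = []
    for keyword, advertisers in AD_INVENTORY.items():
        if keyword in words:
            ads.extend(advertisers)
    return ads
```

So the example in the article – “Mmm, I could go for some pizza tonight” – would instantly put the pizza chains' ads in front of the user.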

With real-time delivery, the mere mention of having a baby, running a marathon, buying a power drill or wearing high-heeled shoes is transformed into an opportunity to serve immediate ads, expanding the target audience exponentially beyond usual targeting methods such as stated preferences through “likes” or user profiles. Facebook didn't have to create new ads for this test and no particular advertiser has been tapped to participate — the inventory remains as is.

A user may not have liked any soccer pages or indicated that soccer is an interest, but by sharing his trip to the pub for the World Cup, that user is now part of the Adidas target audience. The moment between a potential customer expressing a desire and deciding on how to fulfill that desire is an advertiser sweet spot, and the real-time ad model puts advertisers in front of a user at that very delicate, decisive moment.

“The long-held promise of local is to deliver timely, relevant and measurable ads which drive actions such as commerce, so if Facebook is moving in this direction, it's brilliant,” said Reggie Bradford, CEO of Facebook software and marketing company Vitrue. “This is a massive market shift everyone is pivoting toward, led by services such as Groupon. Facebook has the power of the graph of me and my friends placing them in the position to dominate this medium.” [More here]

This test is important and will reveal a lot. If the system is accurate and truly real-time, the way it works will become obvious to people. It will be a simple cause-and-effect experience that leads to a clarity people have not had before around profiling. This will be good.

However, once the analysis algorithms make mistakes in pigeon-holing users – which is inevitable – it is likely that it will alienate at least some part of the test population, raising their consciousness of the serious potential problems with profiling. What will that do to their perception of Facebook?

A Facebook that looks more and more like HAL will not be accepted as “your universal internet identity” – as some of the more pathologically shortsighted dabblers in identity claim is already becoming the case. Like other companies, Facebook has many simultaneous goals, and some of them conflict in fundamental ways. More than anything else, in the long term, it is these conflicts that will limit Facebook's role as an identity provider.

Netflix, the web's top video-rental service, has been accused of violating US privacy laws in five separate lawsuits filed during the past two months, records show.

Each of the five plaintiffs alleges that Netflix hangs onto customer information, such as credit card numbers and rental histories, long after subscribers cancel their membership. They claim this violates the Video Privacy Protection Act (VPPA).

Netflix declined to comment.

In a four-page suit filed on Friday, Michael Sevy, a former Netflix subscriber who lives in Michigan, accuses Netflix of violating the VPPA by “collecting, storing and maintaining for an indefinite period of time, the video rental histories of every customer that has ever rented a DVD from Netflix”. Netflix also retains information that “identifies the customer as having requested or obtained specific video materials or services”, according to Sevy's suit.

In a complaint filed 22 February, plaintiff Jason Bernal, a resident of Texas, claimed “Netflix has assumed the role of Big Brother and trampled the privacy rights of its former customers”.

Jeff Milans from Virginia filed the first of the five suits on 26 January. One of his attorneys, Bill Gray, told ZDNet Australia's sister site CNET yesterday that the way he knows Netflix is preserving information belonging to customers who have left the company is from Netflix emails. According to Gray, in messages to former subscribers, Netflix writes something similar to “We'd love to have you come back. We've retained all of your video choices”.

Gray said that Netflix uses the customer data to market the rental service, but this is done while risking its customers’ privacy. Someone's choice in rental movies could prove embarrassing, according to Gray, and should hackers ever get access to Netflix's database, that information could be made publicly available.

“We want Netflix to operate in compliance of the law and delete all of this information,” Gray said.

All the plaintiffs filed their complaints in US District Court for the Northern District of California. Each has asked the court for class action status. [More here].

In Europe there has been a lot of discussion about “the Right to be Forgotten” (see, for example, Le droit à l’oubli sur Internet). The notion is that after some time, information should simply fade away (counteracting digital eternity). The Right to be Forgotten has to be one of the most important digital rights – not only for social networks, but for the Internet as a whole.

The authors of the Social Network Users’ Bill of Rights have called some variant of this the “Right to Withdraw”. Whatever words we use, the Right is a far-reaching game-changer – a cure as important as the introduction of antibiotics was in the world of medicine.

I say “cure” because it helps heal problems that shouldn't have been created in the first place.

For example, Netflix does not need to – and should not – associate our rental patterns with our natural identities (e.g. with us as recognizable citizens). Nor should any other company that operates in the digital world.

Instead, following the precepts of minimal disclosure, the patterns should simply be associated with entities who have accounts and the right to rent movies. The details of billing should not be linked to the details of ordering (this is possible using the new privacy-enhancing technologies). From our point of view as consumers of these services, there is no reason the linking should be visible to anyone but ourselves.
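One simple way to realize such unlinkable, per-service identifiers is to derive them pairwise from a secret the user holds, as in the sketch below. This is only an illustration of the minimal-disclosure idea – the real privacy-enhancing technologies alluded to above (minimal-disclosure credentials and the like) are considerably more sophisticated, and the secret and service names here are assumptions.

```python
import hashlib
import hmac

def pairwise_id(master_secret: bytes, service: str) -> str:
    """Derive a service-specific identifier from a user-held secret.

    Because HMAC-SHA256 is one-way, two services see two different
    identifiers that cannot be joined on a common key, yet the user
    can deterministically regenerate either one.
    """
    return hmac.new(master_secret, service.encode(), hashlib.sha256).hexdigest()

# Hypothetical example: the rental side and the billing side of a
# service each get their own identifier for the same person.
secret = b"user-held master secret"
rental_id = pairwise_id(secret, "rental-service")
billing_id = pairwise_id(secret, "billing-service")
assert rental_id != billing_id  # ordering and billing records share no key
```

The design point is exactly the one in the paragraph above: the rental pattern attaches to an account entitled to rent movies, the billing detail to a different identifier, and only the user holds the link between them.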

All this requires a wee bit of a paradigm shift, you will say. And you're right. Until that happens, we don't have a lot of alternatives other than the Right to be Forgotten. Especially, as described in the law suits above, when we have “chosen to withdraw.”

Back in March 2006, when Information Cards were unknown and untested, it became obvious that the best way for me to understand the issues would be to put Information Cards onto Identityblog.

I wrote the code in PHP, and a few people started trying out Information Cards. Since I was being killed by spam at the time, I decided to try an experiment: make it mandatory to use an Information Card to leave a comment. It was worth a try. More people might check out InfoCards. And presto, my spam problems would go away.

At first I thought my draconian “InfoCard-Only” approach would get a lot of peoples’ hackles up and only last a few weeks. But over time more and more people seemed to be subscribing – probably because Identityblog was one of the few sites that actually used InfoCards in production. And I never had spam again.
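The original gate was written in PHP; purely as an illustration of the idea – no verified identity token, no comment – here is a minimal Python sketch. The function and variable names are hypothetical stand-ins, not the actual Identityblog code, and real token validation involves verifying a signed security token rather than a lookup.

```python
# Users who subscribed and passed email verification (hypothetical data).
registered_users = {"alice@example.org"}

def verify_infocard_token(token):
    """Stand-in for validating a signed Information Card token.

    Here we simply treat the token as the claimed email address and
    check it against the registered-user list.
    """
    return token if token in registered_users else None

def post_comment(token, text, comments):
    """Accept a comment only from an authenticated, registered user."""
    user = verify_infocard_token(token)
    if user is None:
        return False  # anonymous/spam submissions never reach the blog
    comments.append((user, text))
    return True
```

Since spam bots could not present a valid token, the comment queue stayed clean without any content filtering at all.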

How many people joined using InfoCards? Today I looked at my user list (see the screenshot below with PII fuzzed out). The answer: 2958 people successfully subscribed and passed email verification. There were then over 23,000 successful audited logins. Not very many for a commercial site, but not bad for a technical blog.

Of course, as we all know, the powers at the large commercial sites have preferred the “NASCAR” approach of presenting a bunch of different buttons that redirect the user to, uh, something-or-other-that-can-be-phished, ahem, in spite of the privacy and security problems. This part of the conversation will go on for some time, since these problems will become progressively more widespread as NASCAR gains popularity and the criminally inclined tune in to its potential as a gold mine… But that discussion is for another day.

Meanwhile, I want to get my hands dirty and understand all the implications of the NASCAR-style approach. So recently I subscribed to a nifty Janrain service that offers a whole array of login methods. I then integrated their stuff into Identityblog. I promise, Scout's Honor, not to do man-in-the-middle attacks or scrape your credentials, even though I probably could if I were so inclined.

From now on, when you need to authenticate at Identityblog, you will see a NASCAR-style login symbol. See, for example, the LOG IN option at the top of this page.

If you are not logged in and you want to leave a comment you will see :

Click on the string of icons and you get something like this:

Because many people continue to use my site to try out Information Cards, I've supplemented the janrain widget experience with the Pamelaware Information Card Option (it was pretty easy to make them coexist, and it leaves me with at least one unphishable alternative). This will also benefit people who don't like the idea of linking their identifiers all over the web. I expect it will help researchers and students too.

One warning: Janrain's otherwise polished implementation doesn't work properly with Internet Explorer – it leaves a spurious “Cross Domain Receiver Page” lurking on your desktop. [Update – this was apparently my problem: see here] Once I figure out how to contact them (not evident), I'll ask Janrain if and when they're going to fix this. Anyway, the system works – it's just a bit messy because you have to manually close the stranded empty page. The problem doesn't appear in Firefox.

It has already been a riot looking into the new technology and working through the implications. I'll talk about this as we go forward.

The continuing deterioration of privacy and multi-party security due to short-sighted and unsustainable practices within our industry has begun to have the inevitable result, as reported in this article in the New York Times.

A Commerce Department task force called for the creation of a ‘Privacy Bill of Rights’ for online consumers and the establishment of an office within the department that would work to strengthen privacy policies in the United States and coordinate initiatives with other countries.

The department’s Internet Policy Task Force, in a report released on Thursday, said the “Privacy Bill of Rights” would increase transparency on how user information was collected online, place limits on the use of consumer data by companies and promote the use of audits and other forms of enforcement to increase accountability.

The new protections would expand on the framework of Fair Information Practice Principles that address data security, notice and choice — or the privacy policies many users agree to on Web sites — and rights to obtaining information on the Internet.

“The simple concept of notice and choice is not adequate as a basis for privacy protections,” said Daniel J. Weitzner, the associate administrator for the office of policy analysis and development at the Commerce Department’s National Telecommunications and Information Administration [emphasis mine – Kim].

The article makes the connection to the Federal Trade Commission's “Do Not Track” proposal:

The F.T.C., in its report on online privacy this month, also called for improvements to the practice principles, but focused on installing a “do not track” mechanism that would allow computer users to opt out of having their information collected surreptitiously by third-party companies.

That recommendation caused concern in the online advertising industry, which has said that such a mechanism would hamper the industry’s growth and could potentially limit users’ access to free content online.

[The prospect of an online advertising industry deprived of its ability to surreptitiously collect information on us causes tears to well in my eyes. I can't continue! I need a Kleenex!]

The proposed Privacy Policy Office would work with the administration, the F.T.C. and other agencies on issues surrounding international and commercial data privacy issues but would not have enforcement authority.

“America needs a robust privacy framework that preserves consumer trust in the evolving Internet economy while ensuring the Web remains a platform for innovation, jobs and economic growth,” the commerce secretary, Gary F. Locke, said in a statement. “Self-regulation without stronger enforcement is not enough. Consumers must trust the Internet in order for businesses to succeed online.”

All of this is, in my view, just an initial reaction to behaviors that are seriously out of control. As information leakage goes, the “surreptitious collection of information” to which the NYT refers is done at a scale that dwarfs WikiLeaks, even if the subjects of the information are mere citizens rather than lofty officials of government.

I will personally be delighted when it is enshrined in law that a company can no longer get you to click on a privacy policy like this one and claim it is consent to sell your location to anyone it pleases.

James explains how the omnipresent Facebook widget works as a tracking mechanism: if you are a Facebook subscriber, then whenever you open a page showing the widget, your visit is reported to Facebook.

You don't have to do anything whatsoever – or click the widget – to trigger this report. It is automatic. Nor are we talking here about anonymized information or simple IP address collection. The report contains your Facebook identity information as well as the URL of the page you are looking at.

If you are familiar with the way advertising beacons operate, your first reaction might be to roll your eyes and yawn. After all, tracking beacons are all over the place and we've known about them for years.

But until recently, government web sites – or private web sites treating sensitive information of any kind – wouldn't be caught dead using tracking beacons.

What has changed? Governments want to piggyback on the reach of social networks and show they embrace technological evolution. But do they have procedures in place to ensure that the mechanisms they adopt are actually safe? Probably not, as the growing use of the Facebook “Like” button on these sites demonstrates. I doubt those who inserted the widgets have any idea how the underlying technology works – or the time or background to evaluate it in depth. The result is a really serious privacy violation.

Governments need to be cautious about embracing tracking technology that betrays the trust citizens put in them. James gives us a good explanation of the problem with Facebook widgets. But other equally disturbing threats exist. For example, should governments be developing iPhone applications when, to use them, citizens must agree that Apple has the right to reveal their phone's identifier and location to anyone for any purpose?

In my view, data protection authorities are going to have to look hard at emerging technologies and develop guidelines on whether government departments can embrace technologies that endanger the privacy of citizens.

Let's turn now to the details of James’ explanation. He writes:

I am all for Gov2.0. I think that it can genuinely make a difference and help bring public sector organisations and people closer together and give them new ways of working. However, with it comes responsibility: the public sector needs to understand what it is signing its users up for.

Many services that government and public sector organisations offer are sensitive and personal. When browsing through public sector web portals I do not expect that other organisations are going to be able to track my visit – especially organisations such as Facebook which I use to interact with friends, family and colleagues.

This issue has now been raised by Tom Watson MP, and the response from the Department of Health is:

“Facebook capturing data from sites like NHS Choices is a result of Facebook’s own system. When users sign up to Facebook they agree Facebook can gather information on their web use. NHS Choices privacy policy, which is on the homepage of the site, makes this clear.”

“We advise that people log out of Facebook properly, not just close the window, to ensure no inadvertent data transfer.”

I think this response is wrong on a number of different levels. Firstly, at a personal level: when I browse the UK National Health Service web portal to read about health conditions, I do not expect them to allow other companies to track that visit. I don't really care what anybody's privacy policy states; I don't expect the NHS to allow Facebook to track my browsing habits on the NHS web site.

Secondly, I would suggest that the statement “Facebook capturing data from sites like NHS Choices is a result of Facebook’s own system” is wrong. Facebook being able to capture data from sites like NHS Choices is a result of NHS Choices adding Facebook's functionality to their site.

Finally, I don't believe that the statement “We advise that people log out of Facebook properly, not just close the window, to ensure no inadvertent data transfer” is technically correct.

(Sorry to non-technical users, but it is about to get a bit techy…)

I created a clean Virtual Machine and installed HTTPWatch so I could see the traffic in my browser when I load an NHS Choices page. This machine has never been to Facebook, and definitely never logged into it. When I visit the NHS Choices page on bowel cancer the following call is made to Facebook:

So Facebook knows someone has gone to the above page, but does not know who.

Now go to Facebook and log in without ticking the ‘Keep logged in’ checkbox, and the following cookie is deposited on my machine with the following 2 fields in it (I've added xxxxxxxx to mask my unique id):

datr: s07-TP6GxxxxxxxxkOOWvveg

lu: RgfhxpMiJ4xxxxxxxxWqW9lQ

If I now close my browser and go back to Facebook, it does not log me in – but it knows who I am as my email address is pre-filled.
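This is why the Department of Health's "log out properly" advice misses the point. A minimal sketch of the mechanism, using Python's standard-library cookie jar (the cookie name and value are illustrative): any cookie given an expiry date, rather than marked session-only, is written to disk when the browser closes and reloaded on the next start.

```python
# Sketch: why closing the browser doesn't help. A persistent cookie
# (one with an expiry date) survives the browser session on disk.
from http.cookiejar import MozillaCookieJar, Cookie
import os, tempfile

path = os.path.join(tempfile.mkdtemp(), "cookies.txt")

jar = MozillaCookieJar(path)
jar.set_cookie(Cookie(
    0, "lu", "RgfhxpMi", None, False, ".facebook.com", True, True,
    "/", True, False, 2000000000, False, None, None, {}))
jar.save()                      # "close the browser": cookies hit disk

fresh = MozillaCookieJar(path)  # "reopen the browser": cookies come back
fresh.load()
print([c.name for c in fresh])  # the identifier survived the restart
```

Since the cookie outlives the session, every later visit to a page carrying the widget sends the identifier again, logged in or not; only deleting the cookies themselves breaks the link.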

So even if I am not logged into Facebook, and even if I do not click on the ‘Like’ button, the NHS Choices site is allowing Facebook to track me.

Sorry, I don't think that is acceptable.

[Update: I originally misread James’ posting as saying the “keep me logged in” checkbox on the Facebook login page was a factor in enabling tracking – in other words that Facebook only used permanent cookies after you ticked that box. Unfortunately this is not the case. I've updated my comments in light of this information.

If you have authenticated to Facebook even once, the tracking widget will continue to collect information about you as you surf the web unless you manually delete your Facebook cookies from the browser. This design is about as invasive of your privacy as you can possibly get…]