Wednesday, December 30, 2009

I wanted to follow up on yesterday's post about the "terror hysteria" echoing across the media and out of the mouths of politicians in response to the attempted terror attack on an airline flight. Thankfully, we do have a few voices of reason in the mainstream, corporate media - one of course being Rachel Maddow, as evidenced in her extremely enlightening interview with security expert Bruce Schneier.

As I pointed out yesterday, before we all run to hide in our closets, gladly give up our civil liberties and freedoms, support wars on nations that did nothing to us, and sign off on wasting HUGE amounts of money on ineffectual security systems, consider this: your chances of getting hit by lightning in one year are 1 in 500,000. Your chances of being killed by a terrorist on a plane over 10 years are 1 in 10 million.

So first, let's scrap the whole meme that we should live in fear and that we must give up our constitutional rights in order to be safe from a threat that is a fraction of that posed by lightning. Once we have scrapped that fear, now we can discuss, rationally, specific security proposals being pushed by a variety of fearmongering politicians and security industry spokespeople.

Namely, for today's purposes, the use of "Whole Body Imaging" technologies in airports. I also wrote about this in yesterday's post, as well as in more detail here.

Briefly, the technology photographs American air travelers as if stripped naked. As I wrote months ago, this gives me a little pause, considering that USA Today reported a TSA official as saying, "You can actually see the sweat on someone's back."

As is so often the case with these kinds of technologies, the concerns go beyond what the technology itself does with the data it collects to what happens to that data once collected. Before I get to today's article on the new clamor for this technology, I want to rehash a few key points I made in my first post on this subject months ago.

Privacy advocates simply want more oversight, full disclosure for air travelers, and legal language to protect passengers and keep TSA from changing policy down the road. For example, what's to stop TSA from using clearer images or different technology later? The computers can't store images now, but what if that changes?

The option of walking through a whole-body scanner or taking a pat-down shouldn't be the final answer, as is the case in some airports now. As the ACLU pointed out: "A choice between being groped and being stripped, I don't think we should pretend those are the only choices. People shouldn't be humiliated by their government" in the name of security, nor should they trust that the images will always be kept private. "Screeners at LAX [Los Angeles International Airport] could make a fortune off naked virtual images of celebrities."

There's also another important case to make against the spread of this technology to every airport in every city: is it the most effective use of our money? It's an incredibly expensive tool, and most security experts - like Bruce Schneier - believe money is better spent on intelligence-gathering and investigations.

In a battle that pits privacy against security, the failed attempt to blow up Northwest Flight 253 last week has revived debate in Congress over the use of whole-body imaging technology to screen airline passengers. The machines, which cost about $170,000 each, are being used for screening in 19 airports around the country, including San Francisco and Los Angeles. But they have not been approved for widespread use....While privacy groups strongly oppose the idea, it picked up a key endorsement over the weekend from Connecticut independent Sen. Joe Lieberman, the head of the Senate's Homeland Security Committee. "Those privacy concerns, which are frankly mild, have to fall in the face of the ability of these machines to detect material like this explosive on this individual," Lieberman said in an interview on "Fox News Sunday."

Last June, the House voted 310-118 to approve a measure introduced by Republican Rep. Jason Chaffetz of Utah to prohibit the widespread use of whole-body imaging technology as a primary source of airport screening. The measure is pending in the Senate and likely to be debated in the new year.

McClintock, who co-sponsored the legislation, said the machines are so invasive that they can detect the difference between a nickel and a dime in a person's pocket. McClintock called it "a virtual strip search" and said security officials can use less invasive methods to detect explosives, such as bomb-sniffing dogs. He said the Christmas Day incident raises questions of why a person on a terrorist watch list had been allowed to enter the country and why U.S. authorities had not revoked his visa. He said whole-body imaging should be used only in conditions where there is probable cause to assume someone might be carrying explosives....

In a background paper, the ACLU said that government officials are "essentially taking a naked picture of air passengers" and that air travelers should not be required to display highly personal details of their bodies as a prerequisite to boarding a plane. "Those images reveal not only our private body parts, but also intimate medical details like colostomy bags," the ACLU said. "That degree of examination amounts to a significant – and for some people humiliating – assault on the essential dignity of passengers that citizens in a free nation should not have to tolerate." Click here to read more.

My summation of this debate remains the same as back in June: once again, this argument seems wedged right between our society's insatiable desire to embrace just about any new technological innovation and the ongoing fight to protect the individual's right to privacy.

This issue also seems to fall into another common narrative for those of us who work in the privacy protection arena: assuming we can't stop the use of certain new technologies, doesn't the public at least have the right to the strictest of oversight, a vigorous, open, and transparent public debate, and ironclad regulations in place?

Tuesday, December 29, 2009

Here we go again. With last week's attempted terror attack on an airplane heading from Amsterdam to Detroit, we can expect the usual government/media playbook: overhype the threat to public safety, then institute cumbersome, intrusive, and largely ineffectual new security measures that invade individual privacy, waste money and resources that could be better spent on more effective counter-terrorism tools, and do little to nothing to make us safer.

What always seems to escape the public discourse is just how astronomical the odds are against any American being killed by a terrorist attack on an airplane (or in any other way, for that matter). For instance, the odds that I'm going to be hit by lightning this year alone are 1 in 500,000. The odds that I will be hijacked by a terrorist - if I fly 20 times a year for the next 10 years - are 1 in 10 million. I like those odds...so please hold off on making flying a living nightmare!
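For anyone who wants to check the arithmetic, here's a quick sketch comparing the two figures cited above over the same 10-year window (the figures themselves are the ones quoted in this post, not my own estimates):

```python
# Compare the two risks cited above over the same 10-year window.
# Figures are the post's: lightning strike ~1 in 500,000 per year;
# death/hijacking by airborne terrorism ~1 in 10,000,000 over 10 years
# of flying 20 times a year.

p_lightning_year = 1 / 500_000
p_terror_decade = 1 / 10_000_000

# Chance of being struck by lightning at least once over 10 years
# (complement of never being struck in any of the 10 years):
p_lightning_decade = 1 - (1 - p_lightning_year) ** 10

ratio = p_lightning_decade / p_terror_decade
print(f"Lightning over 10 years: {p_lightning_decade:.2e}")
print(f"Terrorism over 10 years: {p_terror_decade:.2e}")
print(f"Lightning is roughly {ratio:.0f}x more likely")
```

By these numbers, lightning is on the order of 200 times more likely than the terrorist scenario over the same decade - which is the whole point about proportionality.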

I’ll leave it to security expert Bruce Schneier to go through a more comprehensive list of recent government efforts to “crack down” on terrorism in our nation’s airports, but let me mention a few entries from my “moronic hall of fame”.

There’s the wonderfully creepy "Whole Body Imaging" technology that photographs American air travelers as if they’re being stripped naked – a tool opposed by just about all of the nation’s privacy rights organizations.

Up until recently, the machines were mostly confined to being a voluntary alternative to being patted down by an agent. However, several airports nationwide have already begun to mandate that all passengers pass through the high-tech machines. The good news is that opposition to Whole Body Imaging seems to be growing, as is often the case once it dawns on people that it’s a serious pain in the ass, makes them feel violated, and isn’t going to make them safer.

Or take, for instance, the emergence of "Project Hostile Intent" - the United States' new and not-so-privacy-friendly airport/border security system (still being developed). This system, a technology worthy of a creepy sci-fi movie, has raised the eyebrows of privacy advocates because it would collect a variety of personal information about travelers.

Computer World wrote: "The system interprets your gestures and facial expressions, analyzes your voice and virtually probes your body to determine your temperature, heart rate, respiration rate and other physiological characteristics -- all in an effort to determine whether you are trying to deceive. Fail the test, and you'll be pulled aside for a more aggressive interrogation and searches."

And Bruce Schneier said at the time that even if Project Hostile Intent ultimately succeeds, it will not be a panacea for preventing terrorism. The risk can be reduced, but not eliminated. "If we had perfect security in airports, terrorists would go bomb shopping malls," he says. "You'll never be secure by defending targets."

Before I get to security expert Bruce Schneier's extremely enlightening article on CNN's site, let's get to his interview yesterday with Rachel Maddow:

Terrorism is rare, far rarer than many people think. It's rare because very few people want to commit acts of terrorism, and executing a terrorist plot is much harder than television makes it appear. The best defenses against terrorism are largely invisible: investigation, intelligence, and emergency response. But even these are less effective at keeping us safe than our social and political policies, both at home and abroad. However, our elected leaders don't think this way: They are far more likely to implement security theater against movie-plot threats.

…

"Security theater" refers to security measures that make people feel more secure without doing anything to actually improve their security. An example: the photo ID checks that have sprung up in office buildings. No one has ever explained why verifying that someone has a photo ID provides any actual security, but it looks like security to have a uniformed guard-for-hire looking at ID cards. Airport-security examples include the National Guard troops stationed at U.S. airports in the months after 9/11 -- their guns had no bullets. The U.S. color-coded system of threat levels, the pervasive harassment of photographers, and the metal detectors that are increasingly common in hotels and office buildings since the Mumbai terrorist attacks, are additional examples.…

Often, this "something" is directly related to the details of a recent event. We confiscate liquids, screen shoes, and ban box cutters on airplanes. We tell people they can't use an airplane restroom in the last 90 minutes of an international flight. But it's not the target and tactics of the last attack that are important, but the next attack. These measures are only effective if we happen to guess what the next terrorists are planning.

If we spend billions defending our rail systems, and the terrorists bomb a shopping mall instead, we've wasted our money. If we concentrate airport security on screening shoes and confiscating liquids, and the terrorists hide explosives in their brassieres and use solids, we've wasted our money. Terrorists don't care what they blow up and it shouldn't be our goal merely to force the terrorists to make a minor change in their tactics or targets. Unfortunately for politicians, the security measures that work are largely invisible. Such measures include enhancing the intelligence-gathering abilities of the secret services, hiring cultural experts and Arabic translators, building bridges with Islamic communities both nationally and internationally, funding police capabilities -- both investigative arms to prevent terrorist attacks, and emergency communications systems for after attacks occur -- and arresting terrorist plotters without media fanfare.

…

We'd do much better by leveraging the inherent strengths of our modern democracies and the natural advantages we have over the terrorists: our adaptability and survivability, our international network of laws and law enforcement, and the freedoms and liberties that make our society so enviable. Click here to read more.

Advancements in technology may serve certain security purposes, but more often than not they represent the continuing expansion of Big Brother's ability to monitor and record nearly everything we do - all under the guise of keeping us safe. But who is keeping us safe from those doing the watching and recording? And is the loss of freedom, privacy, and quality of life a worthwhile tradeoff for unproven protections from a terrorist threat that has a 1 in 10 million chance of killing someone who's been flying 20 times a year for 10 years?

Terrorists should be treated like common criminals. In fact, if you look at history, attack after attack is thwarted by investigators, good intelligence, working relationships with other nations, and a commitment to the rule of law. Oh, and perhaps we could stop giving misguided, vengeful, and disenfranchised individuals incentive to blow themselves and hundreds of innocent people up by not continuously bombing and occupying Muslim nations, arming their enemies, and supporting their brutal authoritarian leaders?

Police can in some cases track cell phone location by merely telling a court that the information is relevant to an investigation, a legal expert tells TPM - a fact that may partly explain how law enforcement racked up 8 million requests for GPS data from a single wireless carrier in a year. An increasingly popular and easy-to-access surveillance tool for police, GPS data is not currently protected by the Fourth Amendment, and the standards for gaining access to the information are murky and highly variable. That's partly because one of the statutes that bears on the issue was passed in the mid-1980s, before many of the technologies involved were invented. And Congress hasn't done much to update the law since.

Now to the Colbert Report: "If Congress doesn't reauthorize the Patriot Act, America's corporations are ready to step in."

As I have also recently written here, there are a number of reasons that make this Colbert clip especially relevant.

In just the past few weeks, in addition to discovering that law enforcement made 8 million requests to Sprint for customer GPS data in a one-year period, Facebook reportedly received up to 100 demands each week from the government seeking information about its users, AOL reportedly received 1,000 demands a month, and in 2006, a U.S. Attorney demanded book purchase records of 24,000 Amazon.com customers.

And let's also remember, companies like Google and Yahoo don't make public how often information about their users is demanded or disclosed. The numbers are likely astronomical.

Over the past few years we've also come to learn that millions of Americans have been wiretapped by the government without it having to produce a warrant.

And just last month, the Senate voted to renew some of the most egregious components of the Constitution-eviscerating Patriot Act, including:

1. Allowing broad warrants to be issued by a secretive court for any type of record, from financial to medical, without the government having to declare that the information sought is connected to a terrorism or espionage investigation.

2. Renewing the so-called “roving wiretap” provision, allowing the FBI to obtain wiretaps from the secret court, known as the FISA court, without identifying the target or what method of communication is to be tapped.

3. Renewing the so-called “lone wolf” measure that allows FISA court warrants for the electronic monitoring of a person for whatever reason — even without showing that the suspect is an agent of a foreign power or a terrorist.

And of course, the government can still essentially break into your house as long as it doesn't tell you it did...may the 4th Amendment rest in peace.

All things considered, I want to give special thanks to Stephen Colbert for bringing both attention to this topic, and some much needed humor.

Friday, December 18, 2009

As I have written here in the past, with the explosion in popularity of social networking sites like Facebook, the ability to protect one's personal privacy has become increasingly challenging.

It goes without saying that tools like Facebook reveal a considerable amount of information about a user's lifestyle, interests, and goals. Depending on the user's settings, co-workers, employers, and certain family members could have access to information about the user that may be better left unknown.

Recent Facebook flaps highlight growing concerns about the increasingly sophisticated technologies used to track online activities in an effort to more precisely target advertising. What has also become apparent is that these social networking sites have not exactly been forthcoming about how much user information they harvest and share, and with whom.

However, in recent months users have become more and more conscious of privacy concerns, as Facebook has been criticized for not allowing people to permanently delete their accounts and personal information from the site, as well as for its use of "Beacon" (no longer in use) - a technology that tracked users' online purchases and informed their friends.

The controversy raised by Facebook's use of the Beacon technology - and the subsequent victory of privacy advocates - has helped ignite a larger debate regarding the largely hidden and growing problem of online consumer-tracking and information-sharing.

This debate has now come to a head, again, with Facebook's response last week to pressure about its privacy practices, including an ACLU petition signed by over 43,000 concerned Internet users. Facebook released a new privacy policy, modified its profile and publication privacy controls, and rolled out a "Transition Tool" to guide all 350 million Facebook users through the process of choosing new privacy settings.

As the ACLU's Nicole Ozer detailed last week on the California Progress Report, there are a whole lot of problems with the new policy, just as there were with the old. Nicole wrote:

We have three primary privacy concerns with the new system:

1. There's more "publicly available information" that you can't control: Before the recent changes, you had the option of exposing only a "limited" profile, consisting of as little as your name and networks, to other Facebook users—and nothing at all to Internet users at large. Now your profile picture, current city, friends list, gender, and fan pages are "publicly available information," which means you have no way to prevent any other Facebook user from viewing this information on your profile, and you can only prevent Internet users from viewing this information by disabling search entirely (which you can't do through the Transition Tool).

2. Facebook is "recommending" that you loosen your privacy settings: For most users, including those who have never changed their Facebook privacy settings, the recommended settings make information less protected and more widely available than the previous default settings. For example, as of last Friday, sensitive information like relationship status and gender preference was available only to your friends by default; now Facebook encourages users to make this information available to "everyone!"

3. The "Transition Tool" does not allow most users to strengthen privacy settings: Facebook's Transition Tool gives you only two choices: keep your current settings or switch to Facebook's recommendations. And since Facebook's recommendations are less private than the previous default settings, most users have to click through to another page of privacy controls in order to strengthen their settings.

But the negative response to Facebook's new privacy policy has just been turned up a notch...a real big notch. The Electronic Privacy Information Center (EPIC) called upon the Federal Trade Commission to investigate Facebook's recent changes to its users' privacy options.

The EPIC complaint is supported by the Center for Digital Democracy, the Privacy Rights Clearinghouse, and seven other advocacy organizations, and takes issue with Facebook's now "public" treatment of such data as users' names, genders, cities, and profile photos.

In other words, by default, this information is now disclosed to search engines as well as to third-party Facebook applications. The concern revolves around how this information could be used against a user's interests.

EPIC's complaint states: “Facebook’s changes to users’ privacy settings disclose personal information to the public that was previously restricted [and] disclose personal information to third parties that was previously not available. These changes violate user expectations, diminish user privacy, and contradict Facebook’s own representations.”

Facebook recently rolled out a new privacy protocol that it promoted as giving users more control over privacy settings. With the controls, users can decide whether to make certain aspects of their Facebook profiles publicly available on the Internet, or only available to friends...But the controls were limited, and certain elements, such as the friends list, were made public by default. As originally released, users had no way to change that setting. After a swell of criticism, Facebook allowed users to make their friends list private. Another complaint was that too much information was made public by default....Specifically, EPIC asked the FTC to require Facebook to restore the previous privacy settings, allowing users to control disclosure of personal information and to fully opt out of revealing information to third-party developers. EPIC also demanded that Facebook make its data-collection practices clearer and easier to understand.

EPIC took special aim at the dangers in allowing third-party developers automatic access to much of a user's personal information. Facebook permits third-party applications to access user information at the moment a user visits an application's web site. According to Facebook, third-party applications receive publicly available information automatically, and additional information when users authorize it or connect a Facebook account.

EPIC cited Facebook's own policy to highlight how much information applications may have access to: "your political view, your activities, your interests, your musical preferences, television shows in which you are interested, movies in which you are interested, books in which you are interested...your relationship status, your dating interests, your relationship interests, your network affiliations, your education history, your work history, your course information, copies of photos in your photo albums, metadata associated with your photo albums...a list of user IDs mapped to your friends, your social time line, notifications that you have received from other applications, and events associated with your profile." Click here to read the rest of the article. Let me conclude with the closing remarks made by the ACLU's Nicole Ozer, because I think they put all of this in the proper, big-picture perspective:

...privacy on Facebook is only one part of a bigger picture, and with your help we can build a strong movement for privacy rights on all online sites and services. We hope you'll join us and continue to Demand Your dotRights—your right to control your own personal information in the world of modern technology and online services—as we work together to upgrade privacy protections and give you real control of your personal information.

Thursday, December 17, 2009

It feels like - once all my posts on the company are combined - I've written nearly a book on Google and its continuing oppositional stance to privacy advocates' concerns. Rather than rehash them all in detail, let me go to a recent synopsis of the company's ever-expanding technological empire and the rather confrontational relationship it's had with privacy, in order to add some perspective to the latest "privacy snafu" coming from CEO Eric Schmidt himself (as well as the company's latest "coming attraction"):

Before I get to the privacy concerns that have forced Google to delay an expansion of its Goggles service - which would have enabled camera-phone users to identify strangers on the street through a kind of biometrics based identification technology - let me quote the company's CEO from a recent appearance on CNBC:

"If you have something that you don't want anyone to know, maybe you shouldn't be doing it in the first place."

Before you let that sink in, he also said this:

"But if you really need that kind of privacy, the reality is that search engines including Google do retain this information for some time, and it's important, for example that we are all subject in the United States to the Patriot Act. It is possible that that information could be made available to the authorities."

Now, it takes a lot to truly surprise or shock me anymore, but for the CEO of the pre-eminent global leader in information technology to come straight out and say this caught even me off guard. Where does one even begin when answering this kind of mindless "if you haven't done anything wrong you have nothing to worry about" mantra?

People who subscribe to this kind of thinking must never have A) read the Constitution, B) read anything about history and abuses of power, or C) developed any sense of privacy as a right, not a privilege.

I'm not going to go into a diatribe today on this topic, because seriously, a book could be written. But let me quote two privacy advocates that sum Schmidt's statement up perfectly.

Schmidt's statement makes it seem as if Google...is not even concerned enough to understand basic lessons about privacy and why it's important...Schmidt's statement is painfully similar to the tired adage of pro-surveillance advocates that incorrectly presume that privacy's only function is to obscure lawbreaking...[There's an] error in logic that leads to short-sighted conceptions of privacy like Schmidt's...Google, governments, and technologists need to understand more broadly that ignoring privacy protections in the innovations we incorporate into our lives not only invites invasions of our personal space and comfort, but opens the door to future abuses of power.

Privacy protects us from abuses by those in power, even if we're doing nothing wrong at the time of surveillance...Privacy is a basic human need. Too many wrongly characterize the debate as "security versus privacy." The real choice is liberty versus control. Tyranny, whether it arises under threat of foreign physical attack or under constant domestic authoritative scrutiny, is still tyranny. Liberty requires security without intrusion, security plus privacy. Widespread police surveillance is the very definition of a police state. And that's why we should champion privacy even when we have nothing to hide.

That's about as good a case for civil liberties and privacy as any I've read. On the bright side, we can all applaud Asa Dotzler, Mozilla's Director of Community Development, for posting on his personal Web page how users can easily switch Firefox's search engine from Google to Bing, and why he recommends doing so.

So, putting Eric Schmidt's rather frightening, even Orwellian, view of privacy behind us, let's look at the company's latest product coming down the pike - one that has been temporarily delayed due to...wait for it...privacy concerns, of course!

I speak of the experimental Google Goggles application - which was launched last week - that allows smart-phone users to search for subjects simply by snapping a picture of them. Users can focus their phone's camera on an object and Google will try to match portions of the picture with the tens of millions of images in its database.

The delay, as you might have guessed, centers on privacy advocates' fears regarding the 'facial recognition' potential of the service, which would allow users to track strangers through a photograph.

Google, which has confirmed the technology is available but has yet to decide if it will be rolled out as part of Goggles, has now confirmed that it is blocking aspects of the application until privacy implications have been fully explored. Google spokesman Anthony House said: 'We do have the relevant facial recognition technology at our disposal. For instance with our Picasa picture service a user can tag a friend in their photo album and it will search for and tag any other pictures of that person. 'But we haven't implemented this on Google Goggles because we want to consider the privacy implications and how this feature might be added responsibly...We will have talks with privacy advocates and consumers before we consider any changes - it may be people want such a service, but we don't have a rigid timescale on when any decisions will be made.' Click here to read more.

I'm not going to lie: a product that scans faces and tells you who a person is creeps me out, on its face. In many ways this would be a stalker's or mugger's dream come true. I don't want to go overboard by jumping to any conclusions before I know more, but I think on an almost instinctive and intuitive level, a lot of people will view this as an invasion of privacy and even a potential threat to their safety.

There are plenty of ways to tag your friends in photos without the ability or need to identify random strangers in the street. Social networks like Facebook are meant to share photos and information with people you know, and at least ideally, with them having some say in the matter. This strikes me as potentially going far beyond that. More to come...

Wednesday, December 16, 2009

I want to alert everybody to a study recently published by Patient Privacy Rights (PPR) that rates the various Personal Health Record (“PHR”) systems currently available to consumers. A PHR can collect and store official records, labs, tests, and claims data directly deposited by providers.

In fact, as the article by Ashley Katz of PPR in the California Progress Report notes, "A PHR can also store other health-related data such as heart rate, glucose levels, medications, allergies, exercise habits, lifestyle, sexual history, personal notes and other data you create. Most PHRs are online; some are programs that can be downloaded to your home computer. Many are free."

PHRs are designed for and marketed directly to you, the patient. You are most likely to be using a PHR right now if:

a) your employer offered it to you, or b) your insurance company offered it to you.

Before I get to more of that article about PPR's recent study, let me go back to some of what I have written here in the past about our nation's transition to electronic health records. From a privacy perspective, the question is how "safe" our VERY personal health data will be in the coming cyber world of electronic health records - or, more specifically, "Where is my data and who has access to it?"

Or perhaps even more importantly, "Can my private data be traced back to me personally and sold to others?" According to a recent study by two computer scientists at the University of Texas at Austin, "re-identifying" customers was a lot easier than expected, contradicting claims of individual anonymity made by companies (in this case, Netflix).

In other words, even though a customer's name, address, and other specific identifying information were not connected to their movie choices, the researchers were still able to correctly match them up through a process called "de-anonymization". Such a technique raises concerns that the same process could be applied to individuals and their health records.
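To make the linkage idea concrete, here's a toy sketch of how "anonymized" records can be matched back to named people. The data and names are entirely invented, and the real researchers used far more sophisticated statistical matching; this just illustrates the core intuition that a person's pattern of (item, date) pairs can act as a fingerprint:

```python
# Toy illustration of a linkage ("de-anonymization") attack. An "anonymized"
# dataset contains no names, but each record's set of (title, date) pairs
# can be matched against a public, named source - e.g. dated reviews posted
# under real names. All data below is made up for illustration.

anonymized = {
    "user_817": {("Brazil", "2006-03-01"), ("Gattaca", "2006-03-04"),
                 ("Alien", "2006-05-11")},
    "user_244": {("Titanic", "2006-01-09"), ("Alien", "2006-05-11")},
}

public_reviews = {
    "Alice": {("Brazil", "2006-03-01"), ("Gattaca", "2006-03-04")},
    "Bob": {("Titanic", "2006-01-09")},
}

def best_match(anon_record, public):
    """Return the named person whose public trail overlaps the record most."""
    name, score = max(
        ((n, len(anon_record & trail)) for n, trail in public.items()),
        key=lambda pair: pair[1],
    )
    return name if score > 0 else None

for uid, record in anonymized.items():
    print(uid, "is probably", best_match(record, public_reviews))
```

Even this crude overlap count correctly re-identifies both made-up users, which is exactly why "we removed the names" is such a weak anonymity guarantee for rich data like viewing histories or medical records.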

One of the most important challenges for privacy advocates has been making sure that the transition to electronic medical records includes ironclad data safeguards along with it. We know such a system will save money and improve health care (though how significant these improvements and savings will be is still in question), but what remains contentious - and rightly so - is the intrinsic threat a massive electronic database containing our most personal medical records poses to individual privacy and security.

PHRs and how the information stored in PHRs is used are not protected by any federal law. Your information can be used in many ways. Marketing is the obvious likely annoyance. But what if your employer could also see your sexual history? What if your insurance company knew how much alcohol you consumed this year or if you ate enough fruits and vegetables?

Patient Privacy Rights (PPR) sampled five PHRs and did our best to decode their privacy policies. The resulting Privacy Report Card spells out in plain English what control you actually have over your information with each PHR. The Privacy Report Card is available at www.patientprivacyrights.org and explains how each PHR earned their grade of A – F.

...

The bad news is other companies do not allow patients to control their PHRs. That is a scary thing when you consider that PHRs can store sensitive health information as well as lifestyle habits such as what you eat, how much you drink, and how often you exercise. This information can easily get into the wrong hands, especially if your PHR is offered by an employer or insurer. All PHRs claim to be “patient-centric” and claim that “privacy is important”, but it’s simply not true.

What grades did the PHRs earn?

CapMed's ICE PHR: C

Google Health: D – Platform F - Partners

Microsoft HealthVault: B – Platform F - Programs

NoMoreClipboard: A

WebMD: C

PHRs offered by Employers/Insurers: F

...

1) Know that if your PHR is sponsored by your employer or insurer, the odds are VERY GOOD that they have access to all your information. This was quite clear after reviewing a form privacy policy for employer/insurer sponsored PHRs. Sure, not every company is out there to take advantage, but personal health information can be used to discriminate, damage reputations, and harm opportunities.

2) Every company and product has its own privacy policy. Even if you feel comfortable with a PHR's policy and website, once you click on a link and leave the site, all bets are off. Any third party that touches your data may not be held to the same standard. This is a key lesson for the Google and Microsoft tools. ...So what can be done?

1) The public needs to wake up and pay attention. Our personal health information is everywhere and being passed from one company to the next, without our permission or knowledge. If we don’t demand control, we will lose it forever.

2) We need federal laws that make Fair Information Practices the rule for all health information, including PHRs. Data shared for one purpose should be used solely for that purpose unless the patient gives consent for any new use. No single piece of data should be allowed to go to an employer, insurer or other entity without patient permission.

Click here to read the article in its entirety.

Granted, regulations alone will never be the end-all solution when it comes to privacy in the information age...they must be coupled with public awareness and the pressure that consumer choice can put on industry.

But as it stands today, there still aren't uniform standards or even minimum standards for electronic medical records. Yes, there are some protections in the Health Insurance Portability and Accountability Act of 1996, as well as those in the new stimulus bill.

But key protections are still absent. The prohibition on the sale of medical records is weak and full of loopholes, and it doesn't apply to vendors like Microsoft or Google. Both companies have agreed to contracts that say they won't release your information, but there is no law mandating that they don't sell it. If we've learned anything about corporate behavior in recent years, it's that without ironclad legal requirements, we shouldn't expect companies to behave the way we'd expect from, say, a human being.

Similarly, the breach provisions requiring companies to notify patients when electronic medical records are accessed do apply to Google and Microsoft; however, there are safe-harbor provisions that let companies off the hook from the notification requirement if the breach occurred in "good faith."

The federal law on the books only requires that patients are notified when their information was disclosed in the course of treatment but not how it was used. As a result, the patient will not know which hospital personnel looked at the information or for what purpose.

In other words, there's a lot of work still to be done on this issue. The study by PPR I have highlighted today further validates these concerns, particularly in light of Google's scores of a D and an F, and of systems offered by employers and insurers also receiving an F. These are two HUGE segments of what will become the electronic health record "industry," and they are still failing us.

Now, I think there's probably a whole lot of us out there who have worked on "subversive issues" (for instance, I worked a whole lot on electronic voting systems and the election theft of 2004), attended protests (i.e. anti-war, anti-Wall Street, anti-"free trade"), and/or have been generally outspoken in print, radio, or even video against a variety of government or corporate actions, who have wondered, "What kind of information does the government have on me?"

I've recently even considered filing a Freedom of Information Act (FOIA) request for my files...but never quite got around to doing the legwork, let alone determining what it would take to do so and where to even look.

Thanks to the article from Carolyn, some important questions in this regard have been answered. As she notes, "With literally hundreds of agencies working in a variety of fields, the government can keep track of every citizen, resident, and more from birth to death. With the problem of identity thieves and misinformation, these files can hurt you or be a life saver." In just the past few weeks I've written about law enforcement making 8 million requests to Sprint for customer GPS data in a one-year period, Facebook reportedly receiving up to 100 demands each week from the government seeking information about its users, AOL reportedly receiving 1,000 demands a month, and, in 2006, a U.S. Attorney demanding book purchase records of 24,000 Amazon.com customers.

And let's also remember, companies like Google (the granddaddy of them all) don't make public how often information about their users is demanded or disclosed.

Over the past few years we've also come to learn that millions of Americans have been wiretapped by the government without it having to produce a warrant.

And just last month, the Senate renewed some of the most egregious components of the Constitution-eviscerating Patriot Act, including:

Allowing broad warrants to be issued by a secretive court for any type of record, from financial to medical, without the government having to declare that the information sought is connected to a terrorism or espionage investigation.

Renewing the so-called “roving wiretap” provision, allowing the FBI to obtain wiretaps from the secret court, known as the FISA court, without identifying the target or what method of communication is to be tapped.

Renewing the so-called “lone wolf” measure that allows FISA court warrants for the electronic monitoring of a person for whatever reason — even without showing that the suspect is an agent of a foreign power or a terrorist.

And of course, the government can still essentially break into your house as long as it doesn't tell you...may the 4th Amendment rest in peace.

All things considered then, I'd say we have plenty of reason to be interested in what information the government may or may not have on us, and for what reason.

1. F.B.I. Files: According to Reddit.com, the Federal Bureau of Investigation keeps files on every person in the entire United States.

2. Your Homeland Security File: Are you a regular international traveler? Then chances are the Department of Homeland Security has a file on you. This blogger put in a request to get their file, with amazing and impressive results.

3. C.I.A. Records: You don’t have to be a slick burglar to obtain a copy of the record the Central Intelligence Agency may or may not have on you.

4. Your Earnings: The U.S. Social Security Administration keeps constant track of all your earnings since you filled out your first W-2. They know how much you have made, how much tax you have paid, and how much you stand to earn in retirement.

5. Criminal Records: This is a useful search if you have had trouble with the law and want to know what is out there on you. It's also a good idea if you've never been in trouble, to make sure your records reflect that.

6. Court Records: If you have ever gotten so much as a parking ticket, the government has court records on you. Like most government records, they are public and subject to the Freedom of Information Act.

Thursday, December 10, 2009

I'm a bit behind on this blog due to an exceptionally bad cold that has kept me sidelined nearly all week. The good news is that the ACLU's Nicole Ozer has written an excellent op-ed that I can get right to, dealing with all that's happening over at Facebook these days regarding privacy - particularly the site's recent supposed privacy "upgrades".

As I have written here in the past: One thing is certain, with the explosion in popularity of social networking sites like Facebook (and that's to say nothing of companies like Google and the array of privacy challenges many of its products represent), the ability to protect one's personal privacy has become increasingly challenging. It goes without saying that tools like Facebook reveal a considerable amount of information about a user's lifestyle, interests, and goals. Depending on the user's settings, co-workers, employers, and certain family members could have access to information about the user that may be better left unknown. Recent Facebook flaps highlight growing concerns about the increasingly sophisticated technologies used to track online activities in an effort to more precisely target advertising.

What has also become apparent is that these social networking sites have not exactly been forthcoming about how much user information they harvest, how they share it, and with whom.

However, in recent months users have become more and more conscious of privacy concerns, as Facebook has been criticized for not allowing people to permanently delete their accounts and personal information from the site, as well as for its use of "Beacon" (no longer in use) - a technology that tracked users' online purchases and informed their friends. The controversy raised by Facebook's use of the Beacon technology - and the subsequent victory of privacy advocates - has helped ignite a larger debate regarding the largely hidden and growing problem of online consumer-tracking and information-sharing.

With that as the backdrop, here's a few choice clips from Nicole's op-ed in the California Progress Report:

In response to pressure about its privacy practices, including an ACLU petition signed by over 43,000 concerned Internet users, Facebook has released a new privacy policy, modified its profile and publication privacy controls, and rolled out a "Transition Tool" to guide all 350 million Facebook users through the process of choosing new privacy settings.

We're glad to see Facebook finally put privacy front and center for every one of its users. We hope other companies will do the same. But we are concerned that the Transition Tool and other changes actually discourage or eliminate some privacy protections that Facebook users currently employ. And we're still waiting for Facebook to address the privacy issues concerning third party applications that were raised months ago in our petition. Please sign our new petition demanding that Facebook rethink some of today's changes and continue to give you more control over your own information.

...We have three primary privacy concerns with the new system:

There's more "publicly available information" that you can't control: Before the recent changes, you had the option of exposing only a "limited" profile, consisting of as little as your name and networks, to other Facebook users—and nothing at all to Internet users at large. Now your profile picture, current city, friends list, gender, and fan pages are "publicly available information," which means you have no way to prevent any other Facebook user from viewing this information on your profile, and you can only prevent Internet users from viewing this information by disabling search entirely (which you can't do through the Transition Tool).

Facebook is "recommending" that you loosen your privacy settings: For most users, including those who have never changed their Facebook privacy settings, the recommended settings make information less protected and more widely available than the previous default settings. For example, as of last Friday, sensitive information like relationship status and gender preference was available only to your friends by default; now Facebook encourages users to make this information available to "everyone!"

The "Transition Tool" does not allow most users to strengthen privacy settings: Facebook's Transition Tool gives you only two choices: keep your current settings or switch to Facebook's recommendations. And since Facebook's recommendations are less private than the previous default settings, most users have to click through to another page of privacy controls in order to strengthen their settings.

...Even if your Facebook profile is "private," when you take a quiz or run any other application on Facebook, that app can access almost everything in your profile: your religion, sexual orientation, political affiliation, pictures, and groups. And these apps may have access to most of the info on your friends' profiles too—which means if your friend takes a quiz, they could be giving away your personal information, even if you've never used an app!

The privacy settings that address this issue remain buried behind too many layers of menus, and the new controls still fail to explain what applications can really see. So we're asking you to keep up the pressure on Facebook. If you haven't done so already, please take our Facebook quiz [Facebook login required] to peek behind the curtain—and then share it with your friends!

I should mention that the ACLU is only one of MANY privacy rights organizations that are critical of Facebook's recently claimed upgrades. A BBC article entitled "Facebook faces criticism on privacy change" notes:

Facebook said the changes help members manage updates they wanted to share, not trick them into revealing too much. "Facebook is nudging the settings toward the 'disclose everything' position," said Marc Rotenberg, executive director of the US Electronic Privacy Information Center (EPIC). "That's not fair from the privacy perspective."

...Jason Kincaid, writing on the TechCrunch news blog, said some of the changes were made to make Facebook more palatable to search sites such as Bing and Google. Blogger Marshall Kirkpatrick was worried that the default setting for privacy was to make everything visible to everyone.

"This is not what Facebook users signed up for," he wrote. "It's not about privacy at all, it's about increasing traffic and the visibility of activity on the site." He also criticised the fact that the pop-up message that greets members asking them to change their privacy settings was different depending on how engaged that person was with Facebook. He said Facebook was "maddeningly unclear" about the effect of the changes. You can read the rest of that article here. An article in the UK's Guardian entitled "Facebook privacy change angers campaigners" notes:

The Electronic Frontier Foundation, a group that campaigns for the rights of internet users, said that while some of the changes were beneficial to the site's worldwide audience, others were "plain ugly". "These new 'privacy' changes are clearly intended to push Facebook users to publicly share even more information than before," Kevin Bankston, a senior attorney with the EFF, wrote on the organisation's blog. "Even worse, the changes will actually reduce the amount of control that users have over some of their personal data." Click here to read the rest of that article. And finally, an article in Network World entitled "Facebook users speak out against new privacy settings" points out another problem with the site's new privacy settings...users aren't happy with them either, noting:

Blogger Marshall Kirkpatrick, vice president of content development at ReadWriteWeb, called the privacy settings "near Orwellian." "The company says the move is all about helping users protect their privacy and connect with other people, but the new default option is to change from 'old settings' to becoming visible to 'everyone,'" Kirkpatrick writes. "This is not what Facebook users signed up for. It's not about privacy at all, it's about increasing traffic and the visibility of activity on the site." Click here to read more of the specific complaints made by site users.

Friday, December 4, 2009

The news I want to discuss today is the lawsuit filed on Tuesday by the Electronic Frontier Foundation (EFF) and the Samuelson Law, Technology, and Public Policy Clinic at the University of California, Berkeley, School of Law against six government agencies, seeking to force the disclosure of policies governing the use of social networking sites for investigations, data-collection, and surveillance.

As some might remember, EFF and the Samuelson Clinic have also recently launched a Google Book Search privacy campaign (along with the ACLU) to alert the public about the approaching launch of the corporate juggernaut's entrance into the "library business" (my term).

One thing is certain, with the explosion in popularity of social networking sites like Facebook (and that's to say nothing of companies like Google and the array of privacy challenges many of its products represent), the ability to protect one's personal privacy has become increasingly challenging. As I have asserted here in the past, it goes without saying that tools like Facebook reveal a considerable amount of information about a user's lifestyle, interests, and goals.

Depending on the user's settings, co-workers, employers, and certain family members could have access to information about the user that may be better left unknown. Recent Facebook flaps highlight growing concerns about the increasingly sophisticated technologies used to track online activities in an effort to more precisely target advertising.

But my focus today, as I discussed on Wednesday with the news that law enforcement made 8 million requests to Sprint alone for customer GPS data, is the government's efforts to access our private information, how these companies may or may not be responding to such requests, and why.

Online companies regularly receive demands for personal information about their users—with little to no judicial oversight.

Facebook reportedly receives up to 100 demands each week seeking information about its users. AOL reportedly receives 1,000 demands a month. In 2006, a U.S. Attorney demanded book purchase records of 24,000 Amazon.com customers. (In a show of loyalty to users, the company successfully fought back against the subpoena.) Other companies, like Google, don't make public how often information about their users is demanded or disclosed. No one should be forced to choose between using the Internet and keeping their personal information from being misused. We shouldn't have to pay for these seemingly free online services with personal details about our lives.

Consumers clearly want more control over personal information, so it's good business for companies to join consumers in demanding a privacy upgrade. A 2009 national telephone survey conducted by the University of California, Berkeley, and the University of Pennsylvania revealed that 92% of American adults believe they should retain the right to delete their information from a site, and 69% feel there should be a law that gives people the right to know everything that a website knows about them.

So with that context to consider, let's get to the lawsuit by EFF and the Samuelson Clinic, which, by the way, follows over a dozen Freedom of Information Act (FOIA) requests seeking this information from the Department of Defense, the Department of Homeland Security, the Department of Justice, the Department of Treasury, the Central Intelligence Agency, the Office of the Director of National Intelligence, and other agencies.

Shane Witnov, a law student at UC Berkeley School of Law's Samuelson Law, Technology and Public Policy Clinic said the lawsuit was prompted by the need for more transparency around the government's use of social networking sites for information gathering purposes.

"Social networking Web sites can be invaluable sources of information. There is a wealth of information on there that can be really useful in crime protection," he said. At the same time, an unchecked ability to gather information from such sites could be invasive of privacy, he said. The eight-page complaint lists several media reports about law enforcement's use of social sites for surveillance purposes. One of the reports includes an Associated Press story about police searching Facebook photos for evidence of underage drinking and watching YouTube videos to identify suspected rioters.

...

"Although the Federal Government clearly uses social-networking websites to collect information, often for laudable reasons, it has not clarified the scope of its use of social-networking websites or disclosed what restrictions and oversight is in place to prevent abuse," the lawsuit said.

For instance, there is no information on how such searches are conducted, or whether they involve specific targets or are broader in scope, Witnov said. "We don't know if they are searching for the top twenty most wanted criminals or are just scanning such sites," he said. Similarly, there is no information on whether such searches are being enabled by automated information gathering and data visualization tools.

One of the articles talks about the Secret Service immediately spotting the opening of a social network account by a fugitive. The fact that it was "immediately spotted" is interesting, Witnov said. "That phrasing might mean nothing, or it might suggest they spotted it immediately because they had a software program constantly monitoring or looking for certain people. We would certainly like to know the answer to that," he said. Click here to read the article in its entirety.

Now these are some questions I would LOVE to get the answers to, like just what are the federal guidelines on the use of social-networking sites? What, if any, are the manuals or materials used to guide government authorities when they decide to make a request to access, say, an individual's Facebook page? Similarly, what kind of tools are these government agencies using to gather this data (like how did they know a "criminal" had JUST opened a Facebook page)?

This lawsuit strikes at the heart of all kinds of fundamental constitutional issues and questions we must begin to address and debate. Who owns OUR data? What does it take for "authorities" to access that data? What criteria are these companies using to decide whether to give up our information or not?

Cheers, once again, to EFF and the Samuelson Clinic for their tireless work. And a reminder to everyone to check out the ACLU's DotRights campaign.
