Tuesday, January 24, 2012

The Fourth Amendment isn’t completely dead after all! While
this fundamental right to privacy is admittedly in tatters, the
Supreme Court ruled yesterday that police must have a warrant in order to track someone
using a GPS device.

The case in question involved police covertly tracking a
suspected cocaine dealer's car with a GPS device for an extended period
of time without getting a warrant. The
question before the court largely centered on whether the constant,
extended use of a secret GPS tracking device violated the Fourth Amendment’s
protection against unreasonable searches and seizures.

Or, is such use of these devices without a warrant
acceptable on the grounds that there is no expectation of privacy when in
public places and that such tracking technology merely makes public surveillance
easier and more effective?

Clearly, a whole lot was riding on this decision for privacy
advocates. Citizens shouldn’t have to worry that trips to a friend's house, a place of worship, or a therapist's office can be tracked in real time by the
government.

Thankfully, in this case, the court agreed: attaching a GPS
device to a car and tracking its movements is a violation of the Fourth
Amendment. Unfortunately, the government will likely continue to insist that tracking
the location of cell phones is unaffected by this ruling.

As previously laid out in an article in Wired Magazine,
there is an important distinction between traditional surveillance and GPS
tracking: "Repeated visits to a church, a gym, a bar, or a bookie tell a
story not told by any single visit, as does one’s not visiting any of these
places over the course of a month. The sequence of a person’s movements can
reveal still more; a single trip to a gynecologist’s office tells little about
a woman, but that trip followed a few weeks later by a visit to a baby supply
store tells a different story."

Interestingly, though not surprisingly, the Court, while in
unanimous agreement that a warrant is necessary, came to that conclusion from
very different perspectives.

Certainly, the standout Justice was Sonia Sotomayor, who
went much further than her colleagues on the issue of privacy in the digital age, even making a case for revising the
“third-party” doctrine (i.e., the premise that we lose Fourth Amendment protection when we
disclose certain information). She wrote, “More fundamentally, it may be
necessary to reconsider the premise that an individual has no reasonable expectation
of privacy in information voluntarily disclosed to third parties. This approach
is ill suited to the digital age, in which people reveal a great deal of
information about themselves to third parties in the course of carrying out
mundane tasks. People disclose the phone numbers that they dial or text to
their cellular providers; the URLs that they visit and the e-mail addresses
with which they correspond to their Internet service providers; and the books,
groceries, and medications they purchase to online retailers.”

On the question of surveillance, she also distanced herself
from Antonin Scalia’s narrow property-rights argument (i.e., that by installing the
device, police violated the suspect’s private property), writing, “…the
same technological advances that have made possible nontrespassory surveillance
techniques will also affect the Katz test by shaping the evolution of societal
privacy expectations. Under that rubric, I agree with Justice Alito that, at
the very least, 'longer term GPS
monitoring in investigations of most offenses impinges on expectations of
privacy.'"

As
Julian Sanchez of the Cato Institute noted, the ruling was a big victory
for privacy advocates and the Fourth Amendment, writing, “This is a pretty big
deal. Fourth Amendment scholars have been warning for decades—and with
increasing alarm—that modern communications technology could turn
constitutional privacy protections into an empty formality if we’re regarded as
waiving those protections whenever we “expose” information to a third party. It
is inherent to the nature of the Internet and mobile telecommunications, after
all, that almost everything we do online—and, increasingly, much that we do
offline as well—leaves a trace in the vast databases of one corporation or
another.

Sotomayor’s concurrence signals a recognition that we need
to move beyond what privacy scholar Daniel Solove has called “The Secrecy
Paradigm,” which assumes that whatever is not totally secret (or very nearly
so) is effectively “public.” In other words, if your Internet provider has a
record of every Web site you visit, there’s no invasion of privacy when the
government decides to have a look at the list. At least one Justice, evidently,
recognizes that this is an indefensible inference—and one hopes she’s not
alone.”

Does Sotomayor's case against the third-party doctrine have any significance for privacy advocates moving forward? Timothy
B. Lee of Ars Technica says yes, writing, “Sotomayor's
discussion of the third-party doctrine has no legal significance, since she was
the only one to sign onto her concurrence. But it could prove to have greater
significance in the long run. The existence of at least one justice who is
skeptical of the doctrine will inspire privacy advocates to raise objections to
the idea in future cases. And one of those cases is likely to reach the high
court at some point in the future.”

Thursday, January 12, 2012

Rather than re-inventing the wheel today, if you want some past posts I've done on electronic health records (EHRs) and the need for strict privacy safeguards that protect consumers, you can go here, here, or here. Generally speaking, I've made the following arguments: yes, this transition from paper to EHRs is inevitable and necessary; yes, such a transition does offer numerous benefits, from cost effectiveness to better care; but, and this is a big but, what remains contentious - and rightly so - is the intrinsic threat a massive electronic database containing our most personal medical records poses to individual privacy and security.

Similarly, I have also documented one medical records data breach after another, some due to hackers/identity thieves and some the result of gross hospital incompetence and negligence (and more). In addition, I've detailed how states, like California for instance, are trying to create a set of privacy standards for these records, which often means merging state rules and federal ones.

Given the lack of consistency, for instance, between California’s Confidentiality of Medical Information Act (CMIA) and the federal HIPAA (The Health Insurance Portability and Accountability Act), there is no single, comprehensive “rule” for the use and disclosure of health information in our state.

Thus the debate taking place over what kind of privacy standards and protections should apply to EHRs centers on a few core principles: accountability among the parties involved in processing electronic transactions; consumer control over how their information is shared, and ready access to it; transparency (so that every access to a file is recorded and made available to the consumer on request); and system security to ensure a patient's private information is protected from identity thieves, overzealous law enforcement, and unwanted marketers.
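To make the "transparency" principle above concrete, here is a minimal sketch of what a patient-facing access audit log could look like. This is purely illustrative: the names (`AccessLog`, `record_access`, `accesses_for`) and the structure are my own assumptions, not any real EHR system's design.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch: every access to a patient's chart is logged with who
# looked, why, and when, and the patient can review their own log on request.

@dataclass
class AccessEvent:
    patient_id: str
    accessor: str        # who viewed the chart (doctor, insurer, etc.)
    purpose: str         # e.g. "treatment", "payment", "operations"
    timestamp: datetime

@dataclass
class AccessLog:
    events: list = field(default_factory=list)

    def record_access(self, patient_id: str, accessor: str, purpose: str) -> None:
        # Append-only: every access leaves a record.
        self.events.append(
            AccessEvent(patient_id, accessor, purpose, datetime.now(timezone.utc))
        )

    def accesses_for(self, patient_id: str) -> list:
        # The patient-facing view: all recorded accesses to their own chart.
        return [e for e in self.events if e.patient_id == patient_id]

log = AccessLog()
log.record_access("patient-42", "dr_smith", "treatment")
log.record_access("patient-42", "insurer_acme", "payment")
log.record_access("patient-99", "dr_jones", "treatment")

print(len(log.accesses_for("patient-42")))  # 2
```

The key design choice embodied here is that the log is kept on the patient's behalf, not the hospital's: access is recorded regardless of whether it was authorized, which is what makes after-the-fact accountability possible.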

Now that I've briefly gone over some of the general fundamentals of this very complex issue, I want to discuss two articles that have come out in the past week or so: one about the UC Regents dragging their feet in the lawsuit over a medical records data breach at the UCLA Health System, and the other, a MUST READ from the Los Angeles Times' Michael Hiltzik entitled (aptly for this blog), "Her case shows why healthcare privacy laws exist."

Let's begin with Hiltzik's piece because it truly blows the mind, and brings home why this MATTERS. He writes:

Of all the personal information that you might want to keep private, your medical records are the most important. That's why federal and state laws carry stiff penalties, up to and including jail time, for healthcare providers who let such data loose into the wild.

So you should be aghast at how free and easy Prime Healthcare Services and two executives at Prime-owned Shasta Regional Medical Center have been with the medical chart of a patient named Darlene Courtois. They showed the entire chart to an editor of her hometown newspaper, and Prime's corporate office divulged some of her medical examination results to me (though I didn't ask for them). They didn't have her permission for those disclosures, her daughter says.

...

Here's what state and federal laws have to say: A hospital can't disclose a patient's medical information publicly, such as to a newspaper, without the patient's written authorization. The authorization has to be very specific, designating exactly which records may be disclosed and to whom.

The applicable laws are the federal Health Insurance Portability and Accountability Act of 1996, which is known as HIPAA, and the 2008 California Confidentiality of Medical Information Act. The covered records include any information about an individual's "past, present or future physical or mental health or condition," and "the provision of health care to the individual." (The language comes from the federal government's published privacy rule summary.)

There are a few limited circumstances in which a healthcare provider doesn't need permission. Chiefly these fall into the categories of "treatment, payment and healthcare operations" — in other words, charts can be seen by doctors treating the patient or insurers paying for care, or in connection with hospital functions such as evaluating doctors' competency — and regulatory activities or subpoenas.

...

Under the law, there's no such thing as an implied authorization by a patient for disclosure of personal records, said Linda Ackerman, a San Francisco expert in privacy law.

The office of civil rights of the U.S. Department of Health and Human Services, which enforces HIPAA, put it this way: "There is no 'waiver' that would apply to the release of a chart or medical record to the media without an individual's written authorization."

Several experts told me it doesn't matter if the hospital was trying to contradict misinformation provided by a patient (even if that's what Courtois did, which is debatable). Under the law, patients themselves can divulge anything they wish about their medical conditions and their treatment by a hospital. But a hospital's obligation is to keep its mouth shut. A desire to deflect bad PR is not an excuse. Even if they think they're in the right, the law says healthcare providers have to suffer in silence, the experts say.

Anthony Wright, executive director of the statewide patient advocacy group Health Access California, also mentioned the "chilling precedent" of a hospital company exposing a patient's personal information just because she criticized the company in public. Indeed, the lesson of the Courtois case is clear: Give an interview about your experience at a Prime-owned hospital, and don't be surprised if the hospital responds by exposing the most private details of your medical history to the world.

I would have to say that, in addition to the blatant disregard for the privacy, and the RIGHTS, of Darlene Courtois demonstrated by Prime, I find Anthony Wright's point about this serving as a "chilling effect" against patients who may speak out to be of particular concern. I say this because, as a consumer advocate, I see all too often how industries from chemical to big pharma to big oil, and on down the line, engage in intimidation and obfuscation, and in fact factor the damage they cause people and the planet into their business models. I would HATE to think that EHRs could serve as yet one more tool to protect these kinds of corporate interests from proper justice and accountability.

My sense is that, in the case of Prime, the conduct is so egregious that there will be accountability, and this chilling effect will not take root. But that is why I brought up the issue of corporate interests factoring the cost of the damage they do into their business models: will the damages Prime faces outweigh the benefits that they, and other vultures like them, feel they might get from such intimidation?

This also is why, as Hiltzik rightly states in the article's title, "Her case shows why health care privacy laws exist," and why INCREASED privacy protections, and increased accountability and enforcement, are also necessary...and must also exist.

The UCLA Health System reported in November 2011 that a hard drive containing more than 16,000 patients’ information had been stolen from the home of a UCLA physician on Sept. 6, 2011.

Social Security numbers and financial information were not among the documents stolen, but they did include first and last names and may have contained birth dates, medical record numbers, addresses and medical record information, according to the Health System’s statement.

The lawsuit claims the September incident was a violation of the California Confidentiality of Medical Information Act, in place to protect the privacy of patients’ personal histories and information. The suit is calling for $1,000 in damages for each patient on the hard drive. The total cost of the suit for the Health System could amount to as much as $16 million, including the legal fees associated with the case.
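The $16 million figure follows directly from the per-patient statutory damages the suit seeks; a quick check of the arithmetic, using the numbers from the article above:

```python
# Statutory damages sought under the CMIA, per the article: $1,000 per
# patient, for the 16,000+ patients whose records were on the stolen drive.
# (Legal fees, mentioned in the article, would come on top of this.)
patients = 16_000
damages_per_patient = 1_000

statutory_total = patients * damages_per_patient
print(f"${statutory_total:,}")  # $16,000,000
```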

...

While storing information online is an increasingly common practice, and can certainly coexist with patient privacy rights, the potential for a data breach is significantly higher than in a paper-based system, said Tena Friery, research director at the Privacy Rights Clearinghouse, a national nonprofit organization focused on consumer privacy protection.

She also cited a 2011 study revealing that 71 percent of health care organizations had suffered a data breach in the last year.

Kabateck was also involved in a case concerning similar violations against Stanford University’s Hospital and Clinics late last year, filed on behalf of 20,000 patients whose information was released onto a public website through a third party.

Obviously, this brings me back to the same key points as the article before it...how do we prevent these MASSIVE, and in some cases (as in Prime) intentional, data breaches from occurring? This, my friends, is serious business. And, as such, I would urge that we seek and demand adequate penalties against those responsible for such breaches to ensure they don't keep happening. This means BOTH privacy standards AND enforcement/security/accountability.

As I wrote in past posts, "If medical records fell into the wrong hands, at worst they could be used for a host of purposes unrelated to improving your health: advertisers might flood our email inboxes with even more spam, and patients may not feel so comfortable having an honest conversation with their doctor if it could end up for all to see. This treasure trove of personal information would also be a goldmine for insurance companies, drug companies, data mining companies, and software companies....

When it comes to the issue of e-health records, certainly one question consumers should ponder is "Where is my data, who has access to it, and for what purposes?" Or perhaps even more importantly, "Can my private data be traced back to me personally and sold to others?"...Clearly, what is MORE than clear now is that we need MORE attention paid to privacy, not less...and that means taking a bit more time to get this new system up and running...and more care given to the rights of patients...not hospitals, not suppliers, not the government, and not any other interest looking to profit off this transition. We can have BOTH privacy and a more efficient medical records system...there's no need to sacrifice one for the other.

Monday, January 9, 2012

A few months back I posted a pretty extensive blog on facial recognition technology and the
threat it poses to individual privacy. As I've done in the past, because I know not everyone can read
every post, I'll repeat a few of my thoughts here today before I get to an
outstanding piece by Tana Ganeva of AlterNet not JUST about the massive FBI database
- the "largest biometric database in the world," containing records for
over a hundred million people - but also the agency's plans for Next
Generation Identification (NGI), “a massive, billion-dollar upgrade that will
hold iris scans, photos searchable with face recognition
technology, palm prints, and measures of gait and voice recordings
alongside records of fingerprints, scars, and tattoos,” particularly its use in
the workplace (which is especially disturbing).

For some backdrop on biometrics, you can check out a past post I did about
another article by Tana, entitled 5
Unexpected Places You Can Be Tracked With Facial Recognition Technology. As
I wrote then, this issue is of particular interest to me due to a recent California fight that we (Consumer Federation of California) were deeply involved in - whether biometric
identifiers should be used by the DMV (with a host of other groups, we were able to stop them).

As for the
larger concern over facial recognition technology, groups from the Privacy
Rights Clearinghouse (PRC) to the ACLU to the Electronic Frontier Foundation to
EPIC have all been very active in making the case that there is a very real
threat to privacy at stake in determining just how, and when, this technology
can be used.

Again, going back to a prior post, I wrote: "First, let me refresh everyone on the concept of biometric identifiers - like fingerprints,
facial, and/or iris scans. These essentially match an individual’s
personal characteristics against an image or database of images. Initially, the
system captures a fingerprint, picture, or some other personal characteristic,
and transforms it into a small computer file (often called a template). The
next time someone interacts with the system, it creates another computer file and compares it against the stored template.

There are a number of reasons why such technological identifiers should
concern us. So let's be real clear: creating a database with millions of
facial scans and thumbprints raises a host of surveillance, tracking, and
security questions - never mind the cost. And as you might expect, such
identifiers are being utilized by entities ranging from Facebook to the FBI. In
fact, the ACLU of California is currently asking for information about law
enforcements’ use of information gathered from facial recognition technology
(as well as social networking sites, book providers, GPS tracking devices,
automatic license plate readers, public video surveillance cameras)."
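The template-matching idea described above can be sketched in a few lines. This is a toy illustration only: real systems (iris recognition, for example) commonly reduce a scan to a bit string and compare captures with a Hamming distance, but the 16-bit templates and the 0.25 threshold here are made up for the example.

```python
# Toy sketch of biometric verification: a fresh capture is compared against
# the template stored at enrollment. Captures are never bit-identical, so the
# system accepts a match when the templates are "close enough" rather than
# requiring exact equality.

def hamming_fraction(a: str, b: str) -> float:
    """Fraction of bit positions where two equal-length templates differ."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b)) / len(a)

def matches(enrolled: str, fresh: str, threshold: float = 0.25) -> bool:
    # Below the (illustrative) threshold -> treated as the same person.
    return hamming_fraction(enrolled, fresh) < threshold

enrolled    = "1011001110001011"  # template captured at enrollment
same_person = "1011001110001111"  # 1 of 16 bits differs -> match
stranger    = "0100110001110100"  # every bit differs -> no match

print(matches(enrolled, same_person))  # True
print(matches(enrolled, stranger))     # False
```

The privacy concern discussed throughout this post follows from exactly this mechanism: once a template is in a database, any new capture, taken anywhere, can be matched against it without the person's knowledge.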

But for today’s sake, let’s home in on the article by Tana Ganeva in AlterNet
entitled 5 Things You Should Know About the FBI's Massive New Biometric Database,
as well as a piece by the Cato Institute detailing all the ways Congress is
currently, and aggressively, pushing biometric identifying technologies.

The FAA Reauthorization and Reform Act of 2011 has passed the
House and awaits action in the Senate. It says that “improved pilot licenses”
must be capable “of accommodating a digital photograph, a biometric identifier,
and any other unique identifier that the Administrator considers necessary.”

H.R. 1690, the MODERN Security Credentials Act, establishes
that air carriers, airport operators, and governments may not employ or contract
for the services of a person who has been denied a TWIC card. “TWIC” stands for
“Transportation Worker Identification Credential,” the vain post-9/11 effort to secure
transportation facilities from bad people. TWIC cards use biometrics.

The Army deploys biometrics. Public Law 112-10, the Department of Defense and Full-Year
Continuing Appropriations Act, 2011 (cost per U.S. family: $13,500+) allowed
spending on Army field operating agencies “established to improve the
effectiveness and efficiencies of biometric activities and to integrate common
biometric technologies throughout the Department of Defense.”

H.R. 1842 is an immigration bill called the Development,
Relief, and Education for Alien Minors Act of 2011. (Senate version: S.
952) It would allow an otherwise qualified immigrant to get conditional
permanent resident status only after submitting biometric and biographic data
for use in security and law enforcement background checks.

S. 1258 does roughly the same thing with regard to any lawful
immigration status. This bill is called the Comprehensive Immigration Reform
Act of 2011, one of many attempts at comprehensive reform. In addition to
requiring immigrants to submit biometrics, it also requires the government to
issue “documentary evidence of lawful prospective immigrant status” that
includes a digitized photograph and at least one other biometric identifier.
The bill would also reinforce the use of biometrics in employer background
checks and at the border.

H.R. 2463, the Border Security Technology Innovation Act of
2011, calls for continued study of mobile biometric technologies at the border.
The Under Secretary for Science and Technology of the Department of Homeland
Security would coordinate this research with other biometric identification
programs within DHS.

H.R. 2895, the Legal Agricultural Workforce Act, would
create a nonimmigrant agricultural worker program. In the program each
nonimmigrant agricultural worker would get an identification card that contains
biometric identifiers, including fingerprints and a digital photograph.

S. 1384, The HARVEST Act of 2011, is similar. In providing
for the temporary employment of foreign agricultural workers, it calls for “a
single machine-readable, tamper-resistant, and counterfeit-resistant document”
that verifies the identity of the alien through the use of at least one
biometric identifier.

H.R. 3735, the Medicare Fraud Enforcement and Prevention
Act of 2011, would establish a biometric technology pilot program. The
five-year pilot program would use biometric technology seeking to ensure that
Medicare beneficiaries “are physically present” when receiving items and
services reimbursable under Medicare. How many biometric scanners would have to
be out there for that to work?

S.
744, the Passport Identity Verification Act, calls on the Secretary of
State to conduct a study into whether people applying for or renewing passports
should provide biometric information, including photographs that facilitate the
use of facial recognition technology.

…S. 1604, the Emergency Port of Entry Personnel and
Infrastructure Funding Act of 2011, establishes a grant program in which the
Department of Homeland Security would give cash out to state and local law enforcement
for the purchase of various technologies including “biometric devices.”

Clearly, biometrics is on the “to do” list of our Congress.
But it gets worse, and that’s where the FBI’s massive database, and its plans
to expand it, comes in.

“NGI will expand the type and breadth of information FBI
keeps on all of us," says Sunita Patel of the Center for Constitutional
Rights. "There should be a balance between gathering information for law
enforcement, and gathering information for its own sake."

Here are 5 things you should probably know about NGI:

1. Face Recognition

This month, the FBI is giving police departments in 4 states
access to face recognition technology that lets them search the agency's
mugshot database with only an image of a face. Police can repay the favor
by feeding the FBI mugshots they collect from local arrests, bulking up the
agency's database with images of more and more people.

…

2. Iris Scans

Iris-scanning technology is the centerpiece of the
second-to-last stage in the roll-out of NGI (scheduled for sometime before
2014). Iris scans offer up several advantages to law enforcement, both in
terms of identifying people and fattening up databases.

The pattern of an iris is so unique it can distinguish
twins, and it allegedly stays the same throughout a person's life. Like
facial recognition, iris scans cut out the part where someone has to be
arrested or convicted of a crime for law enforcement to grab a record of their
biometric data.

…

3. Rap-Back System

A lot of the action in the FBI's fingerprint database is in
background checks for job applicants applying to industries that vet for
criminal history, like taking care of the elderly or children, hospital work,
and strangely, being a horse jockey in Michigan. As Cari Athens, writing for
the Michigan Telecommunications and Technology Law Review, points out,
if a job applicant checks out, the FBI either destroys the prints or returns
them to the employer. But that's no fun if the goal is to collect vast amounts
of biometric data!

…

4. Data Sharing Between Agencies

The roll-out of NGI advances another goal: breaking down
barriers between databases operated by different agencies. One of the
directives of the billion-dollar project is to grease information swapping
between the Department of Homeland Security, the State Department, the
Department of Justice, and the Department of Defense. The DOJ and DHS have
worked toward "interoperability" between their databases
for years. In 2009, the Department of Defense and DOJ also signed on
to an agreement to share biometric information.

…

5. NGI and Secure Communities (S-Comm)

One recent test run in interagency data-sharing has not gone
particularly well: Secure Communities, a DHS program that lets local law
enforcement officials run the fingerprints of people booked in jails against
the IDENT database to check their immigration status and tip off ICE to
undocumented immigrants.

Like many policies targeting America's immigrant population,
Secure Communities (S-Comm) -- pitched as protection against violent criminals
-- devolved into dragnets and mass deportations, with people getting
dragged in for minor offenses like missing business permits and even for
reporting crimes. In one incident a woman called the police about a
domestic violence incident, only to be ensnared in deportation proceedings
herself. As Marie Diamond points out in Think
Progress, DHS's immigration databases have so many errors that the program
"routinely flags citizens as undocumented immigrants."

…

What could possibly go wrong?

Advancements in the collection of biometric data are
double-edged: there's the threat of a massive government surveillance
infrastructure working too well -- e.g., surveillance state
-- and there are concerns about its weaknesses, especially in keeping data
secure.

A breach of a sophisticated, multi-modal biometric database
makes for a nightmarish scenario because the whole point of biometric data is
that it offers unique ways to ID people, so there's no easy fix -- like a password
change -- for compromised biometric data. Pointing to the dangers of
identity theft involving biometric data, Patel observes that, "Unlike a password,
the algorithm of an iris can't be changed."

As I have often stated, "What concerns me is
what are the side effects of living in a society without privacy. Where are we
left when the power of corporate or government interests to monitor everything
we do is absolute?

Whether it's the knowledge that everything we do on the internet is followed and
stored, that we can be wiretapped for no reason and without a warrant or
probable cause, that smart grid systems monitor our daily in home habits and
actions, that our emails can be intercepted, that our naked bodies must be
viewed at airports and stored, that our book purchases can be accessed (particularly
if Google gets its way and everything goes electronic), that street corner
cameras are watching our every move, and that RFID tags and GPS technology
allow for the tracking of clothes, cars, and phones (and the list goes
on)...what is certain is privacy itself is on life support in this
country...and without privacy there is no freedom. I also fear how such a
surveillance society stifles dissent and discourages grassroots
political/social activism that challenges government and corporate power...something
that we desperately need more of in this country, not less."
