On December 18, 2014, President Obama signed into law the Newborn Screening Saves Lives Reauthorization Act of 2014. The Act includes new timeliness and tracking measures to ensure newborn babies with deadly yet treatable disorders are diagnosed quickly. These changes responded to a Milwaukee Journal Sentinel investigation that found hospitals across the country delayed sending thousands of babies’ blood samples to state labs. A primary purpose of newborn screening is to detect disorders quickly, so any delays increase the risk of illness, disability, and even death.

Although a major reason for the Act’s amendments is to address these problematic delays, another important addition to the Act establishes a parental consent requirement before residual newborn blood spots (NBS) are used in federally funded research. The Act directs the Department of Health and Human Services (HHS) to update the Federal Policy for the Protection of Human Subjects (the “Common Rule”) to recognize federally funded research on NBS as “human subjects” research. It also eliminates the ability of an institutional review board to waive informed consent requirements for NBS research.

The piece has already been commented upon by several smart people, most recently Kay Lazar of the Boston Globe. Just one day after Ornstein’s piece went to press, the Dean of Harvard Medical School Jeffrey Flier (@jflier) tweeted “How could this be allowed to happen?” only to be informed by the Chair of Surgery at Boston Medical Center, Gerard Doherty, (@GerardDoherty4) that three Harvard-affiliated hospitals are in fact currently hosting camera crews for a similar series. The ensuing conversation reminded me just how limited a platform Twitter is for tricky conversations about health care law and ethics. So I did what any self-respecting millennial would do – I went home for the holidays and asked my mom to help me understand what the internet couldn’t.

Constitutional Implications of Ebola: Civil Liberties & Civil Rights In Times of Health Crises

This public forum addresses the constitutional and public health implications of Ebola response in the United States. According to state and federal laws, patient information is deemed private and is to be held in strict confidentiality. However, in the wake of Ebola, well-established protocols to guard patient privacy have been neglected or suspended without public debate. At this forum, a panel of experts raises questions not only about how to contain the disease, but also about the extent to which Americans value their healthcare privacy, civil liberties, and civil rights. To what extent are Americans’ Ebola fears influenced by the origins of the disease? What liberties are Americans willing to sacrifice to calm their fears? How should concern for public welfare be balanced with legal and ethical privacy principles?

As the nation braces for possibly more Ebola cases, civil liberties should be considered, including patient privacy. As news media feature headline-grabbing stories about quarantines, let’s think about the laws governing privacy in healthcare. Despite federal laws enacted to protect patient privacy, the Ebola scare throws into sharp relief the vulnerability both of individuals and of the regulations intended to protect them.

In 1996, Congress enacted the Health Insurance Portability and Accountability Act (HIPAA) to protect patient privacy. Specifically, HIPAA’s Privacy Rule requires that healthcare providers and their business associates restrict access to patients’ health care information. For many years, the law has been regarded as the strongest federal statement regarding patient privacy. But it may be tested in the wake of the Ebola scare with patients’ names, photographs, and even family information entering the public sphere.

Ebola hysteria raises questions not only about how to contain the disease, but also about how much Americans value their healthcare privacy. What liberties are Americans willing to sacrifice to calm their fears? How should concern for public welfare be balanced with legal and ethical privacy principles? For example, will Americans tolerate profiling travelers based on their race or national origin as a precautionary measure? What type of reporting norms should govern Ebola cases? Should reporting the existence of an Ebola case also include disclosing the name of the patient? I don’t think so, but for many the jury appears to be out.

The company is exploring creating online “support communities” that would connect Facebook users suffering from various ailments. . . . Recently, Facebook executives have come to realize that healthcare might work as a tool to increase engagement with the site. One catalyst: the unexpected success of Facebook’s “organ-donor status initiative,” introduced in 2012. The day that Facebook altered profile pages to allow members to specify their organ-donor status, 13,054 people registered to be organ donors online in the United States, a 21-fold increase over the daily average of 616 registrations . . . . Separately, Facebook product teams noticed that people with chronic ailments such as diabetes would search the social networking site for advice, said one former Facebook insider. In addition, the proliferation of patient networks such as PatientsLikeMe demonstrates that people are increasingly comfortable sharing symptoms and treatment experiences online. . . . Facebook may already have a few ideas to alleviate privacy concerns around its health initiatives. The company is considering rolling out its first health application quietly and under a different name, a source said.

Under HIPAA, patients’ spouses and other family members have certain rights to access health information. In an important guidance document in the wake of United States v. Windsor, the Office for Civil Rights (OCR) at HHS has clarified that “spouse” under HIPAA refers to legally married same-sex spouses, even if the individual is receiving services in a jurisdiction not recognizing same-sex marriage.

In a post last week I compared Apple’s new mHealth App Store rules with our classic regulatory models. I noted that the ‘Health’ data aggregation app and other apps using the ‘HealthKit’ API that collect, store, or process health data would seldom be subject to the HIPAA Privacy and Security Rules. There will be exceptions, for example, apps linked to EMR data held by covered entities. Equally, the FTC will patrol the space looking for violations of privacy policies, and most EMR and PHR apps will be subject to federal breach notification regulations.

Apple has now publicly released its App Store review guidelines for HealthKit, and they make for an interesting read. First, it is disappointing that Apple has taken its cue from our dysfunctional health privacy laws and concentrated its regulation on data use rather than collection. A prohibition on collecting user data other than for the primary purpose of the app would have been welcome. Second, apps using the framework cannot store user data in iCloud (which does not offer a BAA), raising the question of where it will be acceptable for such data to be stored. Amazon Web Services? Third, while last week’s leaks are confirmed and there is a strong prohibition on using HealthKit data for advertising or other data-mining purposes, the official text has a squirrelly coda: “other than improving health, medical, and fitness management, or for the purpose of medical research.” This needs to be clarified, as does the choice architecture.

On September 9 Apple is hosting its ‘Wish We Could Say More’ event. In the interim we will be deluged with the usual uninformed speculation about the new iPhone, an iWatch wearable, and who knows what else. What we do know, because Apple announced it back in June, is that iOS 8, Apple’s mobile operating system, will include an app called ‘Health’ (backed by a ‘HealthKit’ API) that will aggregate health and fitness data from the iPhone’s own internal sensors, third-party wearables, and EMRs.

What has been less than clear is how the privacy of this data is to be protected. There is some low-hanging legal fruit. For example, when Apple partners with the Mayo Clinic or EMR manufacturers to make EMR data available from covered entities, they are squarely within the HIPAA Privacy and Security Rules, triggering the requirements for Business Associate Agreements, etc.

But what of the health data collected by the Apple health data aggregator or other apps that lies outside of protected HIPAA space? Fitness and health data picked up by apps and stored on the phone or on an app developer’s analytic cloud fails the HIPAA applicability test, yet may be as sensitive as anything stored on a hospital server (as I have argued elsewhere). HIPAA may not apply, but this is not a completely unregulated area. The FTC is more aggressively policing the health data space and is paying particular attention to deviance from stated privacy policies by app developers. The FTC also enforces a narrow and oft-forgotten part of HIPAA that applies a breach notification rule to non-covered-entity PHR vendors, some of whom no doubt will be selling their wares on the App Store.

The stakes were high in Sutter: under the California statute, medical data breach claims trigger (or should trigger!) nominal damages of $1,000 per patient. Here four million records were stolen, putting roughly $4 billion in potential exposure on the table.

Plaintiffs first argued that the defendant breached a section prohibiting unconsented-to disclosure. The court’s not-unreasonable response was that this provision required an affirmative act of disclosure by the defendant, which was not satisfied by a theft.

A second statutory provision argued by the plaintiffs looked like a winner. This section provided, “Every provider of health care … who creates, maintains, preserves, stores, abandons, destroys, or disposes of medical information shall do so in a manner that preserves the confidentiality of the information contained therein.”

Art Caplan has a new opinion piece on NBCNews on the controversy over the case of Jessie Herald, who was offered a plea bargain involving sterilization in exchange for a reduced sentence. From the piece:

Jessie Lee Herald was facing five years or more in prison after a crash in which police and prosecutors said his 3-year-old son was bloodied but not seriously hurt. But Herald cut a deal. Or more accurately, the state agreed to reduce his sentence if he would agree to be cut. Shenandoah County assistant prosecutor Ilona White said she offered Herald, 27, of Edinburg, Virginia, the opportunity to get a drastically reduced sentence if he would agree to a vasectomy. It may not be immediately clear what a vasectomy has to do with driving dangerously and recklessly. It shouldn’t be. There is no connection.

Art Caplan has authored a new opinion piece on Bioethics.net on the issue of “chipping” human beings. From the piece:

There has been a great deal of finger-pointing, second-guessing, and recrimination over the decision by the President to exchange five former Taliban leaders for the American soldier Bowe Bergdahl. “You’ve just released five extremely dangerous people, who in my opinion … will rejoin the battlefield,” Senator Marco Rubio, R-Fla., a likely presidential candidate, told Fox News. Senator John McCain, R-Ariz., told ABC News and many other outlets that he would never have supported the swap if he’d known exactly which prisoners would be exchanged, given their former high roles in battling the U.S. in Afghanistan.

Put aside for a second whether the five Taliban leaders that were flown to Qatar for Bergdahl are now too old and too long removed from Taliban affairs to resume anything close to their old roles. Presume, instead, they will eagerly resume where they left off prior to their capture, attacking Americans and others they see as hindering Taliban goals for Afghanistan. Is it possible that the U.S. did something to these men before letting them go in the swap—surreptitiously implanting them with microchips so that they could be tracked or traced?

The President’s Council of Advisors on Science and Technology (PCAST) has issued a report intended to be a technological complement to the recent White House report on big data. This PCAST report, however, is far more than a technological analysis—although as a description of technological developments it is wonderfully accessible, clear and informative. It also contains policy recommendations of sweeping significance about how technology should be used and developed. PCAST’s recommendations carry the imprimatur of scientific expertise—and lawyers interested in health policy should be alert to the normative approach of PCAST to big data.

Here, in PCAST’s own words, is the basic approach: “In light of the continuing proliferation of ways to collect and use information about people, PCAST recommends that policy focus primarily on whether specific uses of information about people affect privacy adversely. It also recommends that policy focus on outcomes, on the ‘what’ rather than the ‘how,’ to avoid becoming obsolete as technology advances. The policy framework should accelerate the development and commercialization of technologies that can help to contain adverse impacts on privacy, including research into new technological options. By using technology more effectively, the Nation can lead internationally in making the most of big data’s benefits while limiting the concerns it poses for privacy. Finally, PCAST calls for efforts to assure that there is enough talent available with the expertise needed to develop and use big data in a privacy-sensitive way.” In other words: assume the importance of continuing to collect and analyze big data, identify potential harms and fixes on a case-by-case basis possibly after the fact, and enlist the help of the commercial sector to develop profitable privacy technologies.

In a recent blog I discussed the benefits and potential drawbacks of a new “EU Regulation on clinical trials on medicinal products for human use,” which was adopted by the European Parliament and Council in April 2014. Parallel to these legislative developments, the drug industry has responded with its own initiatives providing for varying degrees of transparency. But medical authorities, too, have been very active in developing their transparency policies.

In the US, the FDA proposed new rules that would require disclosure of masked and de-identified patient-level data. In the EU, the EMA organized during 2013 a series of meetings with its five advisory committees to devise a draft policy for proactive publication of and access to clinical-trial data. In June 2013 this process resulted in the publication of a draft policy document titled “Publication and access to clinical-trial data” (EMA/240810/2013).

Following an invitation for public comments on this document, the EMA received more than 1,000 submissions from stakeholders. Based on these comments the EMA recently proposed “Terms of Use” (TOU) and “Redaction Principles” for clinical trial data disclosure.

In a letter to the EMA’s executive director Dr. Guido Rasi, dated 13 May 2014, the European Ombudsman, Emily O’Reilly, has now expressed concern about what seems to be a substantial shift of policy regarding clinical trial data transparency.

Privacy is never easy to think about. This week it became harder. Two pieces framed my week. First, Eben Moglen’s essay in The Guardian (based on his Columbia talks from late last year) took my breath away; glorious writing and stunning breadth combined to deliver a desperately sad (but not entirely hopeless) message about government and corporate overreaching in data collection and processing.

A wry speech posted by software developer Maciej Ceglowski also helped frame my thoughts. He wrote, “The Internet somehow contrives to remember too much and too little at the same time, and it maps poorly on our concepts of how memory should work.” There’s the problem in a nutshell. Ceglowski alludes to the divide between how human (offline) memory operates (it’s “fuzzy” and “memories tend to fade with time, and we remember only the more salient events”) and the online default of remembering everything. Government and Google and, for that matter, Big Data Brokers tell us that online rules now apply across the board and ‘that’s just peachy’ because we’ll have better national security, better searches, or more relevant advertising. But that’s backwards.

A resident of Spain allegedly owed back taxes, triggering attachment proceedings. The local newspaper published the details of an upcoming auction of his property in early 1998. At some point the issue was settled. However, the matter was not forgotten—the newspaper was online, and a Google search of the gentleman’s name returned this history. He complained to the Spanish data protection agency (AEPD) that he had a right to have older, irrelevant information erased and that Google should remove the links. The AEPD agreed and Google sued for relief. The Spanish High Court referred the interpretation of the Data Protection Directive (95/46) to the European Court of Justice in 2010, and in 2013 the Advocate General issued an advisory opinion supportive of Google’s position. Somewhat surprisingly, the European Court of Justice has now taken the opposite view (Case C‑131/12, Google Spain SL v. AEPD, May 13, 2014).

We are currently seeking abstracts for academic presentations/papers on the following topics:

Stem cell therapies

Nanotechnologies

Genetic (and biomarker) tests

Gene therapies

Personalized medicine

Comparative efficacy research

Drug resistant pathogens

Globalized markets

Tobacco

GMO

Bioterrorism countermeasures

Mobile health technologies

Health IT

Drug shortages

Other related topics

Abstracts should be no longer than 1 page, and should be emailed to Davina Rosen Marano at dsr@fdli.org by Tuesday, June 3, 2014. Questions should also be directed to Davina Rosen Marano.

We will notify selected participants by the end of June. Selected participants will present at the symposium, and will be expected to submit a completed article by December 15, 2014 (after the event) to be considered for publication in a 2015 issue of FDLI’s Food and Drug Law Journal (FDLJ). Publication decisions will be made based on usual FDLJ standards.

“aims to remedy the shortcomings of the existing Clinical Trials Directive by setting up a uniform framework for the authorization of clinical trials by all the member states concerned with a given single assessment outcome. Simplified reporting procedures, and the possibility for the Commission to do checks, are among the law’s key innovations.”

Moreover, and very importantly, the Regulation seeks to improve transparency by requiring pharmaceutical companies and academic researchers to publish the results of all their European clinical trials in a publicly accessible EU database. In contrast to earlier stipulations, which obliged sponsors to publish only the end results of their clinical trials, the new law requires full clinical study reports to be published after a decision on – or withdrawal of – marketing authorization applications. Sponsors who do not comply with these requirements will face fines.

These groundbreaking changes will enter into force 20 days after publication in the Official Journal of the EU. However, the Regulation will apply only six months after a new EU portal for the submission of clinical trial data and the above-mentioned EU database have become fully functional. Since this is expected to take at least two years, the Regulation will apply in 2016 at the earliest (with an opt-out choice available until 2018).

We live in a time when our personal information is increasingly available on the internet. This personal information includes our names and phone numbers, things we’ve written and things we’ve done, along with a good deal of information that only exists because we interact with others on the internet – thoughts that we might not otherwise have externalized, or that we certainly would not have saved for others to read.

If all of this information is publicly available, all of this information can be gathered. Already advertisers analyze our behaviors to better target products to us. It is not hard to imagine a not-so-distant future in which the government analyzes this data to determine whether we have a DSM mental disorder. By looking at the online behaviors of those already diagnosed – the way the syndrome affects their usage patterns, the sites they visit, and how they interact with others online – it is likely that one could find statistically significant usage patterns that distinguish individuals with a diagnosis from those without. The available data could then be mined to identify other individuals who exhibit the usage pattern, allowing for presumptive diagnosis.
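To make the mechanics concrete, here is a deliberately toy sketch of the kind of pattern-matching described above. Everything in it is a hypothetical assumption for illustration: the three "usage features" (late-night sessions per week, posts per day, average reply latency in hours), the invented numbers, and the simplistic nearest-centroid rule; a real system would be far more elaborate, but the logic — learn the average pattern of a diagnosed group, then flag strangers who resemble it — is the same.

```python
import math

# Hypothetical usage features per user:
# (late-night sessions/week, posts/day, avg reply latency in hours)
diagnosed = [(9.0, 1.0, 12.0), (8.0, 0.5, 15.0), (10.0, 1.5, 10.0)]
undiagnosed = [(2.0, 3.0, 2.0), (3.0, 4.0, 1.5), (1.0, 2.5, 3.0)]

def centroid(rows):
    """Average each feature across a group of users."""
    n = len(rows)
    return tuple(sum(r[i] for r in rows) / n for i in range(len(rows[0])))

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def presumptive_label(user, pos_centroid, neg_centroid):
    """Flag a user if their usage pattern sits closer to the
    diagnosed group's average than to the undiagnosed group's."""
    closer_to_pos = distance(user, pos_centroid) < distance(user, neg_centroid)
    return "flagged" if closer_to_pos else "not flagged"

pos = centroid(diagnosed)
neg = centroid(undiagnosed)
# A never-diagnosed user whose pattern resembles the diagnosed group:
print(presumptive_label((8.5, 1.0, 11.0), pos, neg))  # prints "flagged"
```

The unsettling point the sketch makes is how little is needed: no medical record is consulted, only behavioral traces, yet the user comes out presumptively labeled.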

Where does one start with AOL CEO Armstrong’s ridiculous and unfeeling justifications for changes in his company’s 401(k) plan? Cable TV and Twitter came out of the blocks fast with the obvious critiques. And the outrage only increased after novelist Deanna Fei took to Slate to identify her daughter as one of the subjects of Armstrong’s implied criticism. Armstrong has now apologized and reversed his earlier decision.

As the corporate spin doctors contain the damage, Armstrong’s statements likely will recede from memory, although I am still hoping The Onion will memorialize Armstrong’s entry into the healthcare debate (suggested headline, “CEO Discovers Nation’s Healthcare Crisis Caused by 25 Ounce Baby”). But supposing (just supposing) your health law students ask about the story in class this week. What sort of journey can you take them on?