It allows the targeting of specific interventions — not only for large groups, but also for individual patients, says Magill, “in a sophisticated manner that is not otherwise feasible.”

There is no question that individual privacy will be compromised to some degree by computer analytics making use of vast amounts of personal information for medical care, research protocols, and health insurance.

While health data is protected by law, so much data is being collected from so many sources, “that law and custom haven’t caught up with protecting information combined into what’s called ‘big data,’” says Bonnie Kaplan, PhD, FACMI, Yale Interdisciplinary Bioethics Center Scholar and faculty in the Yale Center for Medical Informatics at the School of Medicine at Yale University in New Haven, CT.

Medical record and prescription data are being used, and even sold, for a variety of purposes. As long as it’s de-identified, patients’ permission isn’t needed.

“Besides the information people give clinicians as part of their healthcare, they also may freely give out health information via health-related social networking, web postings, and internet searches,” notes Kaplan. Their various medical devices, wearables, or smartphone apps also can generate information. Not all of this is covered by HIPAA, says Kaplan — though many people think it is. Once collected, such information can be consolidated and linked with other data sources.

“Patients may have no idea what is done with that data, and how they may be helped or hurt by it,” says Kaplan. Public health, research, entrepreneurship, and marketing uses of these data can help people. “Yet patients can be harmed when data about them are used to violate privacy; to deny employment, credit, housing, or insurance; or for identity theft and other unsavory purposes,” says Kaplan. She sees the following as important ethical and legal questions:1,2

As it gets easier to identify someone by combining information from multiple sources, is de-identification of clinical data sufficient for privacy protection?

How meaningful is consent, authorization, or permission for data release when patients have little real choice if they want the services that require it — such as insurance, participation in research, programs that provide very expensive drugs free or at low cost, or health-related social networks, smartphone apps, wearables, and monitors used for various health purposes?

How can the costs of collecting and curating data for beneficial purposes, such as public health, improving care, adverse event monitoring, and research, be recouped?

How can we make secondary and subsequent data use more transparent, and allow people to consent (or withdraw consent) for both anticipated and unanticipated future uses?

How can we advance beneficial data uses without compromising privacy or facilitating nefarious uses?
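The first of these questions turns on how easily “de-identified” records can be linked back to people through quasi-identifiers such as ZIP code, birth date, and sex. A minimal, hypothetical Python sketch of that kind of linkage attack (all names and records here are invented for illustration):

```python
# Hypothetical illustration: a "de-identified" clinical table still
# carries quasi-identifiers (ZIP code, birth date, sex) that can be
# joined against an identified public record, such as a voter roll,
# to recover identities. All data below are invented.

# De-identified clinical records: names removed, quasi-identifiers kept.
clinical = [
    {"zip": "02138", "dob": "1945-07-21", "sex": "F", "diagnosis": "hypertension"},
    {"zip": "02139", "dob": "1978-03-02", "sex": "M", "diagnosis": "asthma"},
]

# Identified public registry sharing the same quasi-identifiers.
registry = [
    {"name": "J. Doe", "zip": "02138", "dob": "1945-07-21", "sex": "F"},
    {"name": "R. Roe", "zip": "02139", "dob": "1978-03-02", "sex": "M"},
    {"name": "A. Poe", "zip": "02139", "dob": "1990-11-15", "sex": "F"},
]

QUASI = ("zip", "dob", "sex")

def key(record):
    """Quasi-identifier tuple used as the join key."""
    return tuple(record[k] for k in QUASI)

# Index the registry by quasi-identifiers, then join the tables.
by_quasi = {}
for person in registry:
    by_quasi.setdefault(key(person), []).append(person["name"])

reidentified = []
for row in clinical:
    matches = by_quasi.get(key(row), [])
    if len(matches) == 1:  # a unique match re-identifies the record
        reidentified.append((matches[0], row["diagnosis"]))

print(reidentified)  # both "anonymous" records now carry a name
```

In this toy example, every stripped record links to exactly one named individual, which is why ethicists question whether removing direct identifiers alone is sufficient protection as more linkable datasets become available.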

Kaplan says more discussion is needed about what data uses are acceptable, and what control individuals should have over data about themselves.

More transparency about data collection and uses, and about what the law does and does not protect, adds Kaplan, “can help people make wise choices about what they want to allow, and policy to be made as to harms and benefits.”

Blair Henry, a senior ethicist at Sunnybrook Health Sciences Centre and assistant professor at the University of Toronto in Ontario, Canada, says there is a need to move away from a “zero-sum model” that pits privacy against research. Instead, he argues, “we need to ‘design’ privacy into our research at the front end — not wait until it’s checked at the back end by an IRB.”

Currently, there are no universally accepted standards for de-identifying health information. “We need to think about consent paradigms of the past, and how we need to adjust things for big data use,” says Henry.

Magill says tracking individual consent for all possible uses will become “increasingly complex, and perhaps impossible.” Some researchers are adding “open consent” to their informed consent processes, permitting personal data to be used for purposes beyond the immediate cause for giving the consent.

Sharona Hoffman, JD, professor of law and bioethics at Case Western Reserve University School of Law in Cleveland, observes, “Because there is so much good that can be derived from big data in the areas of research and medical advances, we may need to be a little bit less focused on privacy concerns.” Hoffman is author of Electronic Health Records and Medical Big Data: Law and Policy (Cambridge University Press, 2016).

Education is needed to get public buy-in, however. Many people are unaware of the benefits of big data and fear invasion of their privacy. “This is not the kind of story that often attracts media attention because it’s not an immediate crisis,” says Hoffman. “But it’s important to get the word out there to counter some of the fear.”

People may not realize that big data give researchers the ability to conduct post-market monitoring of drugs and devices to track how well they’re working in actual patients, for instance. “And if they are hurting people, interventions can be quickly implemented,” says Hoffman.

With the clear potential for widespread abuse by employers, insurers, or the government, ethicists say guidance is needed to protect the privacy of individuals as much as possible. Magill argues, “There is a need for sound regulation to guide and oversee this brave new world of algorithmic-based healthcare.”

Going back to the old days is no longer possible, given the widespread use of electronic capacities in healthcare, he adds. “The genie is already out of the bottle.”