
Rapid developments in health self-quantification via ubiquitous computing devices point to a future in which individuals will regularly collect health and wellness information using smartphone apps and health sensors, and share it online for purposes of self-experimentation, community building, and research. Already, one-quarter of adult Internet users report tracking their own health data online; more than 40,000 mobile medical apps are now available on the market, and the field is projected to grow 25% annually. However, self-tracking users of consumer-grade health tools may inadvertently reveal private facts that they do not intend to share with others, including their locations, movement habits, sensitive medical conditions, psychological states, and personal behaviors. These unintended data flows may support reliable health inferences due to growing contemporary practices of data mining and profiling, leading to fears of employment and insurance discrimination, dignitary harms, and the gradual unraveling of social expectations of privacy in health information.

In the absence of clear statutory or regulatory architectures for self-generated health data, its privacy and security rest heavily on robust individual data management practices, which in turn rest in part on users' understandings of legal protections and commercial terms of service governing information flow. However, little is known about how individuals understand their privacy rights in self-generated wellness and health data under existing laws or privacy policies, or how these beliefs guide information management practices. Users value privacy but often misapprehend their legal protections. In the present case, we caution that if individuals consider self-generated wellness information to be medical in nature, they may be more likely to believe it to be protected by HIPAA and other privacy laws, leading to less privacy-protective information-sharing behaviors.

In the present study, we conduct twenty in-depth, structured qualitative interviews with users of the popular self-quantification service Fitbit to answer the following questions: (1) How do self-tracking individuals understand their privacy rights in self-generated health information versus clinically generated medical information? (2) In what ways do user beliefs about perceived privacy protections guide data management practices? (3) How closely do user beliefs (and preferences) comport with actual existing legal protections and privacy policies? Participants also complete a card-sorting task that allows for deeper reflection upon data-sharing preferences, as well as answer several established survey questions designed to assess privacy knowledge.

Early data suggest that research participants regard self-generated information as quasi-medical in nature, relying upon it to actively make changes to their health. However, they are also aware that the context in which information is collected and shared has consequences for its subsequent legal protections, and thus take pains to manage information collection and dissemination thoughtfully. Individuals exhibit highly granular sharing preferences regarding the nature of the information collected and the likely recipients of that information, including the purposes for which they believe information should legitimately be used. In particular, information-sharing preferences are strongly guided by context-dependent social norms regarding appropriate second-order health information transmissions, such as confidentiality. This leads to situations in which doctors and family are trusted recipients (because it is understood that, once shared, information will not continue to change hands), while commercial researchers, insurance companies, and marketers are less-preferred information recipients for the same reason. Thus, even if users are not always correct about their legal rights in self-generated health data, they aspire to guide their personal health data flows, and would benefit from commercial privacy settings and policies that afford them simple, transparent, and highly granular control over information, mapping to internal representations of appropriate data-flow norms.

This paper will be presented as a working paper at the 2013 Privacy Law Scholars Conference at the University of California, Berkeley; it has not been published (or submitted for publication) elsewhere.