Working groups considering fundamental questions concerning the pursuit of transparency in qualitative empirical research — questions that cut across the particular forms of research in which qualitative scholars engage

The focus of this working group is on evidence that comes from first-hand observations of, or interactions with, human participants, including formal as well as informal, unstructured interviews; observation of or participation in meetings and events; and non-interview interactions with human subjects, including surveys. (Ethnography is the focus of a separate working group.) This working group will, in particular, focus on two potential types of transparency with such evidence: transparency about how scholars have made observations or generated evidence through research with human participants; and questions of when, why, and how this evidence can or should be made available or easily findable to others.

This working group's deliberations will consider the circumstances under which, and the reasons why, researchers might share elements of their interactions with human participants; why, when, and how researchers can/should be transparent about the process of collecting interview and survey evidence or about the nature of their engagements with research participants; the costs of and limits to transparency in these areas; and ways of being transparent with research participants themselves. This working group will also investigate and assess technologies and infrastructure that might aid scholars wishing to share evidence from first-hand observations of, or interactions with, human participants.

To download the working group's draft report, select the "DRAFT REPORT" announcement. Please provide comments or other feedback on the draft via the first topic-thread "Comments on Draft Report ..." You may also continue to view and add to the earlier threads. Please log in first so that your post is attributed to you. Anonymous posts will display only after a delay to allow for administrator review. Contributors agree to the QTD Terms of Use.

There are scholarly reasons, as others have noted, for sharing the data we collect from human subjects. Others may be able to build on our work, and knowledge can thus accumulate more quickly. Sharing may also discourage the misrepresentation of evidence. In addition, funding bodies are increasingly requiring that the data whose collection they finance be shared with the broader community.

One concern that has not yet been raised in this forum, however, is how cybersecurity should factor into our decision-making regarding the costs and benefits of transparency, and particularly graduated or intermediate forms of data sharing. Cyber “insecurity” may affect the circumstances under which data from human subjects may be shared without endangering participants.

It is important to consider the increasing insecurity of digital storage systems. When sharing transcripts or recordings with journals or a repository (even with access restrictions in place, such as limiting access to members of the scholarly community), one is relinquishing control over the materials, and thus is unable to personally ensure that materials are not broadly disseminated by accident. This is abundantly clear as instances of hacking facilitated by sophisticated phishing schemes proliferate. (I would certainly never fault a journal or QDR if they were hacked by the Chinese or WikiLeaks, given that the U.S. government has fallen prey to attacks.)

Perhaps the way forward is to give researchers the leeway to decide what level of sharing is truly unlikely to put subjects further at risk than they already are, based on their knowledge of the context in which they work, the nature of their IRB approval, the content of the interview, etc. This will likely mean sharing transcripts or notes from some interviews (perhaps those that can be realistically anonymized, or where the subject matter could in no way be conceived of as objectionable) and keeping others on one's own computer (with names coded, etc., and nothing stored in the cloud). In articles, references to anonymous interviews are usually complemented by material from other sources, so it does not seem unreasonable to ask journals to be accommodating on this front.
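The "names coded" step mentioned above can be as simple as maintaining a key file, stored separately from the transcripts, that maps real names to participant codes. A minimal Python sketch of this practice follows; all names, codes, and the example transcript are invented for illustration, and a real workflow would keep the key offline and apply the same treatment to places, organizations, and other identifying details:

```python
# Minimal sketch: replace participant names with stable codes before
# storing transcripts locally. Names and codes are illustrative only.
import re

# Hypothetical key, kept in a separate (offline) file from the transcripts.
CODE_KEY = {
    "Amina Diallo": "P01",
    "Carlos Reyes": "P02",
}

def pseudonymize(text: str, key: dict) -> str:
    """Replace each real name appearing in `text` with its participant code."""
    for name, code in key.items():
        text = re.sub(re.escape(name), code, text)
    return text

transcript = "Amina Diallo said the clinic closed; Carlos Reyes disagreed."
coded = pseudonymize(transcript, CODE_KEY)
print(coded)  # P01 said the clinic closed; P02 disagreed.
```

Note that simple substitution of names is not full anonymization: contextual details in the surrounding text can still identify a participant, which is why the post argues for researcher discretion over which transcripts are shareable at all.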

My concerns about security do not just pertain to interview data. It is also becoming increasingly difficult to anonymize survey data in an age of big data. My colleagues in more technical disciplines maintain that there is sufficient data available on most individuals in countries like the US that much survey data collected these days can be traced back to specific individuals. This is particularly the case for geo-referenced data, such as that collected through cell phones. As we move towards tablet-based surveys that collect GPS coordinates, etc., it is incumbent upon us to ensure that the measures we take to anonymize data are in fact effective, and, if they are not, that the survey responses would not put respondents at risk.
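One common way to check whether geo-referenced responses are effectively anonymized is to coarsen the coordinates (e.g., rounding to a grid) and then verify that every respondent shares their grid cell with at least k others — a simple k-anonymity test. The sketch below is illustrative only: the coordinates, grid precision, and threshold are invented, and real re-identification risk depends on what auxiliary data an adversary holds:

```python
# Minimal sketch: coarsen GPS coordinates to a grid, then check whether
# every coarsened location is shared by at least k respondents.
# All coordinates and the k threshold are invented for illustration.
from collections import Counter

def coarsen(lat, lon, precision=1):
    """Round coordinates to `precision` decimal places (~11 km at 1 dp)."""
    return (round(lat, precision), round(lon, precision))

def k_anonymous(points, k=3):
    """True if every coarsened cell contains at least k respondents."""
    cells = Counter(coarsen(lat, lon) for lat, lon in points)
    return all(count >= k for count in cells.values())

responses = [(6.5244, 3.3792), (6.5251, 3.3788), (6.5198, 3.3811),
             (9.0765, 7.3986)]  # the last respondent is alone in their cell
print(k_anonymous(responses, k=3))  # False: one cell holds a single person
```

A failing check like this one would signal that the lone respondent's location must be further coarsened or suppressed before release, which is exactly the kind of verification the post argues should precede any sharing of geo-referenced survey data.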

Thank you, Prof. Post, for highlighting cyber "insecurity" as a critical issue in the discussion of transparency in the discipline. Do you have suggestions or examples of more effective measures for anonymizing data?