Working groups considering fundamental questions about the pursuit of transparency in qualitative empirical research, questions that cut across the particular forms of research in which qualitative scholars engage

The focus of this working group is on evidence that comes from first-hand observations of, or interactions with, human participants, including formal as well as informal, unstructured interviews; observation of or participation in meetings and events; and non-interview interactions with human subjects, including surveys. (Ethnography is the focus of a separate working group.) This working group will, in particular, focus on two potential types of transparency with such evidence: transparency about how scholars have made observations or generated evidence through research with human participants; and questions of when, why, and how this evidence can or should be made available or easily findable to others.

This working group's deliberations will consider the circumstances under which, and the reasons why, researchers might share elements of their interactions with human participants; why, when, and how researchers can/should be transparent about the process of collecting interview and survey evidence or about the nature of their engagements with research participants; the costs of and limits to transparency in these areas; and ways of being transparent with research participants themselves. This working group will also investigate and assess technologies and infrastructure that might aid scholars wishing to share evidence from first-hand observations of, or interactions with, human participants.

To download the working group's draft report, select the "DRAFT REPORT" announcement. Please provide comments or other feedback on the draft via the first topic-thread, "Comments on Draft Report ..." You may also continue to view and add to the earlier threads. Please log in first so that your post is attributed to you. Anonymous posts will display only after a delay to allow for administrator review. Contributors agree to the QTD Terms of Use.

When is it appropriate, ethical, and feasible to make explicit the strategies used to generate evidence with human participants? Are there issue-areas or research contexts that pose particular challenges in making the data generation process open to the broader public? What aspects of the data generation process have been more amenable to sharing in your experience? What ethical considerations have limited your ability to be fully transparent about your research methods? What practical costs (resources, time, etc.) have you incurred in making your data generation process accessible?

[quote="AnastasiaSh"]When is it appropriate, ethical, and feasible to make explicit the strategies used to generate evidence with human participants? Are there issue-areas or research contexts that pose particular challenges in making the data generation process open to the broader public? What aspects of the data generation process have been more amenable to sharing in your experience? What ethical considerations have limited your ability to be fully transparent about your research methods? What practical costs (resources, time, etc.) have you incurred in making your data generation process accessible?[/quote]

My research focuses on forced labour in global supply chains, a context and topic that create serious challenges for data generation and sharing. Because forced labour is illegal in most countries, and because companies and governments are hesitant to grant researchers access to their workforces, this issue area is extremely challenging to research. My research combines a range of qualitative methods, including ethnographic methods, participant observation, elite interviews, and interviews with vulnerable populations, to understand how forced labour operates in modern industry. Each method carries specific challenges in relation to ethically disclosing the strategies used to generate evidence. For instance, in research focused on workers and others vulnerable to forced labour, there is a danger of jeopardising the safety and job security of the gatekeepers (such as workers, union reps, or managers) who granted access to the research participants. In relation to elite interviews, disclosing details about the data generation process risks jeopardising the reputation of the organisations that have facilitated access to participants and can compromise anonymisation techniques. In all of my research, disclosing the strategies used to generate evidence risks undermining future research projects, since powerful organisations and individuals could seek to block future access, and since research participants may feel their anonymity has not been appropriately safeguarded.

Thank you. These are important points from the research on forced labor that have broader implications for qualitative research in sensitive settings with vulnerable populations. While we might not always think of gatekeepers and elites as "vulnerable," the issues that you highlight regarding these research participants' anonymity suggest the care required in making explicit the data generation process. Can we make our data generation process transparent while protecting the identities of our gatekeepers and elites who participated in our research?

AnastasiaSh wrote:When is it appropriate, ethical, and feasible to make explicit the strategies used to generate evidence with human participants? Are there issue-areas or research contexts that pose particular challenges in making the data generation process open to the broader public? What aspects of the data generation process have been more amenable to sharing in your experience? What ethical considerations have limited your ability to be fully transparent about your research methods? What practical costs (resources, time, etc.) have you incurred in making your data generation process accessible?

In my own research I primarily focus on 'process tracing' in the sense of reconstructing the causal process that leads to particular political and governance outcomes, at transnational and national levels. I am particularly focused on 'crises' in public policy and how they are resolved. In order to effectively triangulate my findings, I need to interview very specific people - usually elite politicians and civil servants, but also CEOs of private companies and newspaper reporters - who have knowledge about a particular event at a specific moment, or series of moments, in order to reconstruct the story of what happened and gain detailed causal knowledge. Often, because of the contentious 'crisis' moment in question, elites, especially those still in office, are very difficult to secure for interview without clear guarantees of confidentiality. Revealing the full transcript of an interview in this case would clearly reveal what the person knew about the specific events, whom they knew, and in relation to what processes. It would be extremely easy to link the 'raw' data back to them personally if it were made available even in partial fragments. Nonetheless, this elite evidence is absolutely crucial in reconstructing some of the most important political moments of our time, and how decisions came to be made during them.

The aspects of my data most amenable to sharing are those where the interviewees have made broader assertions or statements about general aspects of their work or about common practices in their department or organisation. These are not, however, the focus of the DA-RT initiative, which is concerned with transparency of the data that enables researchers to make causal claims. That would entail making very specific knowledge public, knowledge easily traceable to the source: the individual or group of individuals present at the time of the decision. If I had to make full interviews publicly available, this would pose a substantial barrier to conducting process tracing research into crises in public policy, research that gets at these specific organisations and individuals and their decision-making processes. It would substantially weaken the data I could include in an article, and therefore the power of the claims I could make, which would have to be far more abstracted. Contrary to the claims of the DA-RT initiative, these rules would in fact make process tracing research less, not more, rigorous.

In process tracing research, the most important aspect is the triangulation of findings, so that multiple sources are used to secure a credible account of the series of causal factors leading to a particular outcome. The 'transparency' issue here is hence a matter of the 'weight of evidence' for one interpretation over another. This simply requires that a reviewer judge how much evidence (often quantitative as well as qualitative) has been used to support the claims. Therefore, I would suggest focusing any 'transparency' initiative on asking authors to report their schedule of data collection rigorously and, if making any interview data available, to send it in anonymised form to peer reviewers so they can make the judgement in confidence. Making the data available publicly would mean vast swathes of research into crisis management in public policy and political science would be near impossible to conduct.

Thank you, Dr. Wood. Your insightful response from elite-based research on 'crises' in public policy raises a number of important issues. One point I would like to highlight is the broader knowledge that the researcher develops in the course of fieldwork. Even when providing short quotes or longer excerpts from our interviews is not advisable for human subject protection reasons, this grounded knowledge greatly impacts our understanding of the issue and the processes underlying it. While we may not be able to disclose the sources of this knowledge or the ways in which we arrived at it in full, it is an essential aspect of the generation of our findings.

Evaluating data from qualitative interviews occurs on three levels – sampling, validity, and reliability (Bleich and Pekkanen 2013). While validity is established through evidence triangulation, the reliability of an interviewee may be difficult to ascertain without compromising anonymity. With regard to sampling, there is scope for increasing transparency without compromising confidentiality, including through the creation of an interview appendix (Id.). But even the most faithful transcriptions cannot capture the depth of silences, confusion, laughter, or hostility during an interview. Here, carefully prepared and redacted field notes placed in a methodological appendix may capture the ways that context matters. (Due to length constraints, such appendices would differ for article- and book-length projects.) Collecting interview metadata may prove as important as collecting interviewees' reflections.

In addition, the obvious challenge of confidentiality operates acutely in societies with a small professional class concentrated in one or two metropolitan areas. In these settings, even one respondent's choice not to remain anonymous reduces the pool of people to which anonymous individuals belong. A researcher's commitment to confidentiality, even when a respondent prefers to speak publicly, enables scholars to protect those who want – or need – to remain anonymous.

Even settings with relative political stability may later collapse into political disorder and conflict, and those in power may suddenly find themselves outside the state's protection. Though generating accurate transcriptions is costly and time-consuming, transcriptions offer an additional layer of protection compared with recordings.

For interview-based research, scholars should clarify ethical concerns around interviewee anonymity, particularly in fragile political settings. The assumption that interview data must be shared may not be viable in volatile settings with small professional classes, particularly where seemingly innocuous data may become political weapons down the road (Lynch 2016). Furthermore, good data transparency does not necessarily produce good data analysis, which involves the careful documentation of interview context – metadata – and the construction of interview appendices. In short, thinking creatively about how to conduct and disseminate interview-based research is critical to strengthening the inferential value of qualitative data.

Note: This post is adapted from the article, “Not all Law is Public: Reflections on Data Transparency for Law and Courts Research in Africa,” by Rachel Ellett and Mark Fathi Massoud, African Politics Conference Group Newsletter, August 2016. Available at https://dialogueondartdotorg.files.word ... er12_2.pdf

Thank you for your response, especially for drawing attention to interview context, or metadata. The article linked in your post provides further insight and is an important contribution to the conversation.

I concur with the constructive spirit of many of the contributions to this forum, especially those which have focused on identifying ways in which researchers can be more explicit about their research design and methods. Providing explicit and thorough discussions of research design and procedures (e.g. how interview subjects are recruited, the percentage of a desired sample one was in fact able to interview, etc.) is distinct from sharing "data" from interactions with human subjects. Other contributors have made very sensible suggestions regarding how standard practices for such documentation could be improved. If this is to be expected of most qualitative articles, journals that do not currently accommodate online appendices will need to be more flexible on this matter to ensure there is sufficient space for such materials, or else increase article word-length limits.

aepost wrote:I concur with the constructive spirit of many of the contributions to this forum, especially those which have focused on identifying ways in which researchers can be more explicit about their research design and methods. Providing explicit and thorough discussions of research design and procedures (e.g. how interview subjects are recruited, the percentage of a desired sample one was in fact able to interview, etc.) is distinct from sharing "data" from interactions with human subjects. Other contributors have made very sensible suggestions regarding how standard practices for such documentation could be improved. If this is to be expected of most qualitative articles, journals that do not currently accommodate online appendices will need to be more flexible on this matter to ensure there is sufficient space for such materials, or else increase article word-length limits.

Thank you for your response, Prof. Post. If the transparency requirements for qualitative research are indeed different, then the issue of sufficient space, and of the additional time and effort invested in methodological discussions and appendices, should be an important part of the conversation.