Working groups considering fundamental questions concerning the pursuit of transparency in qualitative empirical research, which cut across the particular forms of research in which qualitative scholars engage



I am concerned about the lack of concern about time in the discussions about data access, particularly in terms of ethical considerations. I am troubled by the idea that respondent consent to having their identities revealed in publications might be taken as meaning that full transcripts or notes should be made public. There are three issues, at least.

First, consent forms for past research, when seeking permission to identify respondents, whether by name or by role, did NOT seek permission to publish complete transcripts. They sought permission to attribute specific material used. We cannot assume that consent to attribute quotes implies consent to attribute full interview transcripts. I could include this option when seeking consent in future research, but it would be difficult, if not impossible, to obtain such consent retroactively. Some elite respondents remain accessible, but it may not be possible to track them all down, whether because they have retired from public life or are non-responsive. Some of my past respondents have, unfortunately, died. I have not seen any discussion of the ethical challenges posed when consent was granted for attribution at a time when even the researcher had no expectation that anybody would request full transcripts.

Second (and this point is not explicitly about time), I share Lama Mourad's concern: the researcher might see risks that the respondent does not. Lama Mourad wrote: "In these cases, I could not - if requested by an editor - state that I cannot disclose transcripts or notes because my interviewees requested anonymity. However, if by my own assessment, this poses a greater risk than the possible benefit of "full transparency" (if "full transparency" is defined as making interviews and notes available when permitted by IRB). It appears to me that ethics of research should compel to think through (and be transparent about) these tradeoffs - marginal benefit to the research vs. potentially putting participants at some level of risk (as it is implied, in some of these cases, that they may not have agreed to the same level of disclosure if the work was to be published locally)."

By the way, we should never assume that publications for non-local audiences will not reach local audiences. Where I work, the local media regularly finds and quotes from (or even outright copies [never mind copyright]) academic presentations and publications related to local politics.

Third, circumstances can change. People might consent to having quotes attributed - or even to having a full interview transcript published - only to have circumstances (e.g., shifts in political power, regime dynamics) change in a manner such that, if asked again, they would never give such consent. I tell people that they can withdraw consent at any point, with the caveat that material that has been published is out of my hands. There are, however, two limitations to this option to withdraw consent. First, it places the burden on the respondent. If circumstances become more risky, former respondents may not think about the risks posed by interviews granted some time in the past. Further, a respondent may not be in a position to contact the researcher to withdraw consent even if they want to do so. They may have lost the contact information. It is not the case that everybody has easy access to email, for example. In repressive or conflict situations, attempting to communicate with a researcher may itself be risky. In these sorts of situations, the researcher should be able to step back from the level of consent granted at the time research was conducted. Second, and relatedly, once a full transcript has been published, the researcher loses all ability to reduce the level of exposure. Even if a publisher agrees to remove (or allow the scholar to remove) transcripts from public archives, we all know that material that has been removed from the internet has a way of resurfacing. While this inability to retract applies to all published material, the risk seems greater with the publication of full transcripts than with the publication of quotes.

This is such an informative thread. Thank you everyone for contributing. The point I want to make has already been made in various places, so I will just put it here to support and amplify some things that have been said. I have been paranoid about protecting research participants’ identities above and beyond any other priority. I think that probably under no circumstances should a researcher make interview transcripts and field notes available in a repository or any kind of open or closed forum. No matter how sure I feel that I have disguised identifying markers in these documents, or that there’s no substantial risk to participants, circumstances can change to completely transform the risk calculation, and the capacities of state authorities and intelligence agencies are constantly evolving – at a much higher pace than my ability to disguise markers. Here is an example: I have worked with members of the northern branch of the Islamic Movement in Israel, a social movement that was a perfectly legal (if controversial and definitely “watched”) organization at the time of my fieldwork. Then in 2015 the Israeli government outlawed the movement, making the mere fact of membership, work in, or affiliation with the movement an illegal activity. Now let’s say that in 2014 I had made available redacted and pseudonymized interview transcripts and field notes in a repository, thinking there was little risk for participants because they were not discussing any illegal activity of interest to any government agency. They were simply talking about community work and perfectly legal social and political organizing on behalf of a number of registered NGOs affiliated with the movement. All of a sudden in 2015 things that were legal – for example tutoring children in Quran memorization at a mosque as part of volunteer work in the movement, or collecting charity for needy families on behalf of the movement – are illegal.
Moreover, my material now provides potentially useful data for the Israeli Police and the Israeli Security Agency (Shabak) to comb through and try to find clues to identities and connections with the movement. My material becomes potentially useful evidence, and I have to trust that my disguising techniques somehow outsmart the human and technological sophistication of an intelligence agency. So, even though this work was conducted in a democracy (Freedom House rates Israel as Free) rather than a typical authoritarian regime, and even though I was not dealing with anything illegal, a sudden and unexpected spike in political repression can occur overnight and completely upend the researcher’s calculations about potential risks to participants. I would never have predicted that Israel would outlaw the Movement, and so I am grateful now for my general paranoia about protecting research participants. Openness is a worthy and interesting academic preoccupation, and I actually think qualitative scholars already do so much in terms of reflexivity and honesty about research processes and methodological challenges. I don't know that we have an openness problem. Protection of human subjects is a matter of people’s lives and freedom; that is where we are faced with real and frankly quite scary challenges, and challengers (intelligence and surveillance agencies, for example), who stand to benefit greatly from the new obsession with “transparency.”

tsquatrito wrote: The challenges with openness in research are plentiful, and take on particular dynamics when involving human subjects. To me, it seems that two questions in particular need to be asked: (1) what is the purpose of openness? Is it simply for the sake of transparency and perhaps to boost credibility, or is it to obtain replicability? (2) what are the ways in which openness can be obtained? Through production of all research material and "data" or through detailed reporting of research procedures.

This is a really helpful way to think about transparency. I think that the idea of qualitative data ever being replicable is ridiculous. Even if I translated and published my interview notes, it would be difficult for anyone to "replicate" my work. I access many of the people I interview through networks of trusted people: because someone in their network "trusts" me, they trust me enough to talk to me. It is unlikely that they would offer the same information to another scholar who approached them without the benefit of the network, which would distort the findings. Also, even if the person I interviewed agreed to allow me to use his/her information openly, I would still hesitate to do this. Most of my research is with civil society in China, and during the 2000s, many CSO leaders felt that the environment was increasingly liberal and would say that they didn't mind me using their names. However, all of that changed in 2012 with new leadership, and public statements made by CSO leaders and rights lawyers were then held against them as evidence of "bad intentions". It is unclear to me whether, in an authoritarian context, it is ever ethical to make interviewee information accessible even to make replicability a possibility; and even if another researcher approached these same people, it is doubtful they would be given the same information. Also, the added burden of making my notes accessible to others (my notes are usually a combination of Chinese and English, with a personal style of abbreviation) would make publishing prohibitively time consuming, without adding the benefit of replicability or even necessarily a better evaluation of my arguments.

However, as I said in another thread, I do think it would be helpful when I am reviewing colleagues' scholarship (and in writing my own articles) to decide on some best practices for the field, such as using more space for discussion of alternative explanations and methodology: why these interviewees might have this information, biases or missing information, scope conditions for this information, etc. It is hard to do this in a really thoughtful way within such small journal word counts, but perhaps developing a standard methodology appendix (that didn't count against word limits) would be a good way to do this.

lafujii wrote: Looking back, I think I had a good sense of the risks and dangers posed by an authoritarian state, but I did not have a clear sense of how being open about the various dilemmas I encountered, not just those related to protecting identities, and how I responded to them (which was imperfect at best) was a key part of the research process, not ancillary to it. I think openness and an ethical sensibility ideally go hand in hand. A stronger commitment to openness in terms of more sustained reflexivity would have helped me to sharpen my ethical sensibility and vice versa.

I really like this distinction - how we can be more clear about our methodology without revealing the identities of the people we interview. So we can focus more on discussing why we think a given set of interviewees would be in a good position to know about the topic, what biases or missing information might exist, and how we analyze this information (how reliable do we think our findings are?). I think doing these things would help others evaluate my work and would also help me think through what I can know and what I cannot (scope conditions). However, one problem I have with this is the word limits at most journals. It is already really difficult to explain a puzzle and methodology, review the literature, introduce cases, provide evidence to support the argument in these cases, and then discuss implications in a thoughtful way in under 8,000-10,000 words for a generalist journal. Given the additional space needed to introduce a country case and adequately describe one's methodology, I wonder whether this pushes qualitative work out of generalist journals and into country-specific journals, where less space is needed to explain the country case. Or would general political science journals be willing to expand the word count for qualitative work?

In inviting input into the Qualitative Transparency Deliberations, members of the Steering Committee asked a number of questions about “not just data access but also, for instance, transparency about how we’ve gathered the empirical information on which we rely, about how we have analyzed or interpreted that information” (viewtopic.php?f=10&t=56). I would like to briefly respond to some of these questions based on my recent experience of publication with the American Political Science Review.*

I am a qualitative researcher of the internal dynamics of civil war and focus on questions of social mobilization and participation in armed conflict. My current research is based on immersive fieldwork over 2011-2013 in the highly politicized and isolated environment of Abkhazia—a partially recognized, breakaway territory of Georgia—where I conducted 150 in-depth, semi-structured interviews with a range of participants and non-participants in the Georgian-Abkhaz war of 1992-1993, engaged in daily participant observation, and collected extensive additional primary and secondary materials, including 30 interviews in Georgia and Russia.

My work speaks to the issues of sensitive human subject research in violent conflict settings brought up in other contributions, including trust in the researcher necessary for field access (viewtopic.php?f=10&t=41#p64), grounded knowledge of the context and reflexivity involved in the interpretation and analysis (viewtopic.php?f=10&t=50; viewtopic.php?f=10&t=47), and unintended consequences of making field materials publicly available (viewtopic.php?f=10&t=79; see also Parkinson and Wood, 2015).

Here I will organize my comments around three aspects of the research process, namely, data access, production transparency, and analytic transparency, addressed in the 2012 Revisions to APSA’s Guide to Professional Ethics in Political Science, the 2013 Guidelines for Data Access and Research Transparency, and the 2014 Journal Editors Transparency Statement. While I addressed the issue of data access mainly in the manuscript, production and analytic transparency were facilitated by the inclusion of the online methodological appendices. This helped me protect my respondents in an ongoing way (Fujii 2012). I take each of these issues in turn.

I. MANUSCRIPT

1. Data access

I addressed the issue of data access in two stages. First, at the time of manuscript submission, I informed the editors that, in compliance with my ethics protocols, interview transcripts and participant observation notes could not be made publicly available, to ensure the security and confidentiality of my respondents. Even if anonymized, disclosure of field materials through a digital repository or online appendix posed a danger of compromising the identity of my respondents, as personal identifiers could be pieced together given the high network density and relatively small size of Abkhazia. This would be especially detrimental to those respondents who participated in the war in various capacities, but also to respondents more generally who came in contact with an international researcher of Canadian-Russian-Ukrainian background in a sensitive political environment complicated by the Russian presence and current relations between Georgia and Russia. Furthermore, this could jeopardise not only the trust of my respondents in my ability to protect in an ongoing way the information that they shared with me, but also my future security as a researcher in the area—a significant issue that deserves greater attention as part of the DA-RT deliberations (see, for example, viewtopic.php?f=10&t=47&p=203#p203). As an alternative to making my materials available in full, I offered a detailed description of my data collection and analysis procedures in online methodological appendices.

The second stage of addressing data access involved providing extended, often paragraph-length, interview excerpts in support of my manuscript. In a separate post, Alan Jacobs raised an important question in this regard: “Does providing key pieces of evidence in more extensive form help readers better understand and evaluate the empirical basis of findings?” (viewtopic.php?f=10&t=70). In my case, presenting extended interview excerpts helped me address a potential problem of evidence that could be viewed as too short and out of context for the reader to evaluate. At the same time, it posed similar challenges to making interview transcripts fully available. I had to ensure that I presented extended evidence as part of the typical mobilization trajectories while protecting individual details of my respondents. Moreover, providing extended evidence required significant additional space. This experience points to a broader issue faced by qualitative researchers who rely on extensive textual data, such as interview transcripts, as the empirical basis of their findings. The conditions of publication for this type of qualitative work can impose a different set of requirements not only on the researchers (for a discussion, see, for example, viewtopic.php?f=10&t=59), but also on the editors, whose willingness to offer additional space can be decisive in facilitating the publication of our work.

II. ONLINE METHODOLOGICAL APPENDICES

The issues outlined above relate to the manuscript itself as well as to the online methodological appendices in support of the manuscript, to which I turn now. I used the appendices to provide additional details of my data generation and analysis procedures, which increased the transparency of my research while protecting its participants by focusing on my choices in and out of the field. As a result, while the manuscript contains the central aspects of data generation and analysis, the detailed statement on my production and analytic transparency can be found in the online methodological appendices. This came at the price of an additional major piece of writing in support of the manuscript, with the effort and time it took to produce. Below I provide a number of examples of production and analytic transparency tools that I used in my methodological appendices.

2. Production transparency

Production transparency in my methodological appendices meant explaining the choices that I made in the field, particularly how I selected my research locales and respondents and how I used various interview strategies, participant observation, and additional primary and secondary materials to develop a grounded understanding of my case and address potential sources of bias in my data.

First, my long-term fieldwork benefited greatly from the exploratory field trip, when I probed the theoretical foundations and feasibility of my research and established contacts necessary for future fieldwork in Abkhazia. My selection of research locales depended on this field trip as I was able to test my initial assumptions and refine my research design based on the understanding that I developed of the spatial and temporal variation at the Georgian-Abkhaz war onset that could have produced distinct patterns of mobilization, but also the security conditions that limited my ability to conduct primary research in certain areas—an example of the back-and-forth between theory and data in the practice of research raised earlier (viewtopic.php?f=10&t=56&sid=85ff77d88e59dcec2877a515a90a1111#p185). This required me to devise creative strategies of locating comparable interviews conducted in these areas by other researchers and triangulating across extensive archival and secondary materials.

Second, I followed a number of strategies in accessing and selecting respondents in the particular conditions of my research. I worked independently, avoiding local government, non-governmental, or university affiliation, but sought permission for my research from local authorities—an important choice in the sensitive political context of Abkhazia. I devised a combined snowball and targeted selection strategy to ensure that I gained access to respondents with varied participation record along the mobilization roles continuum that I developed in advance of field research and refined during fieldwork in interaction with my data. My sustained presence and research activity allowed me to build the trust necessary for access and extend my initial networks, from which I selected subsequent respondents in each of my four research locales. I approached respondents in the participation categories not provided through the snowball sampling directly at their location of employment, which increased representativeness of my sample.

Finally, while my interviews spanned respondents’ life histories, I focused on the events of the war that took place two decades prior to my research. This created specific problems of potential bias related to reliability of recollections, endogeneity of memory to war-time processes, and homogeneity of responses due to common political loyalties (see Wood, 2004). I tackled these problems in multiple ways. My informed consent procedure stressed unavailability of benefits other than academic writing to reduce the incentives to misrepresent war-time mobilization. Respondents with a broad range of pre- and post-war political loyalties were selected to address the potential homogeneity problem. I used a combination of event and narrative questions in the interviews and drew on preceding interviews, participant observation, and the meta-data that emerged during the interviews (Fujii, 2010) to develop probes and follow-up questions. These strategies helped address issues of memory and suspected incomplete or misleading information and advance the conversation beyond the dominant narrative of conflict. Extensive triangulation, including with alternative interview archives collected by other researchers during the war and mid-way between the war and my research, allowed me to cross-check individual and collective mobilization trajectories and conflict narratives. I provided a full list of the (de facto) state, private, and news archives and libraries that I accessed and detailed how each of the sources that I used tackled the problem from different angles.

3. Analytic transparencyI employed two sets of analytic transparency tools in my methodological appendices. First, I clarified my three-stage coding strategy, whereby I coded the interviews according to broad background characteristics, recollections of events, and narratives of conflict, and provided a sample for each of the three stages of coding that I conducted. Second, I specified the sequence involved in my causal mechanism in comparison to alternative explanations in process tracing.

Overall, these practices and tools of transparency provided the foundation for the evaluation of my findings, while working to protect participants in my research. It is important to emphasize that, while the strategies that I adopted were available in the context of my research, they may not be transferable to other cases or modes of research, which suggests the need for greater trust in the researcher’ knowledge of the context (viewtopic.php?f=10&t=40&p=63#p63; on ethnographic sensibility, see Schatz, 2009) and warns against a uniform approach to evaluating transparency in qualitative research more broadly.

In inviting input into the Qualitative Transparency Deliberations, members of the Steering Committee asked a number of questions about “not just data access but also, for instance, transparency about how we’ve gathered the empirical information on which we rely, about how we have analyzed or interpreted that information” (https://www.qualtd.net/viewtopic.php?f=10&t=56). I would like to briefly respond to some of these questions based on my recent experience of publication with the American Political Science Review.*

I am a qualitative researcher of the internal dynamics of civil war and focus on questions of social mobilization and participation in armed conflict. My current research is based on immersive fieldwork over 2011-2013 in the highly politicized and isolated environment of Abkhazia—a partially recognized, breakaway territory of Georgia—where I conducted 150 in-depth, semi-structured interviews with a range of participants and non-participants in the Georgian-Abkhaz war of 1992-1993, engaged in daily participant observation, and collected extensive additional primary and secondary materials, including 30 interviews in Georgia and Russia.

My work speaks to the issues of sensitive human subject research in violent conflict settings brought up in other contributions, including trust in the researcher necessary for field access (https://www.qualtd.net/viewtopic.php?f=10&t=41#p64), grounded knowledge of the context and reflexivity involved in the interpretation and analysis (https://www.qualtd.net/viewtopic.php?f=10&t=50; https://www.qualtd.net/viewtopic.php?f=10&t=47), and unintended consequences of making field materials publicly available (https://www.qualtd.net/viewtopic.php?f=10&t=79; see also Parkinson and Wood, 2015).

Here I will organize my comments around three aspects of the research process, namely, data access, production transparency, and analytic transparency, addressed in the 2012 Revisions to APSA’s Guide to Professional Ethics in Political Science, the 2013 Guidelines for Data Access and Research Transparency, and the 2014 Journal Editors Transparency Statement. While I addressed the issue of data access mainly in the manuscript, production and analytic transparency were facilitated by the inclusion of the online methodological appendices. This helped me protect my respondents in an ongoing way (Fujii 2012). I take each of these issues in turn.

I. MANUSCRIPT

1. Data access

I addressed the issue of data access in two stages. First, at the time of manuscript submission, I informed the editors that, in compliance with my ethics protocols, interview transcripts and participant observation notes could not be made publicly available, in order to ensure the security and confidentiality of my respondents. Even if anonymized, disclosure of field materials through a digital depository or online appendix posed a danger of compromising the identity of my respondents, as personal identifiers could be discerned in the context of the high network density and relatively small size of Abkhazia. This would be especially detrimental to those respondents who participated in the war in various capacities, but also to respondents more generally who came in contact with an international researcher of Canadian-Russian-Ukrainian background in a sensitive political environment complicated by the Russian presence and current relations between Georgia and Russia. Furthermore, disclosure could jeopardise not only the trust of my respondents in my ability to protect in an ongoing way the information that they shared with me, but also my future security as a researcher in the area—a significant issue that deserves greater attention as part of the DA-RT deliberations (see, for example, https://www.qualtd.net/viewtopic.php?f=10&t=47&p=203#p203). As an alternative to making my materials available in full, I offered a detailed description of my data collection and analysis procedures in online methodological appendices.

The second stage of addressing data access involved providing extended, often paragraph-length, interview excerpts in support of my manuscript. In a separate post, Alan Jacobs raised an important question in this regard: “Does providing key pieces of evidence in more extensive form help readers better understand and evaluate the empirical basis of findings?” (https://www.qualtd.net/viewtopic.php?f=10&t=70). In my case, presenting extended interview excerpts helped me address the potential problem of evidence that could be viewed as too short and out of context for the reader to evaluate. At the same time, it posed challenges similar to those of making interview transcripts fully available: I had to ensure that I presented the extended evidence as part of the typical mobilization trajectories while protecting the individual details of my respondents. Moreover, providing extended evidence required significant additional space. This experience points to a broader issue faced by qualitative researchers who rely on extensive textual data, such as interview transcripts, as the empirical basis of their findings. The conditions of publication for this type of qualitative work can impose a different set of requirements not only on the researchers (for a discussion, see, for example, https://www.qualtd.net/viewtopic.php?f=10&t=59), but also on the editors, whose willingness to offer additional space can be decisive in facilitating the publication of our work.

II. ONLINE METHODOLOGICAL APPENDICES

The issues outlined above relate to the manuscript itself as well as to the (online) methodological appendices in support of the manuscript, to which I turn now. I used the appendices to provide additional details of my data generation and analysis procedures, which increased the transparency of my research while protecting my participants by focusing on my choices in and out of the field. As a result, while the manuscript contains the central aspects of data generation and analysis, the detailed statement of my production and analytic transparency can be found in the online methodological appendices. This came at the price of an additional major piece of writing in support of the manuscript, and the effort and time it took to produce. Below I provide a number of examples of the production and analytic transparency tools that I used in my methodological appendices.

2. Production transparency

Production transparency in my methodological appendices meant explaining the choices that I made in the field, particularly how I selected my research locales and respondents and how I used various interview strategies, participant observation, and additional primary and secondary materials to develop a grounded understanding of my case and address potential sources of bias in my data.

First, my long-term fieldwork benefited greatly from an exploratory field trip, during which I probed the theoretical foundations and feasibility of my research and established the contacts necessary for future fieldwork in Abkhazia. My selection of research locales depended on this trip: I was able to test my initial assumptions and refine my research design based on my developing understanding of the spatial and temporal variation at the onset of the Georgian-Abkhaz war that could have produced distinct patterns of mobilization, as well as of the security conditions that limited my ability to conduct primary research in certain areas—an example of the back-and-forth between theory and data in the practice of research raised earlier (https://www.qualtd.net/viewtopic.php?f=10&t=56&sid=85ff77d88e59dcec2877a515a90a1111#p185). This required me to devise creative strategies for locating comparable interviews conducted in these areas by other researchers and for triangulating across extensive archival and secondary materials.

Second, I followed a number of strategies in accessing and selecting respondents under the particular conditions of my research. I worked independently, avoiding local government, non-governmental, or university affiliation, but sought permission for my research from the local authorities—an important choice in the sensitive political context of Abkhazia. I devised a combined snowball and targeted selection strategy to ensure that I gained access to respondents with varied participation records along the mobilization-roles continuum that I developed in advance of field research and refined during fieldwork in interaction with my data. My sustained presence and research activity allowed me to build the trust necessary for access and to extend my initial networks, from which I selected subsequent respondents in each of my four research locales. I approached respondents in the participation categories not reached through snowball sampling directly at their places of employment, which increased the representativeness of my sample.

Finally, while my interviews spanned respondents’ life histories, I focused on the events of the war, which took place two decades prior to my research. This created specific problems of potential bias related to the reliability of recollections, the endogeneity of memory to war-time processes, and the homogeneity of responses due to common political loyalties (see Wood, 2004). I tackled these problems in multiple ways. My informed consent procedure stressed the unavailability of benefits other than academic writing, to reduce the incentives to misrepresent war-time mobilization. I selected respondents with a broad range of pre- and post-war political loyalties to address the potential homogeneity problem. I used a combination of event and narrative questions in the interviews and drew on preceding interviews, participant observation, and the meta-data that emerged during the interviews (Fujii, 2010) to develop probes and follow-up questions. These strategies helped address issues of memory and suspected incomplete or misleading information, and advance the conversation beyond the dominant narrative of the conflict. Extensive triangulation, including with alternative interview archives collected by other researchers during the war and midway between the war and my research, allowed me to cross-check individual and collective mobilization trajectories and conflict narratives. I provided a full list of the (de facto) state, private, and news archives and libraries that I accessed and detailed how each of the sources that I used tackled the problem from a different angle.

3. Analytic transparency

I employed two sets of analytic transparency tools in my methodological appendices. First, I clarified my three-stage coding strategy, whereby I coded the interviews according to broad background characteristics, recollections of events, and narratives of conflict, and provided a sample for each of the three stages of coding that I conducted. Second, I specified the sequence involved in my causal mechanism in comparison to alternative explanations in process tracing.
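To make the shape of such a multi-stage coding strategy concrete, a minimal sketch follows. This is purely illustrative: the field names, codes, and keyword matching below are invented assumptions, not the author's actual coding scheme, which in practice would rely on careful interpretive judgment rather than string matching.

```python
# Hypothetical sketch of a three-stage interview coding pass:
# (1) background characteristics, (2) recollections of events,
# (3) narratives of conflict. All labels are invented for illustration.

def code_interview(transcript_segments):
    """Apply three sequential coding stages to (speaker, text) segments."""
    record = {"background": {}, "events": [], "narratives": []}

    # Stage 1: broad background characteristics of the respondent
    # (here hard-coded; in practice drawn from the interview itself).
    record["background"] = {
        "locale": "locale_A",           # one of several research locales
        "participation_role": "rear",   # position on a mobilization-roles continuum
    }

    # Stage 2: recollections of events, tagged by event type.
    for speaker, text in transcript_segments:
        if "fighting" in text.lower():
            record["events"].append({"type": "combat", "text": text})

    # Stage 3: narratives of conflict (interpretive frames).
    for speaker, text in transcript_segments:
        if "no choice" in text.lower():
            record["narratives"].append({"frame": "necessity", "text": text})

    return record

segments = [("R", "When the fighting started we left the town."),
            ("R", "We had no choice but to defend our homes.")]
coded = code_interview(segments)
```

The point of the sketch is only that each stage produces a distinct, inspectable layer of codes, which is what makes it possible to share a sample of each stage (as described above) without sharing the transcripts themselves.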

Overall, these practices and tools of transparency provided the foundation for the evaluation of my findings, while working to protect the participants in my research. It is important to emphasize that, while the strategies that I adopted were available in the context of my research, they may not be transferable to other cases or modes of research. This suggests the need for greater trust in the researcher’s knowledge of the context (https://www.qualtd.net/viewtopic.php?f=10&t=40&p=63#p63; on ethnographic sensibility, see Schatz, 2009) and warns against a uniform approach to evaluating transparency in qualitative research more broadly.

The challenges with openness in research are plentiful, and take on particular dynamics when involving human subjects. To me, it seems that two questions in particular need to be asked:

(1) What is the purpose of openness? Is it simply for the sake of transparency and perhaps to boost credibility, or is it to obtain replicability?

(2) What are the ways in which openness can be obtained? Through provision of all research material and "data", or through detailed reporting of research procedures?

How we answer each of these questions has bearing on the other. If we are interested in replicability, ensuring adequate protection of subjects is not reconcilable with being able to replicate a study, in most instances I would argue. For example, if one needs to redact an interview transcript heavily in order to ensure complete anonymity, it will serve little purpose for replication. However, if the purpose is to have greater transparency in general, to illustrate that the researcher took reasonable steps to ensure reliable and valid methods and findings that we can have faith in, then perhaps seeing full transcripts, for example, is not necessary. Rather, what is necessary is full transparency in terms of process (what questions were asked, how interviewees were selected, etc.). In the end, I think we need to decide first what purpose transparency and openness serve. From there, how we obtain them can become clearer. Nevertheless, the ethical dimensions of our research should always be considered and weigh heavily on how we respond to these questions.

I think the question of the ethics of protecting the subjects of our research should stand at the center and, in some sense, above, or prior to, the standards of 'research openness'. This holds even in the wake of serious cases of research fraud and abuse, which certainly concern us all. Bad research published in major journals is nothing compared to the harm that can come to real human beings in the conduct of our research. As much as I have struggled with a difficult IRB process in doing the work I have done in East Africa, usually because of their ignorance of the region, I am deeply committed to the importance of that process and the outcomes it ensures. As some of the postings here note, even my presence at an office or an individual's home could be seen as evidence of an individual's opinion or commitment in an authoritarian setting, and so obtaining proper permissions, keeping my paperwork secret, not getting signatures on consent forms, not sharing my notes, not identifying respondents in any way -- these are sacred trusts. They are actually more important than someone else replicating my work. I hope that my social science contributions provide value to the discipline of political science, that they advance the field and, particularly, that they add to the stream of knowledge for African politics and comparative politics. That will require verification and engagement on some level, but it just cannot be obtained through an "openness" that jettisons ethics or the humans at the heart of my work. The balance will always fall that way for me, and I think for some core group of colleagues who work in settings where the "human subjects" we work with will demand just that.

Lahra Smith, Georgetown University

As a PhD candidate just returning from fieldwork, I find this thread echoing many of the thoughts and concerns I've been having while going through my data and analyzing it for publication. There is no question (considering both my own ethical standards and the commitments I've made in my IRB protocol) that I could not publish interview transcripts for the overwhelming majority of my interviews (most of which were not recorded, either at the request of the interviewee or due to my own judgment of the situation), much less provide access to fieldnotes. However, there are a number of scenarios where the line is not quite as clear as imagined in IRB (or REB in Canada) protocols and some versions of transparency guidelines or practices.

Even though in many cases interlocutors may have been willing to be recorded, I could not be certain (1) that they would be as open (and could almost guarantee that they would not be); and (2) that they would not say something that would put them (or someone else) at some level of risk, in which case having a recording (which by definition contains identifying information) would make it much more unsafe to transport. This is particularly a concern when doing research in a conflict or post-conflict setting where checkpoints are pervasive and you have to ensure that your data is as secure as possible even when traveling between locations within the state (as was the case in my field site).

In some cases, my interlocutors told me that I could name and quote them, following that up quickly with some variation of the idea that “this would be written in Canada after all, and who would read it anyway?” This presents a particular dilemma, as it calls into question the extent to which interlocutors are "informed" (or can reasonably be) about the level of risk. It is questionable, particularly but not only due to the increase in access to information online, whether interviewees (or even researchers themselves) can truly know whether and how quickly their material can spread across geographic areas or be translated. In my case, I am not confident in saying that it will not.

In the small number of cases where I did record (sometimes due to the practical reality that I knew I would not be able to take notes immediately afterwards, and mostly with interlocutors I felt were more accustomed to recorders, such as journalists and some official government representatives), many interviewees still requested that I send any quote or attribution ahead of publication. In the short period since returning from fieldwork, I have already been through that process for a few quotes, and both interviewees involved requested to see the context and made small modifications to their quotes (both recorded). One reason for this is that my research questions relate to contemporary issues, about which people are particularly concerned that their statements may affect ongoing dynamics. This is undoubtedly a time-consuming process, but one that is critical to maintaining the ethical commitment I've made to these interviewees.

In these cases, I could not, if requested by an editor, state that I cannot disclose transcripts or notes because my interviewees requested anonymity. By my own assessment, however, disclosure poses a greater risk than the possible benefit of "full transparency" (if "full transparency" is defined as making interviews and notes available when permitted by IRB). It appears to me that the ethics of research should compel us to think through (and be transparent about) these tradeoffs: the marginal benefit to the research vs. potentially putting participants at some level of risk (as it is implied, in some of these cases, that respondents may not have agreed to the same level of disclosure if the work were to be published locally).

All of these decisions touch specifically on the significance placed on a researcher's reflexivity, knowledge of the context (and the way in which recording, and written consent, are experienced culturally), and what others here have called a stance of "paranoid caution".

[this is a repost from the marginalized and vulnerable populations thread]

right now, i am running surveys of university faculty about perceptions of climate, attitudes toward diversity, social networks on campus, etc.

the surveys ask questions about people's rank, field, and social identities. people in categories with small numbers (african american women tenured professors of computer science) are often reluctant to respond because they don't want to be identified, they fear retaliation, etc.

we inform people, on the first page of the survey that: "Survey data will be de-identified, and we will not release or publicize any disaggregated information from this survey, except when the groups are sufficiently large to preserve the anonymity of respondents." we have heard that this works to allay some fears.
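That release rule can be made concrete with a small sketch. The threshold and the field name below are illustrative assumptions, not the survey's actual values; the point is only that any disaggregated cell smaller than a minimum group size is withheld.

```python
# Minimal sketch of a "suppress small cells" release rule of the kind
# quoted above. MIN_CELL_SIZE and the "rank" field are assumptions.

from collections import Counter

MIN_CELL_SIZE = 5  # assumed minimum group size for release

def releasable_counts(responses, key):
    """Tabulate responses by `key`, suppressing groups below the threshold."""
    counts = Counter(r[key] for r in responses)
    return {group: (n if n >= MIN_CELL_SIZE else "suppressed")
            for group, n in counts.items()}

responses = [{"rank": "tenured"}] * 12 + [{"rank": "assistant"}] * 3
result = releasable_counts(responses, "rank")
# result: {'tenured': 12, 'assistant': 'suppressed'}
```

This also sharpens the question raised below: the rule is easy to enforce in one's own reporting, but there is no mechanical way to guarantee that future users of a deposited dataset would apply the same threshold.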

the big question is: if we are required to make data available as a condition of publication, how can we ensure that future users of the data will respect this principle?

the same holds for qualitative research, as we are also planning to conduct in-person interviews with faculty members.

mala

The important comments in this thread take me back to the fundamental concerns over how we are trained and then go on to train future researchers, and how our discipline "rewards" academic work through publication and recognition. Asserting the value of reflexivity in research, as Fujii describes, not as ancillary but as a central component of a publication, seems like a good concrete first step.

In other threads, questions have been posed about what journal editors can do to address some of these concerns raised by people doing non-DART-conforming research. Perhaps a check for the central reflexive component of the study, somewhere in the research puzzle section, is a way to begin that process. Is the researcher sufficiently aware of how their own positionality might be influencing their data? Are they able to articulate this in a meaningful way that engages positionality as a real component of the project rather than an add-on? Connected to this, as new PhDs are trained, cultivating such reflexivity would need to be an explicit part of their formation rather than something left to the murky grounds of intuition, which may or may not be valued by one's gatekeepers.

ElliotPosner wrote:

Guest wrote: I am concerned that we are going to collectively sacrifice interesting questions and deep knowledge in order to valorize "openness." It is not at all clear to me that transparency is more important than answering questions that matter for politically sensitive locations and issues.

This concern deserves attention. I'm going to start a new thread devoted to it.

Guest wrote: Much of my own research has involved interviews in business and trade sectors in the Middle East. The focus has been to understand political economies in the context of sub-state conflict, what can be termed war economies. I've made a lot of mistakes along the way. Many of the individuals I have interviewed carry out business in war zones. The risks are obvious. About a quarter of my meetings in the last several years have ended with "you may not cite or use any part of our conversation." For the rest, I make clear the purposes of my research and I employ a modified version of Chatham House Rules. We agree on how I can identify the individual while maintaining anonymity, so usually a general description: transportation firm manager, Amman; former Iraq Oil Ministry employee, Sharjah. An additional hurdle is losing touch with some of my earlier contacts. Businessmen are nomadic under these conditions.

Finally, I feel that a related issue of transparency in research regards the use of scholarly research from the Middle East. Extensive US involvement in the region means American researchers, particularly white boys with heavy accents in Arabic, are assumed to be CIA. It does not help that some scholars in the US do advise the US intelligence community. The discipline needs more discussion of these issues.

Thanks so much for these candid comments. The first paragraph suggests that, if asked by a journal editor (trying to enforce transparency standards governing human sources), you couldn't possibly reveal more about your research participants without compromising your responsibility to protect. What about openness standards concerning reflexivity, as discussed by Lee Ann Fujii? What are the ethical implications of including in the main sections of articles/books a narrative about your thinking and actions vis-a-vis protecting human participants? Should this be part of the research transparency regime?

I'm fascinated by your second paragraph but I'm not sure how to think about the issue. One question, I suppose, is about the potential impact of increased openness for both scholars who advise intelligence communities (or other government entities) and those who do not. Should researchers bring these relationships into the open? Other contributions also raise issues about openness and perceptions about researcher ties to US intelligence and other government agencies, so I've started a separate "topic/thread."

As someone who has also done interview-based research in authoritarian countries where people regularly "are disappeared" by the state, I cannot in good faith submit interview transcripts or lists of respondents for some of my work. It would be a clear violation of the protection of human subjects. Already I have worried that in small countries, my having even been seen entering certain offices might have caused problems for individuals who work in them, so to then also publish additional information would mean that I could not engage in interview-based research or participant observation.

I like Elliot Posner's suggestion, building on Lee Ann's comment, of reflexive explanations as part of what it could mean to be "transparent" in research, although it may still be difficult to say much in places where even explaining your process in a reflexive way might reveal identities or locations.

I am concerned that we are going to collectively sacrifice interesting questions and deep knowledge in order to valorize "openness." It is not at all clear to me that transparency is more important than answering questions that matter for politically sensitive locations and issues.

lafujii wrote:I think our obligation to protect human participants from harm need not be juxtaposed with the principle of openness. The kind of openness I endorse requires reflexivity, not the posting of sources that supposedly speak for themselves and "mean" the same to any reader, regardless of the extent of his or her contextual knowledge. To be reflexive means to discuss explicitly what the original research plan was, how things actually unfolded, including the ethical dilemmas that arose and how the researcher responded to them. To engage in this type of reflexive accounting of the research process--as it actually happened--is, in my book, to practice research openness.

One of the biggest dilemmas I faced was how far to go in anonymizing my research sites and interviewees for my first book, _Killing neighbors_. My imagined "harm" was some low-level bureaucrat tasked with figuring out where my research sites were located and who the people were whom I had quoted or referred to in the book and then going after them. Arbitrary arrest and indefinite detention were clear and present dangers in Rwanda (as they are to this day). I tried not to assume specific reasons the state would go after anyone. I just knew it could and would if it saw fit to do so.

So I worked from a level of paranoid caution to try to protect identities. I used pseudonyms for people and place names. I tried to choose pseudonyms carefully so that there was no metadata embedded in them (e.g., the same first initial). I tried to obscure any detail or reference that would indirectly point to the person (e.g., "the local teacher"). I tried to do my best but I always wondered whether I went far enough. Was there a way to ensure that even a local person would not be able to figure out where I had done research and to whom I had talked? That task seemed impossible, especially since authoritarian states have all kinds of ways to monitor locals as well as foreigners. What I did do was to discuss explicitly the steps I took and why. Some of that discussion made it into the published monograph but the rest of it came later, in a subsequent article.

Looking back, I think I had a good sense of the risks and dangers posed by an authoritarian state, but I did not have a clear sense of how being open about the various dilemmas I encountered, not just those related to protecting identities, and how I responded to them (which was imperfect at best) was a key part of the research process, not ancillary to it. I think openness and an ethical sensibility ideally go hand in hand. A stronger commitment to openness in terms of more sustained reflexivity would have helped me to sharpen my ethical sensibility and vice versa.

Thanks, Lee Ann, for launching this discussion with such a stimulating post. I read it as a call for a pluralistic understanding of what research transparency means. Openness for many forms of qualitative work includes being explicit about how the researcher thinks about and seeks to resolve potential harm to human participants. Thus, in addition to some of the conventional ways that the term is being used (e.g., transparency in terms of making interviews public), openness can also be the reporting of reflexive processes concerning the protection of human participants. Openness as such would not be in conflict with the principle to protect human participants and therefore doesn't necessarily create dilemmas.

Lee Ann's points raise the question of whether being explicit in this way is an (ethical?) obligation in and of itself. Should being explicit about reflexivity surrounding human participants be part and parcel of the future research transparency regime?

Although I have worked with subjects who would face only humiliation and not arrest were they to be identified, I nevertheless believe that a posture of "paranoid caution" is the correct one to protect those who have taken us into their confidence.

Much of my own research has involved interviews in business and trade sectors in the Middle East. The focus has been to understand political economies in the context of sub-state conflict, what can be termed war economies. I've made a lot of mistakes along the way. Many of the individuals I have interviewed carry out business in war zones. The risks are obvious. About a quarter of my meetings in the last several years have ended with "you may not cite or use any part of our conversation." For the rest, I make clear the purposes of my research and I employ a modified version of Chatham House Rules. We agree on how I can identify the individual while maintaining anonymity, so usually a general description: transportation firm manager, Amman; former Iraq Oil Ministry employee, Sharjah. An additional hurdle is losing touch with some of my earlier contacts. Businessmen are nomadic under these conditions.

Finally, I feel that a related issue of transparency in research regards the use of scholarly research from the Middle East. Extensive US involvement in the region means American researchers, particularly white boys with heavy accents in Arabic, are assumed to be CIA. It does not help that some scholars in the US do advise the US intelligence community. The discipline needs more discussion of these issues.

I think our obligation to protect human participants from harm need not be juxtaposed with the principle of openness. The kind of openness I endorse requires reflexivity, not the posting of sources that supposedly speak for themselves and "mean" the same to any reader, regardless of the extent of his or her contextual knowledge. To be reflexive means to discuss explicitly what the original research plan was, how things actually unfolded, including the ethical dilemmas that arose and how the researcher responded to them. To engage in this type of reflexive accounting of the research process--as it actually happened--is, in my book, to practice research openness.

One of the biggest dilemmas I faced was how far to go in anonymizing my research sites and interviewees for my first book, _Killing neighbors_. My imagined "harm" was some low-level bureaucrat tasked with figuring out where my research sites were located and who the people were whom I had quoted or referred to in the book and then going after them. Arbitrary arrest and indefinite detention were clear and present dangers in Rwanda (as they are to this day). I tried not to assume specific reasons the state would go after anyone. I just knew it could and would if it saw fit to do so.

So I worked from a level of paranoid caution to try to protect identities. I used pseudonyms for people and place names. I tried to choose pseudonyms carefully so that there was no metadata embedded in them (e.g., the same first initial). I tried to obscure any detail or reference that would indirectly point to the person (e.g., "the local teacher"). I tried to do my best but I always wondered whether I went far enough. Was there a way to ensure that even a local person would not be able to figure out where I had done research and to whom I had talked? That task seemed impossible, especially since authoritarian states have all kinds of ways to monitor locals as well as foreigners. What I did do was to discuss explicitly the steps I took and why. Some of that discussion made it into the published monograph but the rest of it came later, in a subsequent article.

Looking back, I think I had a good sense of the risks and dangers posed by an authoritarian state, but I did not have a clear sense of how being open about the various dilemmas I encountered, not just those related to protecting identities, and how I responded to them (which was imperfect at best) was a key part of the research process, not ancillary to it. I think openness and an ethical sensibility ideally go hand in hand. A stronger commitment to openness in terms of more sustained reflexivity would have helped me to sharpen my ethical sensibility and vice versa.

We'd like to initiate a discussion about the tensions and dilemmas between the pursuit of two principles: the ethical principle to protect human subjects and the principle of research openness. What are some of the tensions and dilemmas between these principles that have occurred in conducting and publishing your research? How did you resolve them? Would you be willing to share examples? Please feel free to anonymize these examples, but it is also helpful to have some general contextual information about the setting and methods used in your research. Have you changed the topic or method of research in response to such tensions and dilemmas? Are the challenges different for researchers not subject to US IRB protocols?