Institutional Review Blog<br>News and commentary about Institutional Review Board oversight<br>of the humanities and social sciences<br>By Zachary M. Schrag<br /><br /><h1>NPRM Jumps White House Fence (March 23, 2015)</h1><a href="http://aishealth.com/newsletters/reportonresearchcompliance">Report on Research Compliance</a> has spotted a <a href="http://www.reginfo.gov/public/do/eoDetails?rrid=124965">notice on the Office of Information and Regulatory Affairs website</a> suggesting that a proposed rule on "Human Subjects Research Protections: Enhancing Protections for Research Subjects and Reducing Burden, Delay, and Ambiguity for Investigators" has reached OIRA and is awaiting <a href="http://www.reginfo.gov/public/jsp/Utilities/EO_12866.pdf">EO 12866 Regulatory Review</a>. <br /><br />Don't ask me what this means in terms of a timetable, but it sounds as though the process is moving forward.<br /><br /><h1>QI Focus Groups: Ditch Generalizability Criterion (March 11, 2015)</h1>Focus groups of professionals engaged in quality improvement (QI) or comparative effectiveness research (CER) report that the Common Rule's "generalizable knowledge" standard does not provide clear guidance.<br /><br />[Whicher, Danielle, Nancy Kass, Yashar Saghai, Ruth Faden, Sean Tunis, and Peter Pronovost.
“The Views of Quality Improvement Professionals and Comparative Effectiveness Researchers on Ethics, IRBs, and Oversight.” <em>Journal of Empirical Research on Human Research Ethics</em>, published online before print, February 23, 2015, doi:<a href="http://dx.doi.org/10.1177/1556264615571558">10.1177/1556264615571558</a>.]<br /><br />The focus groups<br /><br /><blockquote>generally concluded that intent to produce generalizable knowledge or the related intent to publish were not useful criteria for distinguishing what activities should be subject to IRB oversight. Although some participants stated their local IRBs relied on these criteria, most participants felt they were conceptually confusing and ethically inappropriate. Some stated it may be hard to know, early in an activity, whether the results will be worth publishing. Others mentioned it is conceptually hard to distinguish local learning from learning generalizable to other situations as generalizability is a matter of degree. Scholars similarly have argued that it is difficult to ascertain intent and that generalizability is not a binary concept, but falls along a spectrum. Indeed, some suggest eliminating the intent to produce generalizable knowledge criterion from determinations about oversight.<br /><br />Instead, many participants suggested that considering the risk of harm to participants in a QI or CER activity makes more sense when determining what should be subject to ethical oversight, a view consistent with recommendations in the literature. (Citations omitted.)<br /></blockquote><br />The article misstates the language of the Common Rule, which defines research not by whether it is <em>intended</em> to produce generalizable knowledge, but rather by whether it is "<em>designed</em> to develop or contribute to generalizable knowledge." (Emphasis added.) That said, I doubt this distinction would have made a difference to the focus group participants.
<h1>IRB Asked, USB or FireWire. Silly, but Is It Bullying? (March 2, 2015)</h1>Caleb Carr, assistant professor of communication at Illinois State University, argues that abusive IRBs are best thought of as bullies.<br /><br /><blockquote>Though IRBs are a legally required element of many higher education institutions and an important ethical part of all, their overextension of unchecked power is creating a hostile work environment for many social scientists, and calling them for what they are – systemic bullies – can empower administrators and faculties to finally respond to the increasing calls for IRB reform.<br /></blockquote><br />[Carr, Caleb T. “Spotlight on Ethics: Institutional Review Boards as Systemic Bullies.” <em>Journal of Higher Education Policy and Management</em> 37 (2015): 1–16. doi:<a href="http://dx.doi.org/10.1080/1360080X.2014.991530">10.1080/1360080X.2014.991530</a>.]<br /><br />Drawing on the work of Dan Olweus, Carr defines bullying as "the infliction of repeated, unwanted harm towards an individual, often resulting in physical or emotional harm," consisting of "a prolonged behaviour rather than a single incident, such as when a target is subjected to regular harm rather than an isolated instance. Additionally, bullying behaviours are dependent on an imbalance of strength, either physical or asymmetrical power." <br /><br />Carr has no trouble showing that IRBs enjoy asymmetrical power and sometimes use that power to inflict harm on researchers. But I wonder about the element of repetition.
As Carr notes in a personal horror story, IRBs are famously inconsistent:<br /><br /><blockquote>The author of this article once received a protocol review from IRB for a student’s master’s thesis research, requesting explication of the mode of data transfer between a voice recorder and a computer. Inquiry revealed the IRB reviewer wanted the data transfer detailed to the level of whether USB or FireWire would be used to connect the digital voice recorder and researcher’s computer. Though anecdotal, this capricious (previous and subsequent reviews did not request this level of specificity for data transfer) and unrelated (it was never addressed how transfer cable type affected human subjects’ safety or protections) review concern is qualitatively reflected in the experience of social science researchers, who increasingly report widely varying research protocol reviews from IRBs.<br /></blockquote><br />Can a system that arbitrarily demands such information once, and then never again, be guilty of the repeated, prolonged behavior that marks a bully? <br /><br /><h1>University of Queensland Punishes Researchers, Won't Say Why (March 1, 2015)</h1>The University of Queensland demoted a professor and blocked him and another researcher from publishing findings, based on charges that they had not obtained necessary ethics clearances. But the university will not explain its conduct.<br /><br />[Jorge Branco. “<a href="http://www.brisbanetimes.com.au/queensland/uq-suppressed-bus-racism-study-academics-20150226-13q5lu.html">UQ Suppressed Bus Racism Study: Academics</a>.” <em>Brisbane Times</em>, February 27, 2015.
Thanks to Michelle Meyer for tweeting this to my attention.]<br /><br /><h2>A Study of Racial Discrimination</h2>The <em>Brisbane Times</em> explains the initial study:<br /><br /><blockquote>In 2013, Dr Redzo Mujcic and Professor Paul Frijters, from the university's School of Economics, published an early working paper finding strong evidence of discrimination against black-skinned people on Brisbane buses.<br /><br />Their study, inspired by US civil rights figure Rosa Parks' experience of racial discrimination on a bus, saw 29 testers from different gender and ethnic groups asking bus drivers to let them on for free because their Go Cards were empty.<br /><br />The researchers found white testers were twice as likely to be given a free ride than black testers (72% to 36%), among a host of other findings relating to group theory.<br /><br />They proposed this was due to people being more likely to discriminate against those who were less like them, or not in their "in-group".<br /></blockquote><br />As Ian Ayres explains in the <em><a href="http://www.nytimes.com/2015/02/24/opinion/research-shows-white-privilege-is-real.html?_r=0">New York Times</a></em>, "This elegant experiment follows in a tradition of audit testing, in which social scientists have sent testers of different races to, for example, bargain over the price of new cars or old baseball cards. But the Australian study is the first, to my knowledge, to focus on discretionary accommodations."<br /><br />Frijters had gained approval from the university's department of economics. But after the research was complete, Senior Deputy Vice-Chancellor Deborah Terry told Frijters that he should have sought approval from a university ethics committee.
The university demoted him to assistant professor, though he has since been restored to his previous rank.<br /><br /><h2>Australia's National Statement Allows This Sort of Thing</h2><br />Multiple provisions of Australia's <a href="http://www.nhmrc.gov.au/_files_nhmrc/publications/attachments/e72.pdf">National Statement on Ethical Conduct in Human Research</a> allow for covert research and research designed to expose illegal behavior, as well as for department-level review.<br /><br />The National Statement specifically states that <br /><br /><blockquote>Research that is intended to study or expose illegal activity or that is likely to discover it must be reviewed and approved by a Human Research Ethics Committee (HREC) rather than by one of the other processes of ethical review described in paragraphs 5.1.7 and 5.1.8 (page 78), <em>except where that research uses collections of non-identifiable data and involves negligible risk, and may therefore be exempted from ethical review</em>. (emphasis added)</blockquote><br />That sounds like this case. The study may be embarrassing to the bus company, but there's no way to show that individual bus drivers misbehaved.<br /><br />Weighing in after the fact, bioethicist Dr Andrew Crowden says that "the research was more than low risk because it involved deception of participants (the bus drivers) and therefore should have been reviewed by the UQ HREC (Human Research Ethics Committee)." But Chapter 2.1 of the National Statement, which deals with harms, does not define low risk in terms of deception or disclosure, so Crowden has his categories confused. Nor is it clear that the students deceived the bus drivers.
If they had boarded with truly exhausted cards, would that have obviated the need for ethics committee review?<br /><br />(Crowden would be better off citing sections 2.3.6 and 2.3.7, which seem to state that only an HREC can waive the requirement of consent.)<br /><br /><h2>The University of Queensland Refuses to Explain Its Decisions</h2><br />To be sure, the university might be able to demonstrate that Mujcic and Frijters deliberately violated university policies. Or, as seems more likely, that they made a good-faith effort to follow confusing rules, which need to be clarified.<br /><br />Instead, the university went silent. Frijters submitted a public interest disclosure, and, after waiting six months without a reply, went public with his story. <br /><br />The <em>Brisbane Times</em> reports, "UQ vice-chancellor and president Professor Peter Hoj issued a statement saying the university was unable to comment because of the confidentiality of the investigation but was confident it had responded appropriately." Terry, the former UQ administrator who reprimanded Frijters, also declined to comment. <br /><br />This is the real scandal. I have quoted Jack Katz before, and I will quote him again:<br /><br /><blockquote>Legality changes the interaction environment of decisionmaking by creating a series of processes in which the reviewed become capable of examining and publicly criticizing the review to which they are subjected, both on a retail, case-by-case basis, and on a wholesale, policymaking level. [Jack Katz, "Toward a Natural History of Ethical Censorship," <em>Law & Society Review</em> 41 (December 2007), 805.]<br /></blockquote><br />The University of Queensland has failed not only its researchers but also everyone who has an interest in research ethics and in the fair provision of public services. An ethics process without transparency is no ethics process at all.
<h1>Atran: IRBs Block Understanding of Terrorism (January 23, 2015)</h1>Interviewed by <em>Nature</em>, anthropologist Scott Atran reminds us that human subjects rules have impeded his efforts to understand the origins of violence like the attack on <em>Charlie Hebdo</em>. <br /><br />[Reardon, Sara. “Looking for the Roots of Terrorism.” <em>Nature</em>, January 15, 2015. doi:<a href="http://dx.doi.org/10.1038/nature.2015.16732">10.1038/nature.2015.16732</a>. h/t Donald Pollock]<br /><br />Atran explains:<br /><br /><blockquote>If you really want to do a scientific study with jihadis — I do it — you have to convince them to put down their guns, not talk to one another, and answer your questions. Some people, if you ask them if they would give up their belief in God if offered a certain amount of money, they will shoot you. So you can't ask that question.<br /><br />It’s not just because it’s dangerous. It’s because human subjects reviews at universities and especially the [US] defence department won't let this work be done. It’s not because it puts the researcher in danger, but because human subjects [research ethics] criteria have been set up to defend middle class university students. What are you going do with these kind of protocols when you talk to jihadis? Get them to sign it saying, “I appreciate that the Defense Department has funded this work,” and by the way if you have any complaints, call the human subjects secretary? This sounds ridiculous and nothing gets done, literally.<br /><br /><em>Have you run into such difficulties with your fieldwork?</em><br /><br />As an example, I got permission, before the [three] Bali bombers [who carried out a set of simultaneous attacks in 2002] were executed, to interview them.
They were going to be shot because they blew up 200 people. I couldn’t get human subjects approval because “you have to bring a lawyer, and besides we won't allow anyone to interview prisoners.” I said why? “You can never be sure you're not violating their right to speech.”<br /></blockquote><br />For more detail, see <a href="http://www.institutionalreviewblog.com/2007/05/scott-atran-research-police-how.html">Scott Atran, "Research Police – How a University IRB Thwarts Understanding of Terrorism"</a>.<br /><br /><h1>Library Administrator Mistakes FOIA Request for Human Subjects Research (January 18, 2015)</h1><a href="http://www.institutionalreviewblog.com/2011/07/alarmist-views-on-harvard-facebook.html">Sometime human-subjects alarmist</a> Michael Zimmer sent requests for public documents to 30 public libraries. Though most librarians welcome requests for information, in the age of the Common Rule, you can't take anything for granted.<br /><br />[Zimmer, Michael. “<a href="http://www.michaelzimmer.org/2015/01/09/new-project-on-privacy-and-cloud-computing-in-public-libraries-and-some-aftermath/">New Project on Privacy and Cloud Computing in Public Libraries (and Some Aftermath)</a>.” MichaelZimmer.org, January 9, 2015. h/t Rebecca Tushnet]<br /><br />Zimmer reports:<br /><br /><blockquote>One library administrator seemed to take some umbrage with my project and approach. That director emailed a larger list of library directors asking if anyone else had received my records request, noting that “There is no promise of anonymizing the data or offer to opt out of the study, which is a typically included in studies these days” and expressing surprise that my IRB would approve such a methodology.
(I learned of this concern due to that director’s email being forwarded to a privacy list hosted by the ALA that I’m a subscriber to.) I’ve since replied that this methodology doesn’t involve human subjects, and follows common approaches to obtaining government information (such as the Fordham Center for Law and Information Policy’s excellent research on privacy and cloud computing in public schools). I’ll reach out to this director personally, and hopefully the concerns will be put to rest.<br /></blockquote><br /><h1>Research Ethics Scales and Measures (January 11, 2015)</h1>Dr. Elizabeth Yuko kindly points me to <a href="http://researchethicsmeasures.org/">Research Ethics Scales and Measures</a>, a website run by Fordham University's Center for Ethics Education.<br /><br />The website features a bibliography of publications about empirical assessments of researchers' and participants' experiences with human subjects research. Many concern medical research, particularly dealing with HIV, but they may be of interest to social researchers as well.<br /><br />For additional pieces on this theme, please search this blog for the tag "<a href="http://www.institutionalreviewblog.com/search/label/empirical%20research">empirical research</a>." <br /><br /><h1>Horror Story Buffet (December 31, 2014)</h1>We end the year with two collections of IRB horror stories.<br /><br />[Varma, R. “Questioning Professional Autonomy in Qualitative Inquiry.” <i>IEEE Technology and Society Magazine</i> 33, no. 4 (winter 2014): 57–64.
<a href="http://dx.doi.org/10.1109/MTS.2014.2363983">doi:10.1109/MTS.2014.2363983</a>; Musoba, Glenda Droogsma, Stacy A. Jacob, and Leslie J. Robinson. “The Institutional Review Board (IRB) and Faculty: Does the IRB Challenge Faculty Professionalism in the Social Sciences?” <i>Qualitative Report</i> 19 (2014), Article 101, 1–14. <a href="http://www.nova.edu/ssss/QR/QR19/musoba101.pdf">http://www.nova.edu/ssss/QR/QR19/musoba101.pdf</a>]<br /><br />From Varma:<br /><br /><ul><li>"According to a researcher, the IRB did not understand why his research questions were not converted into a hypothesis to be easily tested. Additionally, the IRB was not in agreement with his need to conduct face-to-face interviews with human subjects. Alternatively, the IRB expressed that administering an anonymous survey could collect the same information."</li> <li>"The IRB told a researcher that the snowball sampling that he had proposed was similar to collecting data from friends. In his experience, purposive sampling, interviews, and small sample size do not generally fall in line with IRB approval standards. They tend to favor surveys with a large sample that is selected randomly."</li> <li>"The IRB took over eight months to approve an application to study the selection of majors in institutions of higher education."</li> <li>"In the study on teaching mathematics in a developing country . . . the IRB contested that subjects may feel bored or tired during interviews."</li> <li>"earlier informed consents were brief, approximately 100 to 200 words. Now they consist of . . . multiple headings [each with] a brief write up."</li> <li>"In a developing country [participants] became apprehensive in reading the statement about possible concerns about interview, and the idea that they could call/contact the person listed on the consent form, who was located in the United States.
They considered this a physical burden on them due to about a 10-hour time difference between their country and the United States. Furthermore, this meant they were being asked to use their personal funds to make long-distance phone calls."</li></ul><br />From Musoba, Jacob, and Robinson:<br /><br /><ul><li>An IRB insists that a researcher get a nearby college's approval before administering a ten-minute survey to students in a course; the college blocks it, "citing the burden to students. However, a member of the institution’s committee shared with Andrea that the focus of the research committee’s discussion was the potential of the research findings reflecting negatively on the college."</li> <li>"The staff member told Sam to use a quantitative survey in lieu of the qualitative design submitted for approval. It took two firm rejections by Sam for the staff member to back down--one a simple no and the second an assertion of a faculty member’s authority to decide research methodology."</li> <li>"Virtually identical research projects were to be conducted at two different research sites . . . The research team saw the studies as parallel with the same survey instrument, distributed the same way, and identical “compensation processes,” yet the process that was approved in the first application was denied in the second because it was deemed coercive." (So much for Laura Stark's "local precedents.")</li> <li>"A doctoral student was reprimanded for copying one sentence from the approved informed consent document into the approved recruitment letter at the request of the research site director. As a result, participants were given information about the study earlier as a result of the change, increasing their opportunity to be fully informed . . . The professor as principal investigator and the doctoral student were summoned for an emergency meeting and were reprimanded by IRB staff with threats that the doctoral student would not be able to use the data. 
The intentionally humbling nature of the interchange seemed disproportionate to the mistake."</li></ul><br />While Varma notes that "IRBs are not functioning constructively," she does not propose specific remedies. Musoba, Jacob, and Robinson argue that "faculty committee members must take ownership of the review process or academic researchers must handle contact between the committee and researchers." They correctly note that greater faculty involvement need not be limited to membership on the IRB itself; researcher evaluations of IRB staff, for example, could signal areas needing improvement.<br /><br /><h1>Nursing Professors Want IRB Oversight of Interviews with Bereaved (December 31, 2014)</h1>Two professors of nursing warn that "Psychological harm is indeed a risk when interviewing individuals who may be in a fragile state and researchers should not have unfettered access to them." But they offer no evidence that IRBs offer appropriate protection without restricting legitimate research that may directly benefit the people being interviewed.<br /><br />[Florczak, Kristine L., and Nancy M. Lockie. “IRB Reformation: Is Unfettered Access the Answer?” <i>Nursing Science Quarterly</i> 28, no. 1 (January 2015): 13–17. <a href="http://dx.doi.org/10.1177/0894318414558621">doi:10.1177/0894318414558621</a>.]<br /><br />Florczak and Lockie rely on the story of "Katie," as in this passage:<br /><br /><blockquote>Katie knew from conducting numerous interviews that they were not innocuous. Her participants frequently broke down and expressed myriad emotions from anger to fear but most often a profound overwhelming sadness. Dyregrov and colleagues (2011) added credence to Katie’s assumption that interviews are other than insipid conversations.
They said that bereavement interviews can unearth painful memories resulting in the participants becoming emotionally exhausted and distressed.</blockquote><br />It is not clear from the essay if "Katie" is a pseudonym, a composite, or an entirely fictional creation.<br /><br />Florczak and Lockie do cite Kari Madeleine Dyregrov, Gudrun Dieserud, Heidi Marie Hjelmeland, Melanie Straiton, Mette Lyberg Rasmussen, Birthe Loa Knizek, and Antoon Adrian Leenaars. “Meaning-Making Through Psychological Autopsy Interviews: The Value of Participating in Qualitative Research for Those Bereaved by Suicide,” <i>Death Studies</i> 35, no. 8 (September 2011): 685–710. <a href="http://dx.doi.org/10.1080/07481187.2011.553310">doi:10.1080/07481187.2011.553310</a>. And that study did indeed report that "Some bereaved cried or were upset when talking about their loss."<br /><br />But Florczak and Lockie do not report Dyregrov et al.'s equally important findings that "very few people felt distressed when discussing the suicide and almost all of the participants felt no different or better than usual at the 4-week follow-up" and that "The majority of informants (62%) responded with unambiguous, highly positive statements that were numerous, varied, and spontaneous." This led Dyregrov et al. to warn that "Too often ethical boards delay or stop research projects with vulnerable populations, influenced by presumed rather than empirically documented vulnerability."<br /><br />Dyregrov et al. attribute the positive results to "the value of talking about the circumstances with a professional who has insight into the reasons and processes around suicides." This suggests that a credentialing system, rather than review of individual protocols, might better serve research participants.
<h1>Canada Embraces Ethical Pluralism (December 27, 2014)</h1>The Canadian Institutes of Health Research, the Natural Sciences and Engineering Research Council of Canada, and the Social Sciences and Humanities Research Council of Canada have released <a href="http://www.pre.ethics.gc.ca/eng/resources-ressources/news-nouvelles/nr-cp/2014-12-18/">a new version of TCPS2</a>. Though it is not a dramatic change from the 2010 edition, it serves as a reminder of how much more nimble the Canadian system is compared to the rigid U.S. regulations.<br /><br />I was particularly interested in the new acknowledgement that Research Ethics Boards (REBs) do not possess a monopoly on ethical judgment: <br /><br /><blockquote>"Activities outside the scope of research subject to REB review (see Articles 2.5 and 2.6), as defined in this Policy, may still raise ethical issues that would benefit from careful consideration by an individual or a body capable of providing some independent guidance, other than an REB. These ethics resources may be based in professional or disciplinary associations, particularly where those associations have established best practices guidelines for such activities in their discipline."</blockquote><br />Back in 2012, <a href="http://www.jstor.org/stable/10.1525/jer.2013.8.1.3">I traveled to Canada</a> to argue that "Scholarly associations know more about the ethics of particular forms of research than do national regulatory bodies," and should be more involved in articulating ethical standards and practices. Coincidence?
<h1>National Science Foundation Charged with "Non-Biomedical Science Perspective" (December 26, 2014)</h1>Rereading the e-mails mysteriously "obtained" by Public Citizen, I noticed that the White House has asked the National Science Foundation “to ensure that the ‘non-biomedical perspective is covered’” in the forthcoming Notice of Proposed Rulemaking (NPRM), revising the Common Rule. Moreover, NSF "will identify places in the current regulatory text and preamble where edits are necessary to make the NPRM consistent with the January 2014 National Academy of Sciences' report that evaluated the applicability of the ideas presented in the 2011 ANPRM to the social and behavioral sciences."<br /><br />[Margo Schwab to Andrea Palm, “Annotated draft reg text for Common Rule,” 29 October 2014, reproduced in Michael Carome, “<a href="http://www.citizen.org/documents/2232.pdf">Letter to Secretary Burwell Re: Common Rule NPRM</a>,” November 20, 2014.]<br /><br />This strikes me as hopeful news. The January 2014 report, <a href="http://www.institutionalreviewblog.com/search/label/NAS">though lacking in some respects</a>, makes some sound recommendations for reform. And the NSF, which played only a minor part in writing the 1981 and 1991 regulations, is given a greater role in this round. As the sponsor of a great deal of social science research, NSF is indeed better positioned to take on this role than HHS or any other Common Rule agency.<br /><br /><h1>Blog Day (December 12, 2014)</h1><a href="http://www.institutionalreviewblog.com/2006/12/introduction.html">Eight years</a> of the Institutional Review Blog.
<br /><br />As <a href="https://www.insidehighered.com/news/2007/01/19/irb">Inside Higher Ed</a> reported at the time:<br /><br /><blockquote>Schrag said that the problems with IRBs will probably remain for some time. “I think the regulations themselves are poorly drafted, with terms that are not well defined, and I anticipate problems until they are amended,” he said. “Perhaps until then, I’m going to have to keep up the blog.”<br /></blockquote><br />Can't be soon enough.<br /><br /><h1>Was OHRP Ever an Independent Watchdog? (November 25, 2014)</h1>Public Citizen is upset that NIH will get to write much of the NPRM. I don't understand why that matters.<br /><br />As <a href="http://www.citizen.org/documents/2232.pdf">Public Citizen's November 20 letter to HHS secretary Sylvia Mathews Burwell</a> puts it,<br /><br /><blockquote>NIH has been assigned responsibility for revising the preamble of the NPRM. The preamble will be the longest and most important part of the NPRM, as it will contain sections describing, among other things: (a) the summary and analysis of the public comments on the ANPRM; (b) the government’s response to those comments; (c) the resolution of key policy disagreements that were [raised] during the earlier drafting of the NPRM; (d) the proposed changes to the Common Rule; and (e) the rationale for making those changes, all of which ultimately will have a major impact on the actual final content of the proposed revised Common Rule regulatory text. <br /></blockquote> <br />It continues, "We suspect that NIH orchestrated such involvement in a deliberate attempt to undermine OHRP’s regulatory authority and to achieve changes to the Common Rule that it desires.
This shift of authority from the regulator to the regulated is unacceptable."<br /><br />I think this overstates the difference between NIH and OHRP. <br /><br />Not mentioned in the Public Citizen letter is the fact that the <em>existing</em> Common Rule was also written by NIH, back in 1981, when the Office for Protection from Research Risks was part of NIH. So at worst, we'd be going from one set of NIH-drafted regs to another.<br /><br />And while the Office for Human Research Protections has, since its creation in 2000, been separate from NIH, I don't know that it has ever taken the independent position imagined by the Public Citizen message. Consider, for example, the <a href="http://www.hhs.gov/asl/testify/t000928.html">September 28, 2000, statement by Greg Koski</a>, OHRP's first director. He explains OHRP's mission as coordinating regulation and guidance across agencies, not as providing independent oversight of NIH or any other funding agency. <br /><br />If anything, HHS created OHRP to be less confrontational than its NIH predecessor, following Gary Ellis's drastic enforcement actions of 1998-2000.
Nor can Public Citizen point to a time when OHRP acted as an aggressive, independent watchdog; to the contrary, its letter bemoans what it considers OHRP's inaction on the SUPPORT study controversy.<br /><br />I think of OHRP as an agency that has <a href="http://www.institutionalreviewblog.com/2010/12/menikoff-passes-buck.html">passed the buck</a> on IRB reform efforts, <a href="http://www.institutionalreviewblog.com/2007/07/ohrp-reprimand-puts-forms-over.html">focused on form rather than substance</a> in overseeing institutions, <a href="http://www.institutionalreviewblog.com/2010/03/twenty-six-percent-of-boxes-go.html">communicated its work in opaque "determination letters" that say little about the underlying ethical challenges</a>, and has been slow to act on <a href="http://www.hhs.gov/ohrp/sachrp/mtgings/mtg10-08/present/carome.html">SACHRP recommendations</a> or public responses to Federal Register notices. <br /><br />My question, then, is when was the golden age of OHRP to which Public Citizen would like to return? And if there was none, why does it matter who drafts the NPRM?<br />Zachary M. Schraghttp://www.blogger.com/profile/07101709506166167477noreply@blogger.com0tag:blogger.com,1999:blog-525778292565554519.post-62691798510805099362014-11-25T20:54:00.000-05:002014-11-25T21:12:03.827-05:00Internal E-Mails Suggest NPRM is ComingAccording to an <a href="http://www.citizen.org/documents/2232.pdf">open letter to HHS secretary Sylvia Mathews Burwell</a>, Public Citizen obtained "very recent internal emails" among officials at the Office of Management and Budget and the Department of Health and Human Services, showing that the latter is actively working on a Notice of Proposed Rulemaking (NPRM) to revise the Common Rule.<br /><br /><a href="http://www.institutionalreviewblog.com/2014/11/was-ohrp-ever-independent-watchdog.html">I'll write separately about the substantive issue</a> raised by Public Citizen. 
For now, the news is that as of November 13, 2014, senior officials were actively working to write an NPRM. <br /><br /><a href="http://www.imdb.com/title/tt0101669/quotes">Well I, for one, am very interested to see what's going to happen next.</a>Zachary M. Schraghttp://www.blogger.com/profile/07101709506166167477noreply@blogger.com0tag:blogger.com,1999:blog-525778292565554519.post-61965941117550437522014-11-07T14:12:00.000-05:002014-11-07T14:12:11.742-05:00New Book on Research ConfidentialityTed Palys and John Lowman have published <em>Protecting Research Confidentiality: What Happens When Law and Ethics Collide</em>. <br /><br />[Palys, Ted, and John Lowman. <em><a href="http://www.lorimer.ca/adults/Book/2703/Protecting-Research-Confidentiality.html">Protecting Research Confidentiality: What Happens When Law and Ethics Collide</a></em>. Toronto: James Lorimer & Company, 2014.]<br /><br /><a href="http://www.institutionalreviewblog.com/search?q=palys&max-results=20&by-date=true">Over the years</a>, I've learned a great deal from these two scholars about the ethics and law of research confidentiality in the social sciences, and I look forward to reading this compendium of what they have learned from their studies and their own struggles with their university.<br />Zachary M. 
Schraghttp://www.blogger.com/profile/07101709506166167477noreply@blogger.com0tag:blogger.com,1999:blog-525778292565554519.post-7071138628082615842014-11-04T08:57:00.001-05:002014-11-04T08:57:47.590-05:00OHRP Claims to Be "Working Very Hard" on NPRMWriting for the Chronicle of Higher Education, Christopher Shea notes that though two years passed between the 2012 <a href="http://www.institutionalreviewblog.com/2012/05/against-armchair-ethics-some.html">Future of Human Subjects Research Regulation conference at the Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School</a> and the <a href="http://www.institutionalreviewblog.com/2014/07/new-book-on-human-subjects-research.html">publication of the conference volume in July 2014</a>, the delay of the next step in regulatory reform--a notice of proposed rulemaking (NPRM)--means that the book remains timely.<br /><br />[Shea, Christopher. “<a href="http://chronicle.com/article/New-Rules-for-Human-Subject/149767/">New Rules for Human-Subject Research Are Delayed and Debated</a>.” Chronicle of Higher Education, November 3, 2014.]<br /><br />One also hopes that it won't be timely forever. 
Shea writes,<br /><br /><blockquote>A spokesman for the Office for Human Research Protections, which is part of the Department of Health and Human Services, could not provide a timetable but told The Chronicle late last month, "I can assure you that this continues to be an HHS priority, and all the relevant parties are still working very hard on this."<br /></blockquote><br />Or, as they might have put it, "<a href="https://www.youtube.com/watch?v=yoy4_h7Pb3M">We have top men working on it right now</a>."<br /><br /><a href="http://2.bp.blogspot.com/-F4G3zCtrl64/VFjauLw0jGI/AAAAAAAAHiU/Q8HfvISK418/s1600/raiders_of_the_lost_ark_warehouse_scen_450.jpg" imageanchor="1" ><img border="0" src="http://2.bp.blogspot.com/-F4G3zCtrl64/VFjauLw0jGI/AAAAAAAAHiU/Q8HfvISK418/s320/raiders_of_the_lost_ark_warehouse_scen_450.jpg" /></a>Zachary M. Schraghttp://www.blogger.com/profile/07101709506166167477noreply@blogger.com0tag:blogger.com,1999:blog-525778292565554519.post-50909791021350024322014-10-30T18:15:00.000-04:002014-10-30T18:15:16.370-04:00University of Washington IRB Demanded Dangerous Consent FormThe recent <i>Nature</i> story on ethics consultancies includes an example of counterproductive interference by an intransigent IRB. <br /><br />[Dolgin, Elie. “Human-Subjects Research: The Ethics Squad.” <i>Nature</i> 514, no. 7523 (October 21, 2014): 418–20. doi:<a href="http://dx.doi.org/10.1038/514418a">10.1038/514418a</a>.]<br /><a name='more'></a> <br /><blockquote>Amy Hagopian, a global-health researcher at the University of Washington in Seattle, found herself turning to an ethics consultant for help with a study in Iraq to find out how many people had died as a result of the US-led conflict that began there in 2003. 
Her team needed to obtain informed consent from participants, but the researchers on the ground in Iraq were concerned that including the University of Washington's name on the consent forms — a requirement for IRB approval — would make it difficult to get the data they needed. “They feared that being associated with American institutions would get them killed”, says Hagopian. “They dug in their heels and refused” to carry the form.<br /><br />Hagopian wanted to strip the university's name from the consent document, but the IRB insisted that it was an important part of informed consent, which is meant to protect participants, not the investigators. The impasse brought Hagopian and her team to [ethics consultant Benjamin] Wilfond. He concluded that it would be ethical to remove mention of the institution, for three main reasons: first, research subjects would also be placed at risk by signing a document linking them to the University of Washington; second, apart from the link to the United States, the research involved minimal risk to the participants; and third, the study would not happen unless the name of the institution was removed.<br /><br />The IRB eventually agreed with Wilfond. The researchers went ahead with the study and found that nearly half a million people had died from causes attributable to the Iraq war between 2003 and 2011 — a figure much greater than most previous estimates. “We couldn't have done this without him,” Hagopian says of Wilfond.<br /></blockquote><br />Of course, PRIM&R claims that "<a href="http://www.institutionalreviewblog.com/2011/11/prim-irbs-dont-write-consent-forms.html">IRBs neither interact with subjects nor write consent forms</a>."Zachary M. 
Schraghttp://www.blogger.com/profile/07101709506166167477noreply@blogger.com0tag:blogger.com,1999:blog-525778292565554519.post-31515907227930749192014-10-29T23:18:00.002-04:002014-10-29T23:18:49.570-04:00Ethics Consultancies—A Non-Coercive Alternative to IRBs?<a href="http://www.institutionalreviewblog.com/2007/01/why-not-make-irb-review-voluntary.html">For some time</a>, I've thought that the real problem with IRBs may be the coercive power granted to them. This relieves them of the need to make arguments strong enough to persuade researchers, and in some cases leads them instead to make demands based on weak or even wrongheaded thinking.<br /><br />This week, <i>Nature</i> reports on an alternative (or supplementary) model, the ethics consultancy. <br /><br />[Dolgin, Elie. “Human-Subjects Research: The Ethics Squad.” <i>Nature</i> 514, no. 7523 (October 21, 2014): 418–20. doi:<a href="http://dx.doi.org/10.1038/514418a">10.1038/514418a</a>.]<br /><br /><a name='more'></a><br />As reporter Elie Dolgin explains,<br /><br /><blockquote>Ethical dilemmas in research are nothing new; what is new is that scientists can go to formal ethics consultancies such as [Tomas] Silber's to get advice. Unlike the standard way that scientists receive ethical guidance, through institutional review boards (IRBs), these services offer non-binding counsel. And because they do not form part of the regulatory process, they can weigh in on a wider range of issues — from mundane matters of informed consent and study protocol to controversial topics such as the use of experimental Ebola treatments — and offer more creative solutions.<br /></blockquote><br />Technically, that's a non sequitur; there's no regulatory prohibition preventing IRBs from weighing in on controversial topics or offering creative solutions, only a bar to "consider[ing] possible long-range effects of applying knowledge gained in the research."
In practice, the coercive nature of IRB review often makes researchers less receptive to IRB counsel and chokes off productive discussions.<br />Zachary M. Schraghttp://www.blogger.com/profile/07101709506166167477noreply@blogger.com0tag:blogger.com,1999:blog-525778292565554519.post-41518663096937175502014-09-28T21:23:00.001-04:002014-09-29T07:30:05.683-04:00Briefly Noted: Whitney, Bell, ElliottLacking time for full comment, I briefly note the publication of these two important, critical essays. Citations omitted from the quoted passages.<br /><a name='more'></a> <br /><h2>The Shell Game</h2><br />[Whitney, Simon N. "The Shell Game: How Institutional Review Boards Shuffle Words." <i>Journal of Translational Medicine</i> 12, no. 1 (August 14, 2014): 201. <a href="http://dx.doi.org/10.1186/1479-5876-12-201">doi:10.1186/1479-5876-12-201</a>.]<br /><br /><blockquote>Popular IRB guides ignore . . . subtleties and mangle the standard definitions. One handbook claims that "coercion means that a person is to some degree forced, or at least strongly pushed, to do something that is not good for him or her to do. In discussions of research regulation the term 'undue influence' is often used to describe the concept of coercion". This manual thus expands the narrow concept of coercion to include persuasion.<br /><br />A second handbook agrees: "Coercion can be subtle: persuasion, argument, and personality can be used to compel an individual to act in a certain way.... Coercion—including all the subtle forms—has no place in research". There is, of course, no such thing as subtle coercion. A guide to IRB management and function claims that in recruitment for clinical trials, "the possibilities for misinforming or disinforming potential subjects abound" and "the possibilities for inadvertent, unintentional coercion, or undue influence are also high". 
Inadvertent or unintentional coercion is oxymoronic.<br /><br />With encouragement from these guides, IRBs reject the standard meaning of the word and use "coercion" to refer to any statement, however innocuous, that might encourage trial participation. Some IRBs believe, for instance, that it is coercive for a consent form to mention that a study is funded by the National Institutes of Health.</blockquote><br /><h2>Censorship in the Name of Ethics</h2><br />[Bell, Kirsten, and Denielle Elliott. "Censorship in the Name of Ethics: Critical Public Health Research in the Age of Human Subjects Regulation." <i>Critical Public Health</i> 24, no. 4 (September 3, 2014): 385–91. <a href="http://dx.doi.org/10.1186/1479-5876-12-201">doi:10.1080/09581596.2014.936727</a>.]<br /><br /><blockquote>Although the extent of the problems continue to be debated, the last few years have witnessed a growing institutional awareness that change is indeed necessary. For example, in December 2010, Canada's Interagency Panel on Research Ethics released revised national human ethics research guidelines that aimed to be more social science 'friendly'. Similarly, the US Office of Human Research Protections is currently toying with the possibility of sweeping changes to its national regulations. The proposed framework specifically highlights the over-regulation of social and behavioral research and the 'unwarranted variability across institutions... in how the requirements are interpreted and implemented'. Under the proposed regulations, many types of social science and behavioral research with 'competent adults' would be exempt from review.<br /><br />However, somewhat ironically, just as those tasked with oversight have started to talk of scaling back research ethics regimes (or at least reining in their scope), elsewhere we see movement in entirely the opposite direction. 
Beyond the ways requirements for ethics review have become tied up with publication (and funding), an ever-expanding array of organizations have begun to develop their own procedures around ethics review. Although their impetus is typically a desire to ensure the research needs of the populations they serve are met, their proliferation illustrates the ways in which the existing problems have tended to produce more oversight and regulation rather than less. In many respects, this speaks to the self-perpetuating aspect of audit culture, whereby its rituals of verification create the very mistrust they are designed to dispel.<br /></blockquote>Zachary M. Schraghttp://www.blogger.com/profile/07101709506166167477noreply@blogger.com0tag:blogger.com,1999:blog-525778292565554519.post-92067437946571727932014-09-23T22:09:00.000-04:002014-09-23T22:09:24.663-04:00IRBs and America's Worst Colleges<iframe width="480" height="392" src="http://www.ustream.tv/embed/recorded/53023052?v=3&amp;wmode=direct" scrolling="no" frameborder="0" style="border: 0px none transparent;"> </iframe><br /><br /><a href="http://www.ustream.tv" style="font-size: 12px; line-height: 20px; font-weight: normal; text-align: left;" target="_blank">Broadcast live streaming video on Ustream</a><br /><br />I talk about IRB's as part of the New America Foundation's "<a href="http://www.edcentral.org/watch-live-americas-worst-colleges/">America's Worst Colleges</a>" event.Zachary M. Schraghttp://www.blogger.com/profile/07101709506166167477noreply@blogger.com0tag:blogger.com,1999:blog-525778292565554519.post-37349290408279202702014-08-27T21:53:00.004-04:002014-08-27T21:53:56.112-04:00You Can't Ask ThatThe <i>Washington Monthly</i>'s fall college issue features my essay on IRBs. Nothing that will surprise regular readers of this blog, but perhaps this will reach new readers.<br /><br />[Schrag, Zachary M. 
“<a href="http://www.washingtonmonthly.com/magazine/septemberoctober_2014/features/you_cant_ask_that051759.php?page=all">You Can’t Ask That</a>.” <i>Washington Monthly</i>, September/October 2014.]Zachary M. Schraghttp://www.blogger.com/profile/07101709506166167477noreply@blogger.com1tag:blogger.com,1999:blog-525778292565554519.post-60702398322419925532014-08-25T15:31:00.000-04:002014-08-25T15:31:06.381-04:00Bell: Ethnography Shouldn't Be Like Victorian SexWriting in <em>American Anthropologist</em>, Kirsten Bell argues that ethnography should not be seen as a violation to which an informant must consent, and "although the concept of informed consent has now been enshrined in the AAA Code of Ethics for more than 15 years, the reality is that it is not an appropriate standard with which to judge ethnographic fieldwork."<br /><br />[Bell, Kirsten. "Resisting Commensurability: Against Informed Consent as an Anthropological Virtue." <em>American Anthropologist</em>, July 21, 2014, <a href="http://dx.doi.org/10.1111/aman.12122">doi:10.1111/aman.12122</a>.]<br /><a name='more'></a><br /><br />The need for informed consent, Bell argues, is premised on the idea that research is "an intrinsically risky enterprise."<br /><br /><blockquote>Research is often quite explicitly configured as a violation or invasion: biomedical research violates the physical integrity of the body, and social science research violates the individual's privacy. Thus, one textbook on ethical issues in behavioral research warns: "The central ethical issues in field research are likely to revolve around potential invasions of privacy." This constitution of research as a "violation" or "invasion" helps to explain why informed consent is deemed so central to contemporary conceptions of research ethics. After all, to consent is quite literally to acquiesce to being "done to."
In this framing, research is a violation to which, like sex, one must willingly consent (but presumably not actively participate in, like the Victorian bride counseled to "lie back and think of England"). Informed consent to research participation, like conceptions of consent to sexual intercourse, is thus based on certain underlying assumptions about the nature of the protagonists in this encounter.<br /></blockquote><br />Bell would prefer a view of ethnography that acknowledges that "ethnographic research can proceed ethically in the absence of a mutually agreed-upon understanding of its aims and that this absence is to a certain extent unavoidable." She finds such acknowledgement not in recent versions of the American Anthropological Association's Statement on Ethics, but rather in its 1971 <a href="http://www.aaanet.org/cmtes/ethics/AAA-Statements-on-Ethics.cfm"> Principles of Professional Responsibility</a>, which state merely that "the aims of the investigation should be communicated as well as possible to the informant."<br /><br />As Bell notes, the phrase, "as well as possible," concedes that the level of understanding implied by the standard of "informed consent" is likely impossible. Better to face that reality, she suggests, than trivialize ethics by holding ethnographers to an unreachable standard.<br /><br />---<br /><br />Note. In her acknowledgements, Bell writes that her essay is in part an effort to explain her views to "Laura-Lee Balkwill, a policy analyst from Canada's Secretariat on Responsible Conduct of Research, who braved the den of frustrated social scientists at the [2012 Ethics Rupture] conference in an effort to try to understand our concerns." I second Bell's praise of Balkwill, whose thoughtful questions and gracious skepticism helped many of us to reexamine our assumptions and refine our arguments.Zachary M. 
Schraghttp://www.blogger.com/profile/07101709506166167477noreply@blogger.com0tag:blogger.com,1999:blog-525778292565554519.post-12753653338088731352014-07-21T22:08:00.000-04:002014-07-21T22:08:55.603-04:00Most IRB Chairs Can't Recognize Exempt Research or Non-ResearchA study of criminal justice researchers' knowledge of IRB rules has found that IRB chairs can't agree on what makes a project exempt from review and think that IRB review is needed for public records. The authors of the study, one of whom is an IRB chair, seem not to realize the significance of these findings.<br /><br />[Tartaro, Christine, and Marissa P. Levy. "Criminal Justice Professionals' Knowledge of Institutional Review Boards (IRBs) and Compliance with IRB Protocol." <i>Journal of Criminal Justice Education</i> 25, no. 3 (2014): 321–41. doi:<a href="http://dx.doi.org/10.1080/10511253.2014.902982">10.1080/10511253.2014.902982</a>.]<br /><a name='more'></a><br /><h2>"Correct" Answers in Scare Quotes</h2><br />Christine Tartaro and Marissa P. Levy, both professors of criminal justice at the Richard Stockton College of New Jersey, sought to learn what their fellow criminologists knew about IRB rules and procedures.<br /><br />To do this, they devised seven hypothetical research scenarios and posed them in survey form to two groups. First, IRB chairs--from whom they got 164 responses--and second, to U.S.-based, academic members of the Academy of Criminal Justice Sciences (ACJS). Of the 1,174 potential respondents in the latter group, they received 323 from respondents who work at institutions with IRBs.<br /><br />For reasons not well explained, the authors labeled the consensus view of the IRB chairs as the "correct" answer to a given scenario.<br /><br /><blockquote>The correct answers for the scenario responses were gained by reviewing each scenario with the Chair [i.e., Levy] and comparing her answers to those of a national survey of IRB Chairs which produced responses from 164 IRB Chairs. 
While we acknowledge that many of these answers are subject to interpretation, it is the IRB Chairs who ultimately decide the level of review required for each protocol based on his or her interpretation of the federal guidelines. As such, IRB Chairs' interpretations of federal guidelines as applied to the research scenarios is the closest one can get to identifying "correct" answers. All IRB Chairs were given the same amount of information from which to draw their conclusions and determine "correct" answers. IRB Chairs were divided on the correct course of action for two of the seven research scenarios, so those scenarios were excluded for the current analysis.</blockquote><br />The scare-quotes around "correct" and the exclusion of two scenarios show that the authors are at least somewhat aware that the disagreements among IRB chairs pose serious questions about the clarity of the scenarios and of the regulations themselves. Rather than label any response "correct," it might have been better to describe the chairs' view as just that: the chairs' view.<br /><br />Unfortunately, the study authors do not indicate how the IRB chairs split on these cases. What constituted enough of a consensus for the authors to label an answer "correct"? And, more importantly, what can the IRB chairs' responses tell us about the consistency of IRB rulings?<br /><br /><h2>IRB Chairs Can't Recognize Non-Research</h2><br />Assuming that a "correct" label represents a fairly broad consensus among Levy and the IRB chairs surveyed, it is striking that members of this group can't recognize non-human subjects research when they see it.<br /><br />Here are Tartaro and Levy's descriptions of scenarios four and five:<br /><br /><blockquote>For scenario four, "You want to give students a non-graded quiz at the beginning of the semester. At the end, you plan to give students the same quiz to see how much they learned. You aren't going to publish the results. How will you handle this?"
Ninety-five percent correctly chose "proceed without IRB approval," 3% believed it was necessary to apply for an exempt or expedited review and 2% selected full IRB review. For scenario five, the instructor from the previous scenario planned to use the assessment data for a conference presentation or publication. The correct answer is to apply for an IRB exempt/expedited review, and 65% of respondents chose that option. Sixteen percent believed that no IRB review was necessary, and 19% were in favor of a full IRB review. </blockquote><br />Let's compare that "correct answer" to OHRP's <a href="http://answers.hhs.gov/ohrp/questions/7286">2010 guidance on quality improvement</a>:<br /><br /><blockquote>The intent to publish is an insufficient criterion for determining whether a quality improvement activity involves research. The regulatory definition under 45 CFR 46.102(d) is "Research means a systematic investigation, including research development, testing and evaluation, designed to develop or contribute to generalizable knowledge." Planning to publish an account of a quality improvement project does not necessarily mean that the project fits the definition of research; people seek to publish descriptions of nonresearch activities for a variety of reasons, if they believe others may be interested in learning about those activities.<br /></blockquote><br />So while OHRP does not think that publication triggers IRB review, Tartaro and Levy, the consensus of IRB chairs, and something like 84 percent of the researcher respondents all think that it does.<br /><br />A starker case is scenario two:<br /><br /><blockquote>For scenario two, respondents were asked to imagine that, "You are seeking information from a local police department about the date and location of each report of a stolen car over a period of a year. You are not requesting any identifying information on the owner or car. How would you handle this study?" 
The appropriate course of action, according to IRB rules, is to apply for exempt or expedited status from the IRB, and 66% answered this correctly. Twenty-seven percent responded that proceeding without IRB approval was appropriate, and 8% thought that a full IRB review was necessary.<br /></blockquote><br />Folks, while <a href="http://www.rcfp.org/rcfp/orders/docs/POLICE.pdf">laws about police records vary by state</a>, police blotters (including reports of stolen cars) are public records in most or all states. And even if they weren't, if you are collecting data that does not include identifiable private information about a living individual, you are not conducting human subjects research as defined by the Common Rule.<br /><br />It's appalling that 74 percent of the researchers surveyed didn't know this, and more appalling that Levy and her fellow IRB chairs did not.<br /><br /><h2>IRB Chairs Can't Recognize Exempt Research</h2><br />It's also distressing that the IRB chairs could not agree on what constitutes exempt research.<br /><br />The description of the first scenario states that "IRB Chairs were widely in agreement that some level of IRB review was necessary, but they were split on the extent of that review." Tartaro and Levy do not report how the split broke down among those favoring exemption (which Tartaro and Levy regard as "some level of IRB review"), expedited review, or full review. Thus, Tartaro and Levy present as wide agreement what may in fact have been substantial disagreement.<br /><br />For scenario two (the public records of stolen cars), Tartaro and Levy report that "The appropriate course of action, according to IRB rules, is to apply for exempt or expedited status from the IRB."
So not only did IRB members fail to spot the non-human subjects research, but having mistaken it for human subjects research, they couldn't agree on whether it was exempt.<br /><br />The same goes for scenario five, the quality improvement ("The correct answer is to apply for an IRB exempt/expedited review"). In these three scenarios, the IRB chairs could not come to consensus on whether a given activity was exempt, forcing Tartaro and Levy to accept both exempt and expedited as "correct" answers.<br /><br />Then there were the two scenarios excluded from the study because "IRB Chairs were divided on the correct course of action." Tartaro and Levy do not state whether those scenarios involved interpreting IRB regulations.<br /><br />So of the 4-6 IRB-related scenarios Tartaro and Levy began with, the IRB chairs were able to agree on only one: no IRB involvement is needed to quiz your students on how much they have learned in a semester. For the remaining five, they could not come to consensus on whether the project as described is exempt under the Common Rule.<br /><br />Tartaro and Levy do not remark on this finding. Yet their results stand as a potential counterpoint to the claims of IRB apologists like <a href="http://www.institutionalreviewblog.com/2013/03/rivera-faculty-researchers-are.html">Suzanne Rivera, who thinks that researchers are incompetent to determine exemption.</a> Are researchers poorer interpreters of 45 CFR 46.101 than the IRB chairs in this study?<br /><br /><h2>Criminologists Disagree on Ethics</h2><br />For reasons not explained, one of the five scenarios in the article presents a question of ethics, not of regulatory procedures.<br /><br /><blockquote>Scenario three involved the ethics of identifying research participants. Participants were asked to consider the following: "As you prepare to present findings at a conference, your colleague presents you with slides that she wants to use.
On one of the slides, your colleague has a picture of an offender that she took during field observations ... How do you respond?" Ninety-three percent chose, "tell your colleague that including the photo would be unethical," while 7% would have advised the colleague to include the photo in the presentation.<br /></blockquote><br />Tartaro and Levy present this finding without comment, not claiming that it involves the IRBs or offering a "correct" response from the IRB chairs. For my part, I would have to say that the scenario offers insufficient information to give an answer.<br /><br />The <a href="http://www.acjs.org/pubs/167_671_2922.cfm">ACJS code of ethics</a> makes clear that "Confidential information provided by research participants should be treated as such by members of the Academy, even when this information enjoys no legal protection or privilege and legal force is applied." But it is not clear that the photograph in the scenario is confidential. Are we talking about a photograph taken in someone's living room? Or on a public sidewalk? If the latter, I would note that newspapers regularly feature photographs of people committing offenses, such as <a href="http://www.nydailynews.com/new-york/nyc-crime/nypd-chokehold-staten-island-man-eric-garner-stripped-shield-gun-article-1.1873033">the apparently illegal chokehold used by NYPD Officer Daniel Pantaleo against Eric Garner last week</a>. (See also the Gericault-esque photographs of unauthorized migration by <a href="http://www.nytimes.com/2014/07/20/world/americas/on-southern-border-mexico-faces-crisis-of-its-own.html?module=Search&mabReward=relbias%3Aw%2C%7B%221%22%3A%22RI%3A10%22%7D#slideshow/100000003006741/100000003006747">Meridith Kohut</a> for the New York Times.)<br /><br /><h2>Criminologists Change Wording, but Otherwise Follow Rules</h2><br />In addition to asking about the hypothetical scenarios, Tartaro and Levy surveyed the researchers about their own practices.
They found that 41 percent had, in the past three years, undertaken "activities against IRB rules," but they acknowledge that their survey mistakenly listed researching "public records or other data not involving human participants" as such an infraction. The 41 percent is therefore an overstatement.<br /><br />A better figure may be the 27 percent who "made a minor change to the wording of a survey or consent form after IRB approval without reporting back to the IRB." This jibes with the finding that "Twenty-one percent indicated that researchers should be able to make a minor wording change to a previously approved survey or consent form without reporting it to the IRB."<br /><br />Much smaller numbers (less than 5 percent) made substantial changes without consulting the IRB, conducted human subjects research before receiving approval, or "purposefully left out information or was vague about an aspect of the study that you anticipated that the IRB would challenge." This last group (3.3 percent, or nine of the 245 respondents to that question) included a respondent who explained, "My IRB doesn't understand qualitative research that uses grounded theory." Unfortunately, while the analysis compares the responses given by researchers according to various attributes (e.g., whether the researcher has served on the IRB), it does not break down the replies by quantitative or qualitative research.<br /><br /><h2>Why Blame the Researchers?</h2><br />Tartaro and Levy conclude that "the results of this study indicate that academic researchers in criminal justice lack a uniform understanding of IRB rules." They could just as well have concluded that IRB chairs lack a uniform understanding of IRB rules. <br /><br />But really, the fault lies not with the researchers or the chairs, but with a set of poorly written rules, layered with contradictory guidance from federal officials, and desperately in need of revision.
It has been nearly 20 years since <a href="http://www.institutionalreviewblog.com/2007/08/guidance-creep.html">the Office for Protection from Research Risks decided that exempt projects were not really exempt</a>. Until the regulations are rewritten, there may be no "correct" answers on what they mean.<br /><br /><br /><br />Zachary M. Schraghttp://www.blogger.com/profile/07101709506166167477noreply@blogger.com0tag:blogger.com,1999:blog-525778292565554519.post-88376147137346627752014-07-16T14:03:00.001-04:002014-07-16T14:03:41.559-04:00UCSD Frees Oral History and JournalismThe University of California, San Diego, has determined that most projects by historians and journalists need not be submitted to the IRB.<br /><br /><a name='more'></a><br />The announcement comes in the form of a "fact sheet," titled "<a href="http://irb.ucsd.edu/History_Journalism.pdf">Oral History/Journalism Projects</a>," which is itself undated but was <a href="http://irb.ucsd.edu/factsheets.shtml">apparently released on 1 October 2013</a>.<br /><br />The fact sheet explains that most oral histories and journalistic projects do not meet the federal definition of human subjects research.<br /><br /><blockquote>Historians and journalists typically use collected information to explain past or current events but not to create theories, principles, or statements of relationships that are predictive of future events or that can be widely applied. Such activities would not be considered “generalizable knowledge.”<br /><br />Where, however, projects at UCSD are, in fact, designed to develop or contribute to "generalizable knowledge," such projects must be submitted to the HRPP. 
Upon submission, such projects may be granted exemption from IRB review, handled through the expedited review process, or reviewed by the full IRB, as appropriate.<br /><br />Oral history projects conducted by, or under the supervision of, UCSD faculty, staff or students should be conducted in accordance with the guidelines established by the Oral History Association for the ethical and professional practice of oral history. <br /></blockquote><br />This policy builds on <a href="http://www.columbia.edu/cu/irb/policies/documents/OralHistoryPolicy.FINAL.012308.pdf">Columbia University's pioneering policy of 2007</a>, which also excludes oral history on the grounds that it lacks predictive value and therefore is not designed to produce generalizable knowledge. And, like Columbia, UCSD expects its historians to adhere to the Oral History Association guidelines. But whereas Columbia took more than four pages to explain its reasoning, UCSD has managed to boil its policy down to one page.<br /><br />Bravo!Zachary M. Schraghttp://www.blogger.com/profile/07101709506166167477noreply@blogger.com0tag:blogger.com,1999:blog-525778292565554519.post-57540492322871694522014-07-15T20:08:00.000-04:002014-07-15T20:08:29.808-04:00New Book on Human Subjects Research RegulationMIT Press has published <i><a href="https://mitpress.mit.edu/books/human-subjects-research-regulation">Human Subjects Research Regulation: Perspectives on the Future</a></i>, eds. I. Glenn Cohen and Holly Fernandez Lynch.<br /><br />The volume emerges from the May 2012 conference, "The Future of Human Subjects Regulation," sponsored by the Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School. (See <a href="http://www.institutionalreviewblog.com/2012/05/against-armchair-ethics-some.html ">Against Armchair Ethics: Some Reflections from Petrie-Flom</a>.)<br /><br />My own contribution is a chapter entitled, "What Is This Thing Called Research?" 
I have a preliminary version online at <a href="http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2182297">SSRN</a>.<br /><br />Though published three years after the ANPRM, the book has hit print before an NPRM. Pity.Zachary M. Schraghttp://www.blogger.com/profile/07101709506166167477noreply@blogger.com0