Protecting Institutional Review Boards and the Subjects They Govern

This is a morality tale about how human experimentation, and the institutional review boards (IRBs) charged with its oversight, can never be taken for granted, requiring our constant scrutiny and conscience.

Like far too many examples involving regulations or institutionalized protocols, ethical guidelines for research involving human subjects have been born of disasters. For human subjects, the most glaring milestones (or rather, low points) go back to the Nazi experimentation on concentration-camp prisoners in the 1940s and the Tuskegee Syphilis Study initiated in 1932.

In the latter example, 399 African-American sharecroppers in Macon County, Ala., were recruited by medical researchers at the Tuskegee Institute into a study of the effects of untreated syphilis. The problem was that the inductees were never informed of their diagnosis. Instead, they were offered free meals, free medical exams and free burial insurance. Even though the patients had agreed to participate and be "treated," they were never told the real purpose of the study.

As penicillin became available and was found to be effective, the decision was made to withhold such treatment from the study participants so that the disease's natural progression could be followed. Even when the men left the study or joined the military, penicillin was withheld, thanks to coordination between the Public Health Service (which supplied much of the study's funding) and local health services. Somehow, the study dragged on for 40 years, until 1972, when a flurry of news articles finally condemned it.1

Somehow this entire incident slipped under the radar, even though the earlier Nazi experiments had come to a humiliating and well-deserved end with the 1947 Nuremberg Military Tribunal, known as the "Doctors' Trial." As a result of these proceedings, it was deemed that voluntary consent of the human subject was an absolutely essential component of clinical research. This stipulated that participants in such research had: the capacity to consent, freedom from coercion, comprehension of the risks and benefits involved, minimal risk and harm, the services of qualified investigators using appropriate research designs, and the freedom to withdraw at any time.

This was by no means the final and definitive ruling. Starting in 1964, and followed by at least six revisions and updates (1975, 1983, 1989, 1996, 2000 and 2004), the Declaration of Helsinki consistently sought to fill in the gaps and shore up the protection of human subjects in clinical research. Here are some of the provisos added over the years:

As a rule, consent should be obtained in writing.

A specifically appointed independent committee is needed for consideration, comment or guidance. This entity is what evolved into the IRB.

Refusal to participate should never interfere with a therapeutic relationship.

The physician must curtail the investigation if hazards are found to outweigh potential benefits.

Whenever a minor child is able to offer consent, it must be provided in addition to that of the legal guardian.

After the study, patients must have access to the best treatment identified by the study.

During the course of the trial, the independent oversight committee must be informed of adverse events, funding, sponsors, institutional affiliations, incentives for subjects, and conflicts of interest.

The needs of economically and medically disadvantaged populations must be recognized.

No national ethical, legal or regulatory requirement pertaining to research on human subjects shall be allowed to reduce or eliminate the protections stated in the declaration.

The patient has the right to withdraw without reprisals.

Legally incapacitated groups shall not be included unless the research is necessary to promote the health of the population represented.

But this is where it gets interesting. Despite these numerous provisos, an investigation approved by the Johns Hopkins IRB slipped through the ethical research safety net in assessing cost-effective methods of household lead-paint abatement. Conducted by the Kennedy Krieger Institute (KKI), this 1992 study purposely exposed minority children to lead paint to study its effects. It included measuring the blood lead levels of children living in houses that had received one of three lead-abatement interventions.2 Although the majority of children experienced reductions in blood lead levels, it was revealed that the researchers had encouraged the landlords of the lead-abated houses to rent to families with young children.3 Not surprisingly, this case spawned a lawsuit, public outrage and a ruling by the Maryland Court of Appeals, which rightfully suggested that this study represented a "new Tuskegee."

This entire narrative is compellingly depicted in a recent article by Bozeman, et al., published in the American Journal of Public Health. The authors point out that despite the best of intentions, ethical lapses of this nature can slip through the institutionalized bureaucracy of the IRBs, whether in their composition, formation, management or deliberations.4 This is not a new development, the problem having been brought to light previously.5

What is so beguiling is that, for all its shortcomings, the original Tuskegee experiment was not totally devoid of social conscience. Bozeman and his co-authors point out that it did include members of minority populations in every aspect of the study's design (members of the funding institution, researchers and research participants).6 It is also clear that this was one of the first systematic medical studies, as opposed to the majority of clinical studies, that paid any significant attention to problems directly and differentially affecting African Americans.7,8

In its defense, much could also be said for the KKI study. By focusing research on the needs of underserved poor and minority populations, it seemed precisely aligned with the objectives of government agencies such as the National Institutes of Health.

But the overarching problem pointed out by Bozeman seems to have been the huge diversity of applications appearing before the IRB, which was stretched to the breaking point. There appears to have been an increasing demand for informed consent for study groups at little or no risk from the research, and superficial and procedural points seem to have gained precedence over substantive ones.9 The vulnerability of minority groups appears to have been associated simply with the capacity to provide consent, while the actual justice of their selection has been overlooked.10

Solutions to this ongoing problem, as cited by Bozeman, include reducing the IRB's preoccupation with relative trivia, stabilizing and rationalizing the requirements for IRB membership, and standardizing the IRBs themselves. Reducing individuals, whether IRB members or the research subjects themselves, to abstractions or figures in a study ignores the authors' assertion that, in the final analysis, "there is no good substitute for identification and empathy with the people who will be recruited for and exposed to the research."4

In other words, the humanity and welfare of the IRB should be every bit as much a part of scientific study and protection as those of the clinical subjects themselves.
