Human Experimentation: An Introduction to the Ethical Issues

In January 1944, a 17-year-old Navy seaman named Nathan Schnurman volunteered to test protective clothing for the Navy. Following orders, he donned a gas mask and special clothes and was escorted into a 10-foot by 10-foot chamber, which was then locked from the outside. Sulfur mustard and Lewisite, poisonous gases used in chemical weapons, were released into the chamber and, for one hour each day for five days, the seaman sat in this noxious vapor. On the final day, he became nauseated, his eyes and throat began to burn, and he asked twice to leave the chamber. Both times he was told he needed to remain until the experiment was complete. Ultimately, Schnurman collapsed into unconsciousness and went into cardiac arrest. When he awoke, he had painful blisters on most of his body. He was not given any medical treatment and was ordered never to speak about what he had experienced, under threat of being tried for treason. For 49 years these experiments were unknown to the public.

The Scandal Unfolds

In 1993, the National Academy of Sciences exposed a series of chemical weapons experiments, conducted from 1944 to 1975, that involved 60,000 American GIs. At least 4,000 were used in gas-chamber experiments such as the one described above. In addition, more than 210,000 civilians and GIs were subjected to hundreds of radiation tests from 1945 through 1962.

Testimony delivered to Congress detailed the studies, explaining that “these tests and experiments often involved hazardous substances such as radiation, blister and nerve agents, biological agents, and lysergic acid diethylamide (LSD).... Although some participants suffered immediate acute injuries, and some died, in other cases adverse health problems were not discovered until many years later—often 20 to 30 years or longer.”1

These examples and others like them—such as the infamous Tuskegee syphilis experiments (1932-72) and the continued testing of unnecessary (and frequently risky) pharmaceuticals on human volunteers—demonstrate the danger in assuming that adequate measures are in place to ensure ethical behavior in research.

Tuskegee Studies

In 1932, the U.S. Public Health Service, in conjunction with the Tuskegee Institute, began the now notorious “Tuskegee Study of Untreated Syphilis in the Negro Male.” The study purported to learn more about the treatment of syphilis and to justify treatment programs for African Americans. Six hundred African American men, 399 of whom had syphilis, became participants. They were given free medical exams, free meals, and burial insurance as recompense for their participation and were told they would be treated for “bad blood,” a term in use at the time for a number of ailments including syphilis. In fact, they did not receive proper treatment, nor were they informed that the study aimed to document the progression of syphilis without treatment. Penicillin was considered the standard treatment by 1947, but it was never offered to the men. Indeed, the researchers took steps to ensure that participants would not receive proper treatment in order to advance the objectives of the study. Although the study was originally projected to last only 6 months, it continued for 40 years.

Following a front-page New York Times article denouncing the studies in 1972, the Assistant Secretary for Health and Scientific Affairs appointed a committee to investigate the experiment. The committee found the study ethically unjustified and within a month it was ended. The following year, the National Association for the Advancement of Colored People won a $9 million class action suit on behalf of the Tuskegee participants. However, it was not until May 16, 1997, when President Clinton addressed the eight surviving Tuskegee participants and others active in keeping the memory of Tuskegee alive, that a formal apology was issued by the government.

While Tuskegee and the U.S. military experiments discussed above stand out in their disregard for the well-being of human subjects, more recent questionable research is usually devoid of obvious malevolent intent. However, when curiosity is not curbed by compassion, the results can be tragic.

Unnecessary Drugs Mean Unnecessary Experiments

A widespread ethical problem, although one that has not yet received much attention, is raised by the development of new pharmaceuticals. All new drugs are tested on human volunteers. There is, of course, no way subjects can be fully apprised of the risks in advance, as those risks are precisely what the tests purport to determine. This situation is generally considered acceptable, provided volunteers give “informed” consent. Many of the drugs under development today, however, offer little clinical benefit beyond that available from existing treatments. Many are developed simply to create a patentable variation on an existing drug. It is easy to justify asking informed, consenting individuals to risk limited harm in order to develop new drug therapies for a condition from which they are suffering or for which existing treatments are inadequate. The same may not apply when the drug being tested offers no new benefits to the subjects because they are healthy volunteers, or when the drug offers no significant benefits to anyone because it is essentially a copy of an existing drug.

Manufacturers, of course, hope that animal tests will give an indication of how a given drug will affect humans. However, a full 70 to 75 percent of drugs approved by the Food and Drug Administration for clinical trials on the basis of promising results in animal tests ultimately prove unsafe or ineffective for humans.2 Even limited clinical trials cannot reveal the full range of drug risks. A U.S. General Accounting Office (GAO) study reports that of the 198 new drugs that entered the market between 1976 and 1985, 102 (52 percent) caused adverse reactions that premarket tests failed to predict.3 Even in the brief period between January and August 1997, at least 53 drugs then on the market were relabeled because of unexpected adverse effects.4

In the GAO study, no fewer than eight of the drugs in question were benzodiazepines, similar to Valium, Librium, and numerous other sedatives of this class. Two were heterocyclic antidepressants, adding little or nothing to the numerous existing drugs of this type. Several others were variations of cephalosporin antibiotics, antihypertensives, and fertility drugs. These were not needed drugs. The risks taken by trial participants, and to a certain extent by consumers, to develop these drugs were run not in the name of science, but in the name of market share.

As physicians, we necessarily have a relationship with the pharmaceutical companies that produce, develop, and market the drugs involved in medical treatment. A reflective, perhaps critical posture toward some of the standard practices of these companies—such as the routine development of unnecessary drugs—may help to ensure higher ethical standards in research.

Unnecessary Experimentation on Children

Unnecessary and questionable human experimentation is not limited to pharmaceutical development. In experiments at the National Institutes of Health (NIH), a genetically engineered human growth hormone (hGH) is injected into healthy short children. Consent is obtained from parents and affirmed by the children themselves. The children receive 156 injections each year in the hope of becoming taller.

Growth hormone is clearly indicated for hormone-deficient children who would otherwise remain extremely short. Until the early 1980s, they were the only ones eligible to receive it; because it was harvested from human cadavers, supplies were limited. But genetic engineering changed that, and the hormone can now be manufactured in mass quantities. This has led pharmaceutical houses to eye a huge potential market: healthy children who are simply shorter than average.

Short stature, of course, is not a disease. The problems short children face relate only to how others react to their height and to their own feelings about it. hGH injections, on the other hand, pose significant risks, both physical and psychological.

These injections are linked in some studies to a potential for increased cancer risk,5-8 are painful, and may aggravate, rather than reduce, the stigma of short stature.9,10 Moreover, while growth rate is increased in the short term, it is unclear that the final net height of the child is significantly increased by the treatment.

The Physicians Committee for Responsible Medicine worked to halt these experiments and recommended that the biological and psychological effects of hGH treatment be studied in hormone-deficient children who already receive hGH, and that non-pharmacologic interventions to counteract the stigma of short stature also be investigated. Unfortunately, the hGH studies have continued without modification, putting healthy short children at risk.

Use of Placebo in Clinical Research

Whooping cough, also known as pertussis, is a serious threat to infants, with dangerous and sometimes fatal complications. Vaccination has nearly wiped out pertussis in the U.S. Uncertainties remain, however, over the relative merits and safety of traditional whole-cell vaccines versus newer, acellular versions, prompting the NIH to propose an experiment testing various vaccines on children.

The controversial part of the 1993 experiment was the inclusion of a placebo group of more than 500 infants who received no protection at all, an estimated 5 percent of whom were expected to develop whooping cough, compared to the 1.4 percent estimated risk for the study group as a whole. Because of these risks, the study would not have been permissible in the U.S. The NIH, however, insisted on the inclusion of a placebo control and therefore initiated the study in Italy, where there are fewer restrictions on human research trials. Italian health officials originally recoiled from these studies on ethical as well as practical grounds, but persistent pressure from the NIH ensured that the study was conducted with the placebo group.

The use of double-blind placebo-controlled studies is the “gold standard” in the research community, usually for good reason. However, when a well-accepted treatment is available, the use of a placebo control group is not always acceptable and is sometimes unethical.11 In such cases, it is often appropriate to conduct research using the standard treatment as an active control. The pertussis experiments on Italian children were an example of dogmatic adherence to a research protocol that trumped ethical concerns.

Placebos, Ethics, and Poorer Nations

The ethical problems that placebo-controlled trials raise are especially complicated in research conducted in economically disadvantaged countries. Recently, attention has been brought to studies conducted in Africa on preventing the transmission of HIV from mothers to newborns. Standard treatment for HIV-infected pregnant women in the U.S. is a costly regimen of AZT. This treatment can save the life of one in seven infants born to women with AIDS.12 Sadly, the cost of AZT treatment is well beyond the means of most of the world’s population. This troubling situation has motivated studies to find a cost-effective treatment that can confer at least some benefit in poorer countries where the current standard of care is no treatment at all. A variety of such studies is now underway in which a control group of HIV-positive pregnant women receives no antiretroviral treatment.

Such studies would clearly be unethical in the U.S. where AZT treatment is the standard of care for all HIV-positive mothers. Peter Lurie, M.D., M.P.H., and Sidney Wolfe, M.D., in an editorial in the New England Journal of Medicine, hold that such use of placebo controls in research trials in poor nations is unethical as well. They contend that, by using placebo control groups, researchers adopt a double standard leading to “an incentive to use as research subjects those with the least access to health care.”13 Lurie and Wolfe argue that an active control receiving the standard regimen of AZT can and should be compared with promising alternative therapies (such as a reduced dosage of AZT) to develop an effective, affordable treatment for poor countries.

Control Groups and Nutrition

Similar ethical problems are also emerging in nutrition research. In the past, it was ethical for prevention trials in heart disease or other serious conditions to include a control group which received weak nutritional guidelines or no dietary intervention at all. However, that was before diet and lifestyle changes—particularly those using very low fat, vegetarian diets—were shown to reverse existing heart disease, push adult-onset diabetes into remission, significantly lower blood pressure, and reduce the risk of some forms of cancer. Perhaps in the not-too-distant future, such comparison groups will no longer be permissible.

The Ethical Landscape

Ethical issues in human research generally arise in relation to population groups that are vulnerable to abuse. For example, much of the ethically dubious research conducted in poor countries would not occur were the level of medical care not so limited. Similarly, the cruelty of the Tuskegee experiments clearly reflected racial prejudice. The NIH experiments on short children sought to counter a fundamentally social problem, the stigma of short stature, with a profitable pharmacologic solution. The unethical military experiments during the Cold War would have been impossible if GIs had had the right to refuse assignments or raise complaints. As we address the ethical issues of human experimentation, we often find ourselves traversing complex ethical terrain. Vigilance is most essential when vulnerable populations are involved.

References

1. Frank C. Conahan of the National Security and International Affairs Division of the General Accounting Office, reporting to the Subcommittee of the House Committee on Government Operations.