Elizabeth Chase is originally from Fredericksburg, VA. She just finished her BSPH in biostatistics and BA in history at the University of North Carolina-Chapel Hill in May 2017. She is currently working on her PhD in biostatistics at the University of Michigan, focusing on cancer research, another topic that doesn’t go over well at parties. When she isn’t working on problem sets and reading about high-fatality chronic disease, she enjoys reading, cooking, running, and worrying that she’s gotten the diseases she spends so much time researching.

When the Sloan-Kettering cancer scandal broke in 1963, observers alternated between confusion and horror. A prominent American doctor, Dr. Chester Southam, had led a study in which he injected 300 Americans, many of them Jewish and elderly, with cancer cells to determine the cells’ effect on their immune systems.[1] There was near-universal agreement that the study had problems, especially Southam’s decision to omit the word “cancer” when getting patient permission to perform cell injections. However, it soon became apparent that there was a larger issue: under American law, Southam’s gross violation of informed consent was not illegal.[2] He was dragged through several disciplinary hearings, none of which had any effect on his research funding, publications, tenured position, or overall reputation. Five years later, Southam was elected president of the American Association for Cancer Research.[3]

The Sloan-Kettering cancer scandal was one of many incidents in the 1960s and 1970s that made American and British observers increasingly aware of ethical problems in medical research, particularly regarding informed consent. Doctors like Southam failed to fully inform patients, used patients who were legally unable to consent, and coerced patient consent, to disastrous effect. They disproportionately targeted vulnerable patient groups like the elderly, the disabled, racial and ethnic minorities, and women. Perhaps most galling, they faced minimal consequences for their actions.

The Sloan-Kettering scandal was the first of several research scandals to confront the American public, at the same time that similar revelations appalled the British. After public outcry, the American government drastically increased its regulation of medical research. Despite similar problems in the United Kingdom, the British government did not tighten regulation to the same extent, due to differences in medical culture, domestic politics, and severity of scandal. By the early 1970s, the American government had articulated clear rules for human research, while in the United Kingdom, doctors continued to rely on their own judgment.

United States

In the years after World War II, American medical research exploded. The National Institutes of Health (NIH), previously a small laboratory of the US Public Health Service, was given its own budget, which grew from $700,000 in 1945 to $436 million in 1965. By the mid-1960s, American medical research led the world.[4] Unfortunately, American medical ethics had not kept pace with the increase in spending. By 1960, fewer than 10% of American medical schools had an ethical code, and the NIH had no ethical guidelines for its researchers.[5] Under American law, medical experimentation of any kind was still technically illegal, making it difficult for legal scholars or researchers to discuss ethical codes to regulate it.[6] In 1962, Congress considered a law that would require informed consent for all drug testing on humans, but it failed to pass: Congress did not want to risk inhibiting the life-saving medical miracles that American research produced.[7]

The next year, Congress began to realize that it had made a mistake. As the Sloan-Kettering cancer scandal horrified the American public, American doctors felt deeply conflicted.[8] There was no doubt that the experiments had been valuable, and most doctors had the utmost confidence in Dr. Southam—two of his colleagues remarked that “if the same procedure had been followed by almost any scientist other than Dr. Southam they would have thought it unethical, their regard for him was so high.”[9] The risk to the patients was not substantial; previous research had demonstrated that cancer cell injections rarely resulted in cancer.[10] However, there was serious concern about Southam’s failure to fully inform his patients of the procedure, regardless of his belief that it would be harmless.[11]

The fact that some doctors even mildly condemned Southam’s incomplete consent was significant. The prevailing medical doctrine of the time taught that there was a near-sacred doctor-patient trust, in which the doctor acted in the patient’s best interest and the patient followed the doctor’s orders.[12] Most physicians did not see the point in explaining diagnoses and treatments to patients, which they believed would be difficult, frightening, and generally unhelpful to patient recovery.[13] In experimental situations, things became more complicated. Medical research was usually done for the benefit of future patients, but the patients undergoing an experiment often received no direct benefit from it. Informed consent emerged as a solution to this conflict of interest, allowing patients to decide if they wanted to risk personal health for public gain. Nonetheless, many doctors remained skeptical of their patients’ ability to comprehend what they were consenting to, and regarded informed consent as administrative nonsense done for appearances only.[14] To such doctors, informed consent seemed an unnecessary impediment to important research.[15]

Predictably, many patients disagreed. As the furor over Sloan-Kettering grew, leaders at the NIH began to fear that if they did not respond to public concerns, Congress might intervene by imposing strict regulation or budget cuts.[16] In February 1966, the Surgeon General and the NIH Office for the Protection from Research Risks (OPRR) released a new statement that required all research institutions receiving federal funding to draft an ethical code and establish an Institutional Review Board (IRB), both of which had to be submitted for federal approval.[17] IRBs were responsible for reviewing research prior to and throughout experimentation and for keeping written documentation of informed consent for all experiments.[18] Until doctors received IRB approval for their research, there would be no funding.[19]

Informed consent was robustly protected under the new guidelines. Procedures had to be explained to each patient in simple and comprehensible language, in a calm setting, free of “coercion or undue influence,” and with ample time to make the decision.[20] In addition, patients had to be told that the research was experimental, what the purpose of the research was, the duration of the experiment, the procedure, any hazards to their health, the plan to protect confidentiality, any compensation they might receive, other treatment options outside of the experiment, and that they could withdraw from the experiment at any time.[21] Patients could not consent to waive any of their civil or political rights.[22] In addition, patients were to have access to one of the lead doctors, whom they could contact at any time with questions and concerns.[23] The NIH OPRR warned researchers to take special precautions when obtaining consent from “vulnerable populations, such as children, prisoners, pregnant women, mentally disabled persons, or economically or educationally disadvantaged,” and outlined additional guidelines for obtaining consent from these groups.[24]

The NIH OPRR policy was a good step toward protecting human research subjects, although it only applied to federally funded research. In addition, it did not prohibit conflicts of interest in IRB membership, so doctors could approve their own or their friends’ research. It put more emphasis on review than on the ethics of the researchers themselves, and the policy was poorly enforced.[25] There were no punishments for researchers who deviated from their IRB-approved plan, aside from the chance of revoked funding.[26] Nonetheless, the new policy allayed public complaints about lack of regulation, for which the NIH would soon find itself grateful as a new controversy emerged in the form of Dr. Henry K. Beecher.

Beecher was the Dorr Professor of Research in Anesthesia at Harvard and one of the most respected doctors in the world. By 1966, he had published over 200 papers and several books, some of which centered on problems with research ethics.[27] In 1959, he wrote a lengthy article for the Journal of the American Medical Association (JAMA) providing ethical advice for young researchers.[28] As the 1960s continued, Beecher became increasingly worried about medical ethics. In the spring of 1965, he spoke at a conference in Michigan about his concerns, announcing that violations of research ethics were “by no means rare, but are almost, one fears, universal.”[29] His speech garnered some media attention and outrage from physicians, with one observer calling it “a gross and irresponsible exaggeration.” Others worried that it would harm American medical research.[30]

Beecher was unconcerned. In 1966, he wrote his famous “Ethics and Clinical Research,” published in the New England Journal of Medicine (NEJM), which generated wide press coverage and public shock.[31] In it, he tried a new approach, using twenty-two specific examples of American experiments where ethical boundaries had been overstepped. He did not include names or citations in the article, fearing libel charges,[32] but stated that all of the studies had been published, many in prestigious medical journals, after being performed by reputable researchers at many of the nation’s top institutions. He estimated that one in six studies was unethical, and noted that only one in twenty-five included any mention of informed consent.[33] He attributed the ethical issues to the rapid expansion of American medical research and the intense pressure on young researchers who needed successful publications in order to get tenure, declaring that there was a growing “separation between the interests of science and the interests of the patient.”[34] A tone of incredulous fury pervaded Beecher’s discussion of his examples, but he did not believe that doctors’ transgressions were deliberate, instead attributing them to “thoughtlessness and carelessness, not a willful disregard of the patient’s rights.”[35] Despite his concerns about ethics, Beecher did not support an increase in regulation, because he thought that medical codes were inadequate to prevent every misstep and would be more hindrance than help.[36] Instead, he argued that medical schools needed to do a better job of training responsible and compassionate researchers, and he also chastised medical journals for continuing to publish clearly unethical studies.[37]

Beecher’s fears were realized in 1972, when three high-profile and racially charged medical scandals broke in quick succession. In early 1972, it emerged that a San Antonio family planning clinic had participated in a study to determine the effects of birth control on female mood. In order to compare effects, the clinic gave seventy-six impoverished Mexican-American women placebo birth control without their knowledge or consent. Ten unplanned pregnancies resulted.[38] Later that year, another study came out, this one from the University of Cincinnati General Hospital. Researchers had irradiated hundreds of terminally ill cancer patients without their consent to determine the effects of radiation on the human body. The vast majority of the research subjects were black and poor.[39]

Finally, in the summer of 1972, the pièce de résistance reached the public after a researcher leaked the Tuskegee scandal to the press. The Tuskegee scandal was a small study done by the US Public Health Service in which researchers monitored several hundred poor, black men living in Tuskegee, Alabama who had syphilis. The study began in 1932, and at its inception, there was nothing ethically wrong with it. However, after penicillin was discovered as a syphilis treatment in 1942, researchers could have cured the subjects and ended the study. Instead, they failed to inform test subjects of the existence of penicillin, actively prevented them from seeking treatment elsewhere, and continued to monitor them until the study was exposed in 1972.[40] As Americans watched in shock, a federal investigation determined that the study had resulted in at least twenty-eight preventable deaths, and more than a hundred of the men had suffered avoidable and severe damage to their central nervous and cardiovascular systems.[41]

Public and congressional response was swift and merciless. One senator called the study “a moral and ethical nightmare,”[42] and a writer for the Boston Globe quipped, “It has become increasingly clear that the most popular practice tools in many laboratory experiments seem to be rats, cats, dogs and black people.”[43] A columnist for the New York Times wrote sadly:

The ethics of the study would have been questioned regardless of who the subjects were, but the fact that Federal doctors had selected poor, uneducated men—and not one of them a white man—further inflamed the issue. As one white Southerner remarked, “The worst segregationist in Alabama would never have done this.”[44]

After the federal Tuskegee Panel completed its investigation in June 1973, it declared the study “ethically unjustified” and accused the federal government of failing research subjects who should have been under its protection.[45] Congress acted swiftly in response. In 1974, the National Research Act passed with ease. It made IRBs mandatory at all institutions, not just federally funded ones, and established a commission to oversee medical ethics and informed consent according to the instructions given in the NIH OPRR policy. The National Research Act significantly increased the enforcement power of the NIH OPRR policy and reminded medical researchers that there could be consequences for ethical transgressions.[46] The American government had decided that American doctors could not be trusted to regulate themselves.

United Kingdom

As in the United States, British medical research was small prior to World War II; unlike in the US, however, it did not boom after the war. Some research institutions prospered, overseen by the Medical Research Council (MRC), but it was not a broad trend.[47] British healthcare, on the other hand, developed rapidly. In 1946, Parliament passed the National Health Service (NHS) Act, which established national healthcare beginning in 1948.[48] The passage of the NHS Act was somewhat difficult, especially because clinicians were afraid that they would lose autonomy when the national government took over healthcare provision. As a result, the act contained an agreement stipulating that the government would take an “arms-length” approach to overseeing individual clinical decisions.[49] This clause assuaged clinician fears.

Unlike in the US, there was substantial discussion of medical ethics throughout the 1950s in the UK; however, it accomplished little. In 1953, the MRC released a memo on human subjects research, but it did not discuss informed consent or other staples of medical ethics, had no enforcement mechanism, and was largely ignored.[50] The MRC had considered requiring informed consent, but the idea met sharp backlash from physicians, who argued that it would destroy the trust of the doctor-patient relationship.[51] Parliament held several hearings on medical ethics with the Minister of Health, who refused to consider ethical guidelines, in large part due to the agreement in the NHS Act about not dictating individual physicians’ actions. In 1955, the Minister at the time, Iain Macleod, stated firmly, “Only the clinicians in charge could say what is right and proper. It would be entirely improper for me to try to lay down what ethical principles should govern the conduct of professionals in the work they do in hospitals.”[52] Later, in 1959, Health Minister Derek Walker-Smith stated that ethical concerns “are not susceptible to control by legislation,” discouraging Parliament from considering the matter further.[53] As a result, British medical ethics remained unclear.[54]

Despite this lack of clear ethical guidance, there was ongoing British medical research that deeply unsettled many doctors. It was something of an open secret among physicians that Hammersmith Teaching Hospital, a large research institution in London, was taking significant shortcuts with ethics. Researchers at Hammersmith often got incomplete or nonexistent consent from patients, coerced medical students into being research subjects, experimented on children, the mentally ill, the elderly, and the dying, and regularly endangered patient lives in their pursuit of new knowledge.[55] Visiting researchers from Australia and Europe were appalled at the practices they witnessed at Hammersmith, but nobody spoke up.[56] Theodore Fox, the editor of the Lancet, had been threatening an inquiry into Hammersmith’s research practices since 1953, but he had not gone through with it.[57]

Eventually, an outsider revealed Hammersmith’s wrongdoings. Maurice Pappworth was a doctor and a member of the Royal College of Physicians (RCP), but he did not perform research or see patients. Instead, he was a highly sought-after tutor for the RCP board examinations, some of the most difficult exams facing aspiring British doctors. Pappworth had heard of many of Hammersmith’s transgressions from his students, who were terrified of the studies for which they were expected to volunteer.[58] In 1962, Pappworth wrote an article citing many of the Hammersmith incidents, in addition to several other British and American transgressions. He attempted to have it published in the Lancet, but it was rejected as too controversial, so he had it published in Twentieth Century, a popular magazine of the time.[59] He named names, arguing that the perpetrators of the ethical breaches deserved public shaming.[60] The article immediately became a sensation and was picked up by other popular papers and magazines. Pappworth received letters thanking him for his bravery.[61] The response from doctors was distinctly less positive. Physicians lambasted Pappworth, arguing that he had betrayed his profession and was “a trouble maker who lacked the polish of a gentleman.”[62]

Although many doctors refused to even discuss Pappworth’s concerns, viewing them as uninformed attacks from an outsider,[63] some physicians engaged with them indirectly. In 1963, the British Medical Journal featured a biostatistician’s critique of the World Medical Association’s proposed ethical code, arguing that it was too stringent and would hamper research. The biostatistician, Sir Austin Bradford Hill, declared that all new treatments required some risky experimentation and stated that there were some situations in which informed consent should not be required.[64] Helen Hodgson, who had founded the Patients Association in 1963 after reading Pappworth’s article, wrote a scathing letter-to-the-editor in response.[65] She noted: “The advice to doctors seems to be, ‘The doctor/patient relationship depends on the patient’s confidence in you. If you cannot tell him something without losing his confidence, don’t tell him.’ The question of truth does not enter into consideration,”[66] concluding darkly, “One wonders whether Sir Austin believes that there is some inherent difference in the German and British characters which made it possible for Nazi doctors to disgrace their profession but would make it impossible for British doctors to ever do likewise.”[67] Another doctor jumped into the fray, writing that worried patients should stop using national healthcare, as it was only “hospital class” patients who risked experimentation.[68] His remarks sparked a slew of furious reprimands from laypeople, who commented that doctors might feel differently if their salaries were less generous and did not permit private healthcare, with one writing bitterly, “Patients…are human beings, not percentages or mere consumers.”[69]

The parliamentary hearings that followed Pappworth’s article did not make much headway. Hammersmith Hospital denied Pappworth’s claims, and the Ministry of Health resisted the idea of ethical codes, arguing that they were anti-British and unhelpful.[70] Under pressure from Parliament, the MRC released another memo on human subjects, although it was extremely vague.[71] The MRC asked the RCP to consider experimental ethics, but the College refused.[72] In 1966, several doctors asked the RCP to consider experimental ethics again. Although their concerns were partly moral, the primary impetus was the United States’ NIH OPRR policy. Many British researchers received American funding, and in order to continue their research, they needed some kind of institutional approval system in place.[73] The RCP reluctantly released a statement in July 1967 recommending that all research hospitals form a Research Ethics Committee (REC). However, there were no clear guidelines about what RECs should or should not approve.[74] The report accomplished very little.[75]

The RCP report appeared even more foolish when Pappworth’s book, Human Guinea Pigs, was published a month later.[76] Pappworth helpfully divided his book into sections, grouping ethical violations by patient class: children, infants, pregnant women, the mentally disabled, prisoners, the dying, medical students, and patients under duress. He described more than 200 unethical studies, about half of which occurred in the UK. He deplored the state of informed consent in many of the studies, stating, “Two essential pieces of information are often deliberately withheld from ‘the consenting volunteer,’ namely, that the procedure is experimental and its consequences are unpredictable.”[77] He also remarked several times that animal experimentation was better regulated than human experimentation in the UK, an unfortunate observation that was supported by solid evidence.[78] At the end of the book, Pappworth acknowledged his aversion to medical codes, but concluded sadly that “the voluntary system of safeguarding patients’ rights has failed and new legislative procedures are absolutely necessary.”[79] He proposed a mandatory system of RECs that would report to the General Medical Council, which would make annual reports to Parliament.[80]

Although the public read and commented on Pappworth’s book, most British medical professionals continued to ignore him. Regulatory change proceeded at a sluggish pace. In 1968, the MRC reluctantly established its own system of RECs after the NIH refused to sponsor an American researcher’s visit to the UK because of the lack of ethical guidance.[81] The policy appeased the American government but remained remarkably unclear to British researchers. When a British hospital asked for additional guidance, the Ministry of Health responded tersely, “Clinical decisions are for the clinician to make: ethical questions are for the profession to consider…it would not be in patients’ interests if hospital authorities were to interfere.”[82] No major changes were made to British medical ethics policy until the late 1980s and 1990s, when the UK made reforms in order to comply with the standards of the European Community (later the European Union).[83] Even when the UK announced the reforms, the report stated that the authors “deliberately refrained from dictating a right solution.”[84] At least on paper, the UK believed that the doctor should have significant latitude in making ethical decisions.

Comparisons

The US and the UK both witnessed medical research scandals in the 1960s, but the American legislative and regulatory response was much stronger than that of the UK. There are several explanations for this disparity.

There were major differences in the medical cultures of the UK and US during the 1960s. The historian David Rothman has argued convincingly that the medical scandals of the 1960s gained such notoriety in part because the doctor-patient relationship was already fraying, particularly in the United States. Physician income grew rapidly during the 1950s and 1960s, which caused resentment and suspicion of doctors.[85] At the same time, doctor house calls decreased, so there was less opportunity for doctors and patients to develop a relationship. These trends were more pronounced in the US and made American doctors more distant from their patients.[86]

As medicine became increasingly specialized, patients saw multiple doctors to receive care, which made medicine more confusing and impersonal to outsiders.[87] Doctors in the UK were highly resistant to specialization, whereas it thrived in the US, in large part because specialization was necessary to support the quantity and quality of American medical research.[88] Because there was more research occurring in the US, there were more opportunities for ethical abuses, and when these abuses occurred, the studies were generally larger and more people were affected.[89] The difference in degree of specialization and medical research highlights a broader difference between British and American medicine: a more entrenched British medical conservatism.[90] British medicine had a longstanding tradition of distrust toward change, both in adopting new medical innovations and in accepting medical regulation.[91] For example, the United States had been regulating drugs and other pharmaceuticals since the early 1900s; it took the United Kingdom until 1968 to do the same. In addition, American medicine may have been unusually amenable to regulation. Even today, American regulation of research, health, drugs, and medical training is some of the strictest in the world.[92]

In terms of domestic affairs, there were some significant differences between the politics of 1960s Britain and America. As discussed, the UK had created its healthcare program in the 1940s, with some unintended consequences. The NHS Act’s hands-off compromise about clinical decision-making restricted the Ministry of Health and Parliament’s ability to react to clinical abuses of power.[93] The US government had no such reservation. American hospitals were more closely tied to the government than British hospitals. By the 1960s, only 7% of American hospitals were for-profit, and the vast majority of hospitals were run using federal or state revenue or by independent charities.[94] In Britain, the majority of hospitals were privately owned throughout the 1960s, until the national government began a concerted effort to take over hospital funding.[95] As a result, the American government may have felt more responsible for the misdeeds of hospitals, and more empowered to make reforms. In Britain, hospitals were independent entities, which may have inhibited legislative control.

In both the UK and US, ethical transgressions disproportionately hurt the disadvantaged. In the UK, it was the “hospital class” of public healthcare recipients who suffered the brunt of ethical breaches. Individuals with private healthcare—usually those with higher incomes—were not at risk of experimentation.[96] In the US, discrimination took on a racial component, with poor, non-white minorities at the mercy of the experimenters. As a result, medical ethics was pulled into the American civil rights activism of the 1960s, which increased the urgency of the issue to policymakers.[97] In some cases, American feminists also became involved, as in the San Antonio contraceptive scandal, and some feminist thinkers used the doctor’s power over patients as a prime example of paternalistic behavior that needed reform.[98] In the United States, medical ethics was seen within a larger framework of racial and gender discrimination, and swept into the broader backlash against authority that characterized the 1960s. In the UK, the Patients Association formed, but it appeared as an isolated movement for a single issue: patients’ rights. The hospital class did not have any clear advocates. If anything, medical experimentation’s association with economic class made the case for regulation less compelling, as some observers blamed the victims of medical experimentation for being too cheap to pay for private healthcare.[99]

Finally, the increase in regulation may have been stronger in the US simply because the American scandals were worse. UK medical transgressions put patients at risk and took inappropriate shortcuts, but few of the ethical misdeeds had the consequences of the Tuskegee syphilis study or the San Antonio contraception scandal.[100] American researchers were in greater need of regulation than their British counterparts, at least in terms of measurable outcomes. All of these factors combined to result in a more substantial increase in American medical regulation, while British medical regulation stayed much the same until later in the 20th century.

Conclusions

At the start of the 1960s, British and American medicine were both subject to little regulatory oversight, and doctors exercised significant autonomy in making clinical decisions. Many doctors viewed patient opinion and consent as inconsequential and unnecessary. After a decade of scandal and disgrace, public trust in doctors declined and, in the United States, doctors were subject to an abundance of new regulations and ethical mandates, especially surrounding informed consent. In the United Kingdom, the public lost trust in their physicians, but British doctors escaped the 1960s relatively free of ethical regulation.

By the end of the 20th century, doctors in the US and UK had lost independence and power, which patients and governments assumed in their stead. These reforms curtailed many of the medical ethical abuses of the 1950s and 1960s. However, regulation of medical ethics remains limited. There is scarce criminal law on medical experimentation in either country. Ironically enough, in the United States, the majority of this criminal law centers on human fetus research;[101] in the United Kingdom, animal experimentation receives the bulk of legal attention.[102] In their present state, ethical regulations serve more as a warning than a punishment. Patients continue to rely on the goodwill of their physicians, a trust that is sometimes betrayed.

Bibliography

Annas, George and Michael Grodin. “Where Do We Go From Here?” In The Nazi Doctors and the Nuremberg Code, edited by George Annas and Michael Grodin, 307-14. Oxford: Oxford University Press, 1992.

Hazelgrove, Jenny. “British Research Ethics After the Second World War: The Controversy at the British Postgraduate Medical School, Hammersmith Hospital.” In Twentieth Century Ethics of Human Subjects Research: Historical Perspectives on Values, Practices, and Regulations, edited by Volker Roelcke and Giovanni Maio, 181-97. Stuttgart: Franz Steiner Verlag, 2004.

Hedgecoe, Adam. “‘A Form of Practical Machinery’: The Origins of Research Ethics Committees in the UK, 1967-1972.” Medical History 53 (2009): 331-50. Accessed October 22, 2015.

Majorana, Ronald. “Thaler Says Poor In City Hospitals Are ‘Guinea Pigs:’ In State Senate Speech He Tells of Experiments on Patients Without Consent.” New York Times, January 11, 1967. Accessed October 27, 2015. http://query.nytimes.com/gst/abstract.html?res=9E0DE5D8133BE63ABC4952DFB766838C679EDE.

Mold, Alex. “Patient Groups and the Construction of the Patient-Consumer in Britain: An Historical Overview.” Journal of Social Policy 39 (2010): 505-21.

Shortt, S.E. “Physicians, Science, and Status: Issues in the Professionalization of Anglo-American Medicine in the Nineteenth Century.” Medical History 27 (1983): 51-68. Accessed November 4, 2015.

Tulchinsky, Theodore and Elena Varavikova. The New Public Health: An Introduction for the 21st Century. New York: Academic Press, 2000.

U.S. Department of Health. National Institutes of Health. Office for Protection from Research Risks. Part 46—Protection of Human Subjects. 1966. https://history.nih.gov/research/downloads/45CFR46.pdf.