Historical appendix

Any discussion of research ethics needs to acknowledge that, in the past, researchers have done awful things in the name of science. One of the most awful was the Tuskegee Syphilis Study. In 1932, researchers from the US Public Health Service (PHS) enrolled about 400 black men infected with syphilis in a study to monitor the effects of the disease. These men were recruited from the area around Tuskegee, Alabama. From the outset the study was non-therapeutic; it was designed merely to document the history of the disease in black males. The participants were deceived about the nature of the study—they were told that it was a study of “bad blood”—and they were offered false and ineffective treatment, even though syphilis is a deadly disease. As the study progressed, safe and effective treatments for syphilis were developed, but the researchers actively intervened to prevent the participants from getting treatment elsewhere. For example, during World War II the research team secured draft deferments for all men in the study in order to prevent them from receiving the treatment they would have gotten had they entered the Armed Forces. Researchers continued to deceive participants and deny them care for 40 years. The study was a 40-year deathwatch.

The Tuskegee Syphilis Study took place against a backdrop of racism and extreme inequality that was common in the southern part of the US at the time. But, over its 40-year history, the study involved dozens of researchers, both black and white. And, in addition to those directly involved, many more must have read one of the 15 reports of the study published in the medical literature (Heller 1972). In the mid-1960s—about 30 years after the study began—a PHS employee named Peter Buxtun began pushing within the PHS to end the study, which he considered morally outrageous. In response to Buxtun, in 1969 the PHS convened a panel to do a complete ethical review of the study. Shockingly, the ethical review panel decided that the researchers should continue to withhold treatment from the infected men. During the deliberations, one member of the panel even remarked: “You will never have another study like this; take advantage of it” (Brandt 1978). The all-white panel, which was mostly made up of doctors, did decide that some form of informed consent should be obtained. But the panel judged the men themselves incapable of providing informed consent because of their age and low level of education. The panel recommended, therefore, that the researchers receive “surrogate informed consent” from local medical officials. So, even after a full ethical review, the withholding of care continued. Eventually, Peter Buxtun took the story to a journalist, and in 1972 Jean Heller wrote a series of newspaper articles that exposed the study to the world. It was only after widespread public outrage that the study was finally ended and care was offered to the men who had survived.

Table 6.4: Partial timeline of the Tuskegee Syphilis Study, adapted from Jones (2011).
| Date | Event |
|---|---|
| 1932 | Approximately 400 men with syphilis are enrolled in the study; they are not informed of the nature of the research |
| 1937-38 | PHS sends mobile treatment units to the area, but treatment is withheld from the men in the study |
| 1942-43 | PHS intervenes to prevent the men from being drafted for WWII, which would have led to them receiving treatment |
| 1950s | Penicillin becomes a widely available and effective treatment for syphilis; the men are still not treated (Brandt 1978) |
| 1969 | PHS convenes an ethical review of the study; the panel recommends that the study continue |
| 1972 | Peter Buxtun, a former PHS employee, tells a reporter about the study, and the press breaks the story |
| 1972 | US Senate holds hearings on human experimentation, including the Tuskegee Study |
| 1973 | The government officially ends the study and authorizes treatment for survivors |
| 1997 | US President Bill Clinton publicly and officially apologizes for the Tuskegee Study |

Victims of this study include not just the 399 men, but also their families: at least 22 wives, 17 children, and 2 grandchildren with syphilis may have contracted the disease as a result of the withholding of treatment (Yoon 1997). Further, the harm caused by the study continued long after it ended. The study—justifiably—decreased the trust that African Americans had in the medical community, an erosion of trust that may have led African Americans to avoid medical care to the detriment of their health (Alsan and Wanamaker 2016). This lack of trust also hindered efforts to treat HIV/AIDS in the 1980s and 90s (Jones 1993, Ch. 14).

Although it is hard to imagine research so horrific happening today, I think there are three important lessons from the Tuskegee Syphilis Study for people conducting social research in the digital age. First, it reminds us that there are some studies that simply should not happen. Second, it shows us that research can harm not just participants, but also their families and entire communities long after the research has been completed. Finally, it shows that researchers can make terrible ethical decisions. In fact, I think it should induce some fear in researchers today that so many people involved in this study made such awful decisions over such a long period of time. And, unfortunately, Tuskegee is by no means unique; there were several other examples of problematic social and medical research during this era (Katz, Capron, and Glass 1972; Emanuel et al. 2008).

In 1974, in response to the Tuskegee Syphilis Study and these other ethical failures by researchers, the US Congress created the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research and tasked it with developing ethical guidelines for research involving human subjects. After four years of meetings at the Belmont Conference Center, the group produced the Belmont Report, a slender but powerful document that has had a tremendous impact on both abstract debates in bioethics and the everyday practice of research.

The Belmont Report has three sections. In the first section—Boundaries Between Practice and Research—the Belmont Report sets out its purview. In particular, it argues for a distinction between research, which seeks generalizable knowledge, and practice, which includes everyday treatment and activities. Further, it argues that the ethical principles of the Belmont Report apply only to research. It has been argued that this distinction between research and practice is one way that the Belmont Report is a poor fit for social research in the digital age (Metcalf and Crawford 2016; boyd 2016).

The second and third parts of the Belmont Report lay out three ethical principles—Respect for Persons; Beneficence; and Justice—and describe how these principles can be applied in research practice. These are the principles that I described in more detail in the chapter.

The Belmont Report sets broad goals, but it is not a document that can be easily used to oversee day-to-day activities. Therefore, the US Government created a set of regulations that are colloquially called the Common Rule (their official name is Title 45 Code of Federal Regulations, Part 46, Subparts A-D) (Porter and Koski 2008). These regulations describe the process for reviewing, approving, and overseeing research, and they are the regulations that Institutional Review Boards (IRBs) are tasked with enforcing. To understand the difference between the Belmont Report and the Common Rule, consider how each discusses informed consent: the Belmont Report describes the philosophical reasons for informed consent and the broad characteristics that would represent true informed consent, while the Common Rule lists the eight required and six optional elements of an informed consent document. By law, the Common Rule governs almost all research that receives funding from the US Government. Further, institutions that receive funding from the US Government typically apply the Common Rule to all research happening at the institution, regardless of the funding source. But the Common Rule does not automatically apply at companies that do not receive research funding from the US Government.

I think that almost all researchers respect the broad goals of ethical research as expressed in the Belmont Report, but there is widespread annoyance with the Common Rule and the process of working with IRBs (Schrag 2010; Schrag 2011; Hoonaard 2011; Klitzman 2015; King and Sands 2015; Schneider 2015). To be clear, those critical of IRBs are not against ethics. Rather, they believe that the current system does not strike an appropriate balance or could better achieve its goals through other methods. This chapter, however, will take the current system of IRBs as given. If you are required to follow the rules of an IRB, then you should follow them. However, I would encourage you to also take a principles-based approach when considering the ethics of your research.

This background very briefly summarizes how we arrived at the rules-based system of IRB review in the United States. When considering the Belmont Report and the Common Rule today, we should remember that they were created in a different era and were—quite sensibly—responding to the problems of that era, in particular breaches in medical ethics during and after the Second World War (Beauchamp 2011).

In addition to the efforts by medical and behavioral scientists to create ethical codes, there were also smaller and less well-known efforts by computer scientists. In fact, the first researchers to run into the ethical challenges created by digital age research were not social scientists; they were computer scientists, specifically researchers in computer security. During the 1990s and 2000s, computer security researchers conducted a number of ethically questionable studies that involved things like taking over botnets and hacking into thousands of computers with weak passwords (Bailey, Dittrich, and Kenneally 2013; Dittrich, Carpenter, and Karir 2015). In response to these studies, the US Government—specifically the Department of Homeland Security—created a blue-ribbon commission to write a guiding ethical framework for research involving information and communication technologies (ICT). The result of this effort was the Menlo Report (Dittrich, Kenneally, and others 2011). Although the concerns of computer security researchers are not exactly the same as those of social researchers, the Menlo Report provides three important lessons for social researchers.

First, the Menlo Report reaffirms the three Belmont principles—Respect for Persons, Beneficence, and Justice—and adds a fourth principle: Respect for Law and Public Interest. I described this fourth principle and how it should be applied to social research in the main chapter (Section 6.4.4).

Second, the Menlo Report calls on researchers to move beyond a narrow definition of “research involving human subjects” from the Belmont Report to a more general notion of “research with human-harming potential.” The limitations of the scope of the Belmont Report are well illustrated by Encore. The IRBs at Princeton and Georgia Tech ruled that Encore was not “research involving human subjects,” and therefore not subject to review under the Common Rule. However, Encore clearly has human-harming potential; at its most extreme, Encore could potentially result in innocent people being jailed by repressive governments. A principles-based approach means that researchers should not hide behind a narrow, legal definition of “research involving human subjects,” even if IRBs allow it. Rather, they should adopt the more general notion of “research with human-harming potential,” and they should subject all of their own research with human-harming potential to ethical consideration.

Third, the Menlo Report calls on researchers to expand the stakeholders that are considered when applying the Belmont principles. As research has moved from a separate sphere of life to something that is more embedded in day-to-day activities, ethical considerations must be expanded beyond just specific research participants to include non-participants and the environment where the research takes place. In other words, the Menlo Report calls for researchers to broaden their ethical field of view beyond just their participants.

This historical appendix provides a very brief review of research ethics in the social and medical sciences, as well as in computer science. For a book-length treatment of research ethics in medical science, see Emanuel et al. (2008) or Beauchamp and Childress (2012).