6.4.2 Beneficence

Beneficence is about understanding and improving the risk/benefit profile of your study, and then deciding if it strikes the right balance.

The Belmont Report argues that the principle of Beneficence is an obligation that researchers have to participants, and that it involves two parts: (1) do not harm and (2) maximize possible benefits and minimize possible harms. The Belmont Report traces the idea of “do not harm” to the Hippocratic tradition in medical ethics, and it can be expressed in a strong form where researchers “should not injure one person regardless of the benefits that might come to others” (Belmont Report 1979). However, the Belmont Report also acknowledges that learning what is beneficial may involve exposing some people to risk. Therefore, the imperative of doing no harm can be in conflict with the imperative to learn, leading researchers to make occasionally difficult decisions about “when it is justifiable to seek certain benefits despite the risks involved, and when the benefits should be foregone because of the risks” (Belmont Report 1979).

In practice, the principle of Beneficence has been interpreted to mean that researchers should undertake two separate processes: a risk/benefit analysis and then a decision about whether the risks and benefits strike an appropriate ethical balance. The first process is largely a technical matter requiring substantive expertise, while the second is largely an ethical matter where substantive expertise may be less valuable, or even detrimental.

A risk/benefit analysis involves both understanding and improving the risks and benefits of a study. Analysis of risk should include two elements: the probability of adverse events and the severity of those events. During this stage, for example, a researcher could adjust the study design to reduce the probability of an adverse event (e.g., screen out participants who are vulnerable) or reduce the severity of an adverse event if it occurs (e.g., make counseling available to participants who request it). Further, during this process researchers need to keep in mind the impact of their work not just on participants, but also on non-participants and social systems. For example, consider the experiment by Restivo and van de Rijt (2012) on the effect of awards on Wikipedia editors (discussed in Chapter 4). In this experiment, the researchers gave awards to some editors whom they considered deserving and then tracked their contributions to Wikipedia compared to a control group of equally deserving editors to whom the researchers did not give an award. In this particular study, the number of awards they gave was small, but if the researchers had flooded Wikipedia with awards, it could have disrupted the community of editors without harming any of them individually. In other words, when doing risk/benefit analysis, you should think about the impacts of your work not just on participants but on the world more broadly.
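
To make these two elements concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the adverse events, probabilities, and severity scores are invented for illustration, and the Belmont Report does not prescribe reducing risk to a single number. The point is only that the two levers named above, lowering a probability and lowering a severity, can be reasoned about separately.

```python
# A purely illustrative risk sketch: the events, probabilities, and severity
# scores are invented, and collapsing risk to one number is a simplification.
adverse_events = [
    # (description, probability of occurring, severity on an arbitrary 0-10 scale)
    ("participant experiences distress", 0.05, 3),
    ("identifying information is leaked", 0.001, 9),
    ("surrounding community is disrupted", 0.01, 6),
]

for description, probability, severity in adverse_events:
    expected_harm = probability * severity
    print(f"{description}: expected harm = {expected_harm:.3f}")

# Design changes map onto the two elements separately: screening out
# vulnerable participants lowers a probability, while offering counseling
# to anyone who requests it lowers a severity.
```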

Next, once the risks have been minimized and the benefits maximized, researchers should assess whether the study strikes a favorable balance. Ethicists do not recommend a simple summation of costs and benefits. In particular, some risks render the research impermissible no matter the benefits (e.g., the Tuskegee Syphilis Study described in the Historical Appendix). Unlike the risk/benefit analysis, which is largely technical, this second step is deeply ethical and may in fact be enriched by people who do not have specific subject-area expertise. In fact, because outsiders often notice different things from insiders, IRBs in the US are required to have at least one non-researcher. In my experience serving on an IRB, these outsiders can be helpful for preventing group-think. So if you are having trouble deciding whether your research project strikes an appropriate risk/benefit balance, don’t just ask your colleagues; try asking some non-researchers. Their answers might surprise you.

Applying the principle of Beneficence to the three examples highlights the fact that there is often substantial uncertainty about risks before a study begins. For example, the researchers did not know the probability or severity of the adverse events that could be caused by their studies. This uncertainty is actually quite common in digital age research, and later in this chapter, I’ll devote an entire section to the challenge of making decisions in the face of uncertainty (Section 6.6.4). However, the principle of Beneficence does suggest some changes that might be made to these studies to improve their risk/benefit balance. For example, in Emotional Contagion, the researchers could have attempted to screen out people under 18 years old and people who might be especially likely to react badly to the treatment. They could have also tried to minimize the number of participants by using efficient statistical methods (as described in detail in Chapter 4). Further, they could have attempted to monitor participants and offer assistance to anyone who appeared to have been harmed. In Taste, Ties, and Time, the researchers could have put extra safeguards in place when they released the data (although their procedures were approved by Harvard’s IRB, which suggests that they were consistent with common practice at that time); I’ll offer some more specific suggestions about data release later in the chapter when I describe informational risk (Section 6.6.2). Finally, in Encore, the researchers could have attempted to minimize the number of risky requests needed to achieve the measurement goals of the project, and they could have excluded participants who are most in danger from repressive governments. Each of these possible changes would introduce trade-offs into the design of these projects, and my goal is not to suggest that these researchers should have made these changes. Rather, my goal is to show the kinds of changes that the principle of Beneficence can suggest.
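
As one concrete illustration of the point about minimizing the number of participants, a standard power calculation shows how the required sample size depends on the expected effect size. The sketch below uses the usual normal-approximation formula for a two-group comparison; the effect sizes plugged in are illustrative assumptions, not estimates from Emotional Contagion or any of the other studies.

```python
from scipy.stats import norm

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate participants needed per arm of a two-group comparison.

    Standard normal-approximation formula:
        n = 2 * (z_{1 - alpha/2} + z_{power})**2 / d**2,
    where d is the standardized (Cohen's d) effect size.
    """
    z_alpha = norm.ppf(1 - alpha / 2)
    z_power = norm.ppf(power)
    return 2 * (z_alpha + z_power) ** 2 / effect_size ** 2

# Smaller expected effects require many more participants, so sharper
# designs and measurements directly reduce the number of people who
# must be exposed to the treatment.
print(round(n_per_group(effect_size=0.5)))  # about 63 per group
print(round(n_per_group(effect_size=0.1)))  # about 1570 per group
```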

Finally, although the digital age has generally made the weighing of risks and benefits more complex, it has actually made it easier for researchers to increase the benefits of their work. In particular, the tools of the digital age greatly facilitate open and reproducible research, where researchers make their research data and code available to other researchers and make their papers available to the public by publishing open access. This change to open and reproducible research, while by no means simple, offers a way for researchers to increase the benefits of their research without exposing participants to any additional risk; data sharing is an exception that will be discussed in detail in the section on informational risk (Section 6.6.2).