The Milgram Obedience Experiment, The Tuskegee Syphilis Study, and The Belmont Report
Assignment – Paper 2 – Ethics & Laws in Psychotherapy: Read the biomedical research study conducted in the Tuskegee Experiment and the behavioral science research known as the Milgram study. 1) In light of the assigned readings, note the protection criteria afforded human participants in biomedical and behavioral science research; 2) provide a summary of the historical context for these protections afforded to human participants in biomedical research and behavioral science research emerging from the Belmont Report. 3) Then clearly identify the ethical and legal issues presented in both studies in relation to the current standards outlined in Title 45.
Professional Ethics and Laws in Behavioral and Biomedical Research
This paper examines the Tuskegee Syphilis Study and the Milgram obedience experiment, two well-known research studies that raised ethical concerns about scientific experimentation. It then summarizes the historical context for the protection criteria afforded to human subjects in light of the Belmont Report, which arose from the ethical malpractice of the Tuskegee Experiment. Finally, the ethical and legal concerns raised by both studies are identified in relation to the Belmont Report and the current Title 45 standards. Before delving into these ethical concerns and protections, however, the Tuskegee Experiment and the Milgram Study themselves are reviewed.
The Milgram Obedience Experiment
In July 1961, during Adolf Eichmann’s trial in Jerusalem, Yale University psychologist Stanley Milgram (1963; 1974) initiated a research study to better understand the psychology of genocide and why a person like Eichmann, and millions of his accomplices, would not defy authority in situations where a clear moral imperative existed. The experiment tested participants’ willingness to obey a figure of authority who instructed them to violate their personal consciences. Participants believed they were taking part in a scientific study examining the impact of punishment on learning and, more specifically, memory, in which they were assigned the role of teacher and instructed to give electric shocks to a learner. Whenever the learner responded incorrectly, the teacher shocked him, gradually increasing the voltage with each incorrect response. Although no actual shocks were delivered, the participants believed they were. As the voltage increased, the learner began to protest by repeatedly banging on the wall separating him from the teacher, until he eventually fell silent, suggesting that he had become unconscious or possibly even died.
The study’s primary purpose was to ascertain whether the participant would continue to follow the experimenter’s instructions and administer increasingly powerful shocks to the learner, or whether the participant would heed the learner’s pleas and ignore the experimenter’s directions. Although the participants were uneasy and showed significant signs of stress, such as sweating, stammering, trembling, lip biting, nervous laughter, or even fainting, and although most participants paused the study at least once to ask whether they should continue, 65 percent of the participants delivered the maximum 450-volt shock, and all of them delivered shocks that inflicted pain (Milgram, 1963), findings that proved more or less consistent when the study was replicated elsewhere (Blass, 1999).
The Tuskegee Syphilis Study
The Tuskegee Experiment, or Tuskegee Syphilis Study, began in 1932 and continued for forty years. This four-decade research project is perhaps the most notorious research experiment in American history (Katz et al., 2006), and its revelations resulted in the pivotal 1979 Belmont Report. The study enrolled around 400 African Americans who had syphilis, as well as a control group of about 200 people who did not have the disease. The study’s purpose was to observe the effects of the disease when left untreated, even though it had become fully treatable during the course of the study due to medical advances. Instead of receiving penicillin, which had become the standard treatment for syphilis by 1947, the participants were given a variety of placebos, ineffective methods, and diagnostic tests to treat what was referred to as “bad blood” (CDC, 2020). By the time the study concluded, nearly 30 participants had died of the illness, 100 had died of illness-related complications, 40 of the participants’ wives had also contracted it, and 19 of the participants’ children were born with it (Kim & Magner, 2018).
The ethical and moral infringements of the Tuskegee Study were reported in the press in 1972, following disclosures by Peter Buxtun, a former employee of the US Public Health Service. In the aftermath, congressional hearings were held, the National Research Act was passed in 1974, and the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, or the Commission, was established. The Commission was tasked with developing an ethics code for human subjects research, eventually resulting in the Belmont Report.
The Belmont Report
While putting together the Belmont Report, the Commission considered four major areas relating to fundamental ethical principles and accompanying guidelines: making a clear distinction between practice and research, evaluating the risks and benefits of human subject research, setting standards for selecting individuals to participate in this type of research, and defining what constitutes informed consent in each research setting (DHEW, 1979).
The Ethical Principles of the Belmont Report
In terms of the first distinction between practice and research, the Belmont Report states that while medical or behavioral practice is concerned with providing “diagnosis, preventive treatment, or therapy” to specific individuals, research in the same field is designed to test “an hypothesis, permit conclusions to be drawn, and thereby to develop or contribute to generalizable knowledge” (DHEW, 1979, p. 3). Additionally, while practice and research are compatible, the general rule is that whenever research is incorporated into an activity, it should be reviewed by others, such as by a review board, to ensure adequate participant protection.
The Belmont Report outlines three critical ethical standards for performing research on human subjects: Respect for Persons, Beneficence, and Justice. First, Respect for Persons means that each individual’s autonomy must be maintained, that individuals must be treated with decency and respect, and that they must be allowed to provide their informed consent. More precisely, informed consent means the participant is “given the opportunity to choose what shall or shall not happen to them” (DHEW, 1979, p. 6). To avoid some of the difficulties associated with ensuring informed consent, the report suggests that consent contain sufficient information about the research project, that the information be conveyed in a manner and context the participant understands, and that consent be given voluntarily, without coercion or undue influence. Second, Beneficence, defined as “the quality or state of doing or producing good” (Merriam-Webster, n.d.-a), requires researchers to avoid harming participants while maximizing the research’s potential benefits. In relation to the ethical principle of Respect for Persons, Beneficence requires researchers to examine both the short-term and long-term ramifications of their work, especially when working with vulnerable populations. Finally, Justice, defined as “the quality of being just, impartial, or fair,” as well as “conformity to truth, fact, or reason” (Merriam-Webster, n.d.-b), ensures that the research procedures are non-exploitative and well-designed.
In light of the two previously described research experiments, it is clear how important it is to exercise justice when selecting research subjects. For example, are they selected for factors closely relevant to the topic under investigation, or are they chosen because they are readily available, in a vulnerable position, or more easily manipulated, conditions that may be more prevalent among certain minority and at-risk groups? Accordingly, whenever publicly funded research results in the development of therapeutic applications, justice requires that they benefit everyone, not just those with the financial means, and that the research not include a disproportionate number of participants who are unlikely to benefit.
Criticisms of the Belmont Report and Feasible Justifications
In an article based on interviews with researchers about their interpretations and critiques of the Belmont Report, Shore (2006) highlighted various concerns about the ethical principles failing to account for the complexity inherent in various areas of research. For instance, Vanderpool (1996, as cited in Shore, 2006) discusses whether the ethical principles are culturally constrained by a Western perspective and thus insufficiently applicable to other cultures. In a study with Navajo key informants, Carrese and Rhodes (1995, as cited in Shore, 2006) described intercultural limitations in Western ethical practice: contrary to Western thinking, disclosing research-related risk may itself pose difficulties, as the Navajo community believes that expressing negative outcomes may actually increase the risk of harm. Additionally, Tai and Lin (2001, as cited in Shore, 2006) criticize the principle of respect for persons for being excessively individualistic in comparison with how certain Asian cultures view autonomy, placing a higher premium on collective responsibility than on individual rights. Rather than focusing exclusively on the ethical principles outlined in the Belmont Report, Shore advocates expanding the ethical examination to include cultural, gender, ethnic, and regional considerations.
Given that the Belmont Report is a condensation of principles and general recommendations, it is natural that it is oversimplified in some areas, as Jonsen and Toulmin (1988) point out. When conducting ethical analysis, it is therefore helpful to distinguish between principles and rules: Beauchamp and Childress (1994) describe principles as broad guidelines that allow considerable latitude for interpretation and serve as a foundation for the development of more specific rules and standards, whereas rules are more narrowly defined and authoritatively declared. Because this balance is delicate, it is critical to recognize that principles are intended to assist in making judgments, such as whether a given conduct is unethical or a research approach morally dubious (Macklin, 1999), and in developing more precise and specialized regulations and policies, which are examined in greater detail below.
Application of the Belmont Report
In applying the Belmont Report, three primary areas are identified: informed consent, risk and benefit assessment, and selection of subjects (DHEW, 1979). Weijer et al. (2012) describe three distinct parts that informed consent must contain in order to be valid. The first part relates to information, asserting that the participant must be well-informed and aware of the decision’s implications. The second part addresses comprehension, meaning that the participant must be cognitively capable of making a choice. Finally, the third part concerns voluntariness, which requires that the participant be in a position to choose freely. In terms of risk and benefit assessment, Weijer et al. recommend that the research intervention adhere to accepted practices in the field of investigation, that participants in the control arm receive effective treatment when necessary, and that the risks associated with data collection be minimized through sound design principles and be proportional to the knowledge acquired. Concerning the safety of vulnerable participants, such as children, disabled individuals, or those in subordinate positions, researchers must determine whether additional safeguards should be implemented during subject selection. For instance, when participants’ freedom of choice is constrained by their position in an organization, recruiting, privacy, and consent processes must be carefully monitored.
To help researchers apply the ethical principles in a real-world setting, Sims (2010) outlines seven practical steps that ensure participants’ rights are respected in a variety of research situations. Although Sims designed these steps for nurse researchers, they are also applicable to researchers in related fields. In summary, in addition to ensuring that an institutional review board approves the study, Sims’ steps require the researcher to obtain informed consent from participants, confirm that each participant fully comprehends the experiment’s objectives and procedures, and verify that the participant was not coerced into participating in any way. Furthermore, the researcher is instructed to watch for any test-related side effects and report them if they occur, to protect the participants’ identities and provide support if they choose not to continue in the study, and, lastly, to ensure that all clinical trial participants receive at least the minimum standard of care for their condition.
Ethical and Legal Issues in Relation to the Milgram and the Tuskegee Experiments
In 1981, the Department of Health and Human Services issued revised regulations for the protection of human subjects, which outline the criteria and mechanisms used by an institutional review board when examining human-participant research (DHHS, 2021-a). In 1991, fourteen additional federal departments and agencies involved in or supporting human-participant research adopted this policy, colloquially referred to as the “Common Rule,” which is now applied by the majority of relevant federal departments and agencies in a revised form called the 2018 Common Rule.
Ethical and Legal Violations in the Milgram Study
Milgram’s experiments were heavily criticized for violating numerous ethical standards, most notably those relating to deception, participant protection, and the right to withdraw. According to the 2018 Common Rule, a participant who will be misled about the circumstances of the study must first consent to the deception by signing an agreement acknowledging the possibility of deception during the study (DHHS, 2021-b). Thus, in terms of deception, the experiment was considered unethical because participants were misled about the study’s purpose, which was to measure obedience to authority rather than memory and learning, and were led to believe they were administering genuine electric shocks, which was not the case.
As a result of this deception, participants experienced extreme emotional distress both during and after the experiment. While many participants expressed gratitude for the opportunity to participate in the study, many also described long-term anxiety after discovering their capacity for committing acts of extreme violence against other humans. The term “inflicted insight” refers to this phenomenon of becoming aware of one’s own shortcomings as a result of participation in a scientific experiment. One way to mitigate the risk of inflicted insight is for the experimenter to properly debrief the participant following the experiment (Levine, 1988), something Milgram failed to do according to Perry (2013). Furthermore, Baumrind (1964) published an article criticizing Milgram for failing to halt the experiment when participants displayed the obvious signs of distress mentioned previously, an article that sparked a comprehensive revision of psychological research’s ethical standards. According to the Common Rule, human subject research should pose no more than minimal risk to participants, with minimal risk meaning that the probability and magnitude of mental or bodily harm should be no greater than in routine psychological, medical, or dental examinations of healthy persons (DHHS, 2021-a). Thus, the experiment was deemed unethical in terms of participant protection because the participants were placed under extreme stress as a result of the harm they believed they were causing others.
In accordance with the Common Rule, the Office for Human Research Protections issued Guidance on Subject Withdrawal from Research, which explains that participants can opt out of a research study at any time (DHHS, 2010), and that informed consent documents should include the necessary information regarding a potential withdrawal. In contrast to these guidelines, when a participant expressed hesitation during the Milgram experiment, the experimenter was instructed to strongly encourage them to continue (Shanab & Yahya, 1978). To summarize, the Milgram experiment was unethical because it deceived participants, subjected them to severe psychological distress, and discouraged withdrawal, practices that a review board would prohibit today.
Ethical and Legal Violations in the Tuskegee Study
The researchers in the Tuskegee syphilis study clearly violated all three of the ethical principles outlined in the Belmont Report. In terms of the first principle, respect for persons, the participants were lied to about their condition rather than being correctly informed about their circumstances and treatment options, as is required when obtaining informed consent. To convince the participants to take part in the research project, they were enticed with low-cost incentives such as free meals and free bus transportation to and from the treatment facility, and were offered placebos as treatment. In relation to the second principle, beneficence, participants were not informed of all potential risks and benefits associated with the procedures they agreed to undergo, which included a painful spinal tap, considerable psychoemotional stress, and repeated needle piercing. Finally, in terms of the third principle, justice, the participants were denied effective treatment for research purposes, which is possibly the study’s gravest violation. Despite numerous attempts to justify not administering penicillin, such as possible adverse reactions, medical doctors are not supposed to provide potentially beneficial treatment to one group while administering riskier treatment to another. Additionally, to achieve societal justice, research subjects must be chosen fairly and randomly, regardless of economic, social, gender, or racial class, a standard the Tuskegee study plainly violated.
To summarize, both the Tuskegee and Milgram studies violated the Common Rule regarding deception and participant protection: participants were misinformed about their condition or treatment, and they faced more than minimal risk from the procedures they underwent. Additionally, racism is evident in the Tuskegee study through its violation of the principle of fair subject selection. As the truth about the Tuskegee study came to light, it also became clear what happens when scientific objectives trump fundamental human rights. The Tuskegee men were viewed as tools rather than autonomous human beings, contradicting the primary goal of medical research, which is to improve human well-being.
Baumrind, D. (1964). Some thoughts on ethics of research: After reading Milgram’s “Behavioral Study of Obedience.” The American Psychologist, 19(6), 421–423. https://doi.org/10.1037/h0040128
Blass, T. (1999). The Milgram Paradigm After 35 Years: Some Things We Now Know About Obedience to Authority. Journal of Applied Social Psychology, 29(5), 955–978. https://doi.org/10.1111/j.1559-1816.1999.tb00134.x
Brandt, A. M. (1978). Racism and research: The case of the Tuskegee Syphilis Study. The Hastings Center Report, 8(6), 21–29.
Centers for Disease Control and Prevention (CDC). (2020, March 2). The Tuskegee Timeline. https://www.cdc.gov/tuskegee/timeline.htm
Department of Health, Education and Welfare (DHEW), National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research (1979, April 18). The Belmont Report. United States Government Printing Office. https://www.hhs.gov/ohrp/sites/default/files/the-belmont-report-508c_FINAL.pdf
Department of Health and Human Services (DHHS), Office for Human Research Protections (2021, March 8 -a). 45 CFR 46. https://www.hhs.gov/ohrp/regulations-and-policy/regulations/45-cfr-46/index.html
Department of Health and Human Services (DHHS), Office for Human Research Protections (2010, September 21). Guidance on Withdrawal of Subjects from Research: Data Retention and Other Related Issues. https://www.hhs.gov/ohrp/sites/default/files/ohrp/policy/subjectwithdrawal.pdf
Department of Health and Human Services (DHHS), Office for Human Research Protections (2021, March 10 -b). 2018 Requirements (2018 Common Rule). https://www.hhs.gov/ohrp/regulations-and-policy/regulations/45-cfr-46/revised-common-rule-regulatory-text/index.html
Katz, R. V., Kegeles, S. S., Kressin, N. R., Green, B. L., Wang, M. Q., James, S. A., Russell, S. L., & Claudio, C. (2006). The Tuskegee Legacy Project: Willingness of minorities to participate in biomedical research. Journal of Health Care for the Poor and Underserved, 17(4), 698–715. https://doi.org/10.1353/hpu.2006.0126
Kim, O. J., & Magner, L. N. (2018). A History of Medicine. Taylor & Francis.
Levine, R. (1988). Ethics and regulation of clinical research. Yale University Press.
Macklin, R. (1999). Ethics in Global Health: Research, Policy and Practice. Oxford University Press.
Merriam-Webster. (n.d.-a). Beneficence. In Merriam-Webster.com dictionary. Retrieved from https://www.merriam-webster.com/dictionary/beneficence
Merriam-Webster. (n.d.-b). Justice. In Merriam-Webster.com dictionary. Retrieved from https://www.merriam-webster.com/dictionary/justice
Milgram, S. (1963). Behavioral study of obedience. The Journal of Abnormal and Social Psychology, 67(4), 371-378. http://dx.doi.org/10.1037/h0040525
Milgram, S. (1974). Obedience to Authority: An Experimental View. Harper and Row.
Perry, G. (2013). Deception and illusion in Milgram’s accounts of the obedience experiments. Theoretical & Applied Ethics, 2(2), 79–92. University of Nebraska Press.
Shanab, M. E., & Yahya, K. A. (1978). A cross-cultural study of obedience. Bulletin of the Psychonomic Society, 11, 267–269. https://doi.org/10.3758/BF03336827
Shore, N. (2006). Re-Conceptualizing the Belmont Report: A Community-Based Participatory Research Perspective. Journal of Community Practice, 14(4), 5–26. https://doi.org/10.1300/J125v14n04_02
Sims J. M. (2010). A brief review of the Belmont report. Dimensions of critical care nursing: DCCN, 29(4), 173–174. https://doi.org/10.1097/DCC.0b013e3181de9ec5
Weijer, C., Grimshaw, J. M., Eccles, M. P., McRae, A. D., White, A., Brehaut, J. C., & Taljaard, M. (2012). The Ottawa statement on the ethical design and conduct of cluster randomized trials. PLoS Medicine, 9(11). https://doi.org/10.1371/journal.pmed.1001346