Obtaining ‘informed consent’ is now firmly established as a preeminent legal and ethical requirement in medical practice and research. The specialized health law, bioethics, and medical literature abound with discussions of its precise meaning and content, explorations of the various challenges to informed consent and, increasingly, empirical studies about informed consent practices. This contemporary literature does not question the value of informed consent, but rather generally focuses on how informed consent can best be obtained and promoted or, in specific circumstances, how it may need to be delayed or replaced by surrogate procedures that respect its underlying value. It is therefore hard to imagine that informed consent has become such a moral and legal mainstay only in recent decades.
This chapter first situates the development of the concept in its historical context. This is followed by a discussion of the normative basis of informed consent in bioethics and law. After identifying some of the key features of informed consent in law and bioethics, the chapter proceeds with an overview of how the law remedies violations of informed consent, focusing in particular on Canadian common law. A brief discussion of the increasingly important statutory and guideline-based governance of informed consent completes this section. Finally, some brief comments are made about contemporary issues, focusing on two areas of research where commentators are increasingly calling for more flexible informed consent standards to facilitate public-interest-oriented research.
Scholars disagree about the extent to which seeking consent based on some level of information-sharing was already recognized in professional practice prior to the twentieth century. The late Jay Katz always maintained that there was little trace of meaningful consent-seeking prior to the second half of the twentieth century (Katz 1984). Ruth R. Faden and Tom L. Beauchamp, who critically analyzed Katz’s claims, historical medical records, and other historical research on consent, suggest that there was some level of consent-seeking in medicine, but agree with Katz that the practice was different from what we now understand as ‘informed consent’ (1986: 56–60). They point out that consent-seeking was driven by a commitment to ‘first, do no harm,’ a key principle of medical ethics, rather than by the more modern and legal conceptualization of informed consent as an expression of self-determination.
The dominant attitude in the medical profession, even among those sensitive to truth-telling, was that patients ought not to be needlessly upset with worrisome news about their medical condition. Early twentieth-century versions of the Hippocratic Oath even explicitly prescribed hiding potentially troubling information from patients. The influential nineteenth-century English physician Thomas Percival stressed in his book Medical Ethics the importance of the ‘delicate sense of veracity, which forms a characteristic excellence of the virtuous man’ (1803: 166), but suggested at the same time that truth-telling yields to the weightier obligation to withhold information that could be harmful to patients.
Jay Katz discussed how early ethical codes enacted by the American Medical Association took over Percival’s ethical stance, often verbatim, and how these views dominated English and American medical ethics until the mid-twentieth century (2004: 1256). Those who supported some level of information-sharing and consent-seeking did so with the idea that providing information offered therapeutic benefits or that deception had a pernicious effect on medical institutions (Beauchamp and Faden 1986: 1233), not out of respect for autonomous decision-making.
Providing information and obtaining some level of agreement prior to intervening appeared more common in some areas of medical practice than in others. As Beauchamp and Faden suggest, consent in the context of surgery, for example, was understandably a somewhat ‘pragmatic response’ since ‘[i]t is at best physically difficult and interpersonally awkward to perform surgery on a patient without obtaining the patient’s permission’ (Beauchamp and Faden 1986: 1233).
In medical research, the 1947 Nuremberg Code is generally seen as the first strong affirmation of the need to obtain consent from research participants. Yet the seeds of the informed consent requirement for research participation were already planted at the end of the nineteenth century in Europe, when critical accounts were published about outrageous research practices on the most vulnerable in society, such as the poor, (juvenile) prostitutes, and children (Katz et al. 1972: 284–92). Critical reports of the deliberate infection of patients with syphilis and gonorrhea in Russia and Germany illustrate not only that research often took place without consent, or with only questionable consent, but also that some people within and outside the medical profession already felt morally troubled by this research practice. In the wake of the public exposure of some of this research, the first guidelines and regulations on medical research were developed. A Prussian regulation of 1900, enacted after the prosecution of a German physician for medical experiments without consent and probably the first regulation of its kind, explicitly required consent prior to experimentation, which had to be based on ‘a proper explanation of the possible negative consequences of the intervention’ (Vollmann and Winau 1996).
The start of the legal doctrine of informed consent in Anglo-American law is associated with the US case of Schloendorff v. Society of New York Hospital (1914) 211 NY 125, in which Justice Cardozo famously stated that ‘[e]very human being of adult years and sound mind has a right to determine what shall be done with his own body; and a surgeon who performs an operation without his patient’s consent commits an assault, for which he is liable in damages’ (p. 129). In Schloendorff, the court found that the removal of a tumor from a woman who had only consented to an examination constituted battery. Although there are earlier cases that acknowledged a duty to obtain consent from patients (Beauchamp and Faden 1986: 116–23), Schloendorff’s association of consent with self-determination and its characterization of surgery without consent as battery set the stage for the twentieth-century legal developments. Cardozo’s formulation became one of the key quotes in later informed consent cases around the world.
The term ‘informed consent’ itself was introduced only much later, in the 1957 case of Salgo v. Leland Stanford Jr University Board of Trustees et al. 154 Cal App 2d 560, in which the court emphasized that consent had to be based on sufficient information to make it ‘intelligent.’ But, as Jay Katz points out, in the very phrase in which the court introduced the term ‘informed consent’ for the first time, it also tried to reconcile this duty to some degree with the traditional practice of medicine, emphasizing that in providing risk information to patients, physicians had to exercise a certain degree of ‘discretion’ (Katz 2004: 1258). This reflected the more traditional stance that information-sharing could be restricted to avoid harm to the patient. The ambiguity about who determines what level of information patients should receive in order to make meaningful decisions would in subsequent years become an important part of the legal debate. With its acceptance of some discretionary departure from information-sharing, the court also indicated that failure to provide informed consent was not necessarily an assault or battery. Later US cases, notably Canterbury v. Spence (1972) 464 F.2d 772, confirmed explicitly that most cases of failure to provide adequate informed consent give rise to liability in negligence, while battery should be reserved for the most extreme departures from the informed consent standard. Other jurisdictions also applied this two-pronged approach to failures of informed consent.
Case law may have influenced professional thinking about informed consent, but a variety of interacting cultural and social changes were also taking place at the same time: legal decisions were influenced by a growing emphasis on individual and consumer rights, while professional medical discourse was affected by awareness of legal and social developments and by concern about litigation. Faden and Beauchamp suggest that ‘case law has been extremely influential,’ not only in coining the term ‘informed consent,’ but also by ‘set[ting] others on the road to conceiving of the social institution of consent rules as a mechanism for the protection of autonomous decisionmaking’ (1986: 142), even if medical professionals took longer to embrace informed consent as a standard practice.
In the research context, the post-World War II (WWII) period is also characterized by a steady development towards the imposition of detailed informed consent requirements, albeit not so much through the courts as through guidelines and regulations. The first influential formulation of the need for informed consent in the international context was, as mentioned before, the Nuremberg Code. The Nuremberg Code consists of ten key ethical principles for research on humans, set out in the 1947 judgment of the US military tribunal in the Nuremberg Doctors’ Trial (Trials of War Criminals before the Nuremberg Military Tribunals under Control Council Law No. 10 1949). As is well known, German doctors were prosecuted in this trial for some of the most horrific experiments on concentration camp prisoners. Experiments ranged from the recreation of battlefield conditions to find survival techniques and treatments for German soldiers, to the testing of poison and other tools of mass murder, to biological warfare experiments, to studies of twins aimed at confirming Nazi racial ideology. What these experiments had in common was a blatant disregard for the wellbeing of human beings and, of course, the absence of any form of consent. Voluntary consent was accordingly emphasized as one of the ten key ethical requirements for medical experimentation.
Some of the people involved in the prosecution became instrumental in developing research ethics standards in the US, in part as a result of their role as expert witnesses. When Andrew Ivy, then Vice President of the University of Illinois, was asked to testify about the Nazi doctors’ disregard for widely accepted ethical principles in research, he was confronted with the fact that there were no explicit ethical standards for research in the US. Some research practices in Allied countries, for example malaria research in Stateville Prison in Illinois (Advisory Committee on Human Radiation Experiments 1995: 272) and British military-funded research on infants suffering from spina bifida (Schmidt 2004: 76–7), arguably shared some characteristics with the Nazi experiments, even if they were not as horrific in nature and were not based on a troubling racial ideology. Recent revelations of US and Pan American Health Organization sponsored syphilis research in Guatemala confirm even more explicitly that seriously problematic research without, or with questionable, informed consent continued to take place around the same time outside of Germany (Reverby 2012). Prior to testifying in Nuremberg, Ivy drafted a set of rules, including explicit informed consent requirements, which were quickly adopted by the American Medical Association, and sections of which were later integrated verbatim into the Nuremberg judgment. When questioned by defense lawyers about the nature of these rules, Ivy alleged that they were a codification of common research practices (Advisory Committee on Human Radiation Experiments 1995).
The Nuremberg trial itself did not appear to have a huge impact outside of Germany. Katz suggests that the Nuremberg Code was seen as a code for ‘barbarians’ and therefore not really relevant elsewhere (Katz 1992). Yet it did prompt reflection among leading figures in the medical profession, and with good reason. After all, pre-WWII Germany had one of the most sophisticated healthcare and medical research sectors. And, somewhat cynically, it had also been one of the few countries, if not the only one, to have introduced regulations for medical research. The 1932 ‘Richtlinien’ (guidelines) for non-therapeutic research, which incongruously remained in place during the war, contained more detailed standards than the Nuremberg Code and included strong consent requirements (Lederer 2007).
Following Nuremberg, pressure mounted worldwide to develop a more comprehensive set of rules for medical experimentation. The World Medical Association (WMA), a medical professional organization set up in the wake of WWII, started deliberating on ethical standards for medical research in 1953 (Lederer 2007: 150–60) and eleven years later, in 1964, adopted the Declaration of Helsinki (DOH). The DOH can be seen as an attempt by the medical community to keep control over research standards within the realm of professional self-regulation (Beauchamp and Faden 1986). But industry interests also influenced its development and approval process. Susan Lederer documents in detail how, in the years prior to its adoption, the WMA became financially dependent on the American pharmaceutical industry (Lederer 2007: 157–70). The DOH was in part aimed at forestalling more drastic and detailed international legal rules on research. While Nuremberg’s key requirement of informed consent found its way into the 1966 International Covenant on Civil and Political Rights (ICCPR), which reaffirms the need for informed consent as a human rights issue, no further firm international legal rules for research were developed.
George Annas has argued that the proposed changes to the US Food and Drug Administration (FDA) regulations, which introduced new clinical trials-based standards for drug regulatory approval, made the adoption of the DOH even more important (Annas 1991). The rules of the DOH were more specific than those of the Nuremberg Code, but they also clearly introduced greater flexibility with respect to informed consent standards. Whereas Nuremberg (and later also the ICCPR) formulates ‘prior informed consent’ as a necessary condition for any form of experimentation, thus prohibiting research on incompetent people, the DOH permitted such research, albeit under specific conditions. The changes to the FDA rules and regulations substantially increased the need for clinical trials, over time even giving rise to an entire new clinical trials industry, and stimulated more widespread change in the procedures for informed consent in clinical research, including regulatory standards for informed consent.
Separate from this development in the context of clinical drug trials, the exposure in the academic literature of unethical research practices also created pressure for reform within academic research more generally. In 1966, Henry Beecher published a seminal article in the New England Journal of Medicine in which he discussed in detail 22 published studies that were, in his view, ethically dubious, most of them also failing to identify clearly that informed consent had been obtained from research subjects (Beecher 1966). Among the studies discussed were two that remain cited as paradigm cases of violations of informed consent in research: the Willowbrook State School study and the Jewish Chronic Disease Hospital study. In the Willowbrook study, parents were asked to consent to the inclusion of their children in an experimental research unit, but with incomplete information about the nature of the study (which involved deliberately infecting children with hepatitis, in part to test the prophylactic effect of gamma globulin) and in a context of pressure, since enrollment in the research unit at one point offered preferential access to the overcrowded school. The Jewish Chronic Disease Hospital study, for which the researchers were subsequently sanctioned, involved the injection of live cancer cells into terminally ill patients without their consent.
While these and other publications, including reports in the popular press, evoked debate, it was the public exposure of the Tuskegee study in the New York Times in 1972 that had the biggest impact and resulted in firmer official initiatives, which lie at the origin of the research ethics review systems that have since mushroomed all over the world (Beauchamp and Faden 1986: 157–67). The Tuskegee study originally started in the 1930s as an observational study comparing the health and mortality rates of 400 African-American men infected with syphilis with those of a control group of 200 uninfected men. None of the research subjects were adequately informed that they were involved in research, and investigators even presented invasive research procedures as treatments. When the study started, penicillin had not yet been introduced as a treatment, the standard treatment for syphilis was both toxic and not very effective, and the disease was not well understood. Yet the ‘observational study’ continued until its public exposure in 1972, and several papers in the medical literature reported on aspects of the study long after effective treatment had become available.
In response to public criticism, the Department of Health, Education, and Welfare set up an ad hoc panel to review the study. This panel emphasized the need for stronger research guidance, including in the area of informed consent. It further recommended the establishment of a national board to look into the development of more appropriate research ethics procedures. Following this recommendation, the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research was set up. In 1978, the Commission issued a seminal report, widely known as the Belmont Report, in which it formulated a set of key ethical principles for research involving humans: respect for persons, beneficence, and justice (National Commission for the Protection of Human Subjects 1979). The report explicitly connected respect for persons with the need to develop informed consent guidelines, reflecting a strong emphasis on autonomy and human dignity as the basis of the informed consent requirement. In the years following the Belmont Report, the Department of Health and Human Services issued more specific regulations, clarifying in much more detail than ever before the various components of informed consent, exceptions to the strict rules, and different procedural requirements, such as the provision of a copy of the informed consent form to the research subject. These regulations moved the obtaining of informed consent in research into a new era, at least with respect to what officially became required in the context of research.
The move towards stricter research ethics review of informed consent forms and more detailed rules does not mean that no further serious violations occurred; in fact, several reports of serious violations of informed consent standards have emerged since these requirements were adopted. New Zealand, for example, was confronted with a Tuskegee-like research scandal involving ‘observational studies’ of women suffering from cervical cancer, which led to a public inquiry (Committee of Inquiry 1988). In the US, President Clinton set up an Advisory Committee in 1994 to investigate postwar radiation research conducted in the context of the Cold War, which revealed numerous instances of research with no, or questionable, informed consent from research subjects (Advisory Committee on Human Radiation Experiments 1995). More recently, Canadian historian Ian Mosby unearthed nutritional research undertaken in the post-WWII period on aboriginal communities and aboriginal children residing in residential schools, which raises troubling questions about failures of, or serious problems with, informed consent and about the exposure of research subjects to harm (Mosby 2013).
In the wake of the US regulatory initiatives mentioned earlier, developments followed internationally. Informed consent became a key requirement in medical research, particularly because of the internationalization of clinical research. International regulatory initiatives in the years following the adoption of the DOH also contributed to this. The regulatory agencies of the US, Europe, and Japan established, for example, the International Conference on Harmonization (ICH), aimed at harmonizing the drug regulatory requirements of the industrialized countries. One of its key initiatives has been the development of the ICH Good Clinical Practice Guidelines (ICH GCP) in the 1990s. These guidelines reflect the key requirements of the US FDA rules and regulations and include detailed informed consent requirements. The ICH GCP has been very influential around the world. Like the DOH, the guidelines have often been integrated as soft-law requirements in the drug regulatory processes of various countries (Hirtle et al. 2000).
The historical overview of the development of informed consent already hinted at different foundations of the concept. As mentioned, the first calls for consent were based on the idea that it was the best way of ensuring good healthcare outcomes, and patient benefit is still put forward as an important rationale for informed consent. Stephen Weir emphasizes how informed consent benefits the patient in several ways: it promotes better patient compliance with, and participation in, treatment; it may help patients to be more realistic about their prognosis and to plan their lives accordingly; and it can strengthen doctor–patient relations, which may be of benefit in future situations (2004: 72–6). Onora O’Neill also emphasizes the importance of safeguarding trust in the doctor and in healthcare institutions as an important element of informed consent (O’Neill 2002). O’Neill’s identification of ‘trust’ as a foundation of informed consent moves informed consent beyond an issue of ‘beneficence.’ Trust in this context refers to the unique nature of the doctor–patient relation and implies the existence of unique moral duties that arise out of that relation and that help promote meaningful autonomy.
Framing informed consent merely as an issue of beneficence leaves much uncertainty about how the benefit of being informed will be weighed against other benefits or harms. As mentioned, those who first acknowledged that physicians should obtain consent recognized a wide range of exceptions to avoid troubling the patient with potentially harmful information. The notion that too much information can cause harm was also invoked as an excuse by researchers in some of the historical studies discussed earlier. In the first court decisions that embraced the informed consent doctrine, a therapeutic exception was still quite prominent. In the 1972 US case of Canterbury v. Spence, for example, the court confirmed the importance of sharing relevant risk information with patients for the purpose of self-determination, but also stressed that it was up to the physician to determine whether some level of non-disclosure was therapeutically required. English courts, while recognizing the importance of information-sharing, left it, until very recently, entirely up to physicians to decide how this must be done and to what extent, according to the standards of the medical profession (Sidaway v. Bethlem Royal Hospital Governors [1985] AC 871). In fact, in Sidaway, the House of Lords explicitly rejected the ‘informed consent’ doctrine espoused by Canterbury v. Spence as not in line with English law, holding instead that the degree of disclosure required to assist a patient in deciding whether or not to undergo a particular medical procedure was primarily a matter of clinical judgment. However, the idea that decisions about the sharing of information should be left entirely to the physician, based on a weighing of the benefits and harms of disclosure, has been rejected in most jurisdictions and is increasingly questioned in England as well.
Benefit also fails to provide a solid basis for informed consent in, for example, non-therapeutic research, where the research aims purely at generating generalizable knowledge. The research subject derives, in that case, no therapeutic benefit from being properly informed about the process, and yet we value informed consent in that situation. One could argue that the failure to provide information risks creating psychological harm to research subjects, or could reduce their and other people’s willingness to participate in future research projects. But, in theory, this type of harm could be avoided by perfect secrecy. Arguing that research subjects are harmed when not informed about research procedures thus requires some consideration of the impact of non-disclosure on the integrity or dignity of the person, and hence some autonomy-related argument.
Autonomy is most frequently identified as the core value underlying informed consent (Grubb 1998: 110; Beauchamp and Faden 1986; McLean 2010: 86–7). Informed consent is seen as a condition for proper self-governance. This idea of informed consent as the basis for self-governance is reflected in different ways in the context of medicine. At the most basic level, as in Cardozo’s famous statement, it means that patients have a right to refuse any invasion of their physical integrity. Informed consent constitutes in that context a waiver, which allows healthcare providers to perform actions, under conditions of explicit agreement, that would otherwise be considered unacceptable. Informed consent in this sense reflects John Stuart Mill’s view of a person’s sovereignty over his or her own body and mind (2003).
A different, Kantian autonomy-based notion of consent is that it is an important tool for rational self-governance: patients are expected to be properly informed so that they can opt for morally principled lives (O’Neill 2002: 73–95). Informed consent is, in this view, connected to the value of respect for persons and their dignity (National Commission for the Protection of Human Subjects 1979; Beauchamp and Faden 1986).
Whether informed consent is connected to autonomy or to some notion of beneficence has implications for how information has to be provided and what level of information has to be shared. If beneficence were the only value underlying informed consent, patients would need only as much information as required to make treatment more effective or the doctor–patient relation more fruitful. Informed consent grounded in autonomy requires more: it requires that patients receive all information they deem relevant to enable a meaningful autonomous choice. Yet, as will be discussed in the section on informed consent law, the practical implementation of that idea is not straightforward. In the healthcare context, we often deal with highly technical and complex information that has to be translated by healthcare professionals so that it can be meaningfully used by patients. Legal systems have embraced different approaches with respect to who should decide what level and type of information must be disclosed to patients.
A detailed discussion of the nature of autonomy falls outside the scope of this chapter, but it is worth pointing out that there is a rich literature questioning the atomistic notion of autonomy that underlies the predominantly liberal legal and bioethics literature on informed consent. Some authors argue for a situated, contextual view of autonomy that calls for a more sophisticated analysis of whether particular decisions contribute to a self-development that is not undermined by personal and contextual vulnerability and duress. They also emphasize that patients are inevitably connected to others and construct and reaffirm their autonomy through relationships with those around them (McLeod and Sherwin 2000). Those who defend these more complex views of autonomy will tend to pay greater attention to the possible impact of contextual factors on individual decision-making. While these views are widely discussed in the literature, and find their way to some degree into ethics codes and guidelines, for example in concepts such as ‘undue inducement’ and ‘vulnerability,’ courts tend to embrace a practical model of autonomy based on a presumption of autonomy when key conditions are met.
It is also worth noting that the overemphasis on autonomy and informed consent is often identified as a Western phenomenon. Many cultures place emphasis on the need to involve families and communities in healthcare decision-making. Yet promoting individual choice through informed consent clearly has strong appeal, even in, for example, European countries which until recently also embraced more familial and communal involvement. Whether it is seen as a form of cultural imperialism or not, informed consent has clearly gained an important status around the world. The globalization of healthcare practices, and in particular the growing number of international clinical trials and, related to that, the influence of clearly Western-dominated international research ethics standards such as the Declaration of Helsinki, have undoubtedly contributed to its growing status.
It is difficult to provide an all-encompassing legal definition of informed consent as it is understood today. One way is to define it as an authorization that healthcare providers have to obtain from patients or research subjects, prior to healthcare interventions or enrollment in research procedures, based on sufficient information about the nature of the procedures, possible alternatives, and the risks and potential benefits of the various options. Yet the concept of informed consent is also widely applied in the context of health information, where consent does not lead to any concrete intervention or physical participation in research. In that context, it refers to the agreement to allow confidential information to be used for specific purposes. Informed consent for the use of sensitive health information for research purposes, involving the mutual signing of ‘informed consent forms’ by the research subject and the researcher, can be seen both as an agreement to allow the use of information and as a pledge by the researcher to keep the information confidential (Lemmens et al. 2013).
Informed consent is more often described by reference to its core components: disclosure of information; comprehension; voluntariness; competency; and agreement to the proposed procedure or intervention. Case law and various statutes dealing with healthcare consent have identified the key elements to be disclosed. Healthcare providers have to provide information about the nature of the procedure, possible alternative options, the risks and benefits of the procedure and of the alternative options, and the consequences of not undergoing the procedure (Health Care Consent Act 1996).
The need for comprehension seems obvious. If the goal of information-sharing is to enable people to make autonomous decisions, they have to grasp what they are being told and understand the consent forms they sign. In reality, though, the comprehension component is not always easy to fulfill. Technical information about the procedures involved and the nature of the risks can be hard to translate into accessible language. This is particularly challenging in the context of research, where complex procedures such as randomization, placebo controls, and stopping rules have to be explained, and where there is also inherently more uncertainty about the comparative risks and potential benefits. Funding and regulatory agencies often provide detailed guidelines about how to make information accessible. These guidelines tend to focus on consent forms, emphasizing that they need to be adjusted to the target population, avoid legalistic and highly technical language, and use language of ‘a grade 6 to 8 reading level’ (Health Canada 2010). The informed consent forms can thus contribute to meaningful comprehension. Consent forms also serve a legal purpose (i.e. they provide some level of evidence about the informed consent process). The formalization of consent in the signing of consent forms approved by hospital legal departments or by research ethics committees can, however, also obfuscate meaningful understanding and is often given disproportionate weight. Real informed consent requires more than the signing of a consent form. Healthcare providers and researchers have to ensure through direct communication that the information is understood by patients or research subjects. In this context, Jay Katz’s view of informed consent as part of a process of ‘shared decision-making’ should be kept in mind (Katz 1984). Physicians and researchers have to engage in dialogue to address all relevant informational needs.
Voluntariness refers to the need to ensure that consent is obtained without influences that undermine autonomous choice. Clearly, not all influences do so. Influences can be explicit or implicit, and external or internal. Coercion, undue influence, and fraud or misrepresentation are the factors which most commonly affect voluntariness. The term ‘coercion’ tends to be too easily used for all situations where people feel some form of pressure to consent. According to the Belmont Report (National Commission for the Protection of Human Subjects 1979), ‘[c]oercion occurs when an overt threat of harm is intentionally presented by one person to another in order to obtain compliance.’ There is voluminous literature on coercion in healthcare, particularly in the context of research ethics, where Alan Wertheimer’s analysis of coercion has been particularly influential (Hawkins and Emanuel 2005). Wertheimer suggests that coercion only exists when the refusal to comply with the threat would make a person worse off, and that it is not present when resisting the threat would leave the person in the same position (Wertheimer 1987). For example, a physician would coerce a patient if he or she indicates that refusing to participate in research would result in withdrawal of all forms of medical care. However, in my view, ‘coercion’ could also be used to characterize an offer that is intentionally made to a person who is extremely vulnerable due to distress, need, or poverty, and who would, under the most basically fair conditions, never accept such an offer. In those circumstances of particular vulnerability, the recipients of the offer may feel that they have no other option but to accept.
Undue influence is seen as affecting voluntariness more subtly than coercion does. The concept has been particularly used in research ethics. Clearly, not all forms of influence are undue, since our decisions are inevitably shaped by various influences. Undue influence, according to the US Office for Human Research Protections, ‘occurs through an offer of an excessive or inappropriate reward … in order to ensure compliance’ (Department of Health and Human Services 2013). But when is a reward excessive or inappropriate? The regulations provide no clear answer to that question. It has been suggested that influence is undue when it makes people act ‘against their better judgment’ – for example when payments are so structured that they push people to continue their participation in a clinical trial when they experience side effects and would normally want to withdraw, or when it leads to distortions of the risks and benefits of participation in research (Halpern et al. 2004). Rewards may also be seen as ‘undue’ when they risk undermining the core moral value attached to an activity. Large payments to research subjects can be seen as undermining what is often characterized as the altruistic nature of research participation, or research participation as a ‘humanitarian enterprise’ (Lemmens and Elliott 2001: 52), particularly in the context of research involving patients. In that context, undue influence reflects a concern about commodification. Commodification concerns are also widely debated in the context of organ transplants and assisted human reproduction, where commentators have expressed concern about the use of financial incentives to push people to sell their organs or ova (Radin 1996; Cohen 2002). In particular, in situations of extreme poverty, questions are asked about the possibility of meaningful consent.
Some are critical of the use of concepts such as undue inducement and coercion in this context, pointing out that this raises concerns about ‘exploitation’ (Hawkins and Emanuel 2005). Yet it seems artificial to completely separate these different concepts as they are interrelated.
The use of the terms coercion and undue influence in research ethics should be distinguished from their use in legal contexts. In research ethics, reflections on what constitutes coercion or undue influence should make researchers and research ethics committees pause to reevaluate the informed consent practices that will be used in the future. Courts, in contrast, have to rule on whether informed consent was present in the past. The concepts of coercion and undue influence are only exceptionally used in legal cases about informed consent. The consequences of ruling that there was no consent are serious for the person who performed a medical procedure, and courts tend to be reluctant to rule that no consent occurred. In law, the two terms are also not always clearly separated (Norberg v. Wynrib [1992] 2 SCR 226, p. 247). Coercion, the intentional use of psychological pressure, physical force, or threat, is more clearly deemed to vitiate consent. Undue influence is commonly used in testamentary law, where several conditions have been identified that relate to the vulnerability of the person, the relation of dependency, and the likelihood that the pressure may have had an effect. As Grubb puts it, in discussing the leading English case Re T (Adult: Refusal of Medical Treatment) [1992] 4 All ER 649, ‘“[u]ndue influence” is clearly a more insidious and subtle process than overt pressure and, therefore, calls for a closer examination of the facts’ (Grubb 1998: 178). In Norberg v. Wynrib (1992), in an opinion supported by three of the six judges, the Supreme Court of Canada applied the contract law-based ‘doctrines of duress, undue influence, and unconscionability [that] have arisen to protect the vulnerable when they are in a relationship of unequal power’ to determine whether a drug-addicted patient could genuinely consent to sexual activity with a doctor who prescribed opioids in exchange.
The Supreme Court held the doctor liable for battery, concluding the consent was not valid as a result of the patient’s vulnerability and her dependency in the context of the unequal power relation.
Fraud and misrepresentation, on the other hand, are not often discussed in the bioethics literature on consent, but arise frequently in court. The reason is simple: research ethics committees do not speculate that an informed consent protocol will be fraudulently applied, and it seems clear, from an ethical perspective, that physicians ought not to fraudulently misrepresent information. In contrast, after problems occur, patients or research subjects may claim in court that their consent was affected by fraud. The Supreme Court of Canada indicated in Reibl v. Hughes [1980] 2 SCR 880 that only fraud or misrepresentation invalidates consent. The consequences of fraud or misrepresentation depend on their seriousness. The key concern in English common law is whether the patient understood the nature and purpose of the procedure (Grubb 1998: 154–5). The Ontario Court of Appeal more recently found in Gerula v. Flores (1995) 126 DLR (4th) 506 that when a surgeon first operated on the wrong spinal vertebrae and then misrepresented why a new operation was needed, consent was absent for both procedures. But an alleged misrepresentation related to collateral issues, such as a physician’s non-disclosure of his own epilepsy in the context of a surgery, is not considered fraud or misrepresentation (Halkyard v. Mathew  WWR 26). As will be discussed further, whether the fraud or misrepresentation vitiates consent impacts the type of legal action that can be undertaken by the patient or research participant.
Competency is another key condition for informed consent. It refers to a person’s ability to understand the relevant information and to appreciate the consequences of accepting or rejecting a treatment option or research participation. Competency is presumed in law in the case of adults (see Chapter 7 on mental health for a detailed discussion of competency issues). Questions of competency arise in the context of mental health, and healthcare or research involving children, to which common law jurisdictions generally apply the mature minor rule, which allows children and adolescents to provide consent when they are deemed mature enough to understand and appreciate the consequences of doing so. Competency is connected to comprehension: competency is treatment specific, so that a person with borderline competency can be competent to make one decision, but incompetent to decide in more complex situations.
The act of consenting can in many circumstances be explicit or implicit, verbal or written. Consent does not always have to be formalized and can be expressed in different ways. Written consent provides stronger – yet not conclusive – evidence that consent has been obtained and that specific information has been shared. Regulations often prescribe that a written consent form must be used, particularly in the research context. Written consent forms are also used for complex medical procedures, such as invasive surgeries, that involve more elevated levels of risk. Yet courts can still rule that notwithstanding the signed consent form, there was no informed consent (Tremblay v. McLauchlan, 2001 BCCA 444). Conversely, the absence of a written consent form should not be equated with the absence of consent. Rather, when regulations require written consent as a norm, they will often specify exceptions where written consent may be impractical or impossible to obtain.
It is sometimes suggested that consent can be ‘presumed,’ for example in an emergency context. However, using the term ‘consent’ for those situations seems questionable and unnecessary, as an emergency exception to informed consent, based on necessity, is widely accepted (Peppin 2011: 158–9). Obviously, informed consent procedures should be adjusted to the circumstances. Specific situations may require shorter informational exchanges, and in exceptional circumstances it will be impossible to obtain even the most minimal form of consent prior to a healthcare intervention. Providing information after the fact should then not be seen as obtaining ‘informed consent’ but as a proper debriefing in line with the standard of care. The exchange of information at that point may also be necessary for follow-up interventions.
A final note is warranted here about another exception that is often mentioned and has strong historical roots: therapeutic privilege. As mentioned before, physicians were traditionally given much discretion about hiding information that could distress the patient. While the exception is still often mentioned, it is doubtful whether physicians could still invoke it to justify a failure to disclose relevant information. Physicians have a duty of care in how they transfer information, and a patient may also express a desire not to receive further information, but the concept of therapeutic privilege clearly no longer allows physicians to make their own judgment about what to tell patients. In some situations, the inability of a patient to comprehend and deal with information may be associated with competency issues. In that case, other protective measures apply based on substitute decision-making (see Chapter 7 on mental health).
Different normative foundations of informed consent may result in different interpretations of its various components in law. Yet, even though courts and legislators often ground the need for informed consent in autonomy and the importance of self-determination, they do not engage in a detailed philosophical discussion of what type of autonomy or other ethical norm is the real basis for informed consent, or how various challenges to autonomy impact on meaningful consent. The legal notion of informed consent has its own meaning. For Jessica W. Berg and colleagues, the idea of consent as ‘autonomous authorization’ or as ‘shared decision-making’ and ‘the legal and institutional rules and requirements the fulfillment of which constitutes the social practice of informed consent’ are different but interrelated notions (2001: 16–17). The legal concept of informed consent is driven in part by pragmatic concerns about clarity, feasibility, and certainty. The legal rules surrounding the institutional practice of informed consent will often explicitly refer to the ideals discussed earlier. Common law and civil law jurisdictions generally start from the premise that a person has the right to make his or her own healthcare decisions, and that some level of information-sharing is needed to enable this (McLean 2010). But legal and institutional rules aim to clarify what level of information is to be provided; how it is to be provided (e.g. the use of informed consent forms); who determines what constitutes proper information-sharing; and what the consequences are of violating these rules. These rules vary among jurisdictions. The discussion here aims to offer a picture of some of the key legal concepts, questions, and tests that have emerged, with particular attention to Canadian common law. This will be followed by a brief discussion of the regulation of informed consent in various jurisdictions.
When patients (or research subjects) feel that they have not been properly informed about a medical procedure or about the research project they were enrolled in and they feel harmed, what type of legal action is available to them at common law? Two common law actions can be used, depending on the nature of the violation: battery or negligence. Battery involves the intentional touching of a person without his or her consent. The action in battery, a form of trespass on the person, seemed a logical tool for courts, once it became accepted that the right to self-determination required medical professionals to obtain consent from patients prior to any physical interference with their body. As Katz points out (1997), battery offers a more robust protection of the concept of self-determination underlying informed consent: the mere fact of bodily intrusion suffices for a claim of battery. No physical or psychological harm has to be proven as the harm resides in the violation of the dignitary interest people have in the integrity of their body (Katz 1997: 165). Battery also offers the advantage that it is up to the defendant to provide evidence of consent (Peppin 2011: 162). Nonetheless, courts became hesitant to allow actions in battery, fearing the use of the action every time that there was a potential problem with the consent given for a medical procedure. Consequently, battery became restricted to cases where there was no disclosure as to the nature of the procedure, notably where patients consented to one operation and surgeons performed another (Mulloy v. Hop Sang  1 WWR 741; Marshall v. Curry (1933) 3 DLR 260; Murray v. McMurchy  2 DLR 442). Katz locates this development in the judicial and societal deference to the medical profession.
For McLean, other factors favored the introduction of a remedy based on negligence: some healthcare practices do not involve physical touching, such as the prescription of medication; and some failures to obtain informed consent, such as the failure to discuss alternatives, are hard to qualify as battery (2010: 71).
Other Canadian cases have broadened the scope of battery: the earlier mentioned Ontario case of Gerula v. Flores where a surgeon performed a second operation under a false pretext to correct an earlier mistake; Malette v. Schulman et al. (1990) 72 OR (2d) 417, where a doctor provided an emergency blood transfusion to a Jehovah’s Witness even though he was aware of a prior expressed wish not to receive such transfusion; Norberg v. Wynrib, discussed above; Nightingale v. Kaplovitch  OJ No 585, where the doctor continued an examination of a patient’s colon after being asked to stop; and Toews v. Weisner and South Fraser Health Region, 2001 BCSC 15, involving the vaccination of a minor without parental consent, even though the nurse vaccinating the child believed the parents had consented.
While battery protects the right of patients to be free from physical intrusion in extreme cases of failure to consent, negligence protects more widely the right of patients to have all relevant information before making healthcare decisions. Negligence has become the more common claim in cases of failure to provide adequate informed consent. To establish negligence, a plaintiff must overcome several hurdles associated with traditional tort claims. Patients must establish a duty of care, a breach of that duty, harm suffered, and causation between the breach and the harm. In the context of informational negligence, there are three key issues: the content of the informational duty of care, and particularly how the standard of care will be determined; the nature of the harm suffered; and causality between the failure to inform and the harm suffered.
Courts have developed different standards to determine what constitutes sufficient information. As pointed out before, English common law has been more reluctant than other jurisdictions to fully embrace the doctrine of informed consent. Even though English courts have long recognized the importance of providing information to enable patients to make self-regarding decisions, how that information is to be provided and how much information should be shared is still largely measured according to the so-called professional standard (McLean 2010: 73–6). The 1985 Sidaway case and Gold v. Haringey Health Authority [1988] QB 481 both confirmed that the traditional test for negligence in medical practice from Bolam v. Friern Hospital Management Committee [1957] 1 WLR 582 also applied in cases about information-sharing. The duty of disclosure is seen as ‘primarily … a matter of clinical judgment’ (Sidaway v. Bethlem Royal Hospital Governors et al.), to be determined on the basis of what a reasonable physician would have done (i.e. disclosed) in those circumstances. Sheila McLean notes, however, that more recent cases, while not rejecting outright the Bolam test, move prudently away from mere reliance on professional judgment (2010: 79–81). In Chester v. Afshar [2004] UKHL 41, which dealt with a question of causation (see below), Lord Walker of Gestingthorpe emphasized that ‘autonomy has been more and more widely recognized’ in the time that elapsed since Sidaway (1985: 92). Lord Steyn stated even more explicitly that the traditional physician-centered approach is to be abandoned: ‘[i]n modern law medical paternalism no longer rules and a patient has a prima facie right to be informed’ (Chester v. Afshar 2004, p. 16). Without explicitly abandoning Sidaway, the English Lords promoted patient autonomy through a remarkable interpretation of the causation test.
Canadian, Australian, and US courts have long since adopted a test that appears more in line with the concept of self-determination underlying informed consent. As mentioned earlier, the US Canterbury case emphasized in 1972 that all information that is material to a patient’s decision should be disclosed, thus recognizing that information-sharing should be approached from the perspective of the patient’s informational needs. In Canada, a pair of 1980 Supreme Court cases, Hopp v. Lepp [1980] 2 SCR 192 and Reibl v. Hughes, rejected the professional standard and emphasized that the duty of disclosure has to be assessed from what a reasonable patient in the same position would want to know in order to make a properly informed decision. Physicians should not only inform patients about the nature of the procedure, but also about ‘any material risk and any special or unusual risks’ (Hopp v. Lepp, p. 210). In addition, the duty also extends to elements that ‘the doctor knows or should know that the particular patient deems relevant to a decision’ (Reibl v. Hughes, p. 894). The Reibl case is particularly interesting because the court recognized that information-sharing is not just about transferring medical evidence, but also about what particular risks mean to the person because of his or her particular circumstances. Mr Reibl had testified that had he known about a particular risk factor, he would have postponed the elective surgery until his lifetime retirement pension started and he would have been covered by disability insurance. Also noteworthy is the court’s emphasis that the surgeon should have made sure that he was understood, considering the patient’s difficulty with the English language (Reibl v. Hughes, p. 927). The Supreme Court thus emphasized that informed consent requires attentive interaction and not just the unilateral transfer of information.
Australian decisions have also moved away from the English doctrine and towards the same recognition of a duty to disclose not only the type of material information patients generally need to make informed decisions, but also information that a doctor knew or should have known specific patients needed in order to make healthcare-related decisions (Rogers v. Whitaker (1992) 175 CLR 479; Chappel v. Hart (1998) 156 ALR 517). The information-sharing itself should be done according to proper professional standards.
Numerous decisions explore what type of information should have been provided pursuant to the patient-centered test. Even though courts will decide on the basis of the particular circumstances of each case, some illustrations are useful to show the possible consequences of the reasonable patient standard. Some precedents indicate that patients can reasonably expect more detailed risk information when medical procedures are elective, as in plastic surgery, since a detailed risk assessment is more important there than for medically necessary procedures (Peppin 2011: 168). In the same vein, research subjects can be expected to prefer more detailed risk information, particularly when they participate as healthy volunteers in a research project. Case law in the context of research is rare, but two Canadian precedents suggest that the disclosure obligation of researchers ‘is at least as great as, if not greater than, the duty owed by the ordinary physician or surgeon to his patient’ (Halushka v. University of Saskatchewan (1965) 53 DLR (2d) 436, pp. 443–4). The 1965 Halushka case from Saskatchewan was cited with approval in the 1989 case of Weiss v. Solomon [1989] AQ no. 312, decided under Quebec civil law. Michael Hadskis points out, however, that Halushka was decided prior to the affirmation of the patient-centered disclosure test in 1980, and can thus be situated as a reaction against the then-prevailing professional practice standard, which seems indeed even less appropriate in the context of non-therapeutic research on healthy subjects (2011: 471–2). He suggests that it was more important for the court in that context to indicate how the research standard was ‘different.’ It seems indeed logical that research subjects would generally want to engage in a fuller risk assessment in those circumstances.
But this more detailed risk assessment would now also fit under the ‘reasonable patient in the same circumstances’ standard, making this simply an application of the same test. The question of whether there is a higher standard of disclosure in the context of research, or whether this is simply an application of the reasonable person standard, is important in the context of new forms of research, particularly in the context of biobank research, where the highest possible standard of disclosure would be hard if not impossible to respect. A reasonable person standard, on the other hand, could make it possible to look at what people in similar circumstances would usually expect to receive as information.
Courts have emphasized that all reasonable alternatives, with their specific risks and comparative benefits, should be explained (Van Dyke v. Grey Bruce Regional Health Centre  197 OAC 336; Van Mol (Guardian ad litem of) v. Ashmore, 1999 BCCA 6). But what are ‘reasonable alternative options’? Indirectly, professional standard components emerge again when it comes to determining the duty of physicians to provide information about alternatives outside of ‘mainstream’ medicine. A growing number of patients are interested in so-called complementary or alternative medicine, which includes a wide gamut of practices, some of which run counter to the standards of the medical profession and are firmly rejected by mainstream medicine (see Chapter 23 on traditional, complementary, and alternative medicine). Other alternative practices, such as acupuncture or naturopathy, have gained some level of acceptance in the context of medicine. Should physicians disclose these ‘alternatives’ and discuss their risks and potential benefits? It may depend on the level of professional and societal support for the practice, and whether physicians could be reasonably expected to have known about the patient’s interest, for example because a patient asked questions that hinted at his or her interest.
Another important hurdle to surmount in common law liability for negligence in informed consent is the causal link between the breach of the informational duty and the harm. The requirement to prove harm and causation makes it much harder to obtain compensation for negligence than when courts find battery in cases where consent was absent or fraudulently obtained. In the latter cases, no questions of causation arise as the harm is the intentional violation of the person’s physical integrity.
As discussed, the doctrine of negligence provides patients with a remedy when physicians give incomplete or inadequate information, with less serious implications for physicians than battery-based claims. In negligence, patients still have to show that the procedure they inadequately consented to harmed them, and that the harm would have been avoided had they been properly informed (Peppin 2011). This means that two elements have to be proven: ‘injury causation’ and ‘decision causation’ (Tenenbaum 2012). In information negligence cases, a patient has to show that, had proper information been given, he or she would have made a different decision and the harm would thus not have occurred. This may seem relatively straightforward when the procedure is elective (e.g. plastic surgery), a serious risk factor was not disclosed, and the procedure clearly resulted in harm that could have been avoided by not having the procedure at all.
Proving on a balance of probabilities that harm was caused by a particular procedure or healthcare product (the injury-causation element) tends to be difficult in the context of healthcare because it is often unclear whether the harm was a result of the condition being treated or a result of the procedure or healthcare product. The provision of healthcare involves a multitude of interactions by many different people and complex chains of causation that have to be disentangled in court.
It is also difficult to reconstruct what patients would have done had they been properly informed, particularly when we are dealing with complex risk/benefit analyses. Risk assessment has inherently subjective components (Waring and Lemmens 2004). People’s perception of risk inevitably changes in light of personal experiences, and people are obviously much more inclined to see themselves as risk averse once they have suffered harm. How do the courts establish – or better, hypothetically reconstruct – what patients would have done had they been properly informed? Different tests have been used by courts.
Some New Zealand and Australian courts have employed a so-called subjective standard to determine decision-causation (Smith v. Auckland Hospital Board  NZLR 191; Ellis v. Wallsend District Hospital  2 Med LR 103). The Canadian Supreme Court, while rejecting this approach in ordinary medical malpractice cases, has explicitly endorsed this subjective test in the context of the relation between manufacturers of healthcare products and patients, specifically in Hollis v. Dow Corning Corporation [1995] 4 SCR 634. Hollis addressed the manufacturer’s failure to inform patients of risks of rupture of breast implants. The Supreme Court of Canada explained that:
In the case of a manufacturer … there is a greater likelihood that the value of a product will be overemphasized and the risk underemphasized. It is, therefore, highly desirable from a policy perspective to hold the manufacturer to a strict standard of warning consumers of dangerous side effects to these products. (Hollis v. Dow Corning Corporation, para. 46)
The Court emphasized the power imbalance between manufacturers, patients, and doctors with respect to the resources and the available information. One of the confounding factors was that the manufacturer had invoked the learned intermediary rule, which could have interrupted the chain of causation. The ‘learned intermediary rule’ refers to the role of physicians in providing detailed information to patients with respect to prescription drugs. The Court emphasized that this rule would only have applied if the company had fully informed the physician with clear, complete, and up-to-date information.
This subjective standard has frequently been described as ‘open to the abuse of hindsight’ (Mason and Laurie 2006: 408; Grubb 1998: 176), even by Canadian courts. This concern was acknowledged in Hollis, but Justice La Forest simply stated (for a majority of five judges) that this ‘can be adequately addressed at the trial level through cross-examination and through a proper weighing by the trial judge of the relevant testimony’ (para. 46). This statement that cross-examination and the usual factual determination at the trial level can address problems of hindsight is significant. It is unclear why this would not be possible in standard information negligence cases. The rejection of the subjective test in those cases is thus as much a policy decision, intended to facilitate judicial decision-making and to limit the liability of physicians, as a matter of principle.
Nearly all US state statutes governing malpractice and most US case law embrace an objective standard of causation (Tenenbaum 2012: 709–20), arguably for the same policy reason. Under an objective standard, the question is not what the particular patient would have done, but whether the hypothetical reasonable patient would not have consented. The use of an objective standard has been criticized for conflicting with the underlying foundation of informed consent: respect for patient autonomy (Tenenbaum 2012: 718–19). Indeed, according to an objective standard, there is no inquiry into the values, preferences, or personal sensitivities of patients that are an essential component of individual decision-making. The objective standard makes it very difficult to obtain damages on the basis of informed consent claims alone (i.e. if there is no concurrent negligent practice) since it is easy to claim that the reasonable person in need of care would have consented to a procedure offered by a healthcare provider. In a way, the objective causality standard facilitates the type of medical paternalism that was diminished through the rejection of the professional disclosure standard.
As mentioned, the Canadian Supreme Court also explicitly rejected the subjective test in standard information negligence cases because it would ‘put a premium on hindsight’ (Reibl v. Hughes, p. 898) and could threaten the medical system with an onslaught of ‘liability claims from patients influenced by unreasonable fears and beliefs’ (Arndt v. Smith [1997] 2 SCR 539, para. 15). Yet, recognizing the problems of a pure ‘reasonable person’ standard, it introduced a middle-of-the-road, or modified objective, test. Under this test, the court asks whether the particular patient, appropriately informed of the risks and in those particular circumstances, would have accepted the treatment. The ‘objective’ component of the test lies in the requirement that the particular position of the patient be ‘reasonable’. In the case of Mr Reibl, as pointed out earlier, the court took into consideration that he was close to retirement and could reasonably have decided to wait for the surgery. This appears somewhat similar to a ‘hybrid test’ that has been used in a few English cases (Chatterton v. Gerson [1981] 1 All ER 257) and which Grubb describes as a subjective test, followed by an objective appraisal (1998: 175).
In the 1997 case of Arndt v. Smith, a split Canadian Supreme Court confirmed the modified objective test but, according to Peppin, ‘made the test somewhat more subjective both in emphasizing the Reibl elements of subjectivity and in making the “in the shoes of” test a more interior and particularized one’ (2011: 182). A minority of justices (three out of nine) even argued that the subjective test should have been used. The case is an interesting illustration of the causality question. In Arndt, the court dealt with the claim that Ms Arndt, who developed chicken pox during pregnancy, was not adequately informed by her family physician about the relatively remote risk of serious implications for the foetus. She gave birth to a seriously disabled child. The trial judge made reference to Ms Arndt’s particular scepticism towards mainstream medicine, her interest in having a midwife, and her rejection of an ultrasound, and concluded from these factors that she would not have opted for an abortion (which, at that time in Canada, was still subject to procedural limitations). Based on the factors mentioned in the lower court, the majority concluded that the reasonable person in the same circumstances as Ms Arndt, with her type of beliefs and expectations, would not have opted for an abortion. Commentators have rightly criticized the second-guessing reflected in the reasoning of the majority of the court with respect to such an intimate area of personal life, based on stereotypical presumptions about how people’s beliefs about traditional medical procedures must also be reflected in how they feel about abortion (Peppin 2011: 179–84; Nelson and Caulfield 1999).
A final note in the causation context: English law has taken a most peculiar approach to causation in the earlier mentioned case of Chester v. Afshar (Mason and Laurie 2006: 409–11). In this case, the House of Lords had to decide on the case of a woman, Ms Chester, who underwent lumbar surgery and suffered a rare serious adverse event. The trial judge found that the neurosurgeon, Mr Afshar, had failed in his duty to warn Ms Chester of this small, 1 to 2 per cent risk. The question to be decided by the House of Lords was one of causation between the failure to warn and the harm. Interestingly, there was no finding that Ms Chester would not have undergone the surgery with the particular surgeon. Three of the Lords concluded that to establish causation, it was sufficient to find that had she been warned of the risk, she would not have decided to undergo the surgery ‘at the time and place that she did’ (Chester v. Afshar, p. 8), and that it was thus very unlikely that the same adverse event would have happened. They referred to the Australian case of Chappel v. Hart, which shared some features with this case, but where the rationale was that the patient would have opted to have the operation performed by a more skilled surgeon, thus changing perhaps more substantially the risk profile of the operation.
One can conclude from this discussion of the legal components of negligence that most common law jurisdictions have struggled to reconcile the important value underlying informed consent with the practical requirements of judicial decision-making and with policy considerations about the need for compensation and the importance of avoiding the significant costs to the healthcare system that could result from excessive litigation. Some jurisdictions have a longer history of recognizing the informed consent doctrine than others. England resisted the doctrine of informed consent until very recently, but is now catching up in its own particular way, through, as it is stated in Chester, ‘a narrow and modest departure from traditional causation principles’ (p. 10).
The right to make one’s own, well-informed healthcare decisions is further reflected in various statutory provisions, regulations, and guidelines. As mentioned earlier, most US states, for example, have codified the common law rules in specific healthcare consent statutes, clarifying the requirements of informed consent and the rules about causation (Tenenbaum 2012). Many Canadian provinces also have healthcare consent legislation. Since 1994, several provisions in the personality rights section of the Civil Code of Quebec 1991 have granted specific decision-making authority to patients (Kouri and Philips-Nootens 2011). The same development is noticeable in other common law and civil law jurisdictions. New Zealand is an example of a jurisdiction that regulates informed consent in a specific patient rights code, part of a new trend towards the codification of such rights. The New Zealand Code of Health and Disability Services Consumers’ Rights 1996 explicitly requires healthcare providers to offer patients an ‘informed choice’ and to obtain their ‘informed consent’ (basic provision right 7(1)).
In addition, many jurisdictions also have privacy statutes, some focusing specifically on health information, which contain rules about informed consent with respect to the use of personal information, often with specific rules about health information research (Lemmens and Austin 2009). It is worth noting here, even if a detailed discussion exceeds the scope of this chapter, that the concept of privacy is often interpreted as a component of other constitutionally protected rights, such as the right to liberty. Other privacy law remedies based on constitutional rights could thus be available in some jurisdictions.
As mentioned earlier, drug regulatory agencies, medical organizations, and funding agencies have historically played an important role in promoting better informed consent procedures for research. The legal status of the guidelines enacted by these agencies varies. Some, such as the historical Nuremberg Code, are part of international law. Others, such as the Declaration of Helsinki, are highly influential, but are enacted by organizations with no direct legal authority. Yet the Declaration of Helsinki has been integrated into national guidelines or regulations in many countries as a reference document (Sprumont et al. 2007). In many countries, including Canada, there are overlapping yet distinct research guidelines for clinical trials aimed at drug approval on the one hand, and for other forms of publicly funded research on the other. In Canada, both the Declaration of Helsinki and the ICH GCP are mentioned in guidance documents for clinical trials issued by the drug regulatory agency, Health Canada. The Canadian federal funding agencies have issued another research ethics document, the Tri-Council Policy Statement: Ethical Conduct for Research Involving Humans (TCPS), which has to be respected in all federally funded institutions (Hadskis 2011). Both the ICH GCP and the TCPS require – with some exceptions – the signing of consent forms and provide a detailed list of elements that have to be disclosed, such as the nature of the research, the identity of the researchers, the procedures involved, the foreseeable risks, the potential benefits, the right to withdraw from the study, and the measures in place to protect the confidentiality of the information provided. Hadskis emphasizes that the TCPS, but not the ICH GCP guidelines, also explicitly requires disclosure of commercialization plans and conflict of interest issues (2011: 473).
It is important to point out that research ethics committees prospectively review the appropriateness of the informed consent procedures and can thus impose additional requirements for specific studies.
Due to their status as ‘soft’ law in Canada, the ICH GCP guidelines and the TCPS are not directly enforceable. Violations of the ICH GCP can, however, be treated by the drug regulatory agency as a violation of good clinical practices, and can lead to an investigation and to sanctions affecting drug approval. Funding agencies can indirectly enforce the TCPS through the withdrawal or suspension of research funding to the investigators or the institutions involved. Courts could also possibly use the ICH GCP and TCPS as sources to establish a common law standard of care in the research context, where participants take researchers to court for failure to obtain informed consent (Campbell and Glass 2001).
Courts could theoretically also refer to a gamut of international declarations and statements, as well as professional ethics codes that are not directly mentioned in the national research or clinical trials governance systems, and that emphasize the importance of informed consent. A discussion of all these other soft law-based mechanisms exceeds the scope of this chapter.
As discussed in this chapter, informed consent has evolved from a concept imposed by early case law on a somewhat reluctant medical profession, to gradually being implemented – albeit not always respected – in the context of research through research ethics guidelines and regulations, and to finally becoming established as a crucial component of medical practice and research. Even if there is still much discussion about how to best meet the requirements of informed consent and to refine practices to achieve the ideal of the fully informed patient and research subject, there is now also a growing chorus of commentators questioning its feasibility or even its appropriateness in particular areas of medicine, particularly in the context of research.
One area where traditional approaches to informed consent are questioned is biobank-based research. Biobanks are research infrastructures (Kaye 2009) rather than one-dimensional research projects: they involve the long-term storage of biological samples and constantly accumulate associated information, including clinical, familial, environmental, and social data. In the context of biobank research, many of the specific items that have traditionally been seen as essential elements of the duty to inform (e.g. nature of the research, risks and potential benefits, identity of the researcher) are not known at the time the samples and data are collected and stored. Many authors, including some in this book, have pointed out that traditional legal and ethical informed consent requirements are difficult if not impossible to respect in the context of biobanks (Kaye 2009; Deschênes et al. 2001; Caulfield and Knoppers 2010; Allen et al. 2013). Caulfield and Knoppers state that ‘the existing law and ethics policies were not developed with … the large-scale biobanking in mind’ (Caulfield and Knoppers 2010: 4). Many have therefore argued for different informed consent models for biobanking research. These models, several of which have overlapping elements, include the use of an option model, broad (or blanket) consent, and authorization.
Under the option model, biobank participants are given a set of core choices, which allows them to set parameters surrounding the use of their samples (Deschênes et al. 2001; National Bioethics Advisory Commission 1999; McGuire and Gibbs 2006). They can thus refuse some forms of future research, determine whether they want to be recontacted for specific purposes, or even allow general use with or without anonymization of samples. The ‘authorization model’ recognizes that consent to the use of a sample is not exactly the same as consent to research participation (Caulfield et al. 2003). More recently, Kaye and colleagues have advocated a model of ‘dynamic consent’, which involves the creation of communication structures and more detailed involvement of research subjects and patients in subsequent research practices. In this model, information technology is used to transform consent into a bidirectional, ongoing, interactive process between patients and researchers. Participants can express preferences about the use of their data and samples for research on a continuing basis (Kaye et al. 2011).
Are all of these models in line with the legal and ethical informed consent requirements? Some authors have gone as far as to suggest that biobank research may violate traditional legal consent requirements and have recommended that a legislative framework be introduced for biobanks (Caulfield 2007) or that consent requirements be overhauled (Allen et al. 2013). Yet in many jurisdictions, including Canada, the difficulty of obtaining informed consent in this type of research involving stored samples and information is to some extent already addressed through legislative provisions. Many Canadian privacy statutes, for example, contain a broad research exception that allows health information to be used for research purposes without consent under specific circumstances (Alberta Health Information Act 2000; Ontario Personal Health Information Protection Act 2004). Research ethics committees are given the task of evaluating these conditions and determining whether appropriate privacy mechanisms and other measures are in place to protect research subjects. They have to evaluate whether obtaining consent is difficult or impossible (and Kaye’s work on dynamic consent suggests that there are ways to promote ongoing patient involvement), whether appropriate privacy protection is in place, and whether the research serves a public interest. Although special legislation may therefore not always be needed, more detailed provisions that are adjusted to biobank research could indeed provide clarity.
It seems appropriate to distinguish, in the context of biobanks, between the procedures for the original agreement to the storage of a sample (which one could still appropriately call ‘consent to storage’) and those for the subsequent use of the sample for specific research-related procedures. With respect to the consent to storage, information can be provided about the nature of the biobank, the overall area of research, and some of the typical issues and concerns that can arise in the context of biobanks. The consent should also include agreement to submit one’s personal information and biological sample to a specific governance system, which should be publicly accountable. Lisa Austin and I have argued elsewhere that focusing on informed consent obfuscates the fact that biobank research raises difficult legal and ethical issues that cannot be appropriately addressed by individual consent (Lemmens and Austin 2009). These issues include the familial nature of the information, the impact on communities and aboriginal peoples, and concerns about the commercialization of products developed on the basis of personal biological samples and associated research.
The commercialization issue, raising concerns about exploitation and questions of ownership of personal biological samples, has received much attention, and has also resulted in legal claims and compensation requests (Moore v. Regents of the University of California 51 Cal. 3d 120; Greenberg et al. v. Miami Children’s Hospital Research Institute 264 F. Supp. 2d 1064; Washington University v. Catalona 490 F.3d 667). Rather than pretending that informed consent procedures will help us deal adequately with this, we suggest that the focus should be on improving the often very weak governance structures surrounding research. This will be particularly challenging for research that increasingly traverses jurisdictional boundaries. Some private initiatives, such as the Public Population Project in Genomics and Society, are worth noting in this context, but further international initiatives appear needed to ensure accountable, public interest-oriented governance. International organizations such as the World Health Organization could play a role in developing a proper governance framework for international research (Gostin et al. 2013). There are precedents of international legal initiatives in this area. The European Convention on Human Rights and Biomedicine 1997, for example, contains several detailed provisions on the protection of human rights in the context of biomedicine, which provide remedies to individuals residing in states that ratified it.
Another related area where there has been much discussion lately is research that aims at evaluating different standards of care, where, arguably, research subjects are exposed only to risks associated with the standard therapies. This issue recently came up in the controversy surrounding the SUPPORT study, a large, National Institutes of Health-funded, multi-centre, randomized controlled trial that aimed at determining the optimal oxygen saturation levels for premature newborns (SUPPORT Study Group 2010) by comparing two different levels that were routinely used in standard care. No one questioned the scientific rationale and the importance of the study, since neonatologists have struggled for decades with how much additional oxygen can be provided to premature newborns to reduce the risk of brain damage while still avoiding the risk of blindness that has been associated with exposure to high oxygen levels. The Office for Human Research Protections (OHRP), the official US agency mandated with the protection of human research subjects and the enforcement of the research regulations of the Department of Health and Human Services, criticized the study for violations of informed consent requirements (Department of Health and Human Services 2013).
Commentators in the medical community, some of whom were directly involved in or provided institutional support for the study, accused the Office of overzealousness. They insisted that an elevated standard of informed consent was inappropriate and unnecessary for studies comparing different standards of care (Hudson et al. 2013; Modi 2013). Interestingly, a group of leading bioethics and research ethics scholars also promptly published a letter supporting the study and accusing the OHRP of overreach (Wilfond et al. 2013), which in turn evoked a similarly strong letter in support of the OHRP by other research ethics experts (Macklin et al. 2013). The OHRP, these research ethics commentators, and patient advocate Sidney Wolfe (Wolfe 2013) particularly criticized the failure to give parents detailed information about the different types of risk associated with being included in a clearly defined high or low oxygen group in the context of this research study. Even if all of the oxygen levels were used in standard care, the inclusion in a small, clearly delineated group receiving either low or high oxygen did create, according to the critics, a different type of risk, which had to be explained to parents. It was also argued that not all consent forms clearly explained the procedures involved in creating the double blind, which included the use of manipulated oximeters so that clinicians would not know the exact level of oxygen provided.
Interestingly, around the same time, several articles appeared in the literature arguing for more flexible informed consent procedures for SUPPORT-like comparative research studies (Faden et al. 2013; Faden et al. 2014). Particularly noteworthy are the repeated calls by consent experts Faden, Beauchamp, and Kass to rethink our approach to informed consent for such studies: ‘in a mature learning health care system with ethically robust oversight policies and practices,’ they argue, ‘some randomized CER studies may justifiably proceed with a streamlined consent process and others may not require patient consent at all’ (Faden et al. 2014: 766). Their suggestion that facilitating consent procedures for minimal risk research could help ensure that ‘higher-risk research gets the focused attention it deserves’ echoes increasing criticism of the unnecessary administrative burden imposed by research ethics review (Kim et al. 2009).
Faden and colleagues seem to take for granted, though, that one of their key conditions for more flexible consent procedures (i.e. the existence of robust oversight policies and practices) has been fulfilled. Yet this assumption is certainly overstated in many jurisdictions (Lemmens and Austin 2009). Even though the USA has one of the most extensive systems of research governance, ongoing concerns expressed in the wake of the SUPPORT study, and recent controversies about highly questionable informed consent procedures in research involving vulnerable psychiatric patients at the University of Minnesota, add fuel to the fire of those arguing that oversight in the USA is also seriously lacking and in need of reform (Lemmens 2014).
There is certainly something to the criticism that research ethics review now often focuses excessively on procedural requirements, particularly those related to informed consent, and ignores much more difficult but key questions about appropriate levels of risk in research. Checking informed consent forms and imposing informed consent rituals does not necessarily increase the protection of research subjects. At the same time, there is also reason to be wary of too quickly heeding calls from the research community to lift informed consent requirements in order to facilitate research. Research efficiency is important, but the history laid out in the first part of this chapter should remind us how easily the rights of individuals to make autonomous choices in healthcare and research can be trampled upon. Avoiding informed consent bureaucracy and promoting research efficiency are important, but it is equally important not to take for granted the progress made in the protection of research subjects and in truly meaningful informed consent.
International Covenant on Civil and Political Rights 1966 (UN General Assembly).
Arndt v. Smith [1997] 2 SCR 539.
Bolam v. Friern Hospital Management Committee [1957] 1 WLR 582.
Canterbury v. Spence (1972) 464 F.2d 772.
Chatterton v. Gerson [1981] 1 All ER 257.
Chappel v. Hart (1998) 156 ALR 517.
Chester v. Afshar (2004) UKHL 41.
Ellis v. Wallsend District Hospital (1989) 17 NSWLR 553.
Gerula v. Flores (1995) 126 DLR (4th) 506.
Greenberg et al. v. Miami Children’s Hospital Research Institute (2003) 264 F. Supp. 2d 1064.
Halkyard v. Mathew  WWR 26.
Halushka v. University of Saskatchewan (1965) 53 DLR (2d) 436.
Hollis v. Dow Corning Corporation [1995] 4 SCR 634.
Hopp v. Lepp [1980] 2 SCR 192.
Malette v. Schulman et al. (1990) 72 OR (2d) 417.
Marshall v. Curry (1933) 3 DLR 260.
Moore v. Regents of the University of California (1990) 51 Cal. 3d 120.
Mulloy v. Hop Sang [1935] 1 WWR 714.
Murray v. McMurchy [1949] 2 DLR 442.
Nightingale v. Kaplovitch  OJ No. 585.
Norberg v. Wynrib [1992] 2 SCR 226.
Re T (Adult: Refusal of Medical Treatment) [1992] 4 All ER 649.
Reibl v. Hughes [1980] 2 SCR 880.
Rogers v. Whitaker (1992) 175 CLR 479.
Salgo v. Leland Stanford Jr. University Board of Trustees et al. (1957) 154 Cal App 2d 560.
Schloendorff v. Society of New York Hospital (1914) 211 NY 125.
Sidaway v. Bethlem Royal Hospital Governors (1985) AC 871.
Smith v. Auckland Hospital Board (1964) NZLR 191.
Toews v. Weisner and South Fraser Health Region 2001 BCSC 15.
Tremblay v. McLauchlan, 2001 BCCA 444.
Trials of War Criminals before the Nuremberg Military Tribunals under Control Council Law No. 10 1949, US Government Printing Office, vol. 2, pp. 181–2.
Van Dyke v. Grey Bruce Regional Health Centre et al. (2005) 225 DLR (4th) 397.
Van Mol (Guardian at litem of) v. Ashmore  BCJ No. 31.
Washington University v. Catalona  490 F.3d 667.
Weiss v Solomon  AQ no. 312.