(Note: This post is the second in a series exploring how moral diversity limits the moral authority of policy makers to craft healthcare policies for diverse moral communities. In a separate piece in Discourse, I adumbrate those limits as they appear in high-profile clashes over physician-assisted suicide, euthanasia, and other issues.)
High-profile controversies display only the most obvious examples of moral diversity. As I explain in a previous post for Open Health Policy, headline-grabbing disputes framed in terms of competing values may be few and far between, but the iceberg of moral diversity lies mostly beneath the surface of the waves.
While science holds out hope of an end run around moral disputes by grounding policy choices in objective facts, scientific controversies arise within histories, cultures, and philosophical frameworks that themselves embody diverse values. Even scientific disputes thus tend to have moral dimensions, so there is no royal road to “correct” policy choices by way of invoking “the facts.”
Moral diversity arises yet again with respect to risk. Zen teacher Charlotte Joko Beck observed that we live as if piloting an airplane caught in a hurricane. Filled with anxiety, eyes glued to the instrument panel, we spend our days monitoring gauges, flipping switches, and struggling with the yoke in a futile attempt to survive the storm. She asked why we don’t live, instead, as if piloting an unpowered glider. We are sure to die either way. At least the glider pilot, by giving up control, can enjoy the thrill of the ride. “If we spend our life looking for the eye of the hurricane,” she wrote in Nothing Special: Living Zen, “we live a life that is fruitless. We die without having really lived.”
For Zen students, the message is inspiring, but not all of us are disposed to surrender control. What does it mean to live well in relation to risk? Risk preferences vary among individuals and groups, and they encompass not one sort of preference but a set of related ones. Early scientific controversies over nuclear power in the United States shed light on that variety and its significance for public policy.
Opponents and advocates of nuclear power were divided over the real risk of a serious accident, which was hard to quantify in a meaningful way. But they were also at odds over the quality of the risks involved, and debates about risk provided cover from which to advocate for cherished social and political values. For a deeper dive, see philosopher Douglas MacLean’s “Understanding the Nuclear Power Controversy” in Scientific Controversies: Case Studies in the Resolution and Closure of Disputes in Science and Technology.
Opponents of nuclear power feared radiation exposure from reactor accidents even though they did not fear exposure from routine medical diagnostics, which, MacLean notes, often involves more exposure than is needed or healthy. In other words, regardless of the real risk, opponents preferred to avoid certain kinds, or qualities, of risk. What explains this?
Even if the real risk of a catastrophic accident, such as a core meltdown, were vanishingly small, the consequences of one could be inconceivably severe. Moreover, those consequences could include cancers and genetic mutations, harms that are particularly dreaded. As MacLean observes, refusing a risk lower than risks one already accepts, such as the risk of dental X-rays, is not necessarily irrational once the differing qualities of the respective risks are taken into account.
Disputes about the risks of nuclear power also provided cover for groups to advocate preferred social and political values. Some believed the U.S. at the time faced a choice between two very different energy futures: one dominated by nuclear power and one in which solar power prevailed. Critics of nuclear power believed nuclear energy had to be centrally organized, whereas solar was compatible with “a decentralized superstructure of deployment and production.” They drew a straight line from investment in nuclear power to the loss of cherished local autonomy as large oil and utility companies came to shape the distribution of social and political power.
In one sense, the early controversies over nuclear power turned on scientific questions about how to measure risk, but for the most part they did not. They enacted what MacLean calls “a morality play.” Because the disputes had moral dimensions rooted in divergent values, even an expert consensus that the measurable risks were low likely would not have resolved them.
Public controversies over pharmaceutical regulation recall those early disputes over nuclear power. The drug Iressa was at the center of one such controversy in the mid-2000s. Clinical trials established its effectiveness in about 10% of cases of non-small cell lung cancer (NSCLC), and many patients taking Iressa lived longer. When a larger study failed to show the same benefits, however, regulators restricted future use of the drug to patients participating in further clinical trials, leaving it on the market only for those already taking it. Meanwhile, regulators approved another drug, Tarceva, after clinical trials showed it improved survival for NSCLC patients.
The implication is that allowing new patients and their physicians to choose between Iressa and Tarceva was too risky. But whether Tarceva would benefit the same individuals Iressa would have helped was unknown; clinical trials measure average responses, not individual ones. As with disputes over nuclear power, the quality of the risk matters. Patients with serious disease and few options might reasonably prefer to try Iressa. For a closer look at the Iressa case, see Chapter 13 of Richard Epstein’s Overdose.
In the 1970s, public disputes over the effectiveness of the cancer drug Laetrile likewise provided cover for the pursuit of divergent political values. With no scientific evidence that Laetrile was effective, regulators moved to restrict its use, but a judge issued an injunction forbidding them from doing so. Meanwhile, some approved anticancer drugs had no demonstrated benefit against the types of cancer for which physicians were prescribing them. As Robert Young observes in “Federal Regulation of Laetrile” (see Scientific Controversies), for those weary of governmental intrusions into their lives with no apparent benefit, the Laetrile controversy was an opportunity to “strike back and reduce at least part of the government’s power over them.” It was equally an opportunity for those more favorable to government paternalism to press for stronger regulations.
Just as high-profile battles over assisted suicide or euthanasia reveal distinct moral communities, a closer look at risk preferences reveals others. Some of us were made to fly airplanes, others to fly unpowered gliders. To respect the boundaries of those communities, healthcare policy makers should be mindful of moral disagreements over both the quantity and the quality of risk, as well as of conflicts between divergent social and political values operating under cover of disputes ostensibly about risk.