Cognitive Heuristics—Availability and Representativeness
One important set of phenomena related to diffusion has to do with people’s tendency to use shortcuts to limit the effort spent solving problems and making decisions. These shortcuts can further worsen the ability to see remote effects. We are mainly “cognitive misers” (Fiske & Taylor, 1984/1991) who in most circumstances attend only to the features of a decision that are most obvious, pressing, and large—not those that are hard to detect, distant, and incremental. Many of these shortcuts or “heuristics” were first described by Amos Tversky and Daniel Kahneman in the 1970s and are by now likely to be familiar to many readers. We will discuss general heuristics in greater detail later; here we focus on two common ones with particular implications for people’s perception, understanding, and response to diffuse environmental impacts. These are the “availability” heuristic, where a more easily recalled feature is assumed to occur more often than it actually does, and the “representativeness” heuristic, where one or more shared features of a person or object lead an observer to assume there are many more such shared features (Kahneman & Tversky, 1979; Tversky & Kahneman, 1974). Each can significantly reduce the salience of diffuse effects.
The availability heuristic is what it sounds like: Easily thought-of instances of a phenomenon—that is, ones that are more “available” to a person thinking about them—are estimated to be more frequent than more obscure ones. Tversky and Kahneman (1973) famously demonstrated this heuristic with the “K” study: Participants were asked to estimate the number of English words that start with the letter K and the number of English words whose third letter is K. People can retrieve words that start with K with relative ease (kangaroo, kid, kayak), but coming up with words with K in the third position is devilish (acknowledge, lake, unkind). As a consequence, people estimate that there are twice as many of the former as the latter. In fact, there are about three times as many of the latter as the former, but the ease of retrieving words that start with K made participants overestimate their frequency.
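The frequency claim is easy to check directly. Below is a minimal Python sketch that tallies both categories over a machine-readable word list; the path /usr/share/dict/words is an assumption (it is present on many Unix systems), and exact counts will vary from one word list to another.

```python
# A minimal sketch of the "K" comparison. The word-list path is an
# assumption: /usr/share/dict/words ships with many Unix systems.
with open("/usr/share/dict/words") as f:
    words = [w.strip().lower() for w in f if len(w.strip()) >= 3]

starts_with_k = sum(1 for w in words if w[0] == "k")
third_letter_k = sum(1 for w in words if w[2] == "k")

print(f"start with k:      {starts_with_k}")
print(f"k in 3rd position: {third_letter_k}")
# On many word lists the second count exceeds the first, despite the
# intuition that k-initial words, being easier to recall, must be more common.
```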
In many contexts, the availability heuristic operates according to the aphorism “out of sight, out of mind”: The less visible, immediate, or vivid a risk or event, the more likely people are to underestimate its likelihood and severity (Kahneman et al., 1982; Kuran & Sunstein, 1999; Weber, 2006; Wiener, 2016). Vivid, highly newsworthy environmental disasters can thus trigger heightened awareness of environmental issues, and may help to explain the tradition of postdisaster environmental legislation, including the passage of CERCLA after Love Canal and the Clean Water Act after the Cuyahoga River fire and the Santa Barbara oil spill. Harms that are less visible or less easily imagined, however—particularly those that are likely to affect foreign persons (Rowell & Wexler, 2014)—are far more likely to be neglected. This may create special challenges for environmental policies seeking to address problems, such as climate change, where the harms from emitting behaviors are likely to be felt far away and many years in the future (Rowell, 2015; Weber, 2006). These challenges are all the greater for unprecedented problems—such as extreme-scenario climate change, or nuclear war (Baum & Tonn, 2015)—for which humans have no mental template for comparison. Despite their catastrophic potential, the fact that such events have not occurred in living memory may make them unavailable, which in turn may lead to underinvestment in and even mismanagement of attendant risks (Wiener, 2016).
Availability may also have profound effects on how people perceive the harms of more chronic, common environmental behaviors—particularly when the consequences of those behaviors are separated from their causes by space or time. Consider, for a homely example, that modern trash pickup programs, combined with dedicated landfills, are generally effective at making municipal waste management largely invisible to the average person (Nagle, 2013). As waste is managed through removal, it becomes less observable, and thus less cognitively available. Or consider that many people are at least theoretically aware that plastics can take over a thousand years to biodegrade, that enormous trash islands of plastic are floating and dispersing into the oceans, and that even on land, plastic presents ecological risks (Rochman et al., 2016; M. L. Taylor et al., 2016). A few may also be aware of recent research showing that human exposure to microplastics—in the air we breathe, the water we drink, even the table salt we sprinkle on our food—is utterly pervasive (Zhang et al., 2020). Yet plastics use—including single-use, “disposable” plastic products—remains widespread around the globe; 18 billion pounds of plastic flow into the ocean each year, of which 40% are single-use products (Parker, 2018). The seeming intractability of casual plastics use may be at least partially ascribable to the fact that in general their harms, particularly to marine ecosystems, are invisible to the average person. Indeed, such harms are often brought to mind only by media campaigns highlighting sea life mutilated by six-pack rings, plastic bottles, and plastic straws. Such campaigns—which heighten the vividness, and thus the availability, of the harms caused by plastics—can be highly effective. One video of a sea turtle with a straw embedded in its nose, for instance, has been credited with sparking an avalanche of local plastic straw bans (Rosenbaum, 2018). Such “availability cascades,” where a previously invisible and neglected issue becomes suddenly salient, triggering an avalanche of behavioral and policy responses, have been chronicled in other contexts as well (Kuran & Sunstein, 1999).
Cognitive shortcuts can also systematically distort information that could be used to guide decisions and minimize externalities. Consider the representativeness heuristic, which might be thought of as the “if it looks like a duck and quacks like a duck, it must be a duck” shortcut: Things that share features of a class are estimated as more likely to be part of that class. In Tversky and Kahneman’s (1983) classic demonstration of this phenomenon, they described a person named “Linda” to research participants as follows: “Linda is 31 years old, single, outspoken and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in antinuclear demonstrations.” Participants were then asked to rank the probabilities of various statements about Linda, among them that 1) Linda is a bank teller and 2) Linda is both a bank teller and a feminist. Linda seems very representative of the second category. Nevertheless, it cannot be more likely that she is a member of the second category (both a bank teller and a feminist) than that she is a member of the first (a bank teller), because the second category is a subset of the first. And yet, by large margins, people rank the second as more probable. Because Linda “looks like” a feminist, they fail to account for the underlying probability that she is a bank teller at all.
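The underlying logic can be stated in one line: however representative Linda seems of the conjoined category, the probability of a conjunction can never exceed the probability of either of its conjuncts.

```latex
P(\text{teller} \wedge \text{feminist})
  = P(\text{teller}) \, P(\text{feminist} \mid \text{teller})
  \le P(\text{teller})
```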
The representativeness heuristic can lead to a number of cognitive errors. One of the best known is “base rate neglect,” a failure to account for the underlying distribution of something when estimating the probability of a particular case. The more extreme the underlying baseline probability to be accounted for, the worse the distortion caused by the representativeness heuristic. Environmental harms, moreover, often involve a different (and perhaps more literal) sort of “diffusion” than remoteness across time or space: Pollutants are often literally microscopic or highly dilute, but still consequential. Their diffuse, invisible nature makes it hard to take them seriously—we tend to forget their effects or their importance in favor of more immediate, visible phenomena.
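Formally, base rate neglect amounts to dropping the prior term from Bayes’ rule, which states how a prior probability of a hypothesis H should be updated in light of new evidence E:

```latex
P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)}
```

When the prior P(H) is tiny, even highly diagnostic evidence can leave the posterior small, as the toxicology example below illustrates.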
To see how this can work (or fail to work!), imagine a toxicologist trying to determine whether a particular chemical is present in a sample of water drawn from a large wastewater supply on a particular day. In the supply as a whole, there are usually 50,000 parts per million (ppm) of one common chemical and only 1 ppm of a very rare chemical. The toxicologist uses a test that is 99% accurate at discriminating between the two chemicals, and after applying the test, announces that the chemical she found in the sample is the very rare one. What is the probability that she is right? Most people’s first-order, intuitive answer is “99%,” because the test is 99% accurate. Upon a little further reflection, however, more sophisticated respondents will realize that the probability must be lower than 99%, because in absolute terms the test will often identify the chemical as “rare” when it is in fact “common,” given that it is wrong 1% of the time. So a second-order answer, without actually breaking out a calculator, might be something lower—98%? 75%? 50%?
In fact, the real probability that the toxicologist’s result is accurate is more like 0.2%. How could intuition be so far off? The representativeness heuristic helps explain it. (For an explanation of this number, complete with the math for the above example, see this endnote.1) We know that the underlying probabilities of finding the common versus the rare chemical are worlds apart. But once we get new diagnostic information (namely, a positive identification of the chemical), we tend to forget what came before and assume that the new odds represent the old odds too. So we learn that a test is 99% accurate, and we neglect the fact that finding the “rare” chemical is still wildly unlikely. There are some circumstances where people will correct for the underlying probabilities at least somewhat, but base rate neglect is a remarkably sticky cognitive bias.
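A back-of-the-envelope calculation reproduces the 0.2% figure from the numbers given above. The Python sketch below treats the two concentrations as prior odds, which is an assumption of this illustration rather than anything from the endnote.

```python
# Bayes' rule with the chapter's numbers: concentrations stand in for prior odds.
prior_rare = 1 / (1 + 50_000)        # 1 ppm rare vs. 50,000 ppm common
prior_common = 1 - prior_rare

p_says_rare_given_rare = 0.99        # test is 99% accurate
p_says_rare_given_common = 0.01      # and wrong 1% of the time

# P(rare | test says rare), by Bayes' rule
posterior = (p_says_rare_given_rare * prior_rare) / (
    p_says_rare_given_rare * prior_rare
    + p_says_rare_given_common * prior_common
)
print(f"{posterior:.4f}")  # ~0.0020, i.e., about 0.2%
```

The denominator is dominated by false “rare” identifications of the common chemical, which is exactly the base rate the intuitive 99% answer ignores.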
This base rate neglect cuts both ways: Not only might it make us less willing to take steps to mitigate a diffuse but real harm, it can also make us think that the pro-environmental efforts we are taking now have more effect than they actually do, because we fail to appreciate the magnitude of the underlying problem. Recycling plastic bottles, for example, is a pro-environmental behavior that the EPA recommends in part because of its impact on reducing climate emissions. Yet the impact of each bottle recycled is truly tiny compared with the magnitude of the problem. By way of comparison, you would have to recycle roughly 40,000 plastic bottles to equal the climate emissions of one round-trip coach flight between New York and London (Berners-Lee, 2011). This is not to suggest that recycling plastic is a bad choice—after all, the climate impacts of air travel are notoriously terrible. It might suggest, however, that base rate neglect can encourage people to overestimate the impacts of small actions like recycling, even as they underestimate the impacts of higher-impact actions such as shifting to plant-based diets or reducing air travel.
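The 40,000:1 ratio implies a per-bottle figure that is easy to back out. The sketch below assumes, purely for illustration, a round-trip New York–London coach flight of roughly one tonne of CO2e; the per-bottle number is derived from the ratio in the text, not from an independent source.

```python
# Back out the per-bottle saving implied by the 40,000-bottles-per-flight
# comparison. The ~1 tonne CO2e flight figure is an illustrative assumption.
flight_emissions_g = 1_000_000      # ~1 tonne CO2e, assumed for illustration
bottles_per_flight = 40_000         # ratio given in the text (Berners-Lee, 2011)

saving_per_bottle_g = flight_emissions_g / bottles_per_flight
print(f"~{saving_per_bottle_g:.0f} g CO2e saved per bottle recycled")  # ~25 g
```

On these assumptions, each recycled bottle saves on the order of tens of grams of CO2e, which is why small virtuous acts cannot offset high-impact ones.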
The availability and representativeness heuristics are at heart demonstrations that humans are bad at certain kinds of probabilistic cognitive processing, which can make appreciating diffuse environmental effects harder. But diffusion does not just affect the likelihood of seeing or understanding environmental phenomena in an abstract sense; it may also affect actual perception, particularly where the diffusion occurs across time. Psychological research demonstrates that people’s ability to process information that is temporally distant from them is far more complex than conventional environmental law approaches assume—even those approaches that are economically sophisticated and take into account concepts like discounting. Temporal diffusion can affect people’s perceptions of value and of the time value of money, their subjective experience of time as continuously flowing, and even their felt experience of time as unidirectional. These distortions make meaningful interactions between present and future people—or even between present and future selves—impossible (Rowell, 2014), and they trigger a number of powerful psychological phenomena, which we discuss further below.
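For readers unfamiliar with the discounting concept just mentioned, the standard exponential formula is PV = FV / (1 + r)^t. The Python sketch below is a minimal illustration; the 3% discount rate and the $1,000,000 harm are arbitrary assumptions chosen only to show how quickly temporally distant harms shrink in present-value terms.

```python
# Standard exponential discounting: present value of a harm of fixed size
# occurring t years from now. The 3% rate is an arbitrary illustration.
def present_value(future_harm: float, rate: float, years: int) -> float:
    return future_harm / (1 + rate) ** years

for years in (1, 10, 50, 100):
    pv = present_value(1_000_000, 0.03, years)
    print(f"{years:>3} years out: ${pv:,.0f}")
# A $1,000,000 harm 100 years out is worth about $52,000 today at 3%,
# one way temporally diffuse effects get compressed in conventional analysis.
```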