
1

What We Don’t Know

In the face of a pandemic threatening to produce numerous deaths, should aggressive preventive measures be undertaken, even if we cannot specify the benefits of those measures? Should cities, states, and nations be locked down? Exactly when and how? Should people be required to wear masks?

Imagine that some new technology, such as artificial intelligence, poses a catastrophic risk, but that experts cannot say whether the risk is very small, very large, or somewhere in between.1 Should regulators ban that technology? Or suppose that genetically modified foods pose a risk of catastrophe—very small, but not zero.2 Should officials forbid genetically modified foods? Should they require them to be labeled?

Suppose that scientists say that climate change will produce a range of possible outcomes by 2100, but that they cannot specify the likelihood of those outcomes. Should public officials assume the worst?3 Should the social cost of carbon, designed to capture the damage from a ton of carbon emissions, reflect worst-case scenarios, and if so, exactly how?4 Robert Pindyck describes the challenge this way:5

The design of climate change policy is complicated by the considerable uncertainties over the benefits and costs of abatement. Even if we knew what atmospheric GHG concentrations would be over the coming century under alternative abatement policies (including no policy), we do not know the temperature changes that would result, never mind the economic impact of any particular temperature change, and the welfare effect of that economic impact. Worse, we do not even know the probability distributions for future temperatures and impacts, making any kind of benefit–cost analysis based on expected values challenging to say the least.

Let us underline these thirteen words: “we do not even know the probability distributions for future temperatures and impacts.” If we do not even know that, how shall we proceed?

Eight Conclusions

With a focus on public policy and regulation, my goal here is to help answer such questions. Among other things, I will be exploring possible uses of the maximin principle, which calls for choosing the approach that eliminates the worst of the worst-case scenarios. To see how the principle works, imagine that you face a risk that could produce one of three outcomes: (a) a small loss, (b) a significant loss, and (c) a catastrophic loss. The risk could come from genetic modification of food, nuclear power, a terrorist attack, a pandemic, an asteroid, or climate change. As I shall understand it here, the maximin principle says that you should take steps to avert the catastrophic loss. In life, as in public policy, that principle focuses attention on the very worst that might happen, and it argues in favor of eliminating it.
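
For readers who think in code, here is a minimal sketch of the rule, with hypothetical options and loss figures invented purely for illustration: the maximin principle ranks each option by its worst possible outcome and selects the option whose worst outcome is least bad.

    # A minimal sketch of the maximin rule. The options and the loss
    # figures are hypothetical, chosen only to illustrate the logic.
    # Each option maps to the losses of its possible outcomes:
    # (a) a small loss, (b) a significant loss, (c) a catastrophic loss.
    options = {
        "do nothing": [1, 50, 10_000],         # catastrophe remains possible
        "moderate controls": [5, 60, 10_000],  # costly, catastrophe still possible
        "eliminate the risk": [20, 80, 90],    # costliest, but no catastrophe
    }

    def maximin(options):
        """Choose the option whose worst-case loss is smallest."""
        return min(options, key=lambda name: max(options[name]))

    print(maximin(options))  # -> "eliminate the risk"

Note that the rule attends only to each option’s worst entry; the probabilities of the outcomes play no role at all, which is precisely why the principle becomes attractive when probabilities are unknown.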

The maximin principle has been subject to formidable objections, especially within economics. An obvious concern is that eliminating the worst case might be extremely costly, and it might impose worst-case scenarios of its own. If you spend the next week trying to avert worst-case scenarios, you will create a lot of problems for yourself. I will be emphasizing and attempting to fortify the standard objections here. Nonetheless, one of my goals is to show that the maximin principle deserves a place in regulatory policy. I shall attempt to specify the circumstances in which it deserves that place. A central point is that sometimes regulators lack important information. Much of the discussion will be abstract, and based on stylized examples, but I shall ultimately make a number of concrete proposals, designed for real-world problems.

My starting point is simple: In extreme situations, public officials of diverse kinds must decide what kinds of restrictions to put in place against low-probability risks of catastrophe or risks that have terrible worst-case scenarios, but to which probabilities cannot (yet) be assigned. Some people, of course, favor quantitative cost-benefit analysis, whereas others favor some kind of Precautionary Principle. I am going to be embracing the former here, at least as a general rule,6 but the claims that deserve emphasis involve the exceptions, which may call for precautionary thinking in general and for the maximin principle in particular.

This short book will cover a great deal of ground, and it will be useful to specify the basic conclusions at the outset. The first four are straightforward. The remaining four are not.

(1) To the extent feasible, policymakers should identify the likely costs and benefits of various possible outcomes.7 They should ask about the harms associated with those outcomes and the probability that they will occur. They should aim to come up with probability distributions, accompanied by point estimates. When they cannot produce probability distributions, they should try to come up with reasonable ranges of both costs and benefits. They should do that partly to reduce the risk that political judgments will be based on intuitions, dogmas, or interest-group pressures. For example, people’s intuitions are often a product of the availability heuristic, by which their judgments about risks depend on what events come readily to mind. Use of the availability heuristic can lead people to be unduly frightened of low-level risks and unduly complacent about potentially catastrophic risks (including the risks associated with horrific outcomes that are not on people’s viewscreen).

(2) In deciding what to do, policymakers should focus on the expected value of various options: the value of each potential outcome multiplied by the probability that it will occur, summed across outcomes. In general, they should pick the option with the highest expected value; a minimal numerical sketch appears after this list. The qualification is that they might want to create a margin of safety, recognizing that it might itself impose serious burdens and costs. To avoid harm, a degree of risk aversion may well make sense—but not if it is too costly and not if it imposes risks of its own. Insurance may be worth buying, but sometimes its price is too high.8 (As we shall see, we might need a margin of safety against the risks created by margins of safety—a point that raises questions for those who like margins of safety.)

(3) In some cases, the worst cases are sufficiently bad, and sufficiently probable, that it will make sense to eliminate them, simply in terms of conventional cost-benefit analysis. That appears to have been the case for aggressive responses to the coronavirus pandemic in 2020.9 That is, the benefits of those responses justified the costs. (It is natural to ask: How aggressive, exactly? How aggressive is too aggressive? Aggressive in what way? Those are the right questions, and the best answers pay close attention to both costs and benefits.)

(4) In some cases, the worst-case outcomes are highly improbable, but they are so bad that it may make sense to eliminate them under conventional cost-benefit analysis. Even though they are highly improbable, they might have an outsized role when regulators are deciding what to do. That is a reasonable view about costly efforts to reduce the risk of a financial crisis.10 That is, such a crisis is highly unlikely (in any particular year), but its costs are so high that it is worthwhile to take (costly) steps to prevent one.11 Again, this is standard cost-benefit analysis, based on expected values.

(5) In some circumstances, often described as Knightian uncertainty, observers (including regulators) cannot assign probabilities to imaginable outcomes, and for that reason the maximin principle is appealing. I will argue that, contrary to a vigorously defended view in economics, the problem of uncertainty is real and sometimes very important. For emphasis: Among economists, it is often claimed that Knightian uncertainty does not exist. That claim is wrong, and Knight was right (as was Keynes and as is Pindyck). In significant domains, we cannot assign probabilities to the possible outcomes.

(6) In some cases, the probability of extreme, very bad events is higher than normal;12 it might make sense to eliminate those very bad outcomes, perhaps using conventional cost-benefit analysis, perhaps not. Some important problems involve “fat tails,” for which the probability of a rare, bad event declines relatively slowly as that event moves far away from its central tendency; a numerical illustration appears after this list. The fact that complex systems are involved can be important here; consider pandemics.

(7) In some cases, we do not know how fat the tail is or how bad the extreme, very bad event might be. Critical information is absent. Here as well, the maximin principle might have appeal.

(8) With respect to (5), (6), and (7), the problems arise when efforts to eliminate dangers would also impose very high costs or eliminate very large potential gains. If regulators spent a large percentage of gross domestic product on eliminating the risk of pandemics, they would probably do more harm than good. In addition, there might be extreme events of another sort, suggesting the possibility of wonders or miracles,13 which might make human life immeasurably better and whose probability might be reduced by aggressive regulation. In deciding whether to impose such regulation on (for example) new technologies, it is important to keep wonders and miracles in mind.
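
The sketch promised in conclusion (2) appears below. It computes expected losses under hypothetical probabilities and losses, invented for illustration, and shows how choice by expected value can diverge from the maximin choice when a catastrophe is very improbable. Note that the calculation requires exactly what conclusions (5) and (7) say may be missing: a probability for each outcome.

    # Choice by expected value: weight each outcome's loss by its
    # probability and sum. All probabilities and losses are hypothetical.
    options = {
        "do nothing": [(0.949, 1), (0.050, 50), (0.001, 10_000)],
        "eliminate the risk": [(0.949, 20), (0.050, 80), (0.001, 90)],
    }

    def expected_loss(outcomes):
        return sum(p * loss for p, loss in outcomes)

    for name, outcomes in options.items():
        print(f"{name}: expected loss = {expected_loss(outcomes):.2f}")

    # Expected value prefers "do nothing" (13.45 versus 23.07), because the
    # catastrophic loss is discounted by its tiny probability; maximin would
    # still prefer "eliminate the risk", since it looks only at worst cases.
    best = min(options, key=lambda name: expected_loss(options[name]))
    print("choose:", best)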
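
Conclusion (6)’s “fat tails” can also be given numbers. The sketch below compares the probability of exceeding a given threshold under a thin-tailed normal distribution and a fat-tailed Pareto distribution; the parameters (sigma of 1, shape alpha of 1.5) are illustrative assumptions, not estimates of any real hazard.

    # Tail probabilities P(X > x) under a thin-tailed normal distribution
    # versus a fat-tailed Pareto distribution. Parameters are illustrative
    # assumptions, not calibrated to any actual risk.
    import math

    def normal_tail(x, mu=0.0, sigma=1.0):
        """P(X > x) for a normal distribution."""
        return 0.5 * math.erfc((x - mu) / (sigma * math.sqrt(2)))

    def pareto_tail(x, x_min=1.0, alpha=1.5):
        """P(X > x) for a Pareto distribution with shape alpha."""
        return (x_min / x) ** alpha if x > x_min else 1.0

    for x in (2, 4, 8, 16):
        print(f"x = {x:>2}: normal = {normal_tail(x):.2e}, "
              f"pareto = {pareto_tail(x):.2e}")

Under the normal distribution, an event sixteen units out is for all practical purposes impossible (a probability on the order of 10^-57); under the Pareto, its probability is still about 1.6 percent. That slow decline is what keeps worst cases on the table in fat-tailed domains.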

Ignorance and Maximin

This is a long and complicated list, so let us simplify it. In general, public officials should attempt to make human life better, which means that they should maximize social welfare (bracketing for the moment complex questions about what exactly that means).14 To do that, they should calculate costs and benefits, with probability distributions as feasible and appropriate, and they should proceed if and only if the benefits justify the costs, perhaps incorporating a degree of risk aversion.15 They should also focus on fair distribution—on who is being helped and who is being hurt—either because it is part of the assessment of social welfare, or because it is independently important. They should not focus solely or mostly on the worst cases; they should not give them more weight than other cases (bracketing for now risk aversion or loss aversion, to which I shall turn in due course). At the same time, calculation of costs and benefits may not be feasible, and an important question remains: Are there any problems that the maximin principle can handle better than welfare maximization?

The best answer is a firm “no,” but that answer is too simple. One reason involves cases of Knightian uncertainty, where probabilities cannot be assigned. As we shall also see, the maximin principle is especially appealing when the costs of eliminating the worst-case scenario are not terribly high and when the worst-case scenario is genuinely grave. Consider, for example, the following cases:

1. A nation faces a potential pandemic. It does not know the probability that the pandemic will occur. If it takes three steps, it can eliminate the danger. The three steps are not especially costly.

2. Over the next decade, experts believe that a nation is at risk of a serious terrorist attack. They do not know the probability that it will occur. But they believe that certain security measures at airports will substantially diminish the danger. Those measures are not especially costly.

3. Over the next decade, experts believe that a nation is at risk of a financial crisis. They do not know the probability that it will occur. They also believe that new capital and liquidity requirements, imposed on financial institutions, will make a financial crisis far less likely. Those requirements are burdensome, but their costs are manageable.

In all of these cases, policymakers ought to give serious consideration to the maximin principle. As we shall see, the argument for use of that principle grows stronger as the badness of the worst-case scenario increases. It grows weaker as the costs of eliminating the worst-case scenario rise and as that scenario becomes less grave.16

There are no simple rules here. Judgments, not calculations, are required, and ideally, they will come from a well-functioning democratic process. But even when judgments are required, they can be bounded by applicable principles, which can prevent a lot of trouble.
