Averting Catastrophe - Cass R. Sunstein

Gaps in Knowledge


Consider in this regard a document from the White House, Principles for Regulation and Oversight of Emerging Technologies, issued in 2011 and still in effect.8 (I was a coauthor of the document, and with apologies and a salute to my coauthors, I am going to raise doubts about it.) In general, the document embraces cost-benefit analysis, but in a puzzlingly qualified way:9

Benefits and costs: Federal regulation and oversight of emerging technologies should be based on an awareness of the potential benefits and the potential costs of such regulation and oversight, including recognition of the role of limited information and risk in decision making.

What, exactly, is the role of limited information? What is the role of risk? With respect to regulation, the document explicitly calls out the problem of uncertainty:

The benefits of regulation should justify the costs (to the extent permitted by law and recognizing the relevance of uncertainty and the limits of quantification and monetary equivalents).

The two quoted sentences are different. The first refers to limited information and risk. The second refers to uncertainty and the limits of quantification. But with respect to some problems, including those potentially raised by pandemics, climate change, and emerging technologies, we should understand the document, taken as a whole, to be emphasizing the epistemic limits of policymakers and regulators, and also to be drawing attention to the problem of Knightian uncertainty. These limits, and that problem, can be seen as qualifications to the general idea, pervasive in federal regulation, that regulators should proceed with a new regulation only if its benefits justify its costs.10

OMB Circular A-4, a kind of bible for federal regulatory analysis in the United States, explicitly recognizes both epistemic limits and Knightian uncertainty, and offers a plea for developing probability distributions to the extent feasible.11 But what if it is not feasible to produce probability distributions, either because we lack frequencies or because Bayesian approaches cannot come up with them?

For a glimpse at the problem, consider a few numbers from annual cost-benefit reports of the Office of Information and Regulatory Affairs, the regulatory overseer in the Executive Office of the President.

(1) The projected annual benefits from an air pollution rule governing motor vehicles range from $3.9 billion to $12.9 billion.12

(2) The projected annual benefits of an air pollution rule governing particulate matter range from $3.6 billion to $9.1 billion.13

(3) The projected benefits of a regulation governing hazardous air pollutants range from $28.1 billion to $76.9 billion.14

(4) The projected benefits of a regulation governing cross-state air pollution range from $20.5 billion to $59.7 billion.15

It is worth pausing over three noteworthy features of those numbers. First, the government does not offer probability estimates to make sense of those ranges. It does not say that the probability at the low end is 1%, or 25%, or 50%. When no distribution is specified, the default implication may be that the distribution is normal, which might mean that the point forecast is the mean of the upper and lower bounds. But is that really what is meant?

Second, the ranges are exceptionally wide. In all four cases, the difference between the floor and the ceiling is much higher than the floor—which is in the billions of dollars! (The technical term here is: Wow.)

Third, the wide ranges suggest that the worst-case scenario from government inaction, understood as a refusal to regulate, is massively worse than the best-case scenario. If regulators focus on the worst-case scenario, the relevant regulation is amply justified in all of these cases; there is nothing to discuss. The matter becomes more complicated if regulators focus on the best-case scenario or on the midpoint. But where should they focus?
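The contrast among these decision rules can be sketched numerically. The sketch below uses the four benefit ranges quoted above; the labels for the rules and the check that each range's width exceeds its floor simply restate the arithmetic in the text, not anything the government itself computes.

```python
# Projected annual benefit ranges (billions of dollars) for the four
# air pollution rules cited in the text: (low end, high end).
ranges = {
    "motor vehicles": (3.9, 12.9),
    "particulate matter": (3.6, 9.1),
    "hazardous air pollutants": (28.1, 76.9),
    "cross-state air pollution": (20.5, 59.7),
}

for rule, (floor, ceiling) in ranges.items():
    width = ceiling - floor
    midpoint = (floor + ceiling) / 2
    # In every case the gap between ceiling and floor exceeds the floor
    # itself -- the "Wow" feature noted above.
    assert width > floor
    print(f"{rule}: floor ${floor}B, ceiling ${ceiling}B, "
          f"midpoint ${midpoint:.1f}B, width ${width:.1f}B")
```

A regulator focused on the ceiling, the floor, or the midpoint is applying three different decision rules to the same published range; the numbers alone do not say which focus is correct.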

All of these examples involve air pollution regulation, where projection of health benefits depends on significantly different models, leading to radically different estimates.16 There appears to be a great deal of scientific uncertainty. But even outside of that context, relatively standard regulations, not involving new technologies, often project wide ranges in terms of benefits, costs, or both.17 In terms of monetized costs, the worst case may be double the best case.18 In terms of monetized benefits, the best case may be triple the worst case.19 For a more general glimpse, consider this table, with particular reference to the wide benefits ranges:20

Table 1: Estimates of Annual Benefits and Costs of Non-Environmental-Related Health and Safety Rules: October 1, 2003–September 30, 2013 (billions of 2001 and 2010 dollars)

Area of Safety and Health Regulation | Number of Rules | Benefits (2001$) | Benefits (2010$) | Costs (2001$) | Costs (2010$)
Safety rules to govern international trade | 3 | $0.9 to $1.2 | $1.0 to $1.4 | $0.7 to $0.9 | $0.9 to $1.1
Food safety | 5 | $0.2 to $9.0 | $0.3 to $10.9 | $0.2 to $0.7 | $0.3 to $0.9
Patient safety | 7 | $12.8 to $21.9 | $12.8 to $21.9 | $0.9 to $1.1 | $1.1 to $1.4
Consumer protection | 3 | $8.9 to $20.7 | $10.7 to $25.0 | $2.7 to $5.5 | $3.2 to $6.6
Worker safety | 5 | $0.7 to $3.0 | $0.9 to $3.6 | $0.6 | $0.7 to $0.8
Transportation safety | 24 | $13.4 to $22.7 | $15.4 to $26.4 | $5.0 to $9.5 | $6.0 to $11.4

Some of these gaps are very big, but for pandemics, climate change, and new technologies, the difference between the worst and the best cases is (much) bigger still.21 It is also important to emphasize that new or emerging technologies may be or include “moonshots,” understood as low-probability (or uncertain-probability) outcomes with extraordinarily high benefits; recall that we might call them miracles. Regulation might prevent those miracles,22 or make them far less likely. In this domain, we may have “catastrophe-miracle” tradeoffs. It would not be at all good to prevent miracles from happening; they might make life immeasurably better (and longer). It is essential to attend to extreme downsides, but extreme upsides should not be neglected. In cases that involve new technologies, miracles might be in the offing, and we need to weigh them in the balance.

Because of its relevance to pandemics, climate change, and regulation of emerging technologies, I focus throughout on the difference between risk and uncertainty and urge that in the context of risk, adoption of the maximin principle is usually (not always) a fundamental mistake. Everything depends on the particular numbers, but in general, I aim to bury that rule, not to praise it.

At the same time, I suggest that it deserves serious attention under identifiable conditions. When regulators really are unable to assign probabilities to outcomes, and when some possible outcomes are catastrophic, the maximin principle can have considerable appeal. Climate change is at least a candidate for this conclusion,23 and something similar might be said for some pandemics and other new or emerging risks, including some that are not even on the horizon.24 But a great deal depends on what is lost by adopting the maximin principle. As we will see, catastrophic risks—of low or uncertain probability—might accompany both regulation and nonregulation.
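The maximin principle itself is simple to state formally: rank each action by its worst possible outcome and choose the action whose worst outcome is least bad. The sketch below illustrates this under wholly hypothetical payoffs; the action names and numbers are invented for illustration and do not come from any regulatory analysis.

```python
# A minimal maximin chooser: with no probabilities available, an action
# is evaluated solely by its worst possible outcome.
def maximin(actions: dict[str, list[float]]) -> str:
    """Return the action whose minimum (worst-case) payoff is highest."""
    return max(actions, key=lambda a: min(actions[a]))

# Hypothetical net-benefit outcomes. Regulating caps the upside (perhaps
# foreclosing a "moonshot") but avoids catastrophe; not regulating keeps
# the moonshot possible but admits a catastrophic worst case.
actions = {
    "regulate": [-5.0, 10.0, 30.0],
    "do not regulate": [-1000.0, 20.0, 500.0],
}
print(maximin(actions))  # -> "regulate": its worst case, -5, beats -1000
```

Note what the rule ignores: the 500 upside of not regulating plays no role at all. That is exactly why maximin is attractive when a catastrophic outcome cannot be ruled out, and exactly why it can misfire when the foregone upside is a miracle.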
