Averting Catastrophe - Cass R. Sunstein

Introduction

This book has an unusual origin. Under President Barack Obama, I served as Administrator of the White House Office of Information and Regulatory Affairs, which oversees federal regulation in domains that include highway safety, health care, clean water, air travel, agriculture, occupational health, homeland security, clean air, and climate change. One of our tasks was to help develop a “social cost of carbon”—a number that reflects the economic damage of a ton of carbon emissions.

This was, and is, an exceedingly important number. Among other things, it helps determine the stringency of regulations designed to control greenhouse gas emissions from motor vehicles and power plants. It is fundamental to climate change policy. Working on the social cost of carbon, to produce a concrete number, may have been the most difficult task of my professional life. It was difficult in part because of the known unknowns, and the unknown unknowns, and the challenge of deciding how to handle them. In some respects, we were flying blind.
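
In stylized form, and purely as an illustration (a generic textbook formulation, not the particular models on which we relied), a social cost of carbon for a ton emitted in year t aggregates the discounted stream of additional damages that the extra ton causes over time:

\[
\mathrm{SCC}_t \;=\; \sum_{s=t}^{T} \frac{D_s(\text{baseline} + 1\ \text{ton}) - D_s(\text{baseline})}{(1+r)^{\,s-t}},
\]

where D_s denotes monetized climate damages in year s, T is the modeling horizon, and r is the discount rate. Each ingredient (the damage function, the horizon, and the discount rate) is contestable, which is part of what made producing a single concrete number so hard.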

Dozens of people were involved; many of them were experts on science, economics, or both. They disagreed on fundamental issues. They disagreed vigorously about the magnitude of the harmful effects of greenhouse gas emissions. They disagreed about how much was known and how much was unknown. They disagreed about how to handle the possibility of catastrophe and whether to build in a large margin of error, which would produce a much higher number. We were able to reach agreement, but it took many months, and (to put it gently) not everyone who joined the agreement thought that the resulting number was the best choice.

My aim here is to connect some important questions in regulatory policy with some fundamental issues in decision theory. We have many illuminating treatments of regulatory policy, often focusing on social welfare, cost-benefit analysis, and distributive justice. We have a great deal of illuminating work in decision theory, focusing on risk and uncertainty, and also on how people actually handle challenging questions. Most of the time, those who focus on regulation do not engage decision theory, and vice versa. If we bring the two together, we should be able to make some progress in handling some of the most difficult problems of the current era, including those raised by pandemics, climate change, and others that we can only glimpse (and perhaps not even that).

My main goal is to explore how to think about averting catastrophe, understood as extreme downsides, making human life immeasurably worse. But we should also attend to the possibility of miracles, understood as extreme upsides, making human life immeasurably better. In reducing the risk of the former, we should try our best not to reduce the possibility of the latter.

Consider in this regard a passage from John Maynard Keynes, who lived through the Great Depression and World War II, who spent much of his young adulthood in same-sex relationships before he fell head over heels in love with a woman, and who knew a great deal about the unforeseeable:1

By “uncertain” knowledge, let me explain, I do not mean merely to distinguish what is known for certain from what is only probable. The game of roulette is not subject, in this sense, to uncertainty; nor is the prospect of a Victory bond being drawn. Or, again, the expectation of life is only slightly uncertain. Even the weather is only moderately uncertain. The sense in which I am using the term is that in which the prospect of a European war is uncertain, or the price of copper and the rate of interest twenty years hence, or the obsolescence of a new invention, or the position of private wealth-owners in the social system in 1970. About these matters there is no scientific basis on which to form any calculable probability whatever. We simply do not know.

Keynes’s central claim is that some of the time, we cannot assign probabilities to imaginable outcomes. “We simply do not know.” Keynes immediately added, however, that “the necessity for action and for decision compels us as practical men to do our best to overlook this awkward fact and to behave exactly as we should if we had behind us a good Benthamite calculation of a series of prospective advantages and disadvantages, each multiplied by its appropriate probability, waiting to be summed.”
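
The "Benthamite calculation" to which Keynes refers is simply an expected-value computation. As a minimal sketch, with invented numbers used only for illustration: a course of action with a 0.7 chance of a gain worth 100 and a 0.3 chance of a loss worth 200 would be scored

\[
\mathbb{E}[V] \;=\; \sum_i p_i v_i \;=\; 0.7 \times 100 \;+\; 0.3 \times (-200) \;=\; 10.
\]

Keynes's point is that in the situations he has in mind, the probabilities p_i are precisely what we lack.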

How on earth, he wondered, do we manage to do that? Keynes listed three techniques:

(1) We assume that the present is a much more serviceable guide to the future than a candid examination of past experience would show it to have been hitherto. In other words we largely ignore the prospect of future changes about the actual character of which we know nothing.

(2) We assume that the existing state of opinion as expressed in prices and the character of existing output is based on a correct summing up of future prospects, so that we can accept it as such unless and until something new and relevant comes into the picture.

(3) Knowing that our own individual judgment is worthless, we endeavor to fall back on the judgment of the rest of the world, which is perhaps better informed. That is, we endeavor to conform with the behavior of the majority or the average. The psychology of a society of individuals each of whom is endeavoring to copy the others leads to what we may strictly term a conventional judgment.

Keynes did not mean to celebrate those techniques. He thought that they were ridiculous. “All these pretty, polite techniques, made for a well-panelled Board Room and a nicely regulated market, are liable to collapse,” because “we know very little about the future.”

Keynes’s discussion describes a problem, and it is real. But I have four concerns about his brief, exquisitely written treatment. First, we often do know a lot about the future, at least for the purposes of policy and law. Second, we can learn more than we now know. Instead of making a stab in the dark, we might want to wait and learn. Third, it is too simple, often, to say that “we simply do not know.” There are some relevant things that we do know. Fourth, we need to know how to handle situations in which it is true, or close to true, that “we simply do not know.” It is not enough to disparage current techniques as pretty and polite. My main aim here is to put these concerns about Keynes’s discussion in contact with an enthusiastic endorsement of his claim that, in important situations, we know far too little to make good Benthamite calculations.

There are approaches that are not exactly pretty, but that qualify as polite. They can save humanity a lot of distress.
