
Before a model becomes complicated enough to be useful for planning, it becomes too complicated for anyone to understand.


Forest Service planners ran into this barrier when writing plans for the nation’s forests during the 1980s. Each national forest covers one to two million acres, ranging in many cases from lush wetlands to alpine or desert tundra. Various competing uses for these lands include timber cutting, livestock grazing, mining, oil and gas exploration, wildlife habitat, watershed, fisheries, and numerous (and often incompatible) forms of recreation. Each of these uses has positive or negative effects on many of the others.

The computer models planners developed vastly oversimplified the forests they were meant to represent. The computer program that the Forest Service used allowed planners to divide their forests into just a few hundred different kinds of land. This limit meant that such variables as steepness of slope, soil erodibility, and even land productivity were often left out of the models. The program also allowed planners to model relatively few resources: timber, along with a few species of wildlife or two or three kinds of recreation.

Despite these and other oversimplifications, the models were so complicated that few understood how they really worked or how to interpret the results. While most of the computer programmers who designed the models probably understood them, many other members of the planning team did not, nor did the agency officials who were basing their decisions on those models.

As just one example, most of the computer “runs” aimed to maximize the net economic return from the timber and other resources in the forest, with future values and costs discounted by 4 percent per year. At a 4 percent discount rate, $1 a hundred years from now is worth only about 2 cents today. While it is perfectly appropriate to use a discount rate, combining one with the Forest Service’s nondeclining-flow rule could produce bizarre results. Given the requirement that no less timber be cut in any future year than is cut in the first year, the computer would decide to cut all the valuable timber first, when it greatly contributed to net returns, while leaving the money-losing timber for the distant future, when the 4 percent discount rate would render the losses negligible.
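The arithmetic behind this incentive is easy to verify. The sketch below (a hypothetical two-stand illustration with made-up dollar figures, not taken from any actual forest plan) first discounts a dollar a century out and then compares the two cutting orders; because distant losses shrink to almost nothing, the schedule that cuts the valuable stand first always wins:

```python
# A minimal sketch (hypothetical numbers, not from any forest plan)
# of how a 4 percent discount rate interacts with a
# nondeclining-flow requirement.

def present_value(amount, years, rate=0.04):
    """Discount a future dollar amount back to today's value."""
    return amount / (1 + rate) ** years

# $1 received a hundred years from now is worth about 2 cents today.
print(round(present_value(1.00, 100), 4))  # -> 0.0198

# Two hypothetical stands of equal volume: one earns $1,000 when cut,
# the other loses $1,000. The nondeclining-flow rule fixes only the
# *volume* cut in each period, so the model is free to pick the order.
high_grading = present_value(1000, 0) + present_value(-1000, 100)
reverse_order = present_value(-1000, 0) + present_value(1000, 100)

print(round(high_grading, 2))   # -> 980.2: distant loss nearly vanishes
print(round(reverse_order, 2))  # -> -980.2: distant gain nearly vanishes
```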

This is not the way the Forest Service actually manages national forests, and any national forest manager who practiced such “high grading” would be severely criticized by his or her colleagues. Yet high grading was built into most forests’ computer models. This effectively allowed the models to schedule more timber cutting today than if they had been constrained to ensure that both timber volumes and net returns did not decline in the future. It is possible that forest officials understood this deception and accepted it to get more timber cutting. But it is also likely that many did not even realize that the computer runs were proposing cutting rates that were economically unsustainable.

Models simple enough for the decisionmakers to understand would be dangerously oversimplified. In fact, the models were already dangerously oversimplified, but making them account for more variables would render them so complicated that even the programmers would be unable to understand them. And if even the programmers cannot understand the models, no one will know when the model results make sense and when they are just gobbledygook.

The point of a model, of course, is to simplify reality so that it becomes easy to understand. But the simplifications necessary to make a forest, much less an urban area, understandable are so great that the model is no longer reliable. Such a simplified model might be useful for a city to project future tax revenues or to estimate the costs of providing schools, sewer, water, and other urban services. But urban planners want to go much further: they want to practically dictate how every single parcel should be used.

“Regional planning efforts should not stop short of creating detailed physical plans for the development and redevelopment of neighborhoods, especially for areas near transit stations,” urges architect Andres Duany. “Merely zoning for higher density in these locations is not enough…. The most effective plans are drawn with such precision that only the architectural detail is left to future designers.”14 Developers may get to decide what color to paint their buildings, but planners will decide exactly how each parcel will be used and the size and shape of each building on the parcel.

Despite the overwhelming scale of this task, with the appropriate simplifications the plan practically writes itself. Since planners imagine there are only a dozen or so different land uses, they can write off entire lifestyles, such as those of urban farmers or exurbanites, as being too messy or somehow immoral. Since planners imagine there are only a few dozen origins and destinations, they don’t have to plan for automobiles because a light-rail transit system should adequately serve everyone in the region.

“In urban planning,” warns Yale political scientist James Scott, “it is a short step from parsimonious assumptions to the practice of shaping the environment so that it satisfies the simplifications required by the formula.”15 In other words, planners who rely on oversimplified models are more likely to try to impose the model’s results on reality than to build more accurate models. As absurd as this sounds, some planning advocates actually endorse such a policy. “If economic reality is so complex that it can only be described by complicated mathematical models,” says planning guru Herman Daly, “then the reality should be simplified.”16 Under this ideal, planners should regulate choice and complexity out of existence and require everyone to adopt the lifestyle choices that planners think best.

