Damned Lies and Statistics - Joel Best

2

SOFT FACTS

Sources of Bad Statistics

A child advocate tells Congress that 3,000 children per year are lured with Internet messages and then kidnapped. Tobacco opponents attribute over 400,000 deaths per year to smoking. Antihunger activists say that 31 million Americans regularly “face hunger.” Although the press tends to present such statistics as facts, someone, somehow, had to produce these numbers. But how? Is there some law enforcement agency that keeps track of which kidnappings begin with online seductions? Are there medical authorities who decide which lung cancer deaths are caused by smoking, and which have other causes, such as breathing polluted air? Who counts Americans facing hunger—and what does “facing hunger” mean, anyway?

Chapter 1 argued that people produce statistics. Of course they do. All human knowledge—including statistics—is created through people’s actions; everything we know is shaped by our language, culture, and society. Sociologists call this the social construction of knowledge. Saying that knowledge is socially constructed does not mean that all we know is somehow fanciful, arbitrary, flawed, or wrong. For example, scientific knowledge can be remarkably accurate, so accurate that we may forget the people and social processes that produced it. I’m writing this chapter on a computer that represents the accumulation of centuries of scientific knowledge. Designing and building this computer required that people come to understand principles of physics, chemistry, electrical engineering, computer science—who knows what else? The development of that knowledge was a social process, yet the fact that the computer works reliably reflects the great confidence we have in the knowledge that went into building it.

This is one way to think about facts. Knowledge is factual when evidence supports it and we have great confidence in its accuracy. What we call “hard fact” is information supported by strong, convincing evidence; this means evidence that, so far as we know, we cannot deny, however we examine or test it. Facts always can be questioned, but they hold up under questioning. How did people come by this information? How did they interpret it? Are other interpretations possible? The more satisfactory the answers to such questions, the “harder” the facts.

Our knowledge about society tends to be “softer” than our knowledge of the physical world. Physicists have far more confidence in their measurements of the atomic weight of mercury than sociologists have in their descriptions of public attitudes toward abortion. This is because there are well-established, generally agreed-upon procedures for measuring atomic weights and because such measurements consistently produce the same results. In contrast, there is less agreement among social scientists about how best to measure—or even how to define—public opinion.

Although we sometimes treat social statistics as straightforward, hard facts, we ought to ask how those numbers are created. Remember that people promoting social problems want to persuade others, and they use statistics to make their claims more persuasive. Often, the ways people produce statistics are flawed: their numbers may be little more than guesses; or the figures may be a product of poor definitions, flawed measurements, or weak sampling. These are the four basic ways to create bad social statistics.

GUESSING

Activists hoping to draw attention to a new social problem often find that there are no good statistics available.* When a troublesome social condition has been ignored, there usually are no accurate records about the condition to serve as the basis for good statistics. Therefore, when reporters ask activists for facts and figures (“Exactly how big is this problem?”), the activists cannot produce official, authoritative numbers.

What activists do have is their own sense that the problem is widespread and getting worse. After all, they believe it is an important problem, and they spend much of their time learning more about it and talking to other people who share their concerns. A hothouse atmosphere develops in which everyone agrees this is a big, important problem. People tell one another stories about the problem and, if no one has been keeping careful records, activists soon realize that many cases of the problem—maybe the vast majority—go unreported and leave no records.

Criminologists use the expression “the dark figure” to refer to the proportion of crimes that don’t appear in crime statistics.1 In theory, citizens report crimes to the police, the police keep records of those reports, and those records become the basis for calculating crime rates. But some crimes are not reported (because people are too afraid or too busy to call the police, or because they doubt the police will be able to do anything useful), and the police may not keep records of all the reports they receive, so the crime rate inevitably underestimates the actual amount of crime. The difference between the number of officially recorded crimes and the true number of crimes is the dark figure.
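The bookkeeping behind the dark figure can be sketched with made-up numbers; the rates below are purely illustrative assumptions, not figures from the text:

```python
# Illustrative only: hypothetical rates showing how the "dark figure"
# arises when some crimes go unreported and some reports go unrecorded.
actual_crimes = 1000    # true number of crimes (unknowable in practice)
report_rate = 0.60      # assumed fraction of crimes citizens report to police
recording_rate = 0.90   # assumed fraction of reports police actually record

recorded = actual_crimes * report_rate * recording_rate  # official crime count
dark_figure = actual_crimes - recorded                   # crimes missing from statistics

print(f"Official crime count: {recorded:.0f}")
print(f"Dark figure: {dark_figure:.0f}")
```

Under these assumed rates, nearly half the crime never shows up in the official count, which is why the recorded rate inevitably understates the true amount.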

Every social problem has a dark figure because some instances (of crime, child abuse, poverty, or whatever) inevitably go unrecorded. How big is the dark figure? When we first learn about a problem that has never before received attention, when no one has any idea how common the problem actually is, we might think of the dark figure as being the entire problem. In other cases where recordkeeping is very thorough, the dark figure may be relatively small (for example, criminologists believe that the vast majority of homicides are recorded, simply because dead bodies usually come to police attention).

So, when reporters or officials ask activists about the size of a newly created social problem, the activists usually have to guess about the problem’s dark figure. They offer estimates, educated guesses, guesstimates, ballpark figures, or stabs in the dark. When Nightline’s Ted Koppel asked Mitch Snyder, a leading activist for the homeless in the early 1980s, for the source of the estimate that there were two to three million homeless persons, Snyder explained: “Everybody demanded it. Everybody said we want a number…. We got on the phone, we made a lot of calls, we talked to a lot of people, and we said, ‘Okay, here are some numbers.’ They have no meaning, no value.”2 Because activists sincerely believe that the new problem is big and important, and because they suspect that there is a very large dark figure of unreported or unrecorded cases, the activists’ estimates tend to be high, to err on the side of exaggeration. Their guesses are far more likely to overestimate than underestimate a problem’s size. (Activists also favor round numbers. It is remarkable how often their estimates peg the frequency of some social problem at one [or two or more] million cases per year.3)

Being little more than guesses—and probably guesses that are too high—usually will not discredit activists’ estimates. After all, the media ask activists for estimates precisely because they can’t find more accurate statistics. Reporters want to report facts, activists’ numbers look like facts, and it may be difficult, even impossible to find other numbers, so the media tend to report the activists’ figures. (Scott Adams, the cartoonist who draws Dilbert, explains the process: “Reporters are faced with the daily choice of painstakingly researching stories or writing whatever people tell them. Both approaches pay the same.”4)

Once a number appears in one news report, that report is a potential source for everyone who becomes interested in the social problem; officials, experts, activists, and other reporters routinely repeat figures that appear in press reports. The number takes on a life of its own, and it goes through “number laundering.”5 Its origins as someone’s best guess are now forgotten and, through repetition, it comes to be treated as a straightforward fact—accurate and authoritative. Soon the trail becomes muddy. People lose track of the estimate’s original source, but they assume the number must be correct because it appears everywhere—in news reports, politicians’ speeches, articles in scholarly journals and law reviews, and so on. Over time, as people repeat the number, they may begin to change its meaning, to embellish the statistic.

Consider early estimates for the crime of stalking.6 Concern about stalking spread very rapidly in the early 1990s; the media publicized the problem, and most state legislatures passed anti-stalking laws. At that time, no official agencies were keeping track of stalking cases, and no studies of the extent of stalking had been done, so there was no way anyone could know how often stalking occurred. After a newsmagazine story reported “researchers suggest that up to 200,000 people exhibit a stalker’s traits,”7 other news reports picked up the “suggested” figure and confidently repeated that there were 200,000 people being stalked. Soon, the media began to improve the statistic. The host of a television talk show declared, “There are an estimated 200,000 stalkers in the United States, and those are only the ones that we have track of.”8 An article in Cosmopolitan warned: “Some two hundred thousand people in the U.S. pursue the famous. No one knows how many people stalk the rest of us, but the figure is probably higher.”9 Thus, the original guess became a foundation for other, even bigger guesses (chapter 3 explores how repeating statistics often alters their meaning).10

People who create or repeat a statistic often feel they have a stake in defending the number. When someone disputes an estimate and offers a very different (often lower) figure, people may rush to defend the original estimate and attack the new number and anyone who dares to use it. For example, after activists estimated that there were three million homeless in the early 1980s and the Reagan administration countered that the actual number was closer to 300,000, the activists argued that the administration’s figures could not be trusted: after all, the administration was committed to reducing expenditures on social programs and could be expected to minimize the need for additional social services.11 Various social scientists set out to measure the size of the homeless population. When their findings confirmed that the 300,000 figure was more reasonable, the social scientists came under attack from activists who charged that the research had to be flawed, that the researchers’ sympathies must have been with the administration, not the homeless.12 In general, the press continued reporting the large estimates. After all, activists and reporters knew that the actual number of homeless persons was much higher—didn’t everyone agree that three million was the correct figure? This example suggests that any estimate can be defended by challenging the motives of anyone who disputes the figure.

In addition, the dark figure often plays a prominent part in defending guesses. There are always some hidden, unnoticed, uncounted cases and, because they are uncounted, we cannot know just how many there are. Arguing that the dark figure is large, perhaps very large (“The cases we know about are just the tip of the iceberg!”), makes any estimate seem possible, even reasonable. We know that some victims do not report rapes, but what proportion of rapes goes unreported? Is it two in three? Surveys that ask people whether they’ve been victimized by a crime and, if so, whether they reported the crime to the police, find that about two-thirds of all rapes go unreported.13 But surely these surveys are imperfect; some rape victims undoubtedly refuse to tell the interviewer they’ve been victimized, so there still must be a dark figure. Some antirape activists argue that the dark figure of unreported rapes is very large, that only one rape in ten gets reported (this would mean that, for every two victims who fail to report their attacks to the police but tell an interviewer about the crimes, seven others refuse to confide in the interviewer).14 Such arguments make an impassioned defense of any guess possible.
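The arithmetic in the parenthetical above can be checked directly. Taking one police-reported rape as the unit of comparison (the ratios are the ones given in the text):

```python
# Checking the author's arithmetic on the 1-in-10 reporting claim.
reported = 1                  # take one police-reported rape as the unit

# Victimization surveys: about two-thirds of rapes go unreported,
# i.e. roughly 2 unreported-but-disclosed-to-an-interviewer cases
# per 1 reported case.
unreported_disclosed = 2

# Activists' claim: only 1 rape in 10 is reported,
# i.e. 9 unreported cases per 1 reported case.
unreported_if_claim_true = 9

# For the claim to hold, the remaining unreported cases must be victims
# who confide in neither the police nor the survey interviewer.
hidden_from_survey = unreported_if_claim_true - unreported_disclosed
print(hidden_from_survey)  # 7, matching the author's figure
```

That is, the 1-in-10 claim requires seven hidden victims for every two who disclose only to an interviewer, which is the implausibly large dark figure the author is questioning.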

Activists are by no means the only people who make statistical guesses. It is difficult to count users of illicit drugs (who of course try to conceal their drug use), but government agencies charged with enforcing drug laws face demands for such statistics. Many of the numbers they present—estimates for the number of addicts, the amounts addicts steal, the volume of illicit drugs produced in different countries, and so on—cannot bear close inspection. They are basically guesses and, because having a big drug problem makes the agencies’ work seem more important, the officials’ guesses tend to exaggerate the problem’s size.15 It makes little difference whether those promoting social problems are activists or officials: when it is difficult to measure a social problem accurately, guessing offers a solution; and there usually are advantages to guessing high.

There is nothing terribly wrong with guessing what the size of a social problem might be. Often we can’t know the true extent of a problem. Making an educated guess—and making it clear that it’s just someone’s best guess—gives us a starting point. The real trouble begins when people begin treating the guess as a fact, repeating the figure, forgetting how it came into being, embellishing it, developing an emotional stake in its promotion and survival, and attacking those who dare to question what was, remember, originally just someone’s best guess. Unfortunately, this process occurs all too often when social problems first come to public attention, because at that stage, a guess may be all anyone has got.

DEFINING

Any attempt to talk about a social problem has to involve some sort of definition, some answer to the question: “What is the nature of this problem?” The definition can be—and often is—vague; sometimes it is little more than an example. For instance, a television news story may tell us about a particular child who was beaten to death, and then say, “This is an example of child abuse.” The example takes the place of a precise definition of the problem. One difficulty with this practice is that media coverage usually features dramatic, especially disturbing examples because they make the story more compelling. Using the worst case to characterize a social problem encourages us to view that case as typical and to think about the problem in extreme terms. This distorts our understanding of the problem. Relatively few cases of child abuse involve fatal beatings; comparatively mundane cases of neglect are far more common. But defining child abuse through examples of fatal beatings can shape how we think about the problem, and child-protection policies designed to prevent fatalities may not be the best way to protect children from neglect. Whenever examples substitute for definitions, there is a risk that our understanding of the problem will be distorted.

Of course, not all definitions of social problems depend on dramatic examples. People promoting social problems sometimes do offer definitions. When they do so, they tend to prefer general, broad, inclusive definitions. Broad definitions encompass more cases—and more kinds of cases. Suppose we want to define sexual violence. Certainly our definition should include rapes. But what about attempted rapes—should they be included? Does being groped or fondled count? What about seeing a stranger briefly expose himself? A narrow definition—say, “sexual violence is forcible sexual contact involving penetration”—will include far fewer cases than a broad definition—for example, “sexual violence is any uninvited sexual action.”16 This has obvious implications for social statistics because broad definitions support much larger estimates of a problem’s size.*

No definition of a social problem is perfect, but there are two principal ways such definitions can be flawed. On the one hand, we may worry that a definition is too broad, that it encompasses more than it ought to include. That is, broad definitions identify some cases as part of the problem that we might think ought not to be included; statisticians call such cases false positives (that is, they mistakenly identify cases as part of the problem). On the other hand, a definition that is too narrow excludes cases that we might think ought to be included; these are false negatives (incorrectly identified as not being part of the problem).17

In general, activists trying to create a new social problem view false negatives as more troubling than false positives. Remember that activists often feel frustrated because they want to get people concerned about some social condition that has been ignored. The general failure to recognize and acknowledge that something is wrong is part of what the activists want to correct; therefore, they may be especially careful not to make things worse themselves by defining the problem too narrowly. A definition that is too narrow fails to recognize the problem’s full extent; in doing so, it continues ignoring at least a part of the harm and suffering that ought to be recognized. Thus, activists might point to an example of a woman traumatized by a flasher exposing himself, and then argue that the definition of sexual violence needs to be broad enough to acknowledge the harm suffered by that woman. Activists sometimes favor definitions broad enough to encompass every case that ought to be included; that is, they promote broad definitions in hopes of eliminating all false negatives. Remember, too, that broad definitions make it easier to justify the big numbers advocates prefer.

However, broad definitions invite criticism. Not everyone finds it helpful to lump rape and flashing into a single category of sexual violence. Such broad definitions obscure important differences within the category: rape and flashing both may be unwanted, but classifying them together may imply they are equally serious. Worse, broad definitions encompass cases that not everyone considers instances of social problems; that is, while they minimize false negatives, they do so at the cost of maximizing cases that critics may see as false positives. Consider the long-running debate over the definition of pornography.18 What ought to be considered pornographic? Presumably hard-core videos of people having sex are included in virtually all definitions. But is Playboy pornographic? What about nude sculptures, or the annual Sports Illustrated swimsuit issue? Some antipornography activists may favor a very broad, inclusive definition, while their critics may argue that such definitions are too broad (“That’s not pornography!”).

Clearly, the definition of a social problem will affect statistics about that problem. The broader the definition, the easier it is to justify large estimates for a problem’s extent. When someone announces that millions of Americans are illiterate, it is important to ask how that announcement defines illiteracy.19 Some might assume that illiteracy means that a person cannot read or write at all, but the speaker may be referring to “functional illiteracy” (that is, the inability to read a newspaper or a map or to fill out a job application or an income tax form). Does illiterate mean not reading at all? Not reading at the third-grade level? Not reading at the sixth-grade level? Defining illiteracy narrowly (as being unable to read at all) will include far fewer people and therefore produce far lower statistical estimates than a broad definition (being unable to read at the sixth-grade level).

Often, definitions include multiple elements, each of which can serve to make the definition broader or narrower. Consider homelessness again. What should a definition of homelessness encompass? Should it include the cause of homelessness? If a tornado destroys a neighborhood and the residents have to be housed in temporary emergency shelters, are they homeless, or should we count only people whose poverty makes them homeless? What about the length of time spent homeless? Does someone who spends a single night on the streets count, or should the label “homeless” be restricted to those who spend several (and if so, how many?) nights on the streets? Each element in the definition makes a difference. If we’re counting homeless persons, and we count only those whose poverty made them homeless, we’ll find fewer than if we include disaster victims. If we count those who were without a home for thirty days in the last year, we will find fewer homeless people than if our standard is only ten days, and using ten as a standard will produce a lower number than if we agree that even a single night on the streets qualifies someone to be considered homeless.
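The way each definitional element changes the count can be made concrete with a toy example. The records below are entirely hypothetical, invented just to show the mechanism:

```python
# Hypothetical records: nights each person spent on the streets last year,
# and whether a disaster (rather than poverty) caused the homelessness.
people = [
    {"nights": 1,   "disaster": False},
    {"nights": 12,  "disaster": False},
    {"nights": 45,  "disaster": True},
    {"nights": 200, "disaster": False},
]

def count_homeless(min_nights, include_disaster_victims):
    """Count people who meet one possible definition of homelessness."""
    return sum(
        1 for p in people
        if p["nights"] >= min_nights
        and (include_disaster_victims or not p["disaster"])
    )

# Broader definitions yield bigger counts:
print(count_homeless(30, False))  # strict: 30+ nights, poverty only -> 1
print(count_homeless(10, True))   # looser: 10+ nights, any cause   -> 3
print(count_homeless(1, True))    # broadest: a single night counts -> 4
```

The same four people yield counts of one, three, or four depending solely on which definition is applied, which is the author's point about definitions driving statistics.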

In fact, some advocates for the homeless argue that definitions based on these elements are far too narrow, and they offer even broader definitions.20 They suggest that people who stay in the homes of friends or relatives—but who have no homes of their own—ought to be counted as homeless. Under this definition, an impoverished mother and child who never spend a night on the streets or in a shelter but who “double up” and live with relatives or another poor family ought to be counted as homeless. Obviously, using this broader standard to count cases will produce higher numbers than definitions that restrict homelessness to those living on the streets. Still other advocates argue that people whose housing is inadequate or insufficient also ought to be counted as homeless. This still broader definition will lead to even larger numbers. Calculating the number of homeless people (or illiterate people or acts of sexual violence) inevitably reflects our definitions.

In other words, statistics about social problems always depend on how we define the problem. The broader the definition, the bigger the statistic. And, because people promoting social problems favor big numbers (because they make the problem seem bigger and more important), we can expect that they will favor broad definitions. Often, advocates justify broad definitions by emphasizing the importance of being inclusive. People who spend a single night on the streets (or who have to stay with friends, or who live in substandard housing) also suffer. Who are we to decide that their suffering shouldn’t count? Clearly, advocates argue, these people deserve to be included when we speak of “homelessness.”

There are, then, two questions about definitions that ought to be asked whenever we encounter statistics about social problems. First, how is the problem defined? It is all too easy to gloss over definitions, to assume that everybody knows what it means to be homeless or illiterate or whatever. But the specifics of definitions make a difference, and we need to know what they are. Second, is the definition reasonable? No definition is perfect. Definitions that are too narrow exclude false negatives (cases that ought to be included), while definitions that are too broad include false positives (cases that ought to be excluded). It is difficult to have a sensible discussion about a social problem if we can’t define the problem in a way that we can agree is reasonable. But even if we cannot agree, we can at least recognize the differences in—and the limitations of—our definitions.

MEASURING

Any statistic based on more than a guess requires some sort of counting. Definitions specify what will be counted. Measuring involves deciding how to go about counting. We cannot begin counting until we decide how we will identify and count instances of a social problem.*
