Introduction
A Legacy of Neglect
In August 2001, the Court of Appeals of Maryland, that state’s highest court, handed down a strongly worded, even shocking opinion in what has become one of the most contentious battles in the history of public health, a battle that goes to the heart of beliefs about what constitutes public health and what our responsibility to others should be. The court had been asked to decide whether or not researchers at Johns Hopkins University, among the nation’s most prestigious academic institutions, had engaged in unethical research on children. The case pitted two African American children and their families against the Kennedy Krieger Institute (KKI), Johns Hopkins’s premier children’s clinic and research center, which in the 1990s had conducted a six-year study of children who were exposed by the researchers to differing amounts of lead in their homes.
Organized by two of the nation’s top lead researchers and children’s advocates, J. Julian Chisolm and Mark Farfel, the KKI project was designed to find a relatively inexpensive, effective method for reducing—though not eliminating—the amount of lead in children’s homes and thereby reducing the devastating effect of lead exposure on children’s brains and, ultimately, on their life chances. For the study, the Johns Hopkins researchers had recruited 108 families of single mothers with young children to live in houses with differing levels of lead exposure, ranging from none to levels just within Baltimore’s existing legal limit, and then measured the extent of lead in the children’s blood at periodic intervals. By matching the expense of varying levels of lead paint abatement with changing levels of lead found in the blood, the researchers hoped to find the most cost-effective means of reducing childhood exposure to the toxin. Completely removing lead paint from the homes, Chisolm and Farfel recognized, would be ideal for children’s health; but they believed, with some justification, that a legal requirement to do so would be considered far too costly in such politically conservative times and would likely result in landlord abandonment of housing in the city’s more poverty-stricken districts.
Despite the intentions of KKI researchers to benefit children, the court of appeals found that KKI had engaged in highly suspect research that had direct parallels with some of the most infamous incidents of abuse of vulnerable populations in the twentieth century. The KKI project, the court argued, differed from but presented “similar problems as those in the Tuskegee Syphilis Study, . . . the intentional exposure of soldiers to radiation in the 1940s and 50s, the test involving the exposure of Navajo miners to radiation . . . and the secret administration of LSD to soldiers by the CIA and the army in the 1950s and 60s.” The research defied many aspects of the Nuremberg Code, the court said, and included aspects that were similar to Nazi experimentation on humans in the concentration camps and the “notorious use of ‘plague bombs’ by the Japanese military in World War II where entire villages were infected in order for the results to be ‘studied.’”1 More specifically, the court was appalled that many of the children selected for the study were recruited to live in homes where the researchers knew they would be exposed to lead and thus knowingly placed in harm’s way. Children, the court argued, “are not in our society the equivalent of rats, hamsters, monkeys and the like.”2 The court was deeply troubled that a major university would conduct research that might permanently damage children, given what was already known about the effects of lead.
How could two public health researchers who had devoted their scientific lives to alleviating one of the oldest and most devastating neurological conditions affecting children be likened to Nazis? Was this just a “rogue court,” an out-of-control panel of judges, as many in the public health community would argue? These were the questions that initially drew our attention. We soon became aware, however, of the much more complex and troubling story underlying the case, about not just the KKI research but also the public health profession, the nation’s dedication to the health of its citizens in the new millennium, and the conundrum that we as a society face when confronting revelations about a host of new environmental threats in the midst of a conservative political culture. In its ubiquity and harm, lead is an exemplary instance of these threats. Yet there are many others we encounter in everyday life that entail similar issues, from mercury in fish and emitted by power plants to cadmium, certain flame retardants, and bisphenol A, the widely distributed plastics additive that has been identified as a threat to children.3
For much of its history, the public health field provided the vision and technical expertise for remedying the conditions—both biological and social—that created environments conducive to harm and within which disease could spread. And throughout much of the profession’s history, public health leaders have joined with reformers, radicals, and other social activists to find ways within the existing political and economic structures to prevent diseases. Although the medical profession has often been given credit for the vast improvements in Americans’ health and life span, the nineteenth- and early-twentieth-century public health reformers who pushed for housing reforms, mass vaccination campaigns, clean water and sewage systems, and pure food laws in fact played a major role in improving children’s health, lowering infant mortality, and limiting the impact of viral and bacterial diseases such as cholera, typhoid, diphtheria, smallpox, tuberculosis, measles, and whooping cough. In the opening years of the twentieth century, for example, Chicago’s public health department joined with Jane Addams and social reformers at Hull House to successfully advocate for new housing codes that, by reducing overcrowding and assuring fresh air in every room, led to reduced rates of tuberculosis. And New York’s Commissioner of Health Hermann Biggs worked with Lillian Wald and other settlement house leaders to initiate nursing services for the poor, pure milk campaigns, vaccination programs, and well-baby clinics that dramatically reduced childhood mortality. Biggs, Addams, and other Progressives worked from a firm conviction that as citizens we have a collective responsibility to maintain conditions conducive to every person’s health and well-being.
These broad public health campaigns to control infectious diseases yielded great victories from the 1890s through the 1930s. But with the first decades of the twentieth century, a different view of the profession began to gain ascendancy, redefining the mission of public health in ways that belied its role as an agent of social reform. In 1916 Hibbert Hill, a leading advocate of this new direction, put it this way: “The old public health was concerned with the environment; the new is concerned with the individual. The old sought the sources of infectious disease in the surroundings of man; the new finds them in man himself. The old public health . . . failed because it sought [the sources] . . . in every place and in every thing where they were not.”4 In this view, the idea was for the fast-growing science of biological medicine to concentrate on treating disease person by person rather than on eradicating conditions that facilitated disease and its spread, in some cases encouraging reforms in behavior to reduce individual exposure to harm. Hence, like numerous other fields in the early decades of the century, public health became professionalized, imbuing itself with the aura of science and setting itself off as possessing special expertise.
By the middle decades of the twentieth century, public health officials thus typically conceived of their field mainly as a laboratory-based scientific enterprise, and many public health professionals saw their work as a technocratic and scientific effort to control the agents that imperiled the public’s health individual by individual.5 We can see this shift in perspective in treating tuberculosis, for example. An infectious disease that terrified the American public in the eighteenth and nineteenth centuries, tuberculosis had begun to decline as a serious threat by the early twentieth century, mainly because of housing reforms, improvements in nutritional standards, and general environmental sanitation. By midcentury, public health officials tended to downplay such environmental conditions and came to rely instead on the armamentarium of new antibiotic therapies to address the relatively small number of tuberculosis victims. The history of responding to industrial accidents and disease offers another example. In the early years of the twentieth century, reformers such as Crystal Eastman addressed the plague of industrial accidents and disease in the steel and coal towns of Pennsylvania by advocating for higher wages, shorter hours, and better working conditions through unionization. By the 1950s, industrial disease and accidents had largely faded from public health view—ironically, in part because the earlier reform efforts had led to protective legislation—and it was left to company physicians to treat individual workers. This turn toward technological and individualistic solutions to problems that had once been defined as societal was by midcentury part of a general shift in American culture away from divisive class politics and toward a faith in ostensibly class-neutral science, technology, and industrial prowess as the best way to address social or public-health-related problems.
Since the early twentieth century, a tension has existed within the public health field—which mirrors a societal one—between, on the one hand, those who set their sights on prevention of disease and conditions dangerous to health through society-wide efforts and, on the other, those who believe in the more modest and pragmatic goal of ameliorating conditions through piecemeal reforms, personal education, and individual treatments. Despite the tremendous successes of environmental and political efforts to stem epidemics and lower mortality from infectious diseases, the credit for these improvements went to physicians (and the potent drugs they sometimes had at hand), whose role was to treat individuals. This shift also coincidentally, or not so coincidentally, undermined a public health logic that was potentially disruptive to existing social and power relationships between landlord and tenant, worker and industrialist, and poor immigrants and political leaders.
At elite universities around the country—from Harvard, Yale, and Columbia to Johns Hopkins and Tulane—new schools of public health were established in the first two decades of the twentieth century with funds from the Rockefeller and Carnegie Foundations. Educators at these new schools had faith that science and technology could ameliorate the public health threats that fed broader social conflicts. They envisioned a politically neutral technological and scientific field removed from the politics of reform. The Johns Hopkins School of Hygiene and Public Health was at the center of this movement. William Welch, the school’s founder and first director (as well as the first dean of the university’s medical school), argued persuasively that bacteriology and the laboratory sciences held the key to the future of the field.6 By the mid-twentieth century, municipal public health officials in most cities had adopted this approach. If early in the century public health workers in alliance with social reformers succeeded in getting legislation passed to control child labor and the dangers to health that accompanied it, and to protect women from having to work with such dangerous chemicals as phosphorus and lead, by midcentury departments of health worked more often to reduce exposures of workers to “acceptable” levels that would limit damage rather than eliminate it. Similarly, by the 1970s departments of health had established clinics aimed at treating the person with tuberculosis but displayed little interest in joining with reformers to tear down slums and build better houses for at-risk low-income people.7
By the 1950s and 1960s, when childhood lead poisoning emerged as a major national issue, public health practitioners were divided between those who defined their roles as identifying victims and treating symptoms and those who in addition sought alliances with social activists to prevent poisoning through housing reforms that would require lead removal. Drawing on the social movements of the 1960s, health professionals joined with antipoverty groups, civil rights organizations, environmentalists, and antiwar activists to struggle for access to health facilities for African Americans in the South and in underserved urban areas, for Chicanos active in the United Farm Workers’ strikes in the grape-growing areas of California and the West, for Native Americans on reservations throughout the country, and for soldiers returning from Vietnam suffering from post-traumatic stress disorders, among others. By the end of the twentieth century, though, the effort to eliminate childhood lead poisoning through improving urban infrastructure had largely been abandoned in favor of reducing exposures.
CHILDHOOD LEAD POISONING: PUBLIC HEALTH TRIUMPH OR TRAGEDY?
The campaign to halt childhood lead poisoning is often told as one of the great public health victories, like the efforts to eliminate diphtheria, polio, and other childhood scourges. After all, with the removal of lead from gasoline, blood lead levels of American children between the ages of one and five years declined precipitously from 15 micrograms per deciliter (μg/dl) in 1976–80 to 2.7 μg/dl by 1991–94,8 and levels have continued to drop. Today, the median blood lead level among children aged one–five years is 1.4 μg/dl, and 95 percent of children in this age group have levels below 4.1 μg/dl. Viewed from a broader perspective, however, the story is more complicated, and disturbing, and may constitute what Bruce Lanphear, a leading lead researcher, calls “a pyrrhic victory.”9 If 95 percent of American children have blood lead levels below what is today considered the danger level for lead, then 5 percent—a half million children—still have dangerous amounts of lead in their bodies. A century of knowledge about the harmful effects of lead in the environment and the success of efforts to eliminate some of its sources have not staunched the flood of this toxic material that is polluting our children, our local environments, and our planet.
FIGURE 1. Rates of lead poisoning, 2003. These rates are based on the CDC’s 2003 level of concern (10 µg/dl). In 2012, the CDC lowered that to 5 µg/dl, increasing the number of at-risk children from approximately 250,000 to nearly half a million. Source: Environmental Health Watch, available at www.gcbl.org/system/files/images/lead_rates_national.jpg.
Today, despite broad understanding of the toxicity of this material, the world mines more lead and uses it in a wider variety of products than ever before. Our handheld electronic devices, the sheathing in our computers, and the batteries in our motor vehicles, even in new “green” cars such as the Prius, depend on it. While in the United States the new uses of lead are to a certain degree circumscribed, the disposal of all our electronic devices and the production of lead-bearing materials through mining, smelting, and manufacture in many countries continue to poison communities around the world. Industrial societies in the West may have significantly reduced the levels of new lead contamination, but the horror of lead poisoning here is hardly behind us, exposure coming from lead paint in hundreds of thousands of homes, from airborne particles released by smelters and other sources and from contaminated soil, from lead solder and pipes in city water systems, and from some imported toys and trinkets. Over time, millions of children have been poisoned.
In the past, untold numbers of children suffered noticeably from irritability, loss of appetite, awkward gait, abdominal pain, and vomiting; many went into convulsions and comas, often leading to death. The level of exposure that results in such symptoms still occurs in some places. But today new concerns have arisen as researchers have found repeatedly that what a decade earlier was thought to be a “safe” level of lead in children’s bodies turned out itself to cause life-altering neurological and physiological damage. Even by the federal standard in place at the beginning of 2012 (10 μg/dl), more than a quarter of a million American children were victims of lead poisoning, a condition that almost a century ago was already recognized, with some accuracy, as totally preventable. Later in 2012, the Centers for Disease Control (CDC) lowered the level of concern to 5 μg/dl, nearly doubling estimates of the number of possible victims.
The ongoing tragedy of lead poisoning rarely provokes the outrage one might expect, however. If this were meningitis, or even an outbreak of measles, lead poisoning would be the focus of concerted national action. In the 1950s, fewer than sixty thousand new cases of polio per year created a near panic among American parents and a national mobilization of vaccination campaigns that virtually wiped out the disease within a decade. At no point in the past hundred years has there been a similar national mobilization over lead despite its ubiquity and the havoc it can wreak.
For much of the twentieth century we have no systematic records telling us the number of children whose lives have been destroyed by lead. What we have known, as one researcher put it in the 1920s, is that “a child lives in a lead world.”10 By the 1920s, virtually every item a toddler touched had some amount of lead in or on it. Leaded toy soldiers and dolls, painted toys, beanbags, baseballs, fishing lures, and other equipment that were part of the new consumer economy of the time; the porcelain, pipes, and joints in the sparkling new kitchens and bathrooms of the expanding housing stock—all were made of or contained large amounts of lead. But more ominously and disastrously, lead became part of the air Americans breathed when, in 1923, lead was introduced into gasoline to give cars more power. With the dramatic growth of this vast industry, every American child, parent, and neighbor began to systematically incorporate into their bodies a toxic heavy metal that was already known to be poisoning workers in the United States and elsewhere.
For centuries, and particularly with the Industrial Revolution, lead had been causing workers in foundries, smelters, paint factories, and other industries to suffer severe, sometimes fatal neurological damage. By the 1920s, children as well were facing a special threat from the very rooms they lived in every day. Paint, the seemingly innocuous wall covering that replaced wallpaper as the most desirable room decoration in the early twentieth century, contained huge amounts of this deadly material. Up to 70 percent of a can of paint in the first half of the century was composed of lead pigments. Such paint was aggressively marketed as the covering of choice to millions of young families through jingles, advertisements, and even paint books for children, who were told, for example, “This famous Dutch Boy Lead of mine can make this playroom fairly shine.”11
The vast expansion of America’s cities fueled growing use of lead paint as a convenience of modern American life, which also fostered competition among various paint manufacturers eager to gain new sales and capitalize on potential profits. The paint companies could have manufactured their products without lead—zinc-based paints (promoted as safe and nontoxic because they were “lead free”) had been on the market as early as 1900, and by the 1930s titanium pigments were available as well. Instead, the lead industry chose to run massive marketing and promotion campaigns all through the first half of the twentieth century despite its knowledge that lead paint was causing children to go into comas, suffer convulsions, and die.12 The result was a major public health disaster. By the middle decades of the century, millions of children were suffering the effects of acute or chronic lead exposure, and tens of thousands of children had died.13 Lead was by then certainly among the most prevalent, if not the most well-known, of the threats to children.
The Kennedy Krieger Institute had been identifying such lead-poisoned children and treating them for over a half century when researchers there embarked in 1991 on what became its controversial study.14 Baltimore, and Johns Hopkins University in particular, had been at the center of work on childhood lead poisoning even longer, for almost a century.15 Over this period, the city had made some of the more innovative attempts to address what has proven to be one of the nation’s most intractable environmental problems. In the 1930s Baltimore’s health commissioner identified lead paint as a major source of injury to children, and since the 1950s Johns Hopkins had numbered among its faculty the foremost lead researchers in the nation, including Julian Chisolm, perhaps the preeminent university lead researcher of the middle decades of the twentieth century—and the co-principal investigator for the KKI study. The irony of this history is unmistakable. Here was the premier center for the study of lead poisoning, located in the virtual heart of the country’s lead poisoning epidemic, at the eye of a storm over whether or not children were being used, as the Maryland Court of Appeals ultimately opined, as “canaries in the mines”16 and “guinea pigs.”17
Fairly typical of the thousands of children that Johns Hopkins had sought to help before the 1960s, when children living in poorly maintained slum housing suffered from convulsions after ingesting lead paint, was John T., aged nine months, who was brought by his distraught parents to the Harriet Lane Home of the Johns Hopkins Hospital in February 1940.18 John was a well-nourished, playful, and cooperative child, with no history of developmental problems, according to the admitting nurse. John’s father was well educated, having spent three years at a theological seminary. He had also trained as an entertainer, but a back injury had incapacitated him and he had gone on relief, receiving fifty-six dollars a month to support his family. Despite their meager income, their home was well maintained and “quite attractively furnished,” the visiting social service worker reported.19
John was admitted to the hospital because he had developed an ear infection, but that was easily treated; his appetite was good and he slept well. The medical record indicated that John had been breast-fed for six of his first nine months. He was soon eating cereals; started on spinach, string beans, and other vegetables; and given soft-boiled eggs. In his short life he had never suffered from nausea or vomiting. The record reported a happy, healthy infant from a good home who “held up his head at three months; sat up at six months; had first teeth at seven months,” and at month nine was able to walk around holding on to objects and walls. “This baby behaves well,” said the record. “No problem. Keeps himself entertained all day.” He had suffered an attack of chicken pox but had recovered well. In a matter of two days his ear infection seemed to clear up and he was sent back home to 1023 North Caroline Street, a three-story row house just north of the medical center, to rejoin his four other siblings, who ranged in age from two to six years. This healthy child had everything to look forward to.
In the ensuing months John returned periodically to the clinic for treatment of his chronic earaches, but by the time he was two years old he had developed symptoms that were not at all routine. In May 1941 his parents rushed him to the hospital. A few hours earlier, he had “bent over to the left and couldn’t straighten up,” they told the admitting nurse, and since that time he had “been acting ‘crazy-like.’” John had been “eating plaster,” they said, and the previous day he had eaten “some paint.” The nurse summed up what she had observed: “This is a fairly well-nourished and developed two year old colored boy who is crying and is excitable.” At the hospital he “fell to the left side when he tried to walk, and he reeled around to the left. He didn’t respond to his name or questions.” The hospital raised the possibility that John suffered from lead poisoning, encephalitis, and secondary anemia. He had apparently been eating plaster for the past six months, and his mother reported that “he has been eating paint that peels off from window sills.” The blood work showed 390 micrograms of lead per deciliter of blood—almost eighty times the level considered by the CDC to be dangerous for children seventy years later20 and at the time clearly a cause of acute poisoning. The social worker in charge of the case noted that “because the landlord refused to make any repairs in this home, the family pooled their money and bought some paint which they have used all over the home.” When told that paint from the windowsills was dangerous, his mother said she had not realized its danger and had “caught him on frequent occasions with a mouth full of paint chips.” She promised that in the future “she would make every effort to keep the child away from the paint.” She had “a large play pen and from now on the child [would] be kept there. It gives him adequate room to move about and have a good time,” the social worker wrote, “and will make it impossible to get to the window sill and eat more paint.”
In mid-June 1941, after more than a month in the hospital, John’s symptoms subsided and he was sent back home. But two months later the mother was back at the social service department. In the words of the social worker, the family had “contacted the real estate agency several times about repair work but with no success”; there continued to be problems of “loose plaster throughout the home in spite of their efforts at repair.” The social worker contacted the health department, which promised to investigate the home conditions, but we know neither the results nor what befell John in subsequent years.
What did and did not occur in response to the plight of John’s family is telling. At the time, Johns Hopkins was the lone institution and Baltimore the lone city in the country that was systematically trying to identify and treat large numbers of children affected by the increasing tonnage of lead polluting the nation’s housing. Baltimore and Hopkins had been the epicenter of this issue ever since the area’s rapid growth at the turn of the century had created a huge housing boom and, with it, the use of lead-based paint throughout the central city. The first American case of poisoning due to lead paint ingestion was also documented here, in 1914, by Henry Thomas and Kenneth Blackfan at the very same Harriet Lane Home where John was treated. And Baltimore’s Department of Health was the first local health agency to mount a campaign to protect a city’s children from the effects of lead. In fact, in the 1930s it used the new medium of radio to broadcast public service announcements warning its residents about lead’s dangers: “Every year there are admitted to the hospitals of Baltimore a number of children with lead poisoning caused by eating paint. Most of these children die,” listeners were told, “but those who live are almost equally unfortunate because lead poisoning leaves behind it a trail of eyes dimmed by blindness, legs and arms made useless by paralysis, and minds destroyed even to complete idiocy.”21
The response of Johns Hopkins to the epidemic was, however, fraught with practical and institutional problems emblematic of a larger crisis over lead poisoning and other ubiquitous toxic pollutants that continue to plague us today. Lead poisoning was both a medical and a social problem of inordinate proportions. Hopkins could treat the problem by allowing children who came to its attention a brief respite from the environmental assault on their bodies and brains, but such respites were typically just that, and not adequate to stop these assaults. John was returned to his home following the acute episode of lead poisoning that had nearly paralyzed him. But Hopkins, in no position either to compel the landlord to repair the home or to provide the family with a lead-free house, had no answer to the problem of how to protect John from further lead dosings other than to have his desperate mother promise to keep him pent up in a playpen, away from what were assumed to be the major sources of lead.
We may look back on John’s treatment and the “discharge protocol” as inadequate (although in many ways it is similar to what occurs in numerous localities today). And we may assume that John would have likely returned to the hospital with a fresh, and possibly fatal, episode of lead ingestion: despite the well-intentioned advice to his parents, no well-functioning toddler could remain for long in a four-by-four pen. But we would be wrong to write off the Johns Hopkins effort as an anomaly or proof of special inadequacy of the medical and social service system of the time. True, unlike the Harriet Lane Home, which generally saw its responsibility as treating the acute symptoms of lead poisoning as best it could, the Kennedy Krieger Institute, facing similar problems in serving Baltimore’s children a half century later, took as its responsibility finding the means to protect children from lead exposure, treating them when evident symptoms began to appear and planning for their return to a safe environment. But, in the end, despite this wider purview, KKI, like its predecessor, could not overcome the huge social and economic issues that frame the long, troubling, and desperate history of lead poisoning in Baltimore, and in the nation.
The story of John and the public health response to cases such as his are indicative of the entire history of lead poisoning in particular and the crisis of environmental and industrial pollution in general. The root of John’s disease lay in the physical conditions in which he and his family lived—poor housing whose walls were covered with a poison. But the only response was from the public health and medical professions, and they could only provide medical care to the individual child. That was important, of course, but the broad social problems that affected huge numbers of children living in similar conditions were left unaddressed, virtually guaranteeing that there would be many more children like John in urgent need of help. John was suffering from more than an environmental exposure to a known neurotoxin, caused by shoddy landlords and peeling paint. He suffered from a social and economic system that condemned his family to poverty and racial discrimination, as well as to the urban decay that put him in harm’s way. John’s parents could hardly be blamed for the constraints that he and his entire family were forced to endure, such as the limited choices in housing available to them. And even vague attempts to “explain away” John’s situation by pointing to his color and poverty could not counter the observations that his mother was a hard-working, sincere, and dedicated parent who, according to the social worker at Harriet Lane, was “genuinely interested and concerned about the children” and, using the racist language of the period, “more intelligent than the average negro.” Nor could John’s well-educated and industrious father be blamed for the family’s economic plight and thereby somehow explain away the disease as a family failing. Public health agencies, without such traditional explanations for the diseases of poverty to fall back on—and with no ability to confront the socioeconomic relationships among lead producers, paint manufacturers, housing officials, and landlords that had produced the epidemic of lead poisoning—lacked the tools and the will to control the epidemic effectively as well as the clout to effect much change.
The good doctors of the Harriet Lane Home faced an impossible situation. On the one hand their responsibility was to treat disease and they did so to the best of their abilities. But, in the context of such a glaring threat—children being poisoned by a toxin in their home—one would hope they would have gone beyond that role to advocate more forcefully for housing reforms and rehabilitation as a means of prevention. Public health administrators, advocates, and policy-oriented academics, though, faced a classic dilemma: how does one prevent disease and premature or unnecessary death when the means of effecting such prevention are controlled by a political and economic system over which one has limited influence and that profits from the existing social relationships that produce disease? In this respect, the public health problems of the 1940s are no different than what we face today, though the political climate is quite different. In fact, given the growing attention to the impact of chronic illnesses and low-level environmental exposures to a host of toxic chemicals and industrial products whose chemistry, much less whose health effects, is largely not understood, the problem is only magnified.22
Acute lead poisoning, the kind of poisoning John suffered, perhaps the oldest and best understood environmental disease, has been for the most part successfully contained in the United States over the past half century through judicial, legislative, and regulatory decisions as well as scientific discoveries and medical interventions. Removing some of the most obvious sources of lead from the world of children and adults—from gasoline, paint, canned foods, and other widely available consumer products—was an outstanding public health achievement, which in aggregate lowered the average exposure to lead by orders of magnitude. During the 1960s and 1970s, public health authorities joined with various social movements and thereby were instrumental in shaping these regulatory actions and bringing to the nation’s attention the huge number of childhood poisonings. Through coalitions with social reformers, public health authorities were able to press national, state, and/or local authorities to enact legislation and authorize agencies to achieve reforms. Because of reduced exposure consequent to those reforms, children in the United States today rarely go into convulsions or suffer massive brain damage from lead poisoning, although this is still a major problem in many areas of the developing world. Similarly, because of other regulatory action, Americans rarely suffer from the most acute symptoms of mercury poisoning, arsenic poisoning, or radiation exposure.
Concern over acute lead poisoning has given way to recognition of the subtler but often still devastating problems induced by lead ingestion, problems only vaguely considered a generation or two ago. Indeed, researchers in the past few decades have changed our understanding of the effects that comparatively low levels of lead exposure have on the brain of the developing child, and with that our understanding of the potential low-dose dangers of other toxins. Mercury, chromium, and other heavy metals still cause damage to children (and adults) even if exposure is rarely fatal; the level of arsenic in some of our water supply is with good reason a cause of concern to the U.S. Environmental Protection Agency and state health officials.
Low-dose effects of such toxins are not new problems; they occurred in the era of acute poisoning as well. But they typically went unrecognized, both because of the glaring damage that accompanied acute poisonings and because of the limited technological tools available for identifying very low levels of exposure. Today, though, we need only read the newspaper headlines to see the growing alarm over the potential harmful health effects of, for example, bisphenol A, a chemical additive that mimics estrogen and other human hormones and that is found in a myriad of children’s toys, baby bottles, plastic containers, adhesives, computer-generated taxi and credit card receipts, and a host of other consumer products.23 Or we may point to the emerging controversies over the use of nanoparticles in skin creams and cosmetics, or the chemicals used in flame retardants in children’s clothing and other consumer items. In these and many other instances, it will require broad population-based public health actions to prevent damage, not just direct individual treatment to deal with these substances’ effects.
The decline of the various social movements in the 1970s and 1980s had a telling effect on the public health profession, as it was deprived of the power and energy of political and social allies that could influence legislators and bureaucrats in local, state, and federal agencies. Following the election of Ronald Reagan, even federal agencies whose missions coincided with those of public health activists were under attack and stymied in their attempts to regulate the environment, identify and remedy unhealthy working conditions, and provide services to the poor. New publicly built housing virtually ceased in these years. In the face of this broad assault, largely at the behest of conservative critics of the Great Society programs of the 1960s, public health activism waned.24
A strategy of avoiding confrontation with the political and economic institutions that impede solutions to public health problems, and that indeed may have given rise to them, has meant leaving the structural impediments to better public health largely unchallenged. This is the dilemma of public health today: For generations, many in the public health field have depended on the laboratory, on the development of the next magic bullet, on new technologies and diagnostic and therapeutic interventions to deal with public health problems. But, like lead, other ubiquitous environmental poisons now raise fundamental problems that cannot easily be addressed by these methods. If detection of endocrine disruption is truly a new frontier in the understanding of reproductive problems or other biological changes, for example, a medical intervention may not be adequate; and even were it possible, dealing with the consequences individual by individual would overwhelm any health system.
If public health professionals are to effectively address the problems of chronic conditions, subtle neurological damage, obesity, and childhood developmental anomalies, they will be forced to confront huge industries that profit from, for example, the production of fast foods, high-calorie drinks, and tobacco. These health difficulties are not simply an issue for public health professionals; they are of course an issue for society as a whole. Public health practitioners and institutions can press for change, but ultimately their success depends on political and economic forces larger than themselves. From the guarantee of an adequate water supply and sewer system to the passage of Medicare and Medicaid, successful public health reforms of the past have depended on social movements and legislative and/or executive action, and the same is likely to be true for effective action on a broad array of toxins, lead included.
THE SCIENCE AND POLITICS OF LOW DOSES
As the character of the lead-poisoning epidemic has changed over the past half century, especially with the elimination of lead from the manufacture of paint and from gasoline, and as the harm to children of nonfatal doses of lead has become more apparent, the focus of research has shifted to the effects of these smaller doses. Results indicate that, though the level of lead exposure may be low compared to what brings on acute episodes of lead poisoning, the effects are far from minor.
Children with even relatively low levels of lead in their blood (even below 5 micrograms per deciliter) have been shown to suffer disproportionately from behavioral problems in school, school failure, hyperactivity, trouble concentrating, difficulty with impulse control, lowered intelligence scores on standardized tests, higher rates of juvenile delinquency and arrests, and ultimately unemployment and failures in life. Further, children with lead exposure are more likely as adults to have physical problems like kidney and heart disorders. The scientific community and many political leaders now recognize that lead poisoning has been among the most important epidemics affecting children in the United States in the last century.
A particular tragedy of low-level lead poisoning is that its “symptoms” are easily confused with myriad other insults suffered by children who grow up in poor communities, whose housing is substandard and whose lives are shaped by poor education, social marginalization, and, in some instances, racism. In a 1990 article Herbert Needleman noted a stunning statistic that brings this issue home: more than half of all “poor black children have elevated blood lead levels,” estimated at the time as exceeding 25 μg/dl.25
Consider, for example, Sam T., the youngest of his family’s nine children, born in June 1990, just as the Kennedy Krieger Institute study was beginning in Baltimore.26 The family lived in an apartment located in one of Milwaukee’s poorest and most lead-polluted neighborhoods, but according to his medical record, Sam “thrived as a baby” and was developmentally normal at the ages at which he started to crawl, walk, and babble.27 Like many lead-poisoned children, his problems began as a toddler, when he began to move more freely around the apartment, mouthing or sucking his fingers after touching the walls, windowsills, or other objects covered with lead paint or dust.28 When Sam was fourteen months old, a routine check found his blood lead level to be 18 μg/dl, at that time almost twice the Centers for Disease Control’s acceptable exposure limit, which had been reduced from 25 to 10 μg/dl in 1991. A few months later, his blood lead level had almost doubled, to 40 μg/dl, and it did not fall below 25 μg/dl at any time tested over the next two and a half years.29 The family moved to a house nearby in an attempt to escape such a heavily leaded environment, but conditions there were no better. In the summer of 1993, when Sam turned three, his lead levels jumped significantly and he was hospitalized for five days while he received chelation treatments, the in-hospital chemotherapeutic blood treatment aimed at leaching lead from the body.30 But by then it was too late to forestall damage.
When Sam entered kindergarten, teachers immediately noticed that he had problems. Within weeks, he was referred for speech and language therapy and was soon, according to the court record, “transferred to a different school because he needed a small, structured classroom.”31 In first and second grades, he had difficulties with reading, writing, and arithmetic and he suffered various language delays.32 In his teenage years, a battery of neuropsychological exams indicated that Sam “had a number of deficiencies in various areas of brain function . . . : problem solving, planning, executive function, fine motor function, expressive language, aspects of visual-spatial construction, visual working memory, visual-spatial memory and verbal concept formation”33—an array of deficits consistent with what is known about damage from lead ingestion. “[Sam]’s injuries are permanent and irreversible,” the examining physician concluded.34 By his midteens Sam, who had been described as a normal, happy infant, had become a failure in school, a troubled young man who lacked the skills to escape the dangerous neighborhood in which he was raised.35
The lessons of America’s continuing lead-poisoning epidemic are not confined to the tragedies of a few specific children like Sam T. Nor are its lessons limited to lead alone. Discovery by lead researchers of the impacts of early low-level lead exposure has been instrumental in revolutionizing our understanding of environmental danger and how we define what is a risk. As a result, our concerns regarding environmental dangers can no longer be confined to worries over cancer, heart disease, and the like. Researchers have identified that low-level exposures can result in biological changes with measurable and important consequences for individuals. Behavioral changes such as hyperactivity, attention deficit disorders, and even antisocial behaviors have been linked to low-level exposures to lead, mercury, and other heavy metals in infancy and even in utero. Morphological changes such as premature puberty and an increased proportion of female births have been linked to the rise in the use of plastics and bisphenol A and other “endocrine disruptors.”36
Researchers into low-level exposures to a variety of substances have also challenged, even transformed, our understanding of what is toxic and what is toxicology. We can no longer take solace in believing that any substance can be used if a “safe” level of exposure is officially identified. Researchers have shown that for many synthetic materials introduced yearly into our environment, the developmental moment at which a fetus or child is exposed to a toxin is every bit as important as the amount to which he or she is exposed.37 Many of these issues that challenge us today were first identified while studying lead and lead exposures. The modern history of this unfolding understanding and corresponding attempts to regulate lead may thus give us insight applicable to current debates over other toxic substances.
Sam T.’s story is similar to that of countless other children, many of whom ingested far less lead. In fact, from the 1970s to the 1990s a growing body of research indicated that as each lower “safe level” was agreed upon by the federal government, deleterious effects were found at a still lower level. Investigators such as Philip Landrigan at Mount Sinai Medical Center in New York, Herbert Needleman at the University of Pittsburgh, and Kim Dietrich and Bruce Lanphear, both then at the University of Cincinnati, showed that even quite small amounts of lead, between 1 and 10 micrograms per deciliter of blood, were associated with deficits similar to Sam’s: lowered IQ, behavioral disorders, perceptual problems, and other effects that seriously undermined the ability of children to succeed in school or work environments. This shift in focus—from the impact of relatively high blood lead levels as the cause of severe, sometimes fatal neurological damage to the subtler behavioral and intellectual deficits associated with low-dose lead exposure—raised new concerns about lead’s wide-ranging toxic effects and forced rethinking of what clinicians should attend to beyond textbook symptoms of severe lead poisoning. The growing scientific literature on lead’s effects, as we will see, has been bitterly contested by the lead industry at every step and has resulted in some classic instances of attempted intimidation of university researchers and attacks on their scientific integrity.38
The extensive documentation of low-level effects over recent decades has led the Centers for Disease Control to progressively lower the blood lead levels considered to put children at risk. Until the late 1960s, most public health officials and physicians believed that 60 micrograms per deciliter of blood was not dangerous for children. But by 1978 the CDC had halved this figure, reducing it still further in 1985, to 25 μg/dl, and then in 1991 to 10 μg/dl.39 Jane Lin-Fu, a leading lead researcher, has observed that today “we know that normal [blood-lead level] should be near 0, that unlike essential elements such as calcium . . . lead has no essential role in human physiology and is toxic at a very low level.”40 Most prominent researchers agree with Lin-Fu’s assessment.41 Indeed, the CDC’s lead advisory committee, the scientific body that consults on the federal definition of lead poisoning, recommended in January 2012 that the level of concern for lead be cut in half, to 5 μg/dl. This was adopted by the CDC later that year.42 The political implications of this recommendation are profound and contentious, however. As a result, the number of children considered at risk of lead poisoning rose dramatically, from an estimated 250,000 children with levels above 10 μg/dl to as many as 450,000 with levels exceeding 5 μg/dl, placing renewed pressure on government, industry, and public health officials to take action.
Lowering the overall exposure of children to lead entails eliminating the wide variety of ways that children come in contact with lead in their everyday lives. Newspapers are filled with stories of children who have been poisoned by the lead paint on imported toys, lead solder on children’s jewelry, lead from pipes that deliver water to homes, lead in soil tainted by leaded gasoline that once powered cars, lead spewed from smelters in the United States and throughout the world, and, still most importantly, lead from paint that remains on the walls of nearly all houses built before 1960 or that was applied in many other homes until lead paint was banned in 1978.
Just as there have been disagreements over what constitutes a “safe” blood lead level, so too have there been debates about how best to protect children from lead in their homes. In 1991 the CDC, under the auspices of the U.S. Department of Health and Human Services, published its Strategic Plan for the Elimination of Childhood Lead Poisoning,43 which some prominent researchers called “a truly revolutionary policy statement.”44 This document, building on an extensive period of reevaluation among researchers of childhood lead poisoning, proposed “a society-wide effort [to] virtually eliminate this disease as a public health problem in 20 years.”45 The document’s publication led to a host of studies seeking ways to eliminate or at least broadly curtail lead poisoning in America. While some researchers developed protocols aimed at eliminating lead as a widespread urban pollutant through its complete removal, others sought more pragmatic solutions—pragmatic, that is, from the viewpoint of the politics of the times, not from that of families whose children were at risk of permanent brain damage—seeking to remove some if not all lead from the windowsills, walls, ceilings, and woodwork of older homes.
The debate in the early 1990s over what should be done developed in a dramatically altered political environment, as memories of the Great Society were replaced by a more conservative political culture. The rise of Reaganism after 1980, the growing power of corporations, the decline of the civil rights and labor movements, the end of the construction of low-income public housing, and the antigovernment rhetoric and attacks on what were considered liberal social reforms all undermined support for more far-reaching solutions to the lead-poisoning problem. As Herbert Needleman, a pioneer in the early studies of low-level lead neurotoxicity, put it: “Instead of asking, ‘how can we develop a plan to spend U.S. $32 billion over the next 15 years and eliminate all of the lead in dangerous houses?’ the question became, ‘how little can we spend and still reduce the blood-lead levels in the short term?’” Opposition from industry, landlords, and others was so strong, and the countervailing voices so few, said Needleman, that “it was not long before the vision of the early 1990s, true primary prevention, eradication of the disease in 15 years, was replaced by an enfeebled pseudopragmatism,” which came down to only partial abatement of polluted homes.46
One researcher’s pseudopragmatism, however, is another advocate’s realistic attempt to help children at risk. And one person’s policy failure is another’s public health success story. Those who have watched a century of children sacrificed on the altar of lead poisoning are aghast that we, as a wealthy industrial society, would continue to knowingly allow future generations of children to be exposed to lead. In contrast, those who have set their sights lower and labored to reduce rather than eliminate lead in children’s environment, believing this to be the only “practical” course, celebrate dramatic declines in both blood lead levels and symptomatic children as among the great successes in public health history.
THE KENNEDY KRIEGER CASE AND THE ETHICS OF LEAD RESEARCH
The lead researchers at Johns Hopkins’s Kennedy Krieger Institute faced a troubling dilemma in the midst of this history: children living in Baltimore, the epicenter of the lead-poisoning epidemic for almost a century, were being poisoned because their homes had been covered with lead paint, which, when it deteriorated, the children inhaled or ingested. Despite the CDC’s grand vision of eliminating lead from the home, it was highly unlikely that the money necessary for a dramatic federal detoxification program would be appropriated: during the Reagan and first Bush administrations, government social projects were defined as part of the problem, not a part of the solution. It was in this general context that the Environmental Protection Agency funded the Kennedy Krieger Institute: the federal government, and various lead researchers, were looking for relatively inexpensive, nonconfrontational, noncoercive methods of partial abatement so that landlords would reduce the lead hazard to children rather than either evade an abatement law or abandon their properties.47
“The purpose of the study,” wrote Mark Farfel, the co-principal investigator with Julian Chisolm, “is to characterize and compare the short and long-term efficacy of comprehensive lead-paint abatement and less costly and potentially more cost-effective Repair and Maintenance (R&M) interventions for reducing levels of lead in residential house dust which in turn should reduce lead in children’s blood.”48 “R&M,” as Farfel later put it in a grant renewal request, “may provide a practical means of reducing lead exposure for future generations of children who will continue to occupy older lead-painted housing which cannot be fully abated or rehabilitated without substantial subsidy.”49 In the struggle to prevent lead poisoning, there was no question in the researchers’ minds as to what was ultimately needed: the complete removal of lead paint. But “repair and maintenance” was a compromise they made in the hopes of doing at least some good in a difficult time.
The Farfel and Chisolm study was designed to test the efficacy of three different methods of lead reduction in older homes. The investigators then planned to contrast results with those of two control groups: one of children living in homes that had previously undergone what was thought to have been full lead abatement and the second of children living in homes built after 1978 and presumed to be lead free. For the study, more than a hundred parents with young children were recruited to live in the various partially lead-abated houses. The premise of the research was that children would now be in a safer environment, a home that was an improvement over the lead-covered homes that were generally available to poor residents of Baltimore. However, the blood lead levels of children in at least two of the homes rose over the course of the study.50 It was the two sets of parents of these children who filed the lawsuits alleging that they had not been properly informed of the risks their children faced while participating in the study.51
To some, the Johns Hopkins study was an attempt by dedicated researchers to address the discoveries about the effects of low-level lead exposure and determine the cost of reducing harm to children living in leaded environments. This was the view the Baltimore City Circuit Court took in granting KKI’s motion for summary judgment and dismissing the suit initially. But others, including the plaintiffs on appeal, clearly saw things in a different light. In reversing the lower court’s decision and ordering the lawsuits to proceed to trial, the Maryland Court of Appeals in 2001, quite aware that the subjects of the KKI study were African American children, saw the case through the lens of a long history of ethical and social debate over the use of vulnerable populations in human subjects research, as well as through the lens of the ongoing environmental justice movement that joined the civil rights and environmental movements of the previous three decades. Since the 1970s, and particularly following revelations about the Tuskegee experiments, in which an initial sample of 399 African American men with syphilis were “observed” over a forty-year period from the early 1930s through the early 1970s and to whom state-of-the-art treatments were denied, historians, ethicists, and others had explored instances where scientific research has crossed generally accepted ethical and social boundaries. They had detailed the evolution of standards for human subjects research, the importance of informed consent, and other ethical issues involved in nontherapeutic and sometimes harmful experimentation.
The Baltimore study was organized in the wake of a dramatic rethinking of the use of human subjects in scientific experiments. During the preceding two decades researchers had become acutely aware of the ethical dilemmas presented by human subjects research, especially when it involved “vulnerable populations.” The legacy of Nuremberg, the revelations about the Tuskegee experiments, and publication in the late 1970s of the “Belmont Report,” the landmark federal report that expanded and codified the principles of ethical research with human subjects, all combined to cast doubt on the morality and ethical basis of the KKI research.52 But this framework may be too simple, incapable of acknowledging the complexity of the issues at hand.
Two contending interpretations of the events in Baltimore indicate fissures in the public health community over the ethics and politics of the KKI study, echoing both the debates over research on vulnerable populations that emerged in the 1970s, 1980s, and 1990s and the debates over what it was feasible to accomplish in a conservative political environment. In one common interpretation, the KKI study was intrinsically unethical because the means of protecting children from lead poisoning through complete abatement were well known, and knowingly subjecting children to lead by placing them in homes that had undergone only partial abatement put them needlessly at risk. In the main contending interpretation, research such as Farfel and Chisolm’s on incremental improvements that could lessen exposures was necessary and important because, in the context of the social realities of housing and income inequalities in America, complete abatement on a large scale was a utopian idea. As two subscribers to this general interpretation explained, “Powerfully appealing egalitarian principles cannot be regarded as a sufficiently compelling reason to totally shut down research that offers a realistic prospect of improving conditions [in this instance, complete abatement] or to what many might consider a minimally just standard of living. . . . We contend that it is the failure to conduct such research that causes the greater harm, because it limits health interventions to the status quo of those who can afford currently available options [complete abatement] and deprives disadvantaged populations of the benefits of imminent incremental improvements in their health conditions.”53
In this light, the history of the KKI research could be seen as a tragedy rather than a melodrama: a fight between two defensible conceptions of the public good rather than a fight between the forces of good and evil. Of course, this raises different questions: Are two defensible conceptions of the public good both ethically justifiable with respect to putting others knowingly in harm’s way? Could not valuable research be—and have been—done on levels of abatement without putting children in harm’s way, with a differently designed study? Lurking here, too, is another question that, as we will see, adds a further twist: was it justified to assume that partial and complete abatement could be conducted safely and effectively in the early 1990s?
The decision of the Maryland Court of Appeals to let the case go forward unleashed a storm of controversy and argument, not just about the KKI study but more generally about the ethics of what was called “nontherapeutic research” on children. Until the Maryland court decision, discussion about the use of children as research subjects, according to bioethicist Lainie Friedman Ross, had been largely “pragmatic.” She notes that “unless research is done with children the advances of modern medicine cannot accrue to them or to future generations of children.”54 Following the KKI decision, the questions became more complex as ethicists began weighing in.
The first article laying out the issues raised by the court’s ruling was written by Robert Nelson, a professor of anesthesia and pediatrics at Philadelphia’s Children’s Hospital, and published in late 2001. Nelson addressed three basic considerations relating to the use of children as vulnerable populations in research: (1) whether or not “the interventions or procedures of the research offered the prospect of direct benefit to enrolled children”; (2) whether or not the “interventions or procedures involved in the research provided greater-than-minimal risk”; and (3) whether or not the parents of the children were properly informed of the potential risks of the study. On each of these issues Nelson argued that the court had acted properly in remanding the case to trial. With regard to the first question, for half of the children (those already living in housing that was to be abated as part of the study) the study “offered the prospect of direct benefit,” he said. But this was not true for those children who were moved into potentially more dangerous situations. Ethically, the partially abated home was a potential danger and therefore moving a child into it could not be considered a direct benefit. This was related to the second issue, whether the study presented these children with greater than minimal risk. Addressing this question was critical because the Belmont Report in 1977 argued, according to Nelson, “that a parent lacks the moral authority to expose a healthy child to more than minimal risk research.” Because the KKI researchers did not know for certain how partial lead abatement would affect the blood lead levels of the children, “the risk of continued lead exposure compared to the standard or full lead abatement procedure is more than minimal.” In addition, Nelson maintained, intentionally exposing children to lead by moving them into partially abated homes “cannot be considered as minimal risk” because “a ‘reasonable parent’ would not intentionally expose a child to environmental lead without making every effort to reduce or eliminate the lead exposure.”55
Were the parents properly informed about the critical matters that might concern a parent of a young child exposed to lead? The KKI study consent forms offered to the parents, Nelson said, “did not contain information that the ‘reasonable parent’ would want to know,” specifically, that the aim of the study was to evaluate “the effectiveness of three different methods of lead abatement,” what the impact of the resulting lead exposure on young children would be, and what the risks were “of inadequate lead abatement.”56 Further, what action was it reasonable to expect from public health and housing officials in this conservative political era?
In the end, Nelson concluded, the court of appeals’s decision was consistent with the recommendations of the Belmont Report that “parents do not have the moral or legal authority to enroll healthy children in research that does not offer the prospect of direct benefit unless the risks of that research are no greater than the ordinary risks of daily life—a standard referred to as ‘minimal risks.’”57 In short, healthy children should not be encouraged to move into potentially dangerous situations.
Following the court’s decision, the critical issue that emerged for researchers, policy makers, and the public alike was the meaning of “minimal risk” and, more specifically, the relationship between socioeconomic inequality and the everyday risks of being poor in American society. The KKI study and the court’s response to it raised the question of whose life should set the standard for “the ordinary risks of daily life.” Did the dangers inherent in being poor mean, for example, that the children in the KKI study, because their everyday experience carried with it greater potential for harm, could be exposed to greater danger than middle-class children living in safer environments?
“Two entirely different standards emerge [in interpreting the meaning of ‘minimal risk’] depending upon whether researchers consider the daily or routine risks of harm encountered by some or all children,” wrote Loretta Kopelman, a professor at the Brody School of Medicine of East Carolina University and a member of the Institute of Medicine’s Committee on Research on Children. “With the first interpretation, or relative standard, the upper limit of harm would vary according to the particular group of subjects; with the second, or absolute standard, the upper limit would be the risks of harm encountered by all children, even wealthy and healthy children.” Kopelman reminded readers of the terrible consequences and ethical quandaries of such interpretative variation. In the 1960s and 1970s, for example, mentally retarded children had been used as subjects in the infamous Willowbrook hepatitis studies, in which children were deliberately infected with hepatitis on the rationale that the “disease was endemic to the institution [and thus] the children would eventually have gotten hepatitis.”58
As the court of appeals’s ruling sank in, its implications appeared more profound and troubling. In an article titled “Canaries in the Mines,” Merle Spriggs, a medical ethicist at the University of Melbourne’s Murdoch Children’s Research Institute, gave perhaps the most cutting critique of the Johns Hopkins research: “The argument that the [KKI] families benefitted because they were not worse off can be compared with the arguments used in the infamous, widely discussed mother-child HIV transmission prevention trials in developing countries,” Spriggs said, referring to medical trials sponsored in the previous decade by the U.S. government in which researchers, seeking an inexpensive, effective way of reducing HIV transmission from mothers to children in African countries, provided AZT treatments to some mothers while comparing them with untreated “controls” who received only a placebo. Some argued that the research was justified in part because the African women who received the placebo would not normally have received any treatment at all, though ethical concerns would have precluded the research from being conducted in the United States. Both the HIV transmission prevention trials and the KKI research, Spriggs pointed out, “involved the problematic idea of a local standard of care,” an underlying assumption that “risky research is less ethically problematic among people who are already disadvantaged.” If this “relativistic interpretation of minimal risk” were accepted, it would open a Pandora’s box of deeply disturbing issues and could virtually unleash the research community on poor people. She warned that such a stance “could allow children living in hazardous environments or who faced danger on a daily basis to be the subject of high risk studies.”59
Above all, what the KKI research effort exposed was a fault line that divides poor people from the rest of Americans and extends far beyond the ethics of occasional research. No one would suggest that a middle-class family allow their children to be knowingly exposed to a toxin that could be removed from their immediate environment. But for decades, as a society we have accepted that poor children can be treated differently. We have watched for over a century as children have, in effect, been treated as research subjects in a grand experiment without purpose. How much lead is too much lead? What are the limits of our responsibility as a society to protect those without the resources to protect themselves? As we confront new information about environmental toxins like mercury, bisphenol A, phthalates, and a host of new chemicals that are introduced every year into the air, water, and soil, whose reach extends beyond the poor, the issues raised by the KKI story—and by the modern history of the lead wars more generally—are issues that, by our responses, will define us all.
The history of lead poisoning and lead research is paradigmatic of the developing controversies over a range of toxins and other health-related issues now being debated in the popular press and the courts, among environmental activists and consumer organizations, and within the public health profession itself. Public health officials struggle mightily with declining budgets, a conservative political climate, and a host of new and challenging health-related problems. Today, the public health community continues to have the responsibility to prevent disease. But it has neither the resources nor the political mandate nor the authority to accomplish this task, certainly not by itself. It is an open question whether it has the vision to help lead the effort, or to inspire the efforts needed.
Whatever the limitations of the bacteriological and laboratory-based model that public health developed in the early part of the twentieth century in response to the crises of infectious disease, there is no denying that this model provided a coherent and unifying rationale for the profession. But, as we witness the emergence of chronic illnesses linked to low levels of toxic exposures, no powerful unifying paradigm has replaced bacteriology. Some suggest that the “precautionary principle” can serve as an overall guide, arguing that it is the responsibility of companies to show that their products are safe before introducing them into the marketplace or the environment and that we as a society should err on the side of safety rather than await possible harm. By adopting this approach, public health would reestablish prevention as its primary creed. Others insist that a renewed focus on corporate power, economic inequality, low-income housing options, racism, and other social forces that shape health outcomes is most needed to counter the antiregulatory regime of early twenty-first-century America. These ideas, or a more unified alternative, however, have yet to galvanize the field or the broader public, at least in the United States.
In this book we look at the shifting politics of lead over the past half century and the implications for the future of public health and emerging controversies over the effects of other toxins. The developing science of lead’s effects, the attempts of industry to belittle that science, the struggles over lead regulation, and the court battles of lead’s victims have taken place against the backdrop of a changing disease environment and, in more recent decades, an emerging conservative political culture, both in the broader society and in the public health profession.
Researchers have shown over the last five decades that the effects of lead, at ever-lower levels of exposure, represent a continuing threat to children, a tragedy of huge dimensions. In the coming decades, without substantial political and social change, we will be placing millions more children at risk of life-altering damage. This research, combined with declining public will and resources to remove lead from children’s environment, has left the public health community and society at large with a difficult dilemma, not unlike the one that Julian Chisolm and his young colleague Mark Farfel faced: Should we insist on the complete removal of lead from the nation’s walls, through some combination of full abatement and new housing, and thereby achieve a permanent solution to this century-old scourge? Or should we search for a “practical” way to reduce the exposure of children to “an acceptable” level?
If we choose the former, the danger is that, without strong popular and political advocacy and a public health profession rededicated to the effort, nothing will be done—complete abatement may well be judged too costly, and we may encounter an ugly unwillingness to address a problem that primarily affects poor children, many of them from ethnic and racial minority groups. If we choose the latter, and if the dominant political forces give at best only grudging support to this ameliorative effort, the danger is that the children of entire communities will continue to be exposed, albeit at gradually declining levels, to the subtle and life-altering effects of lead. Public health as an institution, in trying to define what an “acceptable” level is, could in the process lose its moral authority and its century-long commitment to prevention, with no viable, coherent intellectual alternative to take their place. This is a conundrum that affects us all, for we console ourselves with partial victories, often framed as progress in the form of harm reduction rather than prevention. We have become willing to settle for half measures, especially when what is at issue is the health of others, not our own. Isn’t this, so to speak, the plague on all our houses? In this sense, we are all complicit in the “experiment” that allows certain classes of people to be subjected to possible harm in the expectation of avoiding it ourselves.