
“A monster more insatiable than the guillotine”

The medical importance of leukemia has always been disproportionate to its actual incidence. . . . Indeed, the problems encountered in the systemic treatment of leukemia were indicative of the general directions in which cancer research as a whole was headed.

—Jonathan Tucker, Ellie: A Child’s Fight Against Leukemia

There were few successes in the treatment of disseminated cancer. . . . It was usually a matter of watching the tumor get bigger, and the patient, progressively smaller.

—John Laszlo, The Cure of Childhood Leukemia: Into the Age of Miracles

Sidney Farber’s package of chemicals happened to arrive at a particularly pivotal moment in the history of medicine. In the late 1940s, a cornucopia of pharmaceutical discoveries was tumbling open in labs and clinics around the nation. The most iconic of these new drugs were the antibiotics. Penicillin, that precious chemical that had to be milked to its last droplet during World War II (in 1939, the drug was reextracted from the urine of patients who had been treated with it to conserve every last molecule), was by the early fifties being produced in thousand-gallon vats. In 1942, when Merck had shipped out its first batch of penicillin—a mere five and a half grams of the drug—that amount had represented half of the entire stock of the antibiotic in America. A decade later, penicillin was being mass-produced so effectively that its price had sunk to four cents for a dose, one-eighth the cost of a half gallon of milk.

New antibiotics followed in the footsteps of penicillin: chloramphenicol in 1947, tetracycline in 1948. In the winter of 1949, when yet another miraculous antibiotic, streptomycin, was purified out of a clod of mold from a chicken farmer’s barnyard, Time magazine splashed the phrase “The remedies are in our own backyard” prominently across its cover. In a brick building on the far corner of Children’s Hospital, in Farber’s own backyard, a microbiologist named John Enders was culturing poliovirus in rolling plastic flasks, the first step that culminated in the development of the Sabin and Salk polio vaccines. New drugs appeared at an astonishing rate: by 1950, more than half the medicines in common medical use had been unknown merely a decade earlier.

Perhaps even more significant than these miracle drugs, shifts in public health and hygiene also drastically altered the national physiognomy of illness. Typhoid fever, a contagion whose deadly swirl could decimate entire districts in weeks, melted away as the putrid water supplies of several cities were cleansed by massive municipal efforts. Even tuberculosis, the infamous “white plague” of the nineteenth century, was vanishing, its incidence plummeting by more than half between 1910 and 1940, largely due to better sanitation and public hygiene efforts. The life expectancy of Americans rose from forty-seven to sixty-eight in half a century, a greater leap in longevity than had been achieved over several previous centuries.

The sweeping victories of postwar medicine illustrated the potent and transformative capacity of science and technology in American life. Hospitals proliferated—between 1945 and 1960, nearly one thousand new hospitals were launched nationwide; between 1935 and 1952, the number of patients admitted more than doubled from 7 million to 17 million per year. And with the rise in medical care came the concomitant expectation of medical cure. As one student observed, “When a doctor has to tell a patient that there is no specific remedy for his condition, [the patient] is apt to feel affronted, or to wonder whether the doctor is keeping abreast of the times.”

In new and sanitized suburban towns, a young generation thus dreamed of cures—of a death-free, disease-free existence. Lulled by the idea of the durability of life, they threw themselves into consuming durables: boat-size Studebakers, rayon leisure suits, televisions, radios, vacation homes, golf clubs, barbecue grills, washing machines. In Levittown, a sprawling suburban settlement built in a potato field on Long Island—a symbolic utopia—“illness” now ranked third in a list of “worries,” falling behind “finances” and “child-rearing.” In fact, rearing children was becoming a national preoccupation at an unprecedented level. Fertility rose steadily—by 1957, a baby was being born every seven seconds in America. The “affluent society,” as the economist John Galbraith described it, also imagined itself as eternally young, with an accompanying guarantee of eternal health—the invincible society.


But of all diseases, cancer had refused to fall into step in this march of progress. If a tumor was strictly local (i.e., confined to a single organ or site so that it could be removed by a surgeon), the cancer stood a chance of being cured. Extirpations, as these procedures came to be called, were a legacy of the dramatic advances of nineteenth-century surgery. A solitary malignant lump in the breast, say, could be removed via a radical mastectomy pioneered by the great surgeon William Halsted at Johns Hopkins in the 1890s. With the discovery of X-rays in the early 1900s, radiation could also be used to kill tumor cells at local sites.

But scientifically, cancer still remained a black box, a mysterious entity that was best cut away en bloc rather than treated by some deeper medical insight. To cure cancer (if it could be cured at all), doctors had only two strategies: excising the tumor surgically or incinerating it with radiation—a choice between the hot ray and the cold knife.

In May 1937, almost exactly a decade before Farber began his experiments with chemicals, Fortune magazine published what it called a “panoramic survey” of cancer medicine. The report was far from comforting: “The startling fact is that no new principle of treatment, whether for cure or prevention, has been introduced. . . . The methods of treatment have become more efficient and more humane. Crude surgery without anesthesia or asepsis has been replaced by modern painless surgery with its exquisite technical refinement. Biting caustics that ate into the flesh of past generations of cancer patients have been obsolesced by radiation with X-ray and radium. . . . But the fact remains that the cancer ‘cure’ still includes only two principles—the removal and destruction of diseased tissue [the former by surgery; the latter by X-rays]. No other means have been proved.”

The Fortune article was titled “Cancer: The Great Darkness,” and the “darkness,” the authors suggested, was as much political as medical. Cancer medicine was stuck in a rut not only because of the depth of medical mysteries that surrounded it, but because of the systematic neglect of cancer research: “There are not over two dozen funds in the U.S. devoted to fundamental cancer research. They range in capital from about $500 up to about $2,000,000, but their aggregate capitalization is certainly not much more than $5,000,000. . . . The public willingly spends a third of that sum in an afternoon to watch a major football game.”

This stagnation of research funds stood in stark contrast to the swift rise to prominence of the disease itself. Cancer had certainly been present and noticeable in nineteenth-century America, but it had largely lurked in the shadow of vastly more common illnesses. In 1899, when Roswell Park, a well-known Buffalo surgeon, had argued that cancer would someday overtake smallpox, typhoid fever, and tuberculosis to become the leading cause of death in the nation, his remarks had been perceived as a rather “startling prophecy,” the hyperbolic speculations of a man who, after all, spent his days and nights operating on cancer. But by the end of the decade, Park’s remarks were becoming less and less startling, and more and more prophetic by the day. Typhoid, aside from a few scattered outbreaks, was becoming increasingly rare. Smallpox was on the decline; by 1949, it would disappear from America altogether. Meanwhile cancer was already outgrowing other diseases, ratcheting its way up the ladder of killers. Between 1900 and 1916, cancer-related mortality grew by 29.8 percent, edging out tuberculosis as a cause of death. By 1926, cancer had become the nation’s second most common killer, just behind heart disease.

“Cancer: The Great Darkness” wasn’t alone in building a case for a coordinated national response to cancer. In May that year, Life carried its own dispatch on cancer research, which conveyed the same sense of urgency. The New York Times published two reports on rising cancer rates, in April and June. By the time cancer appeared in the pages of Time in July 1937, interest in what was called the “cancer problem” was spreading like a fierce contagion in the media.


Proposals to mount a systematic national response against cancer had risen and ebbed rhythmically in America since the early 1900s. In 1907, a group of cancer surgeons had congregated at the New Willard Hotel in Washington to create an organization to lobby Congress for more funds for cancer research. By 1910, this organization, the American Association for Cancer Research, had convinced President Taft to propose to Congress a national laboratory dedicated to cancer research. But despite initial interest in the plan, the efforts had stalled in Washington after a few fitful attempts, largely because of a lack of political support.

In the late 1920s, a decade after Taft’s proposal had been tabled, cancer research found a new and unexpected champion—Matthew Neely, a dogged and ebullient former lawyer from Fairmont, West Virginia, serving his first term in the Senate. Although Neely had relatively little experience in the politics of science, he had noted the marked increase in cancer mortality in the previous decade—from 70,000 men and women in 1911 to 115,000 in 1927. Neely asked Congress to advertise a reward of $5 million for any “information leading to the arrest of human cancer.”

It was a lowbrow strategy—the scientific equivalent of hanging a mug shot in a sheriff’s office—and it generated a reflexively lowbrow response. Within a few weeks, Neely’s office in Washington was flooded with thousands of letters from quacks and faith healers touting every conceivable remedy for cancer: rubs, tonics, ointments, anointed handkerchiefs, salves, and blessed water. Congress, exasperated with the response, finally authorized $50,000 for Neely’s Cancer Control Bill, almost comically cutting its budget back to just 1 percent of the requested amount.

In 1937, the indefatigable Neely, reelected to the Senate, mounted yet another effort to launch a national attack on cancer, this time jointly with Senator Homer Bone and Representative Warren Magnuson. By now, cancer had considerably magnified in the public eye. The Fortune and Time articles had fanned anxiety and discontent, and politicians were eager to demonstrate a concrete response. In June, a joint Senate-House conference was held to craft legislation to address the issue. After initial hearings, the bill raced through Congress and was passed unanimously by a joint session on July 23, 1937. Two weeks later, on August 5, President Roosevelt signed the National Cancer Institute Act.

The act created a new scientific unit called the National Cancer Institute (NCI), designed to coordinate cancer research and education. An advisory council of scientists for the institute was assembled from universities and hospitals. A state-of-the-art laboratory space, with gleaming halls and conference rooms, was built among leafy arcades and gardens in suburban Bethesda, a few miles from the nation’s capital. “The nation is marshaling its forces to conquer cancer, the greatest scourge that has ever assailed the human race,” Senator Bone announced reassuringly while breaking ground for the building on October 3, 1938. After nearly two decades of largely fruitless efforts, a coordinated national response to cancer seemed to be on its way at last.

All of this was a bold, brave step in the right direction—except for its timing. By the early winter of 1938, just months after the inauguration of the NCI campus in Bethesda, the battle against cancer was overshadowed by the tremors of a different kind of war. In November, Nazi troops embarked on a nationwide pogrom against Jews in Germany, forcing thousands into concentration camps. By late winter, military conflicts had broken out all over Asia and Europe, setting the stage for World War II. By 1939, those skirmishes had fully ignited, and in December 1941, America was drawn inextricably into the global conflagration.

The war necessitated a dramatic reordering of priorities. The U.S. Marine Hospital in Baltimore, which the NCI had once hoped to convert into a clinical cancer center, was now swiftly reconfigured into a war hospital. Scientific research funding stagnated and was shunted into projects directly relevant to the war. Scientists, lobbyists, physicians, and surgeons fell off the public radar screen—“mostly silent,” as one researcher recalled, “their contributions usually summarized in obituaries.”

An obituary might as well have been written for the National Cancer Institute. Congress’s promised funds for a “programmatic response to cancer” never materialized, and the NCI languished in neglect. Outfitted with every modern facility imaginable in the 1940s, the institute’s sparkling campus turned into a scientific ghost town. One scientist jokingly called it “a nice quiet place out here in the country. In those days, it was pleasant to drowse under the large, sunny windows.”

The social outcry about cancer also drifted into silence. After the brief flurry of attention in the press, cancer again became the great unmentionable, the whispered-about disease that no one spoke about publicly. In the early 1950s, Fanny Rosenow, a breast cancer survivor and cancer advocate, called the New York Times to post an advertisement for a support group for women with breast cancer. Rosenow was put through, puzzlingly, to the society editor of the newspaper. When she asked about placing her announcement, a long pause followed. “I’m sorry, Ms. Rosenow, but the Times cannot publish the word breast or the word cancer in its pages.”

“Perhaps,” the editor continued, “you could say there will be a meeting about diseases of the chest wall.”

Rosenow hung up, disgusted.


When Farber entered the world of cancer in 1947, the public outcry of the past decade had dissipated. Cancer had again become a politically silent illness. In the airy wards of the Children’s Hospital, doctors and patients fought their private battles against cancer. In the tunnels downstairs, Farber fought an even more private battle with his chemicals and experiments.

This isolation was key to Farber’s early success. Insulated from the spotlights of public scrutiny, he worked on a small, obscure piece of the puzzle. Leukemia was an orphan disease, abandoned by internists, who had no drugs to offer for it, and by surgeons, who could not possibly operate on blood. “Leukemia,” as one physician put it, “in some senses, had not [even] been cancer before World War II.” The illness lived on the borderlands of illnesses, a pariah lurking between disciplines and departments—not unlike Farber himself.

If leukemia “belonged” anywhere, it was within hematology, the study of normal blood. If a cure for it was to be found, Farber reasoned, it would be found by studying blood. If he could uncover how normal blood cells were generated, he might stumble backward into a way to block the growth of abnormal leukemic cells. His strategy, then, was to approach the disease from the normal to the abnormal—to confront cancer in reverse.

Much of what Farber knew about normal blood he had learned from George Minot. A thin, balding aristocrat with pale, intense eyes, Minot ran a laboratory in a colonnaded, brick-and-stone structure off Harrison Avenue in Boston, just a few miles down the road from the sprawling hospital complex on Longwood Avenue that included Children’s Hospital. Like many hematologists at Harvard, Farber had trained briefly with Minot in the 1920s before joining the staff at Children’s.

Every decade has a unique hematological riddle, and for Minot’s era, that riddle was pernicious anemia. Anemia is the deficiency of red blood cells—and its most common form arises from a lack of iron, a crucial nutrient used to build red blood cells. But pernicious anemia, the rare variant that Minot studied, was not caused by iron deficiency (indeed, its name derives from its intransigence to the standard treatment of anemia with iron). By feeding patients increasingly macabre concoctions—half a pound of chicken liver, half-cooked hamburgers, raw hog stomach, and even once the regurgitated gastric juices of one of his students (spiced up with butter, lemon, and parsley)—Minot and his team of researchers conclusively demonstrated in 1926 that pernicious anemia was caused by the lack of a critical micronutrient, a single molecule later identified as vitamin B12. In 1934, Minot and two of his colleagues won the Nobel Prize for this pathbreaking work. Minot had shown that replacing a single molecule could restore the normalcy of blood in this complex hematological disease. Blood was an organ whose activity could be turned on and off by molecular switches.

There was another form of nutritional anemia that Minot’s group had not tackled, an anemia just as “pernicious”—although in the moral sense of that word. Eight thousand miles away, in the cloth mills of Bombay (owned by English traders and managed by their cutthroat local middlemen), wages had been driven to such low levels that the mill workers lived in abject poverty, malnourished and without medical care. When English physicians tested these mill workers in the 1920s to study the effects of this chronic malnutrition, they discovered that many of them, particularly women after childbirth, were severely anemic. (This was yet another colonial fascination: to create the conditions of misery in a population, then subject it to social or medical experimentation.)

In 1928, a young English physician named Lucy Wills, freshly out of the London School of Medicine for Women, traveled on a grant to Bombay to study this anemia. Wills was an exotic among hematologists, an adventurous woman driven by a powerful curiosity about blood, willing to travel to a faraway country to solve a mysterious anemia on a whim. She knew of Minot’s work. But she found that, unlike Minot’s anemia, the anemia in Bombay couldn’t be reversed by Minot’s concoctions or by vitamin B12. Astonishingly, she found she could cure it with Marmite, the dark, yeasty spread then popular among health fanatics in England and Australia. Wills could not determine the key chemical nutrient of Marmite. She called it the Wills factor.

Wills factor turned out to be folic acid, or folate, a vitamin-like substance found in fruits and vegetables (and amply in Marmite). When cells divide, they need to make copies of DNA—the chemical that carries all the genetic information in a cell. Folic acid is a crucial building block for DNA and is thus vital for cell division. Since blood cells are produced by arguably the most fearsome rate of cell division in the human body—more than 300 billion cells a day—the genesis of blood is particularly dependent on folic acid. In its absence (in men and women starved of vegetables, as in Bombay) the production of new blood cells in the bone marrow halts. Millions of half-matured cells spew out, piling up like half-finished goods bottlenecked in an assembly line. The bone marrow becomes a dysfunctional mill, a malnourished biological factory oddly reminiscent of the cloth factories of Bombay.


These links—between vitamins, bone marrow, and normal blood—kept Farber preoccupied in the early summer of 1946. In fact, his first clinical experiment, inspired by this very connection, turned into a horrific mistake. Lucy Wills had observed that folic acid, if administered to nutrient-deprived patients, could restore the normal genesis of blood. Farber wondered whether administering folic acid to children with leukemia might also restore normalcy to their blood. Following that tenuous trail, he obtained some synthetic folic acid, recruited a cohort of leukemic children, and started injecting folic acid into them.

In the months that passed, Farber found that folic acid, far from stopping the progression of leukemia, actually accelerated it. In one patient, the white cell count nearly doubled. In another, the leukemia cells exploded into the bloodstream and sent fingerlings of malignant cells to infiltrate the skin. Farber stopped the experiment in a hurry. He called this phenomenon acceleration, evoking some dangerous object in free fall careering toward its end.

Pediatricians at Children’s Hospital were furious about Farber’s trial. The folic acid had not just accelerated the leukemia; it had likely hastened the death of the children. But Farber was intrigued. If folic acid accelerated the leukemia cells in children, what if he could cut off its supply with some other drug—an antifolate? Could a chemical that blocked the growth of white blood cells stop leukemia?

The observations of Minot and Wills began to fit into a foggy picture. If the bone marrow was a busy cellular factory to begin with, then a marrow occupied with leukemia was that factory in overdrive, a deranged manufacturing unit for cancer cells. Minot and Wills had turned the production lines of the bone marrow on by adding nutrients to the body. But could the malignant marrow be shut off by choking the supply of nutrients? Could the anemia of the mill workers in Bombay be re-created therapeutically in the medical units of Boston?

In his long walks from his laboratory under Children’s Hospital to his house on Amory Street in Brookline, Farber wondered relentlessly about such a drug. Dinner, in the dark-wood-paneled rooms of the house, was usually a sparse, perfunctory affair. His wife, Norma, a musician and writer, talked about the opera and poetry; Sidney, of autopsies, trials, and patients. As he walked back to the hospital at night, Norma’s piano tinkling practice scales in his wake, the prospect of an anticancer chemical haunted him. He imagined it palpably, visibly, with a fanatic’s enthusiasm. But he didn’t know what it was or what to call it. The word chemotherapy, in the sense we understand it today, had never been used for anticancer medicines. The elaborate armamentarium of “antivitamins” that Farber had dreamed up so vividly in his fantasies did not exist.


Farber’s supply of folic acid for his disastrous first trial had come from the laboratory of an old friend, a chemist, Yellapragada Subbarao—or Yella, as most of his colleagues called him. Yella was a pioneer in many ways, a physician turned cellular physiologist, a chemist who had accidentally wandered into biology. His scientific meanderings had been presaged by more desperate and adventuresome physical meanderings. He had arrived in Boston in 1923, penniless and unprepared, having finished his medical training in India and secured a scholarship for a diploma at the School of Tropical Health at Harvard. The weather in Boston, Yella discovered, was far from tropical. Unable to find a medical job in the frigid, stormy winter (he had no license to practice medicine in the United States), he started as a night porter at the Brigham and Women’s Hospital, opening doors, changing sheets, and cleaning urinals.

The proximity to medicine paid off. Subbarao made friends and connections at the hospital and switched to a day job as a researcher in the Division of Biochemistry. His initial project involved purifying molecules out of living cells, dissecting them chemically to determine their compositions—in essence, performing a biochemical “autopsy” on cells. The approach required more persistence than imagination, but it produced remarkable dividends. Subbarao purified a molecule called ATP, the source of energy in all living beings (ATP carries chemical “energy” in the cell), and another molecule called creatine, the energy carrier in muscle cells. Any one of these achievements should have been enough to guarantee him a professorship at Harvard. But Subbarao was a foreigner, a reclusive, nocturnal, heavily accented vegetarian who lived in a one-room apartment downtown, befriended only by other nocturnal recluses such as Farber. In 1940, denied tenure and recognition, Yella huffed off to join Lederle Labs, a pharmaceutical laboratory in upstate New York, owned by the American Cyanamid Corporation, where he had been asked to run a group on chemical synthesis.

At Lederle, Yella Subbarao quickly reformulated his old strategy and focused on making synthetic versions of the natural chemicals that he had found within cells, hoping to use them as nutritional supplements. In the 1920s, another drug company, Eli Lilly, had made a fortune selling a concentrated form of vitamin B12, the missing nutrient in pernicious anemia. Subbarao decided to focus his attention on the other anemia, the neglected anemia of folate deficiency. But in 1946, after many failed attempts to extract the chemical from pigs’ livers, he switched tactics and started to synthesize folic acid from scratch, with the help of a team of scientists including Harriet Kiltie, a young chemist at Lederle.

The chemical reactions to make folic acid brought a serendipitous bonus. Since the reactions had several intermediate steps, Subbarao and Kiltie could create variants of folic acid through slight alterations in the recipe. These variants of folic acid—closely related molecular mimics—possessed counterintuitive properties. Enzymes and receptors in cells typically work by recognizing molecules using their chemical structure. But a “decoy” molecular structure—one that nearly mimics the natural molecule—can bind to the receptor or enzyme and block its action, like a false key jamming a lock. Some of Yella’s molecular mimics could thus behave like antagonists to folic acid.

These were precisely the antivitamins that Farber had been fantasizing about. Farber wrote to Kiltie and Subbarao asking them if he could use their folate antagonists on patients with leukemia. Subbarao consented. In the late summer of 1947, the first package of antifolate left Lederle’s labs in New York and arrived in Farber’s laboratory.
