They Used to Call It Medicine
The careers of Alfred Worcester (1855–1951) and Richard Cabot (1868–1939) in Boston spanned the formative years of modern medicine. Worcester's medical training in the late nineteenth century included repeated visits to the homes of the sick and dying as a doctor's apprentice, whereas students of Cabot in the early twentieth century were trained in the new sciences basic to medicine, like bacteriology, and rarely got to meet the same patient more than once. Second-year medical students of 1912 knew more about diseases than the doctors of his own generation ever did, Worcester conceded, but he argued that at the end of the day an exquisite knowledge of disease mechanisms was more likely to tell a doctor what his patients had died from than how to help them live or die. Older doctors, while not ignoring what they understood of disease processes, knew vastly more than their younger colleagues about human helplessness and were comfortable managing it. “It is when dealing with the mysteries of life that science fails the modern doctor,” Worcester said.1
Bemoaning a recent shift at Harvard that saw pharmacology taught by someone who had never engaged in medical practice, Worcester noted that “in the modern medical schools science is enthroned. Carried away by the brilliance of etiological discoveries, the whole strength of the school is devoted to the study of disease. The art of medical practice is not taught; even its existence is hardly recognized.”2 “Little wonder is it,” he went on, that people “turn to the Christian Scientists, or other charlatans who, either in their absurd denial of the existence of disease or for mercenary reasons, at least leave some hope in the sick room.”
Worcester's words have an uncomfortable ring of truth. But while we undoubtedly hope our doctor will be “old school,” few if any of us are prepared to give up the benefits science has brought to medicine over the past century. Aware of the hazards of a narrowly “medical” approach, many medical schools, Harvard in particular, attempt to ensure that students realize an illness is but an episode in their patients' lives.3 But despite these efforts, clinical practice still seems to be degenerating.
Patients treated by Worcester might have seen him with an apprentice in tow, and those who went to see Cabot at Massachusetts General could well have encountered medical students sitting in on the visit. But now the pharmaceutical industry has been able to persuade doctors to allow trainee drug reps to sit in on clinics—a practice recently illustrated in the movie Love and Other Drugs. For example, as part of her training as a sales representative, Jeanette got to sit in with Dr. N on a “medication management clinic.” Dr. N is the fictional name for a real doctor—a high-volume prescriber of drugs—who is the subject of a research project looking at modern clinical practice.4 Jeanette was struck by the amount of paperwork he had to fill out on each patient—charts tracking both the doctor's and patient's perceptions of whether a drug was working and whether any side effects were apparent. He was so busy filling out the forms that he barely looked up during his 10- to 15-minute sessions with patients.
One day, a middle-aged man came in, and while Dr. N completed the paperwork for the previous patient, Jeanette engaged the man in conversation. He seemed to be in a good mood considering he was in a wheelchair, having recently had both legs amputated because of vascular problems. Dr. N began to ask the usual questions, ticking boxes as he went.
Finally, the patient interrupted: “Look at me, Doctor. Do you notice anything different about me?” He repeated this several times until Dr. N looked up and focused directly on the man, while pushing his glasses up with his thumb. He stared at the patient for several seconds and finally said, “No, I don't notice anything different, what's up?” The patient smiled and said excitedly, “I got my legs cut off since that last time you saw me!” Dr. N steered the conversation back to the patient's medication, and the session ended a few minutes later.
While this neglect may have been extreme even for Dr. N, many of us face something similar when we visit our doctors today; even the best seem to spend an increasing proportion of their time looking at computer screens rather than at us. While Cabot was more committed to the latest science than Worcester, there is little doubt that he would have been as appalled at this as Worcester would have been. There is, moreover, no reason to believe that an embrace of science should lead to such degradation of medical practice. The case of Dr. N, comically extreme as it is, puts in stark relief a type of medical practice encouraged by the dominant forces in healthcare today.
This book sets out to explain how we have come to a situation where a Dr. N not only can exist but may become something of the norm in the near future. As a first step we need to outline two histories: one, a relatively traditional history of medicine's relation to drugs, culminating in the emergence of a set of truly effective magic-bullet treatments in the middle years of the twentieth century; the other, a history of marketing that starts in the late nineteenth century. These two histories initially have little in common, but in recent years, as we shall show, medicine seems to have become the home of the most sophisticated marketing on earth. The transformation is undeniable, but if we wish to change things it is not sufficient to simply assert that there has been a transformation—we need to pinpoint the mechanisms driving the change.
FROM MEDICINE TO MARKETING
In a 1951 ad featuring American soldiers at war, the pharmaceutical company Eli Lilly outlined the contribution it and other companies were making to the United States:
A record of American Achievement. Thousands of Americans and of our allies too, are alive today because of the lifesaving gains made in World War II. The mortality rate of our own wounded dropped to the lowest level in the history of any army in the world. This was accomplished through better methods and techniques of medical care and especially through the use of new and improved pharmaceuticals. Tremendous quantities of penicillin, anesthetic agents, sulfonamides and processed blood were quickly supplied by such manufacturers as Eli Lilly and Company. The rewards of free enterprise had built an American industry large enough to do the job.5
Ironically, most of the treatments mentioned in this ad were produced not through free enterprise but through government funding or by the prewar German pharmaceutical industry cartel IG Farben. During the 1950s and 1960s, Czechoslovakia, then part of the Soviet bloc, produced more new drugs per capita than any other country. Nevertheless, capitalism and free markets at some level “work.” Everyone now recognizes this. And, given a choice between a system that produces the best hi-tech healthcare in the world and contributes to the development of breakthrough drugs but is run as a business or a system that isn't run as a business, puts a premium more on caring than on breakthrough drugs, and takes social factors into account in considering appropriate care, the average person will opt for breakthroughs every day of the week.
The problem with this free market view of the world, nowhere more so than in the pharmaceutical industry, is that the free markets that supposedly lead to better mousetraps were cannibalized in the twentieth century by what became a few large firms, one of them being Eli Lilly, that were then in a position to favor marketing over innovation as the ultimate key to their profits. Where the research and development budgets of large pharmaceutical companies like Lilly and Pfizer were once much greater than their marketing budgets, the reverse is now true. The pharmaceutical industry, for example, now spends $30 billion annually on marketing in the United States alone. In 2002, Pfizer devoted roughly $1.2 billion to marketing the statin Lipitor as a treatment for raised cholesterol levels, an amount equivalent to the US National Institutes of Health budget for research on Alzheimer's disease, arthritis, autism, epilepsy, influenza, multiple sclerosis, sickle cell disease, and spinal cord injury combined.6 Given that Lipitor was only one of six statins on the market at the time, anyone trying to get out a competing medical message—that statins have a very limited role in healthcare or that there are alternate ways to lower cholesterol through diet and other approaches—faces a daunting challenge. With money like this behind them, twenty-first-century corporate marketers are supremely confident they can sell anything. Bottled water, oxygen, and with the right packaging, even inferior mousetraps. And if they can do this, why not homeopathic or relatively worthless medicines?
Pharmaceutical companies have perhaps done more to undermine traditional markets than any other industrial companies. Not coincidentally, from Lilly's 1951 ad to a 2005 book by Hank McKinnell,7 then CEO of Pfizer, they are also the corporations most active in spreading the message that there is no alternative to a free enterprise system. But while in fact many industrial corporations came to the conclusion a century ago that the capitalism of cutthroat competition and free markets didn't work as well as it might—at least not for them—no other branch of industry has been able to pursue this agenda in quite the way pharmaceutical companies have.
To begin to see how pharmaceutical companies have engineered what they have, we need to return to the mid-nineteenth century when the first science-based companies began producing goods made possible by the new physical and chemical sciences. These sciences formed the basis of electrical manufacturing as well as the chemical and metal industries, leading to a string of new goods from automobiles to plastics, explosives, dyes, rubber products, artificial fibers, and, later, pharmaceuticals. In all these cases, competition between companies should in theory drive prices down in an open market, especially when increasing automation reduced the cost of production. Faced with the risk of falling profits, the new manufacturing companies commonly banded together in cartels to keep prices artificially high. A cartel presents the world with apparent competition between companies when in fact the companies have agreed among themselves to coordinate prices and market arrangements, allowing them to enjoy the advantages of de facto monopolies.8 But government resistance to cartels mounted in the United States and Europe at the end of the nineteenth century.
Companies needed to find another way to maintain or increase their profits, and it was this that led to a turn to marketing. For manufacturers the problem was that, as the genuine need for automobiles, plastics, dyes, nuts and bolts, and mousetraps was increasingly met with capacity to spare, ever more production could only drive prices down. If prices could not be rigged by cartels, could demand be maintained or increased by tapping into what people might be persuaded they wanted or needed?9
An appreciation of the opportunities that marketing opened up led to the emergence of marketing departments within companies and the first university courses on marketing in the 1920s.10 In a supreme irony, much of the raw material for these courses came from a brilliant set of ad hoc developments that underpinned the marketing of proprietary, over-the-counter medicines in the nineteenth century. The early exponents of the new science of marketing realized that the true experts on what people might be persuaded they needed were the quacks pushing worthless medical remedies. It was these quacks who, more than anyone else, created advertising and shaped modern marketing.11
One of the first lessons that proprietary medicines taught later marketers came in the 1830s, when a war broke out in the newspapers between Samuel Lee, Jr. of Connecticut, who produced “Bilious Pills,” and another Samuel Lee from Connecticut, who also produced a Bilious Pills remedy. Far from leading to market collapse, this dispute spurred demand for Bilious Pills, and soon other proprietors across the country were joining in with their own preparations. In contrast to the producers of dyes and metal goods, these businessmen found that competition among producers led to an increase in sales of their commodities, without prices falling. In the drug domain this phenomenon has been demonstrated again and again, from the marketing of Aspirin as a brand-name product in the nineteenth century to that of Lipitor and Vioxx in this century.12
In 1804, there were some ninety over-the-counter proprietary medicines listed in New York. By 1857, this had swollen to more than fifteen hundred countrywide. The growth in business paralleled the growth of the press and literacy. Where there were two hundred newspapers in 1800, there were four thousand by 1860. The proprietors of these remedies took advantage of this explosive growth and were among the first to market nationally. They also marketed heavily: the proprietary medicines industry spent more than any other industry on advertising in the second half of the nineteenth century. By the end of that century up to $1 million was being spent on telling the American people about the benefits of Scott's Emulsion, just one of an estimated fifty thousand compounds in a trade that had a retail value of several hundred million dollars.13
The proprietary medicines industry was the first to market lifestyles rather than the compounds per se—this was marketing before the modern term had come into being. There was a simple reason for this: there was little if any value in these substances. The key ingredients were on the bottle rather than in it—the branding. It was no accident that one of the leading early lights in advertising, Claude Hopkins, would say that “the greatest advertising men of my day were schooled in the medicine field”14—schooled, that is, in how to persuade the public that Carter's Little Liver Pills, Lydia Pinkham's Vegetable Compound, Clark Stanley's Snake Oil Liniment, or Coca Cola or, later, 7-UP would restore health or beauty, resolve halitosis, conquer fatigue, or ward off calamity. The money thrown into the marketing campaigns came from huge markups in the selling price of these compounds, typically inflated to five times their cost of production.15
Marketing does exactly the same thing today when it gets us to believe that a particular running shoe or music system is not only necessary for its stated function but will deliver what we desire more generally in life—our unmet needs. These are the suggestions that underpin the marketing of bottled water and other such items that most people for most of the twentieth century never imagined could be marketed. And as the bottled waters of the twenty-first century along with the proprietary medicines of the nineteenth century (which contained little more than water) show so clearly, the differentiation between one product and another is seldom based on actual differences in the products but hinges rather on brand recognition, on how effective competing marketers are in encapsulating wish fulfillment, and in saturating potential purchasers with their message.
While there were differences in emphasis between medical men like Cabot and Worcester, during the period from 1850 to 1950 medicine was united in an implacable opposition to medical quackery and proprietary remedies—and by extension to marketing. In early nineteenth-century Europe and the United States, there was no licensing of physicians and no training in science as part of their education. Hospitals were almshouses for the poor rather than institutions with a mission to treat effectively. When people got sick, they increasingly turned to the remedies being advertised nationally and sold by salesmen, who peddled everything from cures for cancer to elixirs for love or eternal youth.
Faced with a proliferation of wild claims for cures for everything from consumption to nervous problems, the first European and American associations of physicians that emerged in the mid-nineteenth century committed themselves to ensuring medical doctors knew what they were prescribing and knew what there was to know about the conditions they were treating.16 In cases where no treatment seemed to work any better than judicious waiting, the new breed of doctors was educated in the virtue of waiting with their patients, trained to recognize, as Philippe Pinel had, that there was often a greater art in knowing when not to prescribe than in prescribing.
In Europe the greatest concern regarding these remedies was voiced over markups of 500 percent or more for compounds that contained little that might actually help patients.17 In the United States, where the proprietary industry flourished to the greatest extent, physicians expressed greater concern over the injuries some of these potions could cause. As Oliver Wendell Holmes put it at a meeting of the Massachusetts Medical Society in 1860, “I firmly believe that if the whole materia medica [the available drugs] could be sunk to the bottom of the sea it would be all the better for mankind—and all the worse for the fishes.”18
Medical concerns about these proprietary medicines led to calls for their regulation. This resulted in the establishment in the United States in 1906 of the Bureau of Chemistry, the first regulator of medicines, the forerunner of the Food and Drug Administration (FDA), established in 1938, and all such regulators since.19 The Bureau of Chemistry was set up to force companies marketing medicines to specify on the label what the product contained. This minimal intrusion into the market ironically initiated a process whereby increasing regulation forced out of the market the smaller companies unable to meet its demands, fueling the growth of surviving corporations with ever greater marketing capabilities, corporations that a century later have brought us to the brink of Pharmageddon.
Having fought quackery for over 150 years, and latterly alternative or complementary medicines, the medical profession has made it part of its self-image that it has nothing to do with the kind of marketing that makes extravagant claims for spurious remedies. Buyers might have to beware in other domains of life, but not when it comes to medical care.
This confident self-belief can still be found in almost everyone involved in healthcare, from doctors and administrators to nurses, and it is taken at face value by a wider public. The idea that most doctors have been body-snatched and replaced by someone working for a faceless marketing department seems at first inconceivable to most people, the germ perhaps of an amusing idea for a television series, but nothing more.
Doctors, and many of the rest of us, think we see the marketing in medicine—some free pens and lunches, plenty of free samples, perhaps golfing trips for some select doctors to the Caribbean—and feel able to discount this. Modern safeguards are supposed to be in place to manage anything a marketing department dreams up. Our drugs have to pass the scrutiny of regulators and they can only do this if there are controlled trials to show that they work. After that they are available by prescription only from doctors who are increasingly constrained in what they can do by guidelines drawn up by experts on the basis of these treatment trials. These guidelines are built into standards of care across all areas of medicine, and doctors straying from these standards in response to some marketing gimmick risk being sued or losing their medical license.
With these safeguards in place, the idea that medicine could be sucked into a machine that has as little interest in the health of the people who consume its products as a shoe company has in their fitness, but a great deal of interest in using disease and ill health to distinguish drugs that are often as indistinguishable in their effects as one brand of bottled water is from another, will be met with incredulity or incomprehension in many quarters. It doesn't make sense that seriously ill people would pay more heed to the color of the pill or its brand name than to the question of whether the treatment they are on is likely to do more good than harm. Marketing, after all, has little impact in states of famine. Starving people don't hold out for paninis or rye bread.
But this is to miss the point. In the current setup, it is the doctor, not the patient, who consumes, and the doctor is rarely in extremis. What doctors dispense will have a branded glow to it. They will be giving it for a condition that, as likely as not, they were never taught about in medical school, a condition, in fact, now being marketed by some company. Neither the doctor, nor the patients who receive a treatment, nor those who draw up the treatment guidelines are likely to find out that this treatment may actually be less effective than the older treatments recently retired medical colleagues might have prescribed. The doctor is also unlikely to be aware that the markup on drugs like Lipitor, Prozac, and Nexium may be up to 2500 percent. In 1912, medical associations inspired major changes by questioning a markup of 500 percent on the proprietary medicines of the day. In 2012 there is close to silence on markups of 2500 percent. This silence is linked closely to the failure of medical practitioners to understand how industry now brands its products.
THE ONCE AND FUTURE BRANDS
Philippe Pinel, working in Paris in 1809, had few effective treatments aside from some herbs and metals, along with remedies drawn from opium for pain relief or sedation, willow bark also for pain, fever bark for malaria, and foxglove for heart failure. A few years later, in the 1820s, advances in chemistry made it possible to extract opiates from opium, quinine from fever bark, and digitalis from foxglove, so that standard doses of treatments could be given. Then in 1860, just as Oliver Wendell Holmes was consigning almost all available drugs to the bottom of the sea, a new set of medicines emerged, giving doctors like Worcester and Cabot by the 1890s new therapies to employ and the hope of more to come.
In the second half of the nineteenth century, a series of mostly German chemical companies, such as Bayer, BASF, and Hoechst, grew rapidly because of the newfound ability to synthesize dyes from coal tars. These dyes transformed our wardrobes from a series of drab brown and green outfits to yield the blues and pinks and yellows we now have. They also allowed human tissues and cells to be distinguished, giving rise to histology, and bacteria to be stained, creating bacteriology. The ability to distinguish among bacteria made it possible for doctors to differentiate between diphtheria, the most common cause of childhood mortality, and other throat infections, for instance. Doctors like Worcester had been called repeatedly to homes to try to save the lives of children being literally garroted by the membranes the diphtheria bacterium formed deeper and deeper down in their windpipes. They would cut open the child's throat and pass in a tube and hope no membranes formed even lower down, though all too often this failed. Distinguishing diphtheria from other bacteria led to the development of diphtheria antitoxin in 1894. As the membranes from the illness faded in response to an injection of the antitoxin, so also the nightmare of diphtheria began to fade away in areas where the antitoxin was available.20 Successes like this heralded a future of pharmaceutical magic bullets.
Discovering a new dye or drug in late nineteenth-century Germany did not give a company sole rights to that product. If another company could demonstrate that it was able to produce the compound by a different process than that specified by the original company, the second company was entitled to manufacture its version of summer blue, say, or acetylsalicylic acid. When acetanilide, one of the drugs derived from dyes, was shown to be antipyretic (fever reducing), for example, well over ten companies found different ways to produce it. No company could easily make a blockbuster drug in these circumstances. Faced with this scenario, in 1886, another German company, Kalle, borrowing a practice from the proprietary industry, trademarked the name of their version of acetanilide—Antifebrin. Sales boomed. Other companies took note, and Bayer followed by registering Heroin in 1898 and Aspirin in 1899, names with far more resonance to this day than diacetylmorphine or acetylsalicylic acid. The basis for both of these painkillers had first been created close to fifty years earlier, so Aspirin and Heroin were new social rather than pharmaceutical creations.21
The first brands had gained a foothold within medicine, and the medical profession began to make an uneasy accommodation to this changed reality. Medical students were taught to refer to drugs by their generic names—diacetylmorphine rather than Heroin. For almost a century this denial of brands worked passably well, but to continue to pretend that doctors today typically still prescribe generic drugs rather than brands passes beyond denial into fantasy.
This denial was helped by the fact that the promotion of the new magic bullets, from diphtheria antitoxin to Heroin, was directed toward doctors rather than the public at large, and toward conquering the scourge of disease rather than capitalizing on the discontents of everyday life. Sales of these new compounds were promoted by visits from company representatives to doctors and by ads in medical journals, rather than the hoopla customary to the proprietary medicines industry. There was minimal effort to subvert medical judgment, in part perhaps because until the 1940s there was relatively little to sell.
Then, in 1937, Prontosil red, a dye produced by the pharmaceutical conglomerate IG Farben, gave rise to the sulfa antibiotics, which had as great an impact on lethal conditions like bacterial endocarditis, an infection of the lining of the heart, as diphtheria antitoxin had had on diphtheria forty years earlier. Other antibiotics followed in the 1940s, and from the sulfa nucleus came a host of other drugs, including diuretics to remove excess body fluid in heart failure, antihypertensives to lower blood pressure, and oral hypoglycemics to lower blood sugars. In the 1950s, dyes such as methylene blue and summer blue led to the antipsychotics and antidepressants.
These new drugs all came branded by their manufacturers, with brand names that spoke of medical diseases or chemical contents rather than of the old-style panaceas whose promises were in inverse proportion to their efficacy. Thus, just as Kalle had given doctors Antifebrin (antifever), Merck in the 1960s gave them Diuril (for diuresis) and Tryptizol (amitriptyline) as an antidepressant. Initially, as with brands such as Hoover and Mercedes, these new brands traded on quality. The brand stood for the fact that the drug was produced by a reputable company linked to previous breakthroughs, and doctors could accordingly be confident about the pedigree of the product. This was an era of “magic bullets”—penicillin, the thiazide antihypertensives, and antipsychotics such as chlorpromazine—which would have marketed themselves, branded or not.
Indeed, by the 1960s it seemed to many doctors as if medicine had faced down the destructive forces of marketing, as the proprietary medicines industry withered away with the advent of these new magic bullets. In the 1950s and 1960s, few drug-company invaders appeared to crawl out, in the dead of night, from the Trojan horse that brands had introduced into the medical citadel. But the fatal breach had been effected. Changes to the patent laws in the 1960s, allied to the fact that these new drugs were available by prescription only, laid the basis for the emergence of blockbuster branding in the 1980s.
PATENT MEDICINES
Patents offer an exclusive right to produce a good or service. They are granted by a state, are even older than brands, and once provoked almost as much hostility within medicine as brands. Patenting drugs, and thereby restricting access to them either physically or by virtue of the increased price that comes with a monopoly, was for centuries regarded as incompatible with a vocation to alleviate disease. In the case of modern drugs, this period of monopoly lasts for twenty years.
The first patent law was enacted in Venice in 1474, and the idea then spread rapidly throughout Europe.22 In Britain, after widespread complaints about abusive patent monopolies being granted by the Crown for long-existing technologies, the law was tightened in 1624 to limit grants of monopolies to “the sole working or making of any manner of new manufactures within this realm, to the true and first inventor and inventors of such manufactures.”23
Being the exclusive patent holder of a good or service meant that you could sell it at a higher price than would be possible in a competitive market. This, it was hoped, might lure innovative producers to Britain, and their activities would in turn stimulate commerce and improve national revenues.24 In return for this benefit, however, the producer had to show plans to create something novel that plausibly brought some benefit to the wider community.
In Britain, patents went hand in hand with the enclosure of common lands in the sixteenth century, and critics of patenting have since referred to the anticommons effect of the practice. Because science hinges on common access to all data, many scientists and free market advocates have been hostile to patenting. But the deepest hostility to patents throughout the nineteenth and twentieth centuries came from within medicine. Neither the doctors who treated patients nor the pharmacists who dispensed the remedies doctors ordered regarded those remedies as industrial or commercial products, or their own activities as either industrial or commercial.
In France, the Revolution led to the promulgation of a new law in 1791 that permitted drugs to be patented.25 Chemists and trade associations argued at the time for the rights of inventors to be recognized; French physicians and pharmacists, on the other hand, argued against patents: their vocation, they said, was to treat the sick, not to make a profit. Furthermore, patents, they predicted, would lead to an increase in the price of medicines, which would be detrimental to public health.26 In 1844, the French National Assembly reversed the 1791 law and removed medicines from the domain of patentable products.
German law did not permit drugs to be patented, but it did allow companies to defend their product by taking out a patent on the process used to make the compound. Another company could get around the monopoly that these patents created, if they could find another way to make a compound. In some cases, as with acetanilide, this was easy, and it was this that led Kalle and Bayer to trademark their new compounds, which gave them exclusive use of the brand name they chose for their product.
American law, in contrast, allowed patents to be taken out on drugs, even though some of the fathers of the Republic were hostile to patents. Benjamin Franklin refused to take out a patent on a stove he invented, while Jefferson, referring scornfully to England's willingness to let anything be patented, refused to patent a hemp-brake he invented, stating that “nations which refused monopolies of invention are as fruitful as England in new and useful devices.” In this spirit, the nation's patent office was initially stringent in its review of applications for drug patents. In 1922, for example, Lilly attempted but failed to get around the patent on the production of insulin held in the public interest by the University of Toronto.27 When, in the following year, Harry Steenbock discovered that ultraviolet light activated vitamin D and sought to patent this use, he found himself accused of attempting to patent the sun, and the application was thrown out. Referring back to this case in the 1950s, Jonas Salk exemplified the attitude of many American doctors of the time when he refused to patent the polio vaccine.28
The issue of patents came to a head in Britain during World War II, when Ernst Chain and Howard Florey at Oxford University demonstrated penicillin's efficacy for bacterial infections and came up with a method to produce it. Chain suggested patenting the method but Florey and the rest of the group, along with the Medical Research Council that funded the research, were opposed to patenting something so important for clinical care. This was later seen by many as a lost national opportunity, and a new law was passed in 1949 that permitted patenting of medical products.
After the war, the position of an American or British company with a patented product was more secure than that of a German company with a process patent, but these patents still applied only to a national territory, and so the monopoly they offered was limited. In the case of amitriptyline, for example, the best-selling antidepressant during the 1960s, Merck held a patent on it in the United States, Roche (in fact the first to make the drug) held one in Switzerland, and Lundbeck held one in Denmark, as did a laboratory in Czechoslovakia.29 Given the possibility that others might be able to make the very same product, no company could plan to market the drug profitably throughout the developed world. As a result, while some compounds that came to market during the 1950s and 1960s did extremely well, it didn't make sense for companies to invest huge effort in any one compound; except within the United States and Britain, there was no protection against another company making the same drug and cutting into profits.
The important patent changes in international drug markets came in 1960, when France, the country that had been most opposed to patenting medicines, switched to product patents, followed in 1967 by Germany, the country that had developed more pharmaceuticals than all other countries combined. Once companies knew that applying for product patents in all major countries simultaneously blocked the development of any competing products, the way was cleared for the development of blockbusters. The possibility of truly global blockbusters came in the 1990s with the creation of the World Trade Organization's agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS), which extended patent protection worldwide.30 If the patent is valid, this gives a company the possibility of a monopoly on a new product worldwide for twenty years from the date of filing. From that point onward, there could only be one Lipitor, one Nexium, one Prozac, and the way was open for a company to maximize the possibilities inherent in branding—and to go as global as Coca Cola.31
Compared with the vigorous debates in France that led in 1844 to a rollback in the patenting of medicines, and with the discussions surrounding the moves that blocked the patenting of penicillin in Britain and of the polio vaccine in the United States, there was virtual silence in the face of these more recent changes. No one argued that patenting and commerce were incompatible with progress in science and the principles of medicine, as they had earlier.
Several historical factors probably contributed to the silence. World War II had seen heavy state investment in medical research. This investment created partnerships between scientists, universities, and pharmaceutical companies that capitalized knowledge and contributed to the development of what is now termed the knowledge economy. This led in the 1940s and 1950s to an astonishing development of truly novel and extremely effective agents, from the antibiotics and cortisone to the diuretics, antihypertensives, hypoglycemics, and psychotropic drugs, as well as the first chemotherapy for cancer. It seemed we were set on a course in which genuine developments would succeed each other for years to come. The era of snake oil was over. Academic understanding and medical research had developed as never before, laying the basis for real progress, and pharmaceutical companies had played a part in this progress. Besides, even with the change in what could be patented, the spirit of the patent laws and the expectations of the medical community at least notionally remained aimed at providing businesses with a period of monopoly but only in return for a genuine novelty that offered a distinct benefit to the public. Such an arrangement seemed to be an engine for harnessing commercial vigor to public purpose.
It hasn't turned out that way. When assessing the patent application for a drug, the examining officer is supposed to look at whether the structure of a molecule is substantially different from compounds already on the market, and whether it provides a clear clinical benefit, a solution to a problem of medical care for which we have not previously had an answer.32 It is in the interest of a drug company, however, to argue that differences that may appear to be trivial are in fact substantial and innovative, as in some cases they are. But, if a country wishes to build up its pharmaceutical sector, as the United States was intent on doing in the postwar decades, one way to do so is to make it easy to take out patents. The notions of benefit to the community and of novelty can be shaved, so that companies might be awarded patents for trivial variations on a compound that does not clearly confer any benefit in terms of health or other public value.
Against this background let us look at the patenting of Depakote. The American patent on Depakote was taken out in 1991, but the drug in fact came from a French anticonvulsant, sodium valproate, first produced in 1962. By the mid-1960s, it was known that the sedative effects of sodium valproate could be useful in the treatment of mania. When Abbott filed for a patent on semi-sodium valproate in 1991, it was on the basis that minimally reducing the amount of sodium in the compound, a change completely irrelevant to the mode of action of the drug, made it novel. Had Abbott proposed to test this compound, which was trivially different from sodium valproate, for a hitherto incurable disorder, such a stretching of the spirit of the patent law might have been warranted on the basis of clinical need. But all Abbott planned to do was to put it into trials for use in mania, with a result that was a foregone conclusion. That Depakote was granted a patent is indicative of how lax the application of American patent law had become. The reason to go to all this trouble was that sodium valproate was now off-patent—any company could make it—and without marketing exclusivity Abbott thought that it could make little or no money.
Faced with an application for its use in mania, the FDA then licensed Depakote. Surely clinicians would not use the much more expensive on-patent semi-sodium valproate over the far less costly off-patent but essentially identical sodium valproate? Such a prediction ignores the power of the kind of branding that product patents made possible. Clinicians were faced with a brand new compound, a brand new class of drug—a mood stabilizer—and a brand new illness—bipolar disorder—and they fell hard for the package. Depakote became a billion dollar global blockbuster and manic-depressive illness was consigned to the dustbin of history, greatly increasing the costs of healthcare in the process. The success of Depakote lay entirely in Abbott's ability to distinguish between two drops of water—but it was the ability to take out a product patent with global reach that made it worth their while to do so.
In the case of Zyprexa, an antipsychotic and mood stabilizer, the story is just as extraordinary. The first generation of antipsychotics ran into problems in the 1970s, with million-dollar legal settlements against their manufacturers for a disfiguring neurological side effect of treatment—tardive dyskinesia. This led to a period of almost twenty years when no new antipsychotic came on the market. The only antipsychotic that did not cause this problem was clozapine, but clozapine had been withdrawn in 1975 because it was associated with a higher rate of mortality than other antipsychotics.
The way forward seemed to lie in producing a safe clozapine. There were two ways to attempt this. One was to develop a drug that bound to the key brain receptors that clozapine bound to; this method underpinned the patenting of Risperdal (risperidone) and Geodon (ziprasidone). Another way was to make minor adjustments to the clozapine molecule. Tweaking a molecule risks producing a compound with all the hazards and none of the benefits of the parent. This is what Lilly did: in 1974 the company produced a series of compounds that were all abandoned because of toxicity.
As the patent life of that series ebbed away, Lilly had to decide whether to abandon the hunt. This was a company in serious financial trouble, facing potential takeover. On April 29, 1982, they opted to move forward with a compound from the original series that by definition was not novel—olanzapine, later branded as Zyprexa. To make Zyprexa commercially viable, they needed a new patent, which meant demonstrating some benefit not found with other antipsychotics. In 1991, the only novelty presented in the company's new patent application, which was approved, was a study in dogs in which Zyprexa produced less elevation of blood cholesterol levels than another never-marketed drug.
Zyprexa has since turned out to be one of the drugs most likely in all of medicine to increase cholesterol levels in man. Lilly has settled over $2 billion worth of claims that Zyprexa has raised cholesterol and caused diabetes and other metabolic problems. There was arguably a better case to be made for patenting it to raise cholesterol than to treat psychosis.33 Lilly's patent was declared invalid in Canada, though not in the United States or Europe. Despite this, Zyprexa has been one of the biggest selling drugs of all time, grossing $4–5 billion per annum from the late 1990s through 2010. There was no basis to think this drug was any more effective than dozens of others, and a lot of reasons to think it was more problematic for patients, but the marketing power that came with its patented status enabled Lilly to hype its benefits, conceal its hazards, and steer doctors to write enough Zyprexa prescriptions to save the company.
In our brave new world, companies can make blockbuster profits out of a Depakote or Zyprexa. If these two compounds were exceptions, the price might be worth paying for a set of drugs that were otherwise innovative and were leading to treatments for serious conditions that previously went untreated. Many might sigh but most would reconcile themselves to the situation—this is the way the world works. But that world does not seem to be working anymore. Where there were a handful of new tranquilizers, antipsychotics, antidepressants, and stimulants introduced annually year after year from the 1950s onward, the flow of novel psychotropic drugs dried up in the mid-1980s.
The decline of the antidepressants illustrates this all too well. The antidepressant drugs produced from 1958 to 1982 were used primarily for severe mood disorders and as such had a much smaller volume of sales than the benzodiazepine group of drugs, of which the best known, Valium and Librium, quite literally became household names—these were mother's little helpers. Valium and the other benzodiazepines were marketed as tranquilizers for anxiety from 1960 onward. In the 1980s, claims that they caused dependence led to a backlash against the benzodiazepines, leaving the market open for a new group of drugs which, however, could not be called tranquilizers, as this term was now too closely linked to dependence and withdrawal. The strategy seemed clear to the major drug companies: persuade doctors that behind every case of anxiety lay a case of depression, and persuade them that a new group of drugs, the SSRIs (selective serotonin reuptake inhibitors), were both antidepressant and a therapeutic advance, when in fact the companies had almost consigned the SSRIs to the dustbin in the early 1980s because they were not as effective as either the tranquilizers or the older antidepressants. They were also not especially novel, most of them being simple derivatives of preexisting antihistamines, many of which work as well as the SSRIs for nervous problems. Nevertheless this molecular group appeared to offer a modest but patentable amount of novelty and therapeutic benefit. The profits that came with patent status, amounting to $15 billion per year for the group as a whole, provided the means to transform psychiatry's views of common nervous disorders—until the patents on these drugs expired soon after 2000 and clinicians had to be reeducated that the very same patients were now suffering from bipolar disorder and in reality needed a mood stabilizer.
If the SSRIs had been a bridge to a more effective group of compounds this again might have seemed acceptable, but since 2000 almost the only novelty has come from three isomers of earlier SSRIs. Many drugs come in mirror image, or left and right hand (isomer) forms. Typically only one of the hands is active. Until the 1990s it was inconvenient to attempt to separate these isomers. But then in the mid-1990s Sepracor isolated esfluoxetine and dexfluoxetine (Zalutria) from Prozac (fluoxetine). Lundbeck isolated escitalopram (Lexapro) from Celexa (citalopram), and Wyeth isolated desvenlafaxine (Pristiq) from Efexor (venlafaxine). The astonishing thing is that companies have been permitted to take patents out on these compounds, which are as alike to the parent compounds as two drops of water.34
The paucity of genuinely new drugs coming on the market in recent years is not some odd quirk of psychotropic drug development. The best-selling drug for minimizing acid secretion in the gut in the 1980s and 1990s was the proton pump inhibitor Prilosec (omeprazole). Before the patent on omeprazole expired in 2002, its maker, AstraZeneca, simply introduced Nexium (esomeprazole), an isomer of Prilosec, and clinicians shifted from a cheap drop of water to an identical but vastly more expensive drop of water. If a drug does not come in isomeric forms, companies have instead in recent years patented the metabolites of a parent compound and released these as a novel drug, with the approval of the US patent office and almost no resistance from medicine.
When the breakthrough drugs of the 1950s emerged, there were great hopes that not only would they offer remedies for illnesses that we did not have treatments for, but they would also shed light on the nature of the illnesses being treated. In 1896, the advent of diphtheria antitoxin had demonstrated that not all throat infections were diphtheria, thus opening up the idea that bacteriology would be able to carve a mass of respiratory, gut, and other problems into a series of discrete illnesses each of which could be tackled individually. In the 1950s, there were great hopes that the new treatments for disorders like arthritis, depression, hypertension, or schizophrenia might similarly clarify whether these were single diseases or the end result of common pathways into which several different diseases fed.
The new drugs, to use a celebrated phrase, would help us carve nature at its joints, just as distinctions among bacteria had enabled us to carve infectious diseases at their joints. But we have now arrived at almost the precisely opposite point. Rather than drugs being used to carve nature at its joints, nature instead is being used to differentiate drugs whose differences are essentially trivial.
With the new possibilities for profit opened up by a lax to nonexistent application of patent laws, the medical arena has ceased to be a domain in which scientists using new molecular tools push back boundaries. Indeed, since the passage of the Bayh-Dole Act in 1980, which encouraged scientists, including medical academics, to consider patenting the products of their research, clinicians and scientists have seemed keener on making patent applications themselves and setting up start-up companies than on advancing medical knowledge or healthcare. Pharmaceutical companies meanwhile have no interest in what molecules might reveal about how humans work. Molecules are only interesting insofar as they can be used to capture market niches. Medicine may look the same as it has always done to onlookers; the marketers know it's not.
BRANDS AND PATENTS
The emergence of product patents transformed the importance of branding. By the 1980s, when H-2 blockers for ulcers, statins for cholesterol, SSRIs for depression, and other drugs were in the pipeline, branding had become so important to companies that the job was outsourced to specialist Manhattan-based companies like InterBrand and MediBrand. No longer would drugs be called Diuril or Tryptizol. We were about to get Prozac, Viagra, Zestril, and Nexium, names that bore no relation to the underlying chemical or disease but were aimed at differentiating between the new Nikes and Reeboks of the medical world and hinted at the restoration to youthful vigor that nineteenth-century brands had shamelessly promoted.
Branding now extends far beyond generating and market-testing fancy names for drugs. Brands nest within brands. The new marketers brand drug classes and diseases with far-reaching implications for medicine and society.
For instance when the makers of the new antidepressants of the 1990s needed to distinguish their drugs from older drug treatments, the term SSRI (selective serotonin reuptake inhibitor) emerged. This is not a medical or scientific term. Serotonin is a neurotransmitter in the brain, but both new and old antidepressants acted on it and the new drugs were in fact no more selective than some older drugs. The term SSRI came from the marketing department of SmithKline Beecham as part of their effort to distinguish their Paxil from Lilly's Prozac and Pfizer's Zoloft, but all three companies used the term to create the appearance of a new class of drugs and provide a common platform from which to launch marketing efforts designed to marginalize older—and demonstrably more effective—treatments.35
To this day, the brand names of drugs do not feature in medical textbooks, but these same books all include sections on statins, SSRIs, and ACE (angiotensin-converting enzyme) inhibitors as though these are medical terms, when in fact they are brand-like names that replace medical terminology. Statins such as Lipitor are just one subset of lipid-lowering drugs that include equally effective older drugs such as nicotinic acid. Zestril and its sister compounds hit the market in the 1980s as “ACE inhibitors,” rather than simply as antihypertensives, and became bestsellers as the SSRIs did—replacing cheaper and more effective antihypertensives.
One of the most striking instances of the branding of a new drug class has been the creation of the idea of a “mood stabilizer.” This once rarely used term was summoned up by Abbott Laboratories in the 1990s and pressed into use in the marketing of their newly patented Depakote. Depakote, as we have seen, was approved by the FDA in 1995, but it was only approved for the treatment of the manic pole of what was once called manic-depressive illness. Such approval was not surprising—giving any sedative to manic patients will produce a change that can be portrayed as a benefit. More surprising was the company's application for approval. There are comparatively few manic patients, and a lot of sedatives were already in use to manage their illness. If there was to be any serious money in Abbott's move, it had to lie in the much larger market of people whose moods could be portrayed as fluctuating unhelpfully, who were in need of “mood stabilization.” But Abbott's license did not permit it to claim that Depakote was prophylactic—the company couldn't claim it would stop moods swinging—or indeed even that it was a treatment for manic-depressive illness.
However, from the start ads for Depakote carried a claim that it was a mood stabilizer. Had Abbott said prophylactic, indicating that this drug had been shown to prevent mood swings, they would have broken the law. The beauty of the term mood stabilizer is that it had no precise meaning. But what else would a mood stabilizer be if not prophylactic? And this verbal construction would lead prescription writers to use it for that purpose, even though no controlled trials have ever demonstrated Depakote to be prophylactic. Far from being a well-grounded scientific idea, the term mood stabilizer was an almost perfect advertising term— as successful a brand as the term tranquilizer had been in the 1950s and SSRI in the 1990s.
All of a sudden everyone seemed to know what a mood stabilizer was. There was an exponential increase in the number of articles in medical journals with this term in the title—from none in 1990 to over a hundred per year by 2000. Within a few years, all psychopharmacology books had sections on mood stabilizers. It was as if, in the middle of a TV drama series like Buffy the Vampire Slayer, the main character were given a sister she never knew she had. When it comes to entertainment we can accommodate developments like this without blinking, but it is not the kind of thing we expect to happen in science or medicine without solid evidence.
The emergence of mood stabilizers coincided with increasing estimates of the prevalence of what, in another successful piece of rebranding, was now almost always called bipolar disorder. Up to the launch of Depakote in 1995, almost everyone had heard of manic-depressive illness but soon this term all but disappeared, replaced by bipolar disorder. By 2005 over five hundred articles per year in the medical literature referred to bipolar disorder in their titles, with almost none mentioning manic-depressive illness.
This rebranding reengineered the disorder from the ground up. Manic-depressive illness had been a rare and serious condition affecting ten people per million, who invariably had to be admitted to hospital. Bipolar disorder, in contrast, supposedly affects up to 50,000 people per million, and efforts are now underway to persuade primary care clinicians that a wide range of the minor nervous problems they see are indicative of underlying bipolar disorder rather than anxiety or depression, and that these patients should be treated with newer and more costly mood stabilizers, such as Zyprexa or Seroquel, rather than older and cheaper antidepressants or tranquilizers.36
Bipolar disorder became intensely fashionable with extraordinary rapidity, promoted by assiduous disease awareness campaigns through direct-to-consumer advertising on television in the United States, and patient educational material in Europe, encouraging patients to complete self-assessments and ask their doctor whether bipolar disorder might be the cause of their problems. It became fashionable to the point where clothes and accessories could be bought online celebrating the wearer's bipolarity.37 Within a decade, one of the most serious of mental illnesses had gone from being a devastating disease to being a lifestyle option.
Everybody, it seems, stood to gain—physicians, companies, and patients. Bipolar disorder could be portrayed as a genetic disorder—not a parent's fault. While no one likes to have a biological disease, this one was portrayed in pharmaceutical-company-sponsored booklets38 and ads as a disease linked to creativity that supposedly had affected major artistic figures of the nineteenth and twentieth centuries, from Vincent van Gogh and Robert Schumann to Robert Lowell and Sylvia Plath. Public authorities meanwhile could support screening programs such as TeenScreen, introduced in many American schools beginning in 2005, to detect the condition and trigger treatment as early as possible, in order to avoid any number of social and individual ills, such as suicide, divorce, career failure, crime, and substance misuse, that might stem from a failure to detect and treat.39
For the specialists new journals appeared—Bipolar Disorder, The Journal of Bipolar Disorders, Clinical Approaches in Bipolar Disorders, and others, all made possible by unrestricted educational grants from pharmaceutical companies. From 1995 onward a slew of societies and global conferences appeared as well—The International Society for Bipolar Disorders, The International Review of Bipolar Disorders, The International Society for Affective Disorders, The Organization for Bipolar Affective Disorders, The European Bipolar Forum, The Australasian Society for Bipolar Disorders, and many others.
In just the same way, impotence vanished and was replaced by erectile dysfunction, frigidity by female sexual desire disorder, and boisterousness in children by ADHD. The skill lies in understanding the market and positioning a drug accordingly. In 1980, for instance, the newly created panic disorder was viewed as a severe form of anxiety; Upjohn's marketing goal was to get Xanax on the market for panic disorder, in the expectation that the perception of Xanax as good for severe anxiety would lead to leakage into prescriptions for other forms of anxiety as well.40 As we shall see in later chapters, marketing like this can conjure diseases like osteopenia, restless legs syndrome, and fibromyalgia out of thin air. This is now called disease mongering. More alarming still, an "opportunity cost" of such marketing is that medical diseases with a pedigree going back two millennia, such as catatonia, can vanish if no company stands to make money from helping medical or nursing staff to recognize their presence; as a result, patients may die when the means to treat them lies inches away.41
Once the addition of a branded drug to a doctor's arsenal was a minor event in medical culture, but now the insertion of a Viagra or a Vioxx into the medical marketplace will often displace the existing medical culture in an area of treatment, as the examples of mood stabilizers and bipolar disorder illustrate. Disorders that were once defined by patients' needs for medical services and doctors' perceptions of their pathology are now increasingly defined by the goals of marketers. Furthermore, this now happens on a global basis. Whereas once the brand names of drugs differed from country to country and huge differences existed between Japanese and American medicine, say, or between French and German medicine, from the mid-1990s drugs like Zyprexa, Lipitor, and Viagra have been launched globally with essentially the same marketing in every country. Partly as a result of these onslaughts, differences in medical cultures are being flattened down to a common pharmaceutical denominator. Where almost no one had bipolar disorder, osteoporosis, or female sexual dysfunction two decades ago, these new conditions are now global epidemics.
Claims like these are cheap. If it were so simple to capture the institution of medicine just by coming up with fancy names for drugs, drug classes, and diseases, the proprietary medicines industry of the nineteenth century would never have died out. But driven by a real desire to care for the most vulnerable in society and by a commitment to science, medicine kept these imitations of medical remedies at bay for a century, with the exception of some holdovers from the former era such as Listerine and Clearasil. Good medicines clearly pushed out bad ones, in large part because they were based on good science. And good science continues—we now have astonishing developments in genetics and in medical imaging. The argument presented here therefore needs to pull back the curtain on the tricks of the pharmaceutical trade and show not only how modern marketing has closed in on the holy grail of fooling all of the people all of the time, but also why there has been so little resistance from doctors.
A great deal of the marketers' sleight of hand has involved manipulating the appearances of science. There was the early twentieth-century science that produced the sulfa drugs and other antibiotics such as penicillin, which let the dying rise from their deathbeds. Science like this cuts across marketing: the results were so dramatic that the drugs in effect sold themselves. But the best-selling drugs today aren't like this. They come wrapped in numbers that appear to come from science but that have been fashioned by marketers to indicate abnormalities of lipids, blood pressure, blood sugar, mood, bone density, and respiratory flow, as well as of penile stiffness and clitoral sensitivity, that their company's drugs just happen to treat.
But science on its own, however artfully presented, would not have produced the comprehensive shift toward lifestyle drugs we have seen in recent decades, or permitted pharmaceutical companies to penetrate the inner sanctums of medicine and transform it from a profession deeply hostile to marketing into a marketer's dream. More has been involved. We have dealt with one structural element—the change in patent laws. We will now move on to two others—the emergence of prescription-only status for new drugs and the turn to controlled trials in the evaluation of drugs.
CLIMATE CHANGE
In retrospect the twenty-five years stretching from 1937, when the sulfa drugs were first introduced, to 1962, when US food and drug law was revised to tighten the regulation of pharmaceuticals, seem like a golden age. More novel agents were introduced during this period than at any time before or since—the first antibiotics, antihypertensives, antipsychotics, and antidepressants, and the first oral antidiabetic drug. The period had not started well, however. Soon after sulfanilamide was introduced in 1937, a pharmacist in Oklahoma, unaware of the toxicity of diethylene glycol, sold sulfanilamide made up in this solvent, leading to over a hundred deaths.42 In response, in 1938, American politicians stepped in to regulate commerce in medicines through the Food, Drug, and Cosmetic Act. In 1962, they stepped in again to regulate the industry, with consequences that will follow us through to the end of the book.
Up to the late 1950s, prior to the passage of the 1962 amendments, in a history now all but forgotten, the American Medical Association (AMA) had laboratories where it conducted its own testing of new drugs. It vetted advertisements run in its journal, the Journal of the American Medical Association (JAMA), for accuracy and permitted only those that earned its Seal of Approval. It regularly ran assessments of new treatments that were not beholden to the pharmaceutical industry. It was known for its support of generic formulations of drugs in preference to branded ones. But in the 1950s these curbs on promotion stopped. The Seal of Approval scheme was watered down as the AMA sought further advertising revenue from pharmaceutical and other companies to fight Democratic plans to introduce a Medicare bill in Congress. With the new advertising, its revenues doubled.
In the 1950s there emerged a new set of discontents with the practices of the pharmaceutical industry and the prices companies were charging for their drugs. These discontents were brought into public focus by the Democratic senator from Tennessee, Estes Kefauver. Kefauver's interest was stimulated when members of his staff found that several versions of the same antibiotic, marketed by different companies, had identical prices, and that the prices being charged were on the order of 1,000 percent of the cost of manufacture. As they explored the issues, Kefauver's staff found compelling evidence that companies were secretly engaging in cartel practices to maintain the price of medicines and corrupting doctors with backdoor payments to prescribe on-patent, more expensive drugs. There seemed to be, as Kefauver put it, "an upside down competition where prices continue to go up even when production remains low or declines."43 As chair of the Senate Antitrust and Monopoly Subcommittee he had the mandate to investigate what might be behind the apparent price-rigging.
Another of Kefauver's concerns was drug advertising. There was the sheer volume: as Walter Griffith of Parke Davis told Kefauver, "the ethical pharmaceutical industry of this country" had turned out "3,790,908,000 pages of paid journal advertising" and "741,213,700 direct mail impressions."44 But of greater concern was that the ads were commonly misleading and in many cases downright fallacious. Kefauver's staff unearthed one ad for an antibiotic that displayed two chest X-rays, giving the impression of clinical improvement, when the X-rays in fact came from two different patients, neither of whom had received the antibiotic featured. As Dale Console, a former medical director at the Squibb pharmaceutical company, later put it at Kefauver's Senate hearings: "If an automobile does not have a motor, no amount of advertising can make it appear to have one. On the other hand, with a little luck, proper timing, and a good promotion program, a bag of asafetida with a unique chemical side chain can be made to look like a wonder drug."45
Still other concerns lay in drug companies' practices of withholding safety data, their failure to test new drugs on animals prior to marketing them to humans, and, most problematic of all, the fact that the regulators had no procedures in place to ensure a drug worked. The 1938 Food, Drug, and Cosmetic Act required companies only to demonstrate safety in a number of patients, without even basic toxicology testing in animals. As Kefauver's staff noted, if a drug didn't work for the condition for which it was marketed, or worked less well than an already available product, then it was inherently unsafe. These discontents led in 1959 to the establishment of Kefauver's Senate hearings on pharmaceutical practices.46
Kefauver's main target was the patent system, which he thought was primarily responsible for the artificially high prices American patients uniquely faced. At the hearings, he elicited some revealing testimony from Frederick Meyers, a University of California professor of pharmacology, who admitted that "most of the program [in drug research] has come from European and British researchers." The purpose of much of the work done by American drug firms was, according to Meyers, "partly to exploit and market" these foreign products but "mostly to modify the original drug just enough to get a patentable derivative."47 Was this a good idea? Kefauver's staff produced figures showing that, of 77 countries surveyed, 28 allowed product patents, and in these countries the prices of drugs ranged from 18 to 255 times higher than in the nonpatent countries, with both American-made and European-made drugs costing far less in Europe than in the United States.
But as Kefauver found, "These drug fellows pay for a lobby that makes the steel boys look like popcorn vendors…anyone who dares seek the truth will be accused of being a persecutor."48 Up for reelection in 1960, he found himself branded a "socialist hell-bent on ruining healthcare." He was reelected comfortably, but when it came to his bill, despite his having been the 1956 Democratic vice-presidential candidate, Kefauver had no support from the Kennedy administration, which was at the time trying to get Medicare through Congress and did not want to antagonize the pharmaceutical industry. Nor did he have support from the American Medical Association, even for something as basic as a requirement that companies prove their drugs work before they are let onto the market. The AMA was gearing up to fight Medicare and was dependent on the increasing revenue it was receiving from pharmaceutical companies advertising in its journals.
Kefauver's bill (S. 1552) was rewritten by his congressional opponents to make it more company-friendly, and in this form it seemed to have good prospects of passage. But then reports began to surface from Germany of the effects of a drug called thalidomide. Thalidomide was a sleeping pill sold over the counter in Germany and about to be marketed in the United States by Merrell Pharmaceuticals when it was linked to a new and disturbing problem: babies of mothers who had taken the drug were born limbless, or with useless flippers where limbs should have been (phocomelia). The makers, Chemie Grünenthal, fought the linkage to their drug and removed thalidomide from the German market only under pressure. Almost a year after the first reports, Merrell was still mailing samples of the compound to American doctors, even though it had not been licensed in the United States.
These events transformed the political imperative. Kefauver's bill was resurrected and rushed through both House and Senate, resulting in the 1962 amendments to the Food, Drug, and Cosmetic Act. These mandated proper animal testing of drugs for toxicity before launch and gave the FDA control over advertising. The new bill contained three further provisions whose far-reaching ramifications will be explored in chapters 2 and 3: it maintained prescription-only status for all new drugs, it required that companies demonstrate their drugs worked for a specified condition (where before they had only to prove safety), and it required companies to use controlled studies to demonstrate drug benefits. Kefauver's bill, however, was stripped of its provisions to change patent law, despite support from the chief patent officer. And because patent law wasn't changed, the 1962 amendments had no effect on Kefauver's primary target—control of the prices of drugs.
While it failed in its primary objective, the stripped-down bill was passed to wide acclaim. Kefauver, flanked by the junior senator from Tennessee, Albert Gore, Sr., was given the honor of speaking to it on the Senate floor. The disturbing changes in the climate of medicine would, he hoped, now be stopped or even reversed. Kennedy and Kefauver basked in the glow of success. Frances Kelsey, the FDA staffer whose bureaucratic delay in reviewing and handling the license application for thalidomide undoubtedly restricted the number of American children exposed to the drug in utero, received a President's Award for Distinguished Federal Civilian Service. The reforms to the FDA were copied by regulatory agencies worldwide. When it came to drugs, the management of pregnancy became the one area of medicine that most closely conformed to Pinel's hopes for all of medicine—that doctors, in knowing when not to prescribe, would demonstrate the highest medical art. Many still think this to be the case, but today's reality is quite different.