How much data is missing?

If you want to prove that trials have been left unpublished, you have an interesting problem: you need to prove the existence of studies you don’t have access to. To work around this, people have developed a simple approach: you identify a group of trials you know have been conducted and completed, then check to see if they have been published. Finding a list of completed trials is the tricky part of this job, and to achieve it people have used various strategies: trawling the lists of trials that have been approved by ethics committees (or ‘institutional review boards’ in the USA), for example; or chasing up the trials discussed by researchers at conferences.

In 2008 a group of researchers decided to check for publication of every trial that had ever been reported to the US Food and Drug Administration for all the antidepressants that came onto the market between 1987 and 2004.12 This was no small task. The FDA archives contain a reasonable amount of information on all the trials that were submitted to the regulator in order to get a licence for a new drug. But that’s not all the trials, by any means, because those conducted after the drug has come onto the market will not appear there; and the information that is provided by the FDA is hard to search, and often scanty. But it is an important subset of the trials, and more than enough for us to begin exploring how often trials go missing, and why. It’s also a representative slice of trials from all the major drug companies.

The researchers found seventy-four studies in total, representing 12,500 patients’ worth of data. Thirty-eight of these trials had positive results, and found that the new drug worked; thirty-six were negative. The results were therefore an even split between success and failure for the drugs, in reality. Then the researchers set about looking for these trials in the published academic literature, the material available to doctors and patients. This provided a very different picture. Thirty-seven of the positive trials – all but one – were published in full, often with much fanfare. But the trials with negative results had a very different fate: only three were published. Twenty-two were simply lost to history, never appearing anywhere other than in those dusty, disorganised, thin FDA files. The remaining eleven which had negative results in the FDA summaries did appear in the academic literature, but were written up as if the drug was a success. If you think this sounds absurd, I agree: we will see in Chapter 4, on ‘bad trials’, how a study’s results can be reworked and polished to distort and exaggerate its findings.

This was a remarkable piece of work, spread over twelve drugs from all the major manufacturers, with no stand-out bad guy. It very clearly exposed a broken system: in reality we have thirty-eight positive trials and thirty-six negative ones; in the academic literature we have forty-eight positive trials and three negative ones. Take a moment to flip back and forth between those in your mind: ‘thirty-eight positive trials, thirty-six negative’; or ‘forty-eight positive trials and only three negative’.
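To make that flip concrete, here is a minimal back-of-the-envelope sketch in Python, using only the counts quoted above (the variable names are my own): thirty-eight positive and thirty-six negative trials in the FDA's files, against forty-eight apparently positive and only three negative papers in the journals.

```python
# Back-of-the-envelope check using the counts quoted above from the FDA files
# on twelve antidepressants (1987-2004). Variable names are illustrative.

fda_positive, fda_negative = 38, 36   # what the regulator actually held
journal_positive = 37 + 11            # 37 genuinely positive papers + 11 negatives written up as positive
journal_negative = 3                  # negative trials published as negative

real_success_rate = fda_positive / (fda_positive + fda_negative)
apparent_success_rate = journal_positive / (journal_positive + journal_negative)

print(f"Success rate in the FDA files:        {real_success_rate:.0%}")      # about 51%
print(f"Success rate in the journal articles: {apparent_success_rate:.0%}")  # about 94%
```

The twenty-two trials that never appeared anywhere simply drop out of the second calculation, which is exactly the problem.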

If we were talking about one single study, from one single group of researchers, who decided to delete half their results because they didn’t give the overall picture they wanted, then we would quite correctly call that act ‘research misconduct’. Yet somehow when exactly the same phenomenon occurs, but with whole studies going missing, at the hands of hundreds and thousands of individuals, spread around the world, in both the public and private sector, we accept it as a normal part of life.13 It passes by, under the watchful eyes of regulators and professional bodies who do nothing, as routine, despite the undeniable impact it has on patients.

Even more strange is this: we’ve known about the problem of negative studies going missing for almost as long as people have been doing serious science.

This was first formally documented by a psychologist called Theodore Sterling in 1959.14 He went through every paper published in the four big psychology journals of the time, and found that 286 out of 294 reported a statistically significant result. This, he explained, was plainly fishy: it couldn’t possibly be a fair representation of every study that had been conducted, because if we believed that, we’d have to believe that almost every theory ever tested by a psychologist in an experiment had turned out to be correct. If psychologists really were so great at predicting results, there’d hardly be any point in bothering to run experiments at all. In 1995, at the end of his career, the same researcher came back to the same question, half a lifetime later, and found that almost nothing had changed.15

Sterling was the first to put these ideas into a formal academic context, but the basic truth had been recognised for many centuries. Francis Bacon explained in 1620 that we often mislead ourselves by only remembering the times something worked, and forgetting those when it didn’t.16 Fowler in 1786 listed the cases he’d seen treated with arsenic, and pointed out that he could have glossed over the failures, as others might be tempted to do, but had included them.17 To do otherwise, he explained, would have been misleading.

Yet it was only three decades ago that people started to realise that missing trials posed a serious problem for medicine. In 1980 Elina Hemminki found that almost half the trials conducted in the mid-1970s in Finland and Sweden had been left unpublished.18 Then, in 1986, an American researcher called Robert Simes decided to investigate the trials on a new treatment for ovarian cancer. This was an important study, because it looked at a life-or-death question. Combination chemotherapy for this kind of cancer has very tough side effects, and knowing this, many researchers had hoped it might be better to give a single ‘alkylating agent’ drug first, before moving on to full chemotherapy. Simes looked at all the trials published on this question in the academic literature, read by doctors and academics. From this, giving a single drug first looked like a great idea: women with advanced ovarian cancer (which is not a good diagnosis to have) who were on the alkylating agent alone were significantly more likely to survive longer.

Then Simes had a smart idea. He knew that sometimes trials can go unpublished, and he had heard that papers with less ‘exciting’ results are the most likely to go missing. To prove that this has happened, though, is a tricky business: you need to find a fair, representative sample of all the trials that have been conducted, and then compare their results with the smaller pool of trials that have been published, to see if there are any embarrassing differences. There was no easy way to get this information from the medicines regulator (we will discuss this problem in some detail later), so instead he went to the International Cancer Research Data Bank. This contained a register of interesting trials that were happening in the USA, including most of the ones funded by the government, and many others from around the world. It was by no means a complete list, but it did have one crucial feature: the trials were registered before their results came in, so any list compiled from this source would be, if not complete, at least a representative sample of all the research that had ever been done, and not biased by whether their results were positive or negative.

When Simes compared the results of the published trials against the pre-registered trials, the results were disturbing. Looking at the academic literature – the studies that researchers and journal editors chose to publish – alkylating agents alone looked like a great idea, reducing the rate of death from advanced ovarian cancer significantly. But when you looked only at the pre-registered trials – the unbiased, fair sample of all the trials ever conducted – the new treatment was no better than old-fashioned chemotherapy.
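The logic of that comparison is worth spelling out, so here is a small illustrative sketch of the same idea: pool the published trials, pool the registry-based sample, and compare. Every number in it is invented purely for illustration (Simes's actual data and pooling method are in his 1986 paper); the point is only that a flattering published subset can pull the pooled answer away from what the full, pre-registered sample shows.

```python
# Illustrative sketch of the comparison Simes made: pool the published trials,
# then pool the pre-registered sample, and compare. All numbers are invented.
import math

def pooled_hazard_ratio(trials):
    """Simple inverse-variance pooling of per-trial log hazard ratios.

    Each trial is (log_hr, se): the log hazard ratio for death on the new
    treatment versus standard chemotherapy, and its standard error.
    """
    weights = [1 / se**2 for _, se in trials]
    pooled = sum(w * log_hr for (log_hr, _), w in zip(trials, weights)) / sum(weights)
    return math.exp(pooled)

# Hypothetical registered sample: a mix of favourable and unfavourable trials.
registered = [(-0.30, 0.15), (0.05, 0.20), (0.10, 0.18), (-0.05, 0.25), (0.15, 0.22)]
# Hypothetical published subset: mostly the favourable ones survive to print.
published = [(-0.30, 0.15), (-0.05, 0.25)]

print(f"Pooled HR, published trials only: {pooled_hazard_ratio(published):.2f}")   # looks protective
print(f"Pooled HR, all registered trials: {pooled_hazard_ratio(registered):.2f}")  # close to 1.0, no benefit
```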

Simes immediately recognised – as I hope you will too – that the question of whether one form of cancer treatment is better than another was small fry compared to the depth charge he was about to set off in the medical literature. Everything we thought we knew about whether treatments worked or not was probably distorted, to an extent that might be hard to measure, but that would certainly have a major impact on patient care. We were seeing the positive results, and missing the negative ones. There was one clear thing we should do about this: start a registry of all clinical trials, demand that people register their study before they start, and insist that they publish the results at the end.

That was 1986. A generation later, we have done very badly. In this book, I promise I won’t overwhelm you with data. But at the same time, I don’t want any drug company, or government regulator, or professional body, or anyone who doubts this whole story, to have any room to wriggle. So I’ll now go through all the evidence on missing trials, as briefly as possible, showing the main approaches that have been used. All of what you are about to read comes from the most current systematic reviews on the subject, so you can be sure that it is a fair and unbiased summary of the results.

One research approach is to get all the trials that a medicines regulator has record of, from the very early ones done for the purposes of getting a licence for a new drug, and then check to see if they all appear in the academic literature. That’s the method we saw used in the paper mentioned above, where researchers sought out every paper on twelve antidepressants, and found that a 50/50 split of positive and negative results turned into forty-eight positive papers and just three negative ones. This method has been used extensively in several different areas of medicine:

 Lee and colleagues, for example, looked for all of the 909 trials submitted alongside marketing applications for all ninety new drugs that came onto the market from 2001 to 2002: they found that 66 per cent of the trials with significant results were published, compared with only 36 per cent of the rest.19

 Melander, in 2003, looked for all forty-two trials on five antidepressants that were submitted to the Swedish drug regulator in the process of getting a marketing authorisation: all twenty-one studies with significant results were published; only 81 per cent of those finding no benefit were published.20

 Rising et al., in 2008, found more of those distorted write-ups that we’ll be dissecting later: they looked for all trials on two years’ worth of approved drugs. In the FDA’s summary of the results, once those could be found, there were 164 trials. Those with favourable outcomes were a full four times more likely to be published in academic papers than those with negative outcomes. On top of that, four of the trials with negative outcomes changed, once they appeared in the academic literature, to favour the drug.21

If you prefer, you can look at conference presentations: a huge amount of research gets presented at conferences, but our current best estimate is that only about half of it ever appears in the academic literature.22 Studies presented only at conferences are almost impossible to find, or cite, and are especially hard to assess, because so little information is available on the specific methods used in the research (often as little as a paragraph). And as you will see shortly, not every trial is a fair test of a treatment. Some can be biased by design, so these details matter.

The most recent systematic review of studies looking at what happens to conference papers was done in 2010, and it found thirty separate studies looking at whether negative conference presentations – in fields as diverse as anaesthetics, cystic fibrosis, oncology, and A&E – disappear before becoming fully-fledged academic papers.23 Overwhelmingly, unflattering results are much more likely to go missing.

If you’re very lucky, you can track down a list of trials whose existence was publicly recorded before they were started, perhaps on a register that was set up to explore that very question. From the pharmaceutical industry, up until very recently, you’d be very lucky to find such a list in the public domain. For publicly-funded research the story is a little different, and here we start to learn a new lesson: although the vast majority of trials are conducted by the industry, with the result that they set the tone for the community, this phenomenon is not limited to the commercial sector.

 By 1997 there were already four studies in a systematic review on this approach. They found that studies with significant results were two and a half times more likely to get published than those without.24

 A paper from 1998 looked at all trials from two groups of triallists sponsored by the US National Institutes of Health over the preceding ten years, and found, again, that studies with significant results were more likely to be published.25

 Another looked at drug trials notified to the Finnish National Agency, and found that 47 per cent of the positive results were published, but only 11 per cent of the negative ones.26

 Another looked at all the trials that had passed through the pharmacy department of an eye hospital since 1963: 93 per cent of the significant results were published, but only 70 per cent of the negative ones.27

The point being made in this blizzard of data is simple: this is not an under-researched area; the evidence has been with us for a long time, and it is neither contradictory nor ambiguous.

Two French studies in 2005 and 2006 took a new approach: they went to ethics committees, and got lists of all the studies they had approved, and then found out from the investigators whether the trials had produced positive or negative results, before finally tracking down the published academic papers.28 The first study found that significant results were twice as likely to be published; the second that they were four times as likely. In Britain, two researchers sent a questionnaire to all the lead investigators on 101 projects paid for by NHS R&D: it’s not industry research, but it’s worth noting anyway. This produced an unusual result: there was no statistically significant difference in the publication rates of positive and negative papers.29

But it’s not enough simply to list studies. Systematically taking all the evidence that we have so far, what do we see overall?

It’s not ideal to lump every study of this type together in one giant spreadsheet, to produce a summary figure on publication bias, because they are all very different, in different fields, with different methods. This is a concern in many meta-analyses (though it shouldn’t be overstated: if there are lots of trials comparing one treatment against placebo, say, and they’re all using the same outcome measurement, then you might be fine just lumping them all in together).

But you can reasonably put some of these studies together in groups. The most current systematic review on publication bias, from 2010, from which the examples above are taken, draws together the evidence from various fields.30 Twelve comparable studies follow up conference presentations, and taken together they find that a study with a significant finding is 1.62 times more likely to be published. For the four studies taking lists of trials from before they started, overall, significant results were 2.4 times more likely to be published. Those are our best estimates of the scale of the problem. They are current, and they are damning.
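For anyone who wants to see what a figure like ‘1.62 times more likely to be published’ means mechanically, here is a minimal sketch. The function is my own illustration and the example counts are invented; the 1.62 and 2.4 figures above come from the 2010 systematic review itself, not from this calculation.

```python
# Minimal sketch of a relative rate of publication of the kind quoted above.
# Example counts are invented for illustration only.

def publication_rate_ratio(published_sig, total_sig, published_null, total_null):
    """Publication rate of significant studies divided by that of null studies."""
    return (published_sig / total_sig) / (published_null / total_null)

# e.g. if 80 of 100 'positive' studies appear in print, but only 40 of 100 'negative' ones:
print(publication_rate_ratio(80, 100, 40, 100))  # -> 2.0: positives twice as likely to be published
```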

All of this missing data is not simply an abstract academic matter: in the real world of medicine, published evidence is used to make treatment decisions. This problem goes to the core of everything that doctors do, so it’s worth considering in some detail what impact it has on medical practice. Firstly, as we saw in the case of reboxetine, doctors and patients are misled about the effects of the medicines they use, and can end up making decisions that cause avoidable suffering, or even death. We might also choose unnecessarily expensive treatments, having been misled into thinking they are more effective than cheaper older drugs. This wastes money, ultimately depriving patients of other treatments, since funding for health care is never infinite.

It’s also worth being clear that this data is withheld from everyone in medicine, from top to bottom. NICE, for example, is the National Institute for Health and Clinical Excellence, created by the British government to conduct careful, unbiased summaries of all the evidence on new treatments. It is unable either to identify or to access data that has been withheld by researchers or companies on a drug’s effectiveness: NICE has no more legal right to that data than you or I do, even though it is making decisions about effectiveness, and cost-effectiveness, on behalf of the NHS, for millions of people. In fact, as we shall see, the MHRA and EMA (the European Medicines Agency) – the regulators that decide which drugs can go on the market in the UK – often have access to this information, but do not share it with the public, with doctors, or with NICE. This is an extraordinary and perverse situation.

So, while doctors are kept in the dark, patients are exposed to inferior treatments, ineffective treatments, unnecessary treatments, and unnecessarily expensive treatments that are no better than cheap ones; governments pay for unnecessarily expensive treatments, and mop up the cost of harms created by inadequate or harmful treatment; and individual participants in trials, such as those in the TGN1412 study, are exposed to terrifying, life-threatening ordeals, resulting in lifelong scars, again quite unnecessarily.

At the same time, the whole of the research project in medicine is retarded, as vital negative results are held back from those who could use them. This affects everyone, but it is especially egregious in the world of ‘orphan diseases’, medical problems that affect only small numbers of patients, because these corners of medicine are already short of resources, and are neglected by the research departments of most drug companies, since the opportunities for revenue are thinner. People working on orphan diseases will often research existing drugs that have been tried and failed in other conditions, but that have theoretical potential for the orphan disease. If the data from earlier work on these drugs in other diseases is missing, then the job of researching them for the orphan disease is both harder and more dangerous: perhaps they have already been shown to have benefits or effects that would help accelerate research; perhaps they have already been shown to be actively harmful when used on other diseases, and there are important safety signals that would help protect future research participants from harm. Nobody can tell you.

Finally, and perhaps most shamefully, when we allow unflattering data to go unpublished, we betray the patients who participated in these studies: the people who have given their bodies, and sometimes their lives, in the implicit belief that they are doing something to create new knowledge, that will benefit others in the same position as them in the future. In fact, their belief is not implicit: often it’s exactly what we tell them, as researchers, and it is a lie, because the data might be withheld, and we know it.

So whose fault is this?
