
1 Missing Data

Sponsors get the answer they want


Before we get going, we need to establish one thing beyond any doubt: industry-funded trials are more likely to produce a positive, flattering result than independently-funded trials. This is our core premise, and you’re about to read a very short chapter, because this is one of the most well-documented phenomena in the growing field of ‘research about research’. It has also become much easier to study in recent years, because the rules on declaring industry funding have become a little clearer.

We can begin with some recent work: in 2010, three researchers from Harvard and Toronto found all the trials looking at five major classes of drug – antidepressants, ulcer drugs and so on – then measured two key features: were they positive, and were they funded by industry?1 They found over five hundred trials in total: 85 per cent of the industry-funded studies were positive, but only 50 per cent of the government-funded trials were. That’s a very significant difference.
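
To put 'a very significant difference' in numbers, here is a quick sketch of the standard chi-squared test you would run on such a comparison. The 300/200 split between industry- and government-funded trials is my own illustrative assumption – the chapter gives only the two percentages and an approximate total – so treat the output as a demonstration of the arithmetic, not a re-analysis of the study.

```python
# Illustrative sketch only: the 300/200 split is assumed, since the chapter
# reports roughly 500 trials and the two percentages, not the exact counts.
from scipy.stats import chi2_contingency

industry_total, government_total = 300, 200            # assumed split of ~500 trials
industry_positive = round(0.85 * industry_total)       # 85 per cent positive
government_positive = round(0.50 * government_total)   # 50 per cent positive

table = [
    [industry_positive, industry_total - industry_positive],
    [government_positive, government_total - government_positive],
]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi-squared = {chi2:.1f}, p = {p_value:.2e}")
# With these assumed counts the p-value is vanishingly small: a 35-point gap
# in positive-result rates across ~500 trials is far beyond what chance allows.
```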

In 2007, researchers looked at every published trial that set out to explore the benefit of a statin.2 These are cholesterol-lowering drugs which reduce your risk of having a heart attack: they are prescribed in very large quantities, and they will loom large in this book. This study found 192 trials in total, either comparing one statin against another, or comparing a statin against a different kind of treatment. Once the researchers controlled for other factors (we’ll delve into what this means later), they found that industry-funded trials were twenty times more likely to give results favouring the test drug. Again, that’s a very big difference.

We’ll do one more. In 2006, researchers looked into every trial of psychiatric drugs in four academic journals over a ten-year period, finding 542 trial outcomes in total. Industry sponsors got favourable outcomes for their own drug 78 per cent of the time, while independently-funded trials only gave a positive result in 48 per cent of cases. If you were a competing drug put up against the sponsor’s drug in a trial, you were in for a pretty rough ride: you would only win a measly 28 per cent of the time.3

These are dismal, frightening results, but they come from individual studies. When there has been lots of research in a field, it’s always possible that someone – like me, for example – could cherry-pick the results, and give a partial view. I could, in essence, be doing exactly what I accuse the pharmaceutical industry of doing, and only telling you about the studies that support my case, while hiding the reassuring ones from you.

To guard against this risk, researchers invented the systematic review. We’ll explore this in more detail soon (p.14), since it’s at the core of modern medicine, but in essence a systematic review is simple: instead of just mooching through the research literature, consciously or unconsciously picking out papers here and there that support your pre-existing beliefs, you take a scientific, systematic approach to the very process of looking for scientific evidence, ensuring that your evidence is as complete and representative as possible of all the research that has ever been done.

Systematic reviews are very, very onerous. In 2003, by coincidence, two were published, both looking specifically at the question we’re interested in. They took all the studies ever published that looked at whether industry funding is associated with pro-industry results. Each took a slightly different approach to finding research papers, and both found that industry-funded trials were, overall, about four times more likely to report positive results.4 A further review in 2007 looked at the new studies that had been published in the four years after these two earlier reviews: it found twenty more pieces of work, and all but two showed that industry-sponsored trials were more likely to report flattering results.5
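
When these reviews say 'about four times more likely', the figure behind the phrase is a pooled odds ratio of roughly four. The snippet below walks through that calculation on a deliberately invented two-by-two table – the counts are not taken from either review, they are chosen purely to make the arithmetic transparent.

```python
# Hypothetical counts, chosen only to show how an odds ratio of ~4 arises.
industry = {"positive": 160, "negative": 40}       # 80 per cent positive
independent = {"positive": 100, "negative": 100}   # 50 per cent positive

odds_industry = industry["positive"] / industry["negative"]            # 4.0
odds_independent = independent["positive"] / independent["negative"]   # 1.0

odds_ratio = odds_industry / odds_independent
print(f"odds ratio = {odds_ratio:.1f}")  # -> 4.0
# Note that the odds ratio (4.0) is larger than the simple ratio of the two
# percentages (80/50 = 1.6): 'four times more likely' refers to the odds ratio.
```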

I am setting out this evidence at length because I want to be absolutely clear that there is no doubt on the issue. Industry-sponsored trials give favourable results, and that is not my opinion, or a hunch from the occasional passing study. This is a very well-documented problem, and it has been researched extensively, without anybody stepping out to take effective action, as we shall see.

There is one last study I’d like to tell you about. It turns out that this pattern of industry-funded trials being vastly more likely to give positive results persists even when you move away from published academic papers, and look instead at trial reports from academic conferences, where data often appears for the first time (in fact, as we shall see, sometimes trial results only appear at an academic conference, with very little information on how the study was conducted).

Fries and Krishnan studied all the research abstracts presented at the 2001 American College of Rheumatology meetings which reported any kind of trial, and acknowledged industry sponsorship, in order to find out what proportion had results that favoured the sponsor’s drug. There is a small punch-line coming, and to understand it we need to cover a little of what an academic paper looks like. In general, the results section is extensive: the raw numbers are given for each outcome, and for each possible causal factor, but not just as raw figures. The ‘ranges’ are given, subgroups are perhaps explored, statistical tests are conducted, and each detail of the result is described in table form, and in shorter narrative form in the text, explaining the most important results. This lengthy process is usually spread over several pages.

In Fries and Krishnan's 2004 paper, this level of detail was unnecessary. The results section is a single, simple, and – I like to imagine – fairly passive-aggressive sentence:

The results from every RCT (45 out of 45) favored the drug of the sponsor.

This extreme finding has a very interesting side effect, for those interested in time-saving shortcuts. Since every industry-sponsored trial had a positive result, that’s all you’d need to know about a piece of work to predict its outcome: if it was funded by industry, you could know with absolute certainty that the trial found the drug was great.

How does this happen? How do industry-sponsored trials almost always manage to get a positive result? It is, as far as anyone can be certain, a combination of factors. It may be that companies are more likely to run trials when they’re more confident their treatment is going to ‘win’; this sounds reasonable, although even this conflicts with the ethical principle that you should only do a trial when there’s genuine uncertainty about which treatment is best (otherwise you’re exposing half of your participants to a treatment you already know to be inferior). Sometimes the chances of one treatment winning can be increased with outright design flaws. You can compare your new drug with something you know to be rubbish – an existing drug at an inadequate dose, perhaps, or a placebo sugar pill that does almost nothing. You can choose your patients very carefully, so they are more likely to get better on your treatment. You can peek at the results halfway through, and stop your trial early if they look good (which is – for interesting reasons we shall discuss – statistical poison). And so on.
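
As a taster of why that mid-trial peek is 'statistical poison', here is a small simulation of my own (an illustrative sketch, not anything from the book or the studies above). Two treatments that are genuinely identical are compared over and over; if the analyst is allowed to stop at an unplanned halfway look whenever the difference already crosses p < 0.05, the false-positive rate drifts well above the advertised 5 per cent.

```python
# A minimal sketch of why unplanned interim "peeks" inflate false positives.
# Both arms draw from the same distribution, so every "significant" result is a false alarm.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n_per_arm, n_trials, alpha = 200, 5000, 0.05

planned_hits = peeking_hits = 0
for _ in range(n_trials):
    drug = rng.normal(size=n_per_arm)      # no real effect in either arm
    placebo = rng.normal(size=n_per_arm)

    # Honest analysis: one test, at the planned end of the trial.
    if ttest_ind(drug, placebo).pvalue < alpha:
        planned_hits += 1

    # Peeking analysis: stop early if the halfway look already "wins",
    # otherwise carry on to the full sample and test again.
    half = n_per_arm // 2
    if (ttest_ind(drug[:half], placebo[:half]).pvalue < alpha
            or ttest_ind(drug, placebo).pvalue < alpha):
        peeking_hits += 1

print(f"false-positive rate, single planned test: {planned_hits / n_trials:.1%}")
print(f"false-positive rate, with one interim peek: {peeking_hits / n_trials:.1%}")
# The second figure typically lands around 8-9%, not the advertised 5%.
```

Properly run trials handle interim looks with pre-specified stopping rules and adjusted significance thresholds; an unplanned peek has no such correction, which is exactly the problem.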

But before we get to these fascinating methodological twists and quirks, these nudges and bumps that stop a trial from being a fair test of whether a treatment works or not, there is something very much simpler at hand.

Sometimes drug companies conduct lots of trials, and when they see that the results are unflattering, they simply fail to publish them. This is not a new problem, and it’s not limited to medicine. In fact, this issue of negative results that go missing in action cuts into almost every corner of science. It distorts findings in fields as diverse as brain imaging and economics, it makes a mockery of all our efforts to exclude bias from our studies, and despite everything that regulators, drug companies and even some academics will tell you, it is a problem that has been left unfixed for decades.

In fact, it is so deep-rooted that even if we fixed it today – right now, for good, forever, without any flaws or loopholes in our legislation – that still wouldn’t help, because we would still be practising medicine, cheerfully making decisions about which treatment is best, on the basis of decades of medical evidence which is – as you’ve now seen – fundamentally distorted.

But there is a way ahead.

