Bad Science - Ben Goldacre
Meta-analysis
This will be our last big idea for a while, and this is one that has saved the lives of more people than you will ever meet. A meta-analysis is a very simple thing to do, in some respects: you just collect all the results from all the trials on a given subject, bung them into one big spreadsheet, and do the maths on that, instead of relying on your own gestalt intuition about all the results from each of your little trials. It’s particularly useful when there have been lots of trials, each too small to give a conclusive answer, but all looking at the same topic.
So if there are, say, ten randomised, placebo-controlled trials looking at whether asthma symptoms get better with homeopathy, each of which has a paltry forty patients, you could put them all into one meta-analysis and effectively (in some respects) have a four-hundred-person trial to work with.
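The pooling itself is just weighted arithmetic. Here is a minimal sketch of the standard fixed-effect (inverse-variance) method, using entirely made-up trial numbers: each hypothetical trial reports an effect estimate (negative favours the treatment) and a standard error, and each is too small to be conclusive on its own.

```python
import math

# Hypothetical results from seven small trials (illustrative numbers only):
# each tuple is (effect estimate, standard error). Negative favours treatment.
trials = [
    (-0.50, 0.35),
    (-0.30, 0.35),
    (-0.60, 0.35),
    (-0.45, 0.35),
    (-0.40, 0.40),
    (-0.55, 0.30),
    (-0.35, 0.45),
]

def fixed_effect_pool(trials):
    """Inverse-variance (fixed-effect) pooling: weight each trial by
    1/SE^2, so larger, more precise trials count for more."""
    weights = [1.0 / se ** 2 for _, se in trials]
    pooled = sum(w * est for (est, _), w in zip(trials, weights)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

est, se = fixed_effect_pool(trials)
low, high = est - 1.96 * se, est + 1.96 * se
print(f"pooled estimate {est:.2f}, 95% CI [{low:.2f}, {high:.2f}]")
```

With these invented numbers, every individual trial’s confidence interval crosses zero (inconclusive on its own), but the pooled interval does not: exactly the situation the text describes, where no single small trial can detect a real benefit but the combined data can.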
In some very famous cases—at least, famous in the world of academic medicine—meta-analyses have shown that a treatment previously believed to be ineffective is in fact rather good, but because the trials that had been done were each too small, individually, to detect the real benefit, nobody had been able to spot it.
As I said, information alone can be life-saving, and one of the greatest institutional innovations of the past thirty years is undoubtedly the Cochrane Collaboration, an international not-for-profit organisation of academics, which produces systematic summaries of the research literature on healthcare, including meta-analyses.
The logo of the Cochrane Collaboration features a simplified ‘blobbogram’, a graph of the results from a landmark meta-analysis which looked at an intervention given to pregnant mothers. When people give birth prematurely, as you might expect, the babies are more likely to suffer and die. Some doctors in New Zealand had the idea that giving a short, cheap course of a steroid might help improve outcomes, and seven trials testing this idea were done between 1972 and 1981. Two of them showed some benefit from the steroids, but the remaining five failed to detect any benefit, and because of this, the idea didn’t catch on.
Eight years later, in 1989, a meta-analysis was done by pooling all this trial data. If you look at the blobbogram in the Cochrane logo, you can see what happened. Each horizontal line represents a single study: if the line is over to the left, it means the steroids were better than placebo, and if it is over to the right, it means the steroids were worse. If the horizontal line for a trial touches the big vertical ‘nil effect’ line going down the middle, then the trial showed no clear difference either way. One last thing: the longer a horizontal line is, the less certain the outcome of the study was.
Looking at the blobbogram, we can see that there are lots of not-very-certain studies, long horizontal lines, mostly touching the central vertical line of ‘no effect’; but they’re all a bit over to the left, so they all seem to suggest that steroids might be beneficial, even if each study itself is not statistically significant.
The diamond at the bottom shows the pooled answer: that there is, in fact, very strong evidence indeed for steroids reducing the risk—by 30 to 50 per cent—of babies dying from the complications of immaturity. We should always remember the human cost of these abstract numbers: babies died unnecessarily because they were deprived of this life-saving treatment for a decade. They died, even when there was enough information available to know what would save them, because that information had not been synthesised together, and analysed systematically, in a meta-analysis.
Back to homeopathy (you can see why I find it trivial now). A landmark meta-analysis was published recently in the Lancet. It was accompanied by an editorial titled: ‘The End of Homeopathy?’ Shang et al. did a very thorough meta-analysis of a vast number of homeopathy trials, and they found, overall, adding them all up, that homeopathy performs no better than placebo.
The homeopaths were up in arms. If you mention this meta-analysis, they will try to tell you that it was a stitch-up. What Shang et al. did, essentially, like all the previous negative meta-analyses of homeopathy, was to exclude the poorer-quality trials from their analysis.
Homeopaths like to pick out the trials that give them the answer that they want to hear, and ignore the rest, a practice called ‘cherry-picking’. But you can also cherry-pick your favourite meta-analyses, or misrepresent them. Shang et al. was only the latest in a long string of meta-analyses to show that homeopathy performs no better than placebo. What is truly amazing to me is that despite the negative results of these meta-analyses, homeopaths have continued—right to the top of the profession—to claim that these same meta-analyses support the use of homeopathy. They do this by quoting only the result for all trials included in each meta-analysis. This figure includes all of the poorer-quality trials. The most reliable figure, you now know, is for the restricted pool of the most ‘fair tests’, and when you look at those, homeopathy performs no better than placebo. If this fascinates you (and I would be very surprised), then I am currently producing a summary with some colleagues, and you will soon be able to find it online at badscience.net, in all its glorious detail, explaining the results of the various meta-analyses performed on homeopathy.
Clinicians, pundits and researchers all like to say things like ‘There is a need for more research,’ because it sounds forward-thinking and open-minded. In fact that’s not always the case, and it’s a little-known fact that this very phrase has been effectively banned from the British Medical Journal for many years, on the grounds that it adds nothing: you may say what research is missing, on whom, how, measuring what, and why you want to do it, but the hand-waving, superficially open-minded call for ‘more research’ is meaningless and unhelpful.
There have been over a hundred randomised placebo-controlled trials of homeopathy, and the time has come to stop. Homeopathy pills work no better than placebo pills, we know that much. But there is room for more interesting research. People do experience that homeopathy is positive for them, but the action is likely to be in the whole process of going to see a homeopath, of being listened to, having some kind of explanation for your symptoms, and all the other collateral benefits of old-fashioned, paternalistic, reassuring medicine. (Oh, and regression to the mean.)
So we should measure that; and here is the final superb lesson in evidence-based medicine that homeopathy can teach us: sometimes you need to be imaginative about what kinds of research you do, compromise, and be driven by the questions that need answering, rather than the tools available to you.
It is very common for researchers to research the things which interest them, in all areas of medicine; but they can be interested in quite different things from patients. One study actually thought to ask people with osteoarthritis of the knee what kind of research they wanted to be carried out, and the responses were fascinating: they wanted rigorous real-world evaluations of the benefits from physiotherapy and surgery, from educational and coping strategy interventions, and other pragmatic things. They didn’t want yet another trial comparing one pill with another, or with placebo.
In the case of homeopathy, similarly, homeopaths want to believe that the power is in the pill, rather than in the whole process of going to visit a homeopath, having a chat and so on. It is crucially important to their professional identity. But I believe that going to see a homeopath is probably a helpful intervention, in some cases, for some people, even if the pills are just placebos. I think patients would agree, and I think it would be an interesting thing to measure. It would be easy, and you would do something called a pragmatic ‘waiting-list-controlled trial’.
You take two hundred patients, say, all suitable for homeopathic treatment, currently in a GP clinic, and all willing to be referred on for homeopathy, then you split them randomly into two groups of one hundred. One group gets treated by a homeopath as normal, pills, consultation, smoke and voodoo, on top of whatever other treatment they are having, just like in the real world. The other group just sits on the waiting list. They get treatment as usual, whether that is ‘neglect’, ‘GP treatment’ or whatever, but no homeopathy. Then you measure outcomes, and compare who gets better the most.
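The design just described is simple enough to sketch in a few lines. Everything below is an illustrative assumption, not a real protocol: randomise the two hundred patients into two equal arms, let one arm see the homeopath on top of treatment as usual while the other waits, then compare average outcomes between the arms.

```python
import random

# A minimal sketch of the pragmatic waiting-list-controlled design
# described above. All names and numbers are illustrative assumptions.

def randomise(patients, seed=0):
    """Split patients at random into a homeopathy arm and a
    waiting-list (treatment-as-usual) arm of equal size."""
    rng = random.Random(seed)
    shuffled = patients[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

def compare(outcomes_homeopathy, outcomes_waiting_list):
    """Pragmatic comparison: difference in mean symptom-improvement
    score between the two arms (higher = more improvement)."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(outcomes_homeopathy) - mean(outcomes_waiting_list)

patients = [f"patient-{i}" for i in range(200)]
homeopathy_arm, waiting_arm = randomise(patients)
print(len(homeopathy_arm), len(waiting_arm))  # prints 100 100
```

Note what the randomisation buys you: the two arms differ, on average, only in whether they saw the homeopath, so any difference in outcomes can be attributed to the whole package of the homeopathy visit, placebo pills and all.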
You could argue that it would be a trivial positive finding, and that it’s obvious the homeopathy group would do better; but it’s the only piece of research really waiting to be done. This is a ‘pragmatic trial’. The groups aren’t blinded, but they couldn’t possibly be in this kind of trial, and sometimes we have to accept compromises in experimental methodology. It would be a legitimate use of public money (or perhaps money from Boiron, the homeopathic pill company valued at $500 million), but there’s nothing to stop homeopaths from just cracking on and doing it for themselves: because despite the homeopaths’ fantasies, born out of a lack of knowledge, that research is difficult, magical and expensive, in fact such a trial would be very cheap to conduct.
In fact, it’s not really money that’s missing from the alternative therapy research community, especially in Britain: it’s knowledge of evidence-based medicine, and expertise in how to do a trial. Their literature and debates drip with ignorance, and vitriolic anger at anyone who dares to appraise the trials. Their university courses, as far as they ever even dare to admit what they teach on them (it’s all suspiciously hidden away), seem to skirt around such explosive and threatening questions. I’ve suggested in various places, including at academic conferences, that the single thing that would most improve the quality of evidence in CAM would be funding for a simple, evidence-based medicine hotline, which anyone thinking about running a trial in their clinic could phone up and get advice on how to do it properly, to avoid wasting effort on an ‘unfair test’ that will rightly be regarded with contempt by all outsiders.
In my pipe dream (I’m completely serious, if you’ve got the money) you’d need a handout, maybe a short course that people did to cover the basics, so they weren’t asking stupid questions, and phone support. In the meantime, if you’re a sensible homeopath and you want to do a GP-controlled trial, you could maybe try the badscience website forums, where there are people who might be able to give some pointers (among the childish fighters and trolls…).
But would the homeopaths buy it? I think it would offend their sense of professionalism. You often see homeopaths trying to nuance their way through this tricky area, and they can’t quite make their minds up. Here, for example, is a Radio 4 interview, archived in full online, where Dr Elizabeth Thompson (consultant homeopathic physician, and honorary senior lecturer at the Department of Palliative Medicine at the University of Bristol) has a go.
She starts off with some sensible stuff: homeopathy does work, but through non-specific effects, the cultural meaning of the process, the therapeutic relationship, it’s not about the pills, and so on. She practically comes out and says that homeopathy is all about cultural meaning and the placebo effect. ‘People have wanted to say homeopathy is like a pharmaceutical compound,’ she says, ‘and it isn’t, it is a complex intervention.’
Then the interviewer asks: ‘What would you say to people who go along to their high street pharmacy, where you can buy homeopathic remedies, they have hay fever and they pick out a hay-fever remedy, I mean presumably that’s not the way it works?’ There is a moment of tension. Forgive me, Dr Thompson, but I felt you didn’t want to say that the pills work, as pills, in isolation, when you buy them in a shop: apart from anything else, you’d already said that they don’t.
But she doesn’t want to break ranks and say the pills don’t work, either. I’m holding my breath. How will she do it? Is there a linguistic structure complex enough, passive enough, to negotiate through this? If there is, Dr Thompson doesn’t find it: ‘They might flick through and they might just be spot-on … [but] you’ve got to be very lucky to walk in and just get the right remedy.’ So the power is, and is not, in the pill: ‘P, and not-P’, as philosophers of logic would say.
If they can’t finesse it with the ‘power is not in the pill’ paradox, how else do the homeopaths get around all this negative data? Dr Thompson—from what I have seen—is a fairly clear-thinking and civilised homeopath. She is, in many respects, alone. Homeopaths have been careful to keep themselves outside of the civilising environment of the university, where the influence and questioning of colleagues can help to refine ideas, and weed out the bad ones. In their rare forays, they enter them secretively, walling themselves and their ideas off from criticism or review, refusing to share even what is in their exam papers with outsiders.
It is rare to find a homeopath engaging on the issue of the evidence, but what happens when they do? I can tell you. They get angry, they threaten to sue, they scream and shout at you at meetings, they complain spuriously and with ludicrous misrepresentations—time-consuming to expose, of course, but that’s the point of harassment—to the Press Complaints Commission and your editor, they send hate mail, and accuse you repeatedly of somehow being in the pocket of big pharma (falsely, although you start to wonder why you bother having principles when faced with this kind of behaviour). They bully, they smear, to the absolute top of the profession, and they do anything they can in a desperate bid to shut you up, and avoid having a discussion about the evidence. They have even been known to threaten violence (I won’t go into it here, but I take these issues extremely seriously).
I’m not saying I don’t enjoy a bit of banter. I’m just pointing out that you don’t get anything quite like this in most other fields, and homeopaths, among all the people in this book, with the exception of the odd nutritionist, seem to me to be a uniquely angry breed. Experiment for yourself by chatting with them about evidence, and let me know what you find.
By now your head is hurting, because of all those mischievous, confusing homeopaths and their weird, labyrinthine defences: you need a lovely science massage. Why is evidence so complicated? Why do we need all of these clever tricks, these special research paradigms? The answer is simple: the world is much more complicated than simple stories about pills making people get better. We are human, we are irrational, we have foibles, and the power of the mind over the body is greater than anything you have previously imagined.