Bad Pharma: How Medicine is Broken, And How We Can Fix It, by Ben Goldacre

Trying to get trial data from drug companies: the story of Tamiflu


Governments around the world have spent billions of pounds on stockpiling a drug called Tamiflu. In the UK alone we have spent several hundred million pounds – the total figure is not yet clear – and so far we’ve bought enough tablets to treat 80 per cent of the population if an outbreak of bird flu occurs. I’m very sorry if you have the flu, because it’s horrid being ill; but we have not spent all this money to reduce the duration of your symptoms in the event of a pandemic by a few hours (though Tamiflu does do this, fairly satisfactorily). We have spent this money to reduce the rate of ‘complications’: a medical euphemism, meaning pneumonia and death.

Lots of people seem to think Tamiflu will do this. The US Department of Health and Human Services said it would save lives and reduce hospital admissions. The European Medicines Agency said it would reduce complications. The Australian drugs regulator said so too. Roche’s website said it reduces complications by 67 per cent.77 But what is the evidence that Tamiflu really will reduce complications? Answering questions like this is the bread and butter of the Cochrane Collaboration, which you will remember is the vast and independent non-profit international collaboration of academics producing hundreds of systematic reviews on important questions in medicine every year. In 2009 there was concern about a flu pandemic, and an enormous amount of money was being spent on Tamiflu. Because of this, the UK and Australian governments specifically asked the Cochrane Acute Respiratory Infections Group to update its earlier reviews on the drug.

Cochrane reviews are in a constant review cycle, because evidence changes over time as new trials are published. This should have been a pretty everyday piece of work: the previous review, in 2008, had found some evidence that Tamiflu does indeed reduce the rate of complications. But then a Japanese paediatrician called Keiji Hayashi left a comment that would trigger a revolution in our understanding of how evidence-based medicine should work. This wasn’t in a publication, or even a letter: it was a simple online comment, posted underneath the Tamiflu review on the Cochrane website.

You’ve summarised the data from all the trials, he explained, but your positive conclusion is really driven by data from just one of the papers you cite, an industry-funded meta-analysis led by an author called Kaiser. This, ‘the Kaiser paper’, summarises the findings of ten earlier trials, but from these ten trials, only two have ever been published in the scientific literature. For the remaining eight, your only information comes from the brief summary in this secondary, industry source. That’s not reliable enough.

In case it’s not immediately obvious, this is science at its best. The Cochrane review is readily accessible online; it explains transparently the methods by which it looked for trials, and then analysed them, so any informed reader can pull the review apart, and understand where the conclusions came from. Cochrane provides an easy way for readers to raise criticisms. And, crucially, these criticisms did not fall on deaf ears. Tom Jefferson is an editor at the Cochrane Acute Respiratory Infections Group, and the lead author on the 2008 review. He realised immediately that he had made a mistake in blindly trusting the Kaiser data. He said so, without any defensiveness, and then set about getting the information in a straightforward, workmanlike fashion. This began a three-year battle, which is still not resolved, but which has thrown stark light on the need for all researchers to have access to clinical study reports on trials wherever possible.

First, the Cochrane researchers wrote to the authors of the Kaiser paper, asking for more information. In reply, they were told that this team no longer had the files, and that they should contact Roche, the manufacturer of Tamiflu. So naturally they wrote to Roche and asked for the data.

This is where the problems began. Roche said it would hand over some data, but the Cochrane reviewers would need to sign a confidentiality agreement. This was an impossibility for any serious scientist: it would prevent them from conducting a systematic review with any reasonable degree of openness and transparency. More than this, the proposed contract also raised serious ethical issues, in that it would have required the Cochrane team to actively withhold information from the reader: it included a clause saying that on signing it, the reviewers would never be allowed to discuss the terms of this secrecy agreement; and more than that, they would be forbidden from ever publicly acknowledging that it even existed. Roche was demanding a secret contract, with secret terms, requiring secrecy about trial data, in a discussion about the safety and efficacy of a drug that has been taken by hundreds of thousands of people around the world. Jefferson asked for clarification, and never received a reply.

Then, in October 2009, the company changed tack: they would like to hand the data over, they explained, but another meta-analysis was being conducted elsewhere. Roche had given them the study reports, so Cochrane couldn’t have them. This was a simple non-sequitur: there is no reason why many groups shouldn’t all work on the same question. In fact, quite the opposite: replication is the cornerstone of good science. Roche’s excuse made no sense. Jefferson asked for clarification, but never received a reply.

One week later, unannounced, Roche sent seven short documents, each around a dozen pages long. These contained excerpts of internal company documents on each of the clinical trials in the Kaiser meta-analysis. This was a start, but it didn’t contain anything like enough information for Cochrane to assess the benefits, or the rate of adverse events, or fully to understand exactly what methods were used in the trials.

At the same time, it was rapidly becoming clear that there were odd inconsistencies in the information on this drug. Firstly, there was considerable disagreement at the level of the broad conclusions drawn by different organisations. The FDA said there were no benefits on complications, while the Centers for Disease Control and Prevention (in charge of public health in the USA – some wear nice naval uniforms in honour of their history on the docks) said it did reduce complications. The Japanese regulator made no claim for complications, but the EMA said there was a benefit. In a sensible world, we might think that all these organisations should sing from the same hymn sheet, because all would have access to the same information. Of course, there is also room for occasional, reasonable disagreement, especially where there are close calls: this is precisely why doctors and researchers should have access to all the information about a drug, so that they can make their own judgements.

Meanwhile, reflecting these different judgements, Roche’s own websites said completely different things in different jurisdictions, depending on what the local regulator had said. It’s naïve, perhaps, to expect consistency from a drug company, but from this and other stories it’s clear that industry utterances are driven by the maximum they can get away with in each territory, rather than any consistent review of the evidence.

In any case, now that their interest had been piqued, the Cochrane researchers also began to notice that there were odd discrepancies between the frequency of adverse events in different databases. Roche’s global safety database held 2,466 neuropsychiatric adverse events, of which 562 were classified as ‘serious’. But the FDA database for the same period held only 1,805 adverse events in total. The rules vary on what needs to be notified to whom, and where, but even allowing for that, this was odd.

In any case, since Roche was denying them access to the information needed to conduct a proper review, the Cochrane team concluded that they would have to exclude all the unpublished Kaiser data from their analysis, because the details could not be verified in the normal way. People cannot make treatment and purchasing decisions on the basis of trials if the full methods and results aren’t clear: the devil is often in the detail, as we shall see in Chapter 4, on ‘bad trials’, so we cannot blindly trust that every study is a fair test of the treatment.

This is particularly important with Tamiflu, because there are good reasons to think that these trials were not ideal, and that published accounts were incomplete, to say the least. On closer examination, for example, the patients participating were clearly unusual, to the extent that the results may not be very relevant to normal everyday flu patients. In the published accounts, patients in the trials are described as typical flu patients, suffering from normal flu symptoms like cough, fatigue, and so on. We don’t do blood tests on people with flu in routine practice, but when these tests are done – for surveillance purposes – then even during peak flu season only about one in three people with ‘flu’ will actually be infected with the influenza virus, and most of the year only one in eight will really have it. (The rest are sick from something else, maybe just a common cold virus.)

Two thirds of the trial participants summarised in the Kaiser paper tested positive for flu. This is bizarrely high, and means that the benefits of the drug will be overstated, because it is being tested on perfect patients, the very ones most likely to get better from a drug that selectively attacks the flu virus. In normal practice, which is where the results of these trials will be applied, doctors will be giving the drug to real patients who are diagnosed with ‘flu-like illness’, which is all you can realistically do in a clinic. Among these real patients, many will not actually have the influenza virus. This means that in the real world, the benefits of Tamiflu on flu will be diluted, and many more people will be exposed to the drug who don’t actually have flu virus in their systems. This, in turn, means that the side effects are likely to creep up in significance, in comparison with any benefits. That is why we strive to ensure that all trials are conducted in normal, everyday, realistic patients: if they are not, their findings may not be relevant to the real world.

So the Cochrane review was published without the Kaiser data in December 2009, alongside some explanatory material about why the Kaiser results had been excluded, and a small flurry of activity followed. Roche put the short excerpts it had sent over online, and committed to make full study reports available (it still hasn’t done so).

What Roche posted was incomplete, but it began a journey for the Cochrane academics of learning a great deal more about the real information that is collected on a trial, and how that can differ from what is given to doctors and patients in the form of brief, published academic papers. At the core of every trial is the raw data: every single record of blood pressure of every patient, the doctors’ notes describing any unusual symptoms, investigators’ notes, and so on. A published academic paper is a short description of the study, usually following a set format: an introductory background; a description of the methods; a summary of the important results; and then finally a discussion, covering the strengths and weaknesses of the design, and the implications of the results for clinical practice.

A clinical study report, or CSR, is the intermediate document that stands between these two, and can be very long, sometimes thousands of pages.78 Anybody working in the pharmaceutical industry is very familiar with these documents, but doctors and academics have rarely heard of them. They contain much more detail on things like the precise plan for analysing the data statistically, detailed descriptions of adverse events, and so on.

These documents are split into different sections, or ‘modules’. Roche has shared only ‘module 1’, for only seven of the ten study reports Cochrane has requested. These modules are missing vitally important information, including the analysis plan, the randomisation details, the study protocol (and the list of deviations from that), and so on. But even these incomplete modules were enough to raise concerns about the universal practice of trusting academic papers to give a complete story about what happened to the patients in a trial.

For example, looking at the two papers out of ten in the Kaiser review which were published, one says: ‘There were no drug-related serious adverse events,’ and the other doesn’t mention adverse events. But in the ‘module 1’ documents on these same two studies, there are ten serious adverse events listed, of which three are classified as being possibly related to Tamiflu.79

Another published paper describes itself as a trial comparing Tamiflu against placebo. A placebo is an inert tablet, containing no active ingredient, that is visually indistinguishable from the pill containing the real medicine. But the CSR for this trial shows that the real medicine was in a grey and yellow capsule, whereas the placebos were grey and ivory. The ‘placebo’ tablets also contained something called dehydrocholic acid, a chemical which encourages the gall bladder to empty.80 Nobody has any clear idea of why, and it’s not even mentioned in the academic paper; but it seems that this was not actually an inert, dummy pill placebo.

Simply making a list of all the trials conducted on a subject is vitally important if we want to avoid seeing only a biased summary of the research; but in the case of Tamiflu even this proved to be almost impossible. For example, Roche Shanghai informed the Cochrane group of one large trial (ML16369), but Roche Basel seemed not to know of its existence. But by setting out all the trials side by side, the researchers were able to identify peculiar discrepancies: for example, the largest ‘phase 3’ trial – one of the large trials that are done to get a drug onto the market – was never published, and is rarely mentioned in regulatory documents.fn3

There were other odd discrepancies. Why, for example, was one trial on Tamiflu published in 2010, ten years after it was completed?82 Why did some trials report completely different authors, depending on where they were being discussed?83 And so on.

The chase continued. In December 2009 Roche had promised: ‘full study reports will also be made available on a password-protected site within the coming days to physicians and scientists undertaking legitimate analyses’. This never happened. Then an odd game began. In June 2010 Roche said: Oh, we’re sorry, we thought you had what you wanted. In July it announced that it was worried about patient confidentiality (you may remember this from the EMA saga). This was an odd move: for most of the important parts of these documents, privacy is no issue at all. The full trial protocol, and the analysis plan, are both completed before any single patient is ever touched. Roche has never explained why patient privacy prevents it from releasing the study reports. It simply continued to withhold them.

Then in August 2010 it began to make some even more bizarre demands, betraying a disturbing belief that companies are perfectly entitled to control access to information that is needed by doctors and patients around the world to make safe decisions. Firstly, it insisted on seeing the Cochrane reviewers’ full analysis plan. Fine, they said, and posted the whole protocol online. Doing so is completely standard practice at Cochrane, as it should be for any transparent organisation, and allows people to suggest important changes before you begin. There were few surprises, since all Cochrane reports follow a pretty strict manual anyway. Roche continued to withhold its study reports (including, ironically, its own protocols, the very thing it demanded Cochrane should publish, and that Cochrane had published, happily).

By now Roche had been refusing to publish the study reports for a year. Suddenly, the company began to raise odd personal concerns. It claimed that some Cochrane researchers had made untrue statements about the drug, and about the company, but refused to say who, or what, or where. ‘Certain members of Cochrane Group involved with the review of the neuraminidase inhibitors,’ it announced, ‘are unlikely to approach the review with the independence that is both necessary and justified.’ This is an astonishing state of affairs, where a company feels it should be allowed to prevent individual researchers access to data that should be available to all; but still Roche refused to hand over the study reports.

Then it complained that the Cochrane reviewers had begun to copy journalists in on their emails when responding to Roche staff. I was one of the people copied in on these interactions, and I believe that this was exactly the correct thing to do. Roche’s excuses had become perverse, and the company had failed to keep its promise to share all study reports. It’s clear that the modest pressure exerted by researchers in academic journals alone was having little impact on Roche’s refusal to release the data, and this is an important matter of public health, both for the individual case of this Tamiflu data, and for the broader issue of companies and regulators harming patients by withholding information.

Then things became even more perverse. In January 2011 Roche announced that the Cochrane researchers had already been given all the data they need. This was simply untrue. In February it insisted that all the studies requested were published (meaning academic papers, now shown to be misleading on Tamiflu). Then it declared that it would hand over nothing more, saying: ‘You have all the detail you need to undertake a review.’ But this still wasn’t true: it was still withholding the material it had publicly promised to hand over ‘within a few days’ in December 2009, a year and a half earlier.

At the same time, the company was raising the broken arguments we have already seen: it’s the job of regulators to make these decisions about benefit and risk, it said, not academics. Now, this claim fails on two important fronts. Firstly, as with many other drugs, we now know that not even the regulators had seen all the data. In January 2012 Roche claimed that it ‘has made full clinical study data available to health authorities around the world for their review as part of the licensing process’. But the EMA never received this information for at least fifteen trials. This was because the EMA had never requested it.

And that brings us on to our final important realisation: regulators are not infallible. They make outright mistakes, and they make decisions which are open to judgement, and should be subject to second-guessing and checking by many eyes around the world. In the next chapter we will see more examples of how regulators can fail, behind closed doors, but here we will look at one story that illustrates the benefit of ‘many eyes’ perfectly.

Rosiglitazone is a new kind of diabetes drug, and lots of researchers and patients had high hopes that it would be safe and effective.84 Diabetes is common, and more people develop the disease every year. Sufferers have poor control of their blood sugar, and diabetes drugs, alongside dietary changes, are supposed to fix this. Although it’s nice to see your blood sugar being controlled nicely in the numbers from lab tests and machines at home, we don’t control these figures for their own sake: we try to control blood sugar because we hope that this will help reduce the chances of real-world outcomes, like heart attack and death, both of which occur at a higher rate in people with diabetes.

Rosiglitazone was first marketed in 1999, and from the outset it was a magnet for disappointing behaviour. In that first year, Dr John Buse from the University of North Carolina discussed an increased risk of heart problems at a pair of academic meetings. The drug’s manufacturer, GSK, made direct contact in an attempt to silence him, then moved on to his head of department. Buse felt pressured to sign various legal documents. To cut a long story short, after wading through documents for several months, in 2007 the US Senate Committee on Finance released a report describing the treatment of Dr Buse as ‘intimidation’.

But we are more concerned with the safety and efficacy data. In 2003 the Uppsala Drug Monitoring Group of the World Health Organization contacted GSK about an unusually large number of spontaneous reports associating rosiglitazone with heart problems. GSK conducted two internal meta-analyses of its own data on this, in 2005 and 2006. These showed that the risk was real, but although both GSK and the FDA had these results, neither made any public statement about them, and they were not published until 2008.

During this delay, vast numbers of patients were exposed to the drug, but doctors and patients only learned about this serious problem in 2007, when cardiologist Professor Steve Nissen and colleagues published a landmark meta-analysis. This showed a 43 per cent increase in the risk of heart problems in patients on rosiglitazone. Since people with diabetes are already at increased risk of heart problems, and the whole point of treating diabetes is to reduce this risk, that finding was big potatoes. His findings were confirmed in later work, and in 2010 the drug was either taken off the market or restricted, all around the world.

Now, my argument is not that this drug should have been banned sooner, because as perverse as it sounds, doctors do often need inferior drugs for use as a last resort. For example, a patient may develop idiosyncratic side effects on the most effective pills, and be unable to take them any longer. Once this has happened, it may be worth trying a less effective drug, if it is at least better than nothing.

The concern is that these discussions happened with the data locked behind closed doors, visible only to regulators. In fact, Nissen’s analysis could only be done at all because of a very unusual court judgement. In 2004, when GSK was caught out withholding data showing evidence of serious side effects from paroxetine in children, the UK conducted an unprecedented four-year-long investigation, as we saw earlier. But in the US, the same bad behaviour resulted in a court case over allegations of fraud, the settlement of which, alongside a significant payout, required GSK to commit to posting clinical trial results on a public website.

Professor Nissen used the rosiglitazone data, when it became available, found worrying signs of harm, and published this to doctors, which is something that the regulators had never done, despite having the information years earlier. (Though before doctors got to read it, Nissen by chance caught GSK discussing a copy of his unpublished paper, which it had obtained improperly.85)

If this information had all been freely available from the start, regulators might have felt a little more anxious about their decisions, but crucially, doctors and patients could have disagreed with them, and made informed choices. This is why we need wider access to full CSRs, and all trial reports, for all medicines, and this is why it is perverse that Roche should be able even to contemplate deciding which favoured researchers should be allowed to read the documents on Tamiflu.

Astonishingly, a paper published in April 2012 by regulators from the UK and Europe suggests that they might agree to more data sharing, to a limited extent, within limits, for some studies, with caveats, at the appropriate juncture, and in the fullness of time.86 Before feeling any sense of enthusiasm, we should remember that this is a cautious utterance, wrung out after the dismal fights I have already described; that it has not been implemented; that it must be set against a background of broken promises from all players across the whole field of missing data; and that in any case, regulators do not have all the trial data anyway. But it is an interesting start.

Their two main objections – if we accept their goodwill at face value – are interesting, because they lead us to the final problem in the way we tolerate harm to patients from missing trial data. Firstly, they raise the concern that some academics and journalists might use study reports to conduct histrionic or poorly conducted reviews of the data: to this, again, I say, ‘Let them,’ because these foolish analyses should be conducted, and then rubbished, in public.

When UK hospital mortality statistics first became easily accessible to the public, doctors were terrified that they would be unfairly judged: the crude figures can be misinterpreted, after all, because one hospital may have worse figures simply because it is a centre of excellence, and takes in more challenging patients than its neighbours; and there is random variation to be expected in mortality rates anyway, so some hospitals might look unusually good, or bad, simply through the play of chance. Initially, to an extent, these fears were realised: there were a few shrill, unfair stories, and people overinterpreted the results. Now, for the most part, things have settled down, and many lay people are quite able to recognise that crude analyses of such figures are misleading. For drug data, where there is so much danger from withheld information, and so many academics desperate to conduct meaningful analyses, and so many other academics happy to criticise them, releasing the data is the only healthy option.

But secondly, the EMA raises the spectre of patient confidentiality, and hidden in this concern is one final prize.

So far I have been talking about access to trial reports, summaries of patients’ outcomes in trials. There is no good reason to believe that this poses any threat to patient confidentiality, and where there are specific narratives that might make a patient identifiable – a lengthy medical description of one person’s idiosyncratic adverse event in a trial, perhaps – these can easily be removed, since they appear in a separate part of the document. These CSRs should undoubtedly, without question, be publicly available documents, and this should be enforced retrospectively, going back decades, to the dawn of trials.

But all trials are ultimately run on individual patients, and the results of those individual patients are all stored and used for the summary analysis at the end of the study. While I would never suggest that these should be posted up on a public website – it would be easy for patients to be identifiable, from many small features of their histories – it is surprising that patient-level data is almost never shared with academics.

Sharing data of individual patients’ outcomes in clinical trials, rather than just the final summary result, has several significant advantages. Firstly, it’s a safeguard against dubious analytic practices. In the VIGOR trial on the painkiller Vioxx, for example, a bizarre reporting decision was made.87 The aim of the study was to compare Vioxx against an older, cheaper painkiller, to see if it was any less likely to cause stomach problems (this was the hope for Vioxx), and also if it caused more heart attacks (this was the fear). But the date cut-off for measuring heart attacks was much earlier than that for measuring stomach problems. This had the result of making the risks look less significant, relative to the benefits, but it was not declared clearly in the paper, resulting in a giant scandal when it was eventually noticed. If the raw data on patients was shared, games like these would be far easier to spot, and people might be less likely to play them in the first place.

Occasionally – with vanishing rarity – researchers are able to obtain raw data, and re-analyse studies that have already been conducted and published. Daniel Coyne, Professor of Medicine at Washington University, was lucky enough to get the data on a key trial for epoetin, a drug given to patients on kidney dialysis, after a four-year-long fight.88 The original academic publication on this study, ten years earlier, had switched the primary outcomes described in the protocol (we will see later how this exaggerates the benefits of treatments), and changed the main statistical analysis strategy (again, a huge source of bias). Coyne was able to analyse the study as the researchers had initially stated they were planning to in their protocol; and when he did, he found that they had dramatically overstated the benefits of the drug. It was a peculiar outcome, as he himself acknowledges: ‘As strange as it seems, I am now the sole author of the publication on the predefined primary and secondary results of the largest outcomes trial of epoetin in dialysis patients, and I didn’t even participate in the trial.’ There is room, in my view, for a small army of people doing the very same thing, reanalysing all the trials that were incorrectly analysed, in ways that deviated misleadingly from their original protocols.

Data sharing would also confer other benefits. It allows people to conduct more exploratory analyses of data, and to better investigate – for example – whether a drug is associated with a particular unexpected side effect. It would also allow cautious ‘subgroup analyses’, to see if a drug is particularly useful, or particularly useless, in particular types of patients.

The biggest immediate benefit from data sharing is that combining individual patient data into a meta-analysis gives more accurate results than working with the crude summary results at the end of a paper. Let’s imagine that one paper reports survival at three years as the main outcome for a cancer drug, and another reports survival at seven years. To combine these two in a meta-analysis, you’d have a problem. But if you were doing the meta-analysis with access to individual patient data, with treatment details and death dates for all of them, you could do a clean combined calculation for three-year survival.
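To make the point concrete, here is a minimal sketch (with invented, illustrative numbers, not real trial data) of why patient-level records let you pool trials that published different endpoints. With each patient's follow-up time in hand, both trials can be re-scored against the same three-year cut-off. For simplicity it ignores censoring, which real meta-analyses handle with proper survival methods such as Kaplan–Meier estimation.

```python
# Hypothetical sketch: pooling two trials that published different
# endpoints (3-year vs 7-year survival), using patient-level data.
# All numbers are invented for illustration.

def survived_3_years(days_survived):
    """True if the patient was still alive at 3 years (1,095 days)."""
    return days_survived >= 3 * 365

# One number per patient: days from treatment to death (every patient
# here was followed long enough that 3-year status is known).
trial_a = [400, 1200, 2000, 900, 1500]          # published 3-year survival
trial_b = [2600, 800, 3000, 1100, 2700, 2500]   # published 7-year survival

# Pooling the raw data lets us compute a single, consistent
# 3-year survival rate across both trials.
pooled = trial_a + trial_b
rate = sum(survived_3_years(d) for d in pooled) / len(pooled)
print(f"Pooled 3-year survival: {rate:.0%}")  # 8 of 11 patients, 73%
```

With only the published summaries – one trial's three-year figure and the other's seven-year figure – no such clean combined calculation is possible.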

R, Bourgier C, Hill C, Koscielny S, Laplanche A, Lê MG, Spielmann M, A’Hern R, Bliss J, Ellis P, Kilburn L, Yarnold JR, Benraadt J, Kooi M, van de Velde AO, van Dongen JA, Vermorken JB, Castiglione M, Coates A, Colleoni M, Collins J, Forbes J, Gelber RD, Goldhirsch A, Lindtner J, Price KN, Regan MM, Rudenstam CM, Senn HJ, Thuerlimann B, Bliss JM, Chilvers CE, Coombes RC, Hall E, Marty M, Buyse M, Possinger K, Schmid P, Untch M, Wallwiener D, Foster L, George WD, Stewart HJ, Stroner P, Borovik R, Hayat H, Inbar MJ, Robinson E, Bruzzi P, Del Mastro L, Pronzato P, Sertoli MR, Venturini M, Camerini T, De Palo G, Di Mauro MG, Formelli F, Valagussa P, Amadori D, Martoni A, Pannuti F, Camisa R, Cocconi G, Colozza A, Passalacqua R, Aogi K, Takashima S, Abe O, Ikeda T, Inokuchi K, Kikuchi K, Sawa K, Sonoo H, Korzeniowski S, Skolyszewski J, Ogawa M, Yamashita J, Bastiaannet E, van de Velde CJ, van de Water W, van Nes JG, Christiaens R, Neven P, Paridaens R, Van den Bogaert W, Braun S, Janni W, Martin P, Romain S, Janauer M, Seifert M, Sevelda P, Zielinski CC, Hakes T, Hudis CA, Norton L, Wittes R, Giokas G, Kondylis D, Lissaios B, de la Huerta R, Sainz MG, Altemus R, Camphausen K, Cowan K, Danforth D, Lichter A, Lippman M, O’Shaughnessy J, Pierce LJ, Steinberg S, Venzon D, Zujewski JA, D’Amico C, Lioce M, Paradiso A, Chapman JA, Gelmon K, Goss PE, Levine MN, Meyer R, Parulekar W, Pater JL, Pritchard KI, Shepherd LE, Tu D, Whelan T, Nomura Y, Ohno S, Anderson A, Bass G, Brown A, Bryant J, Costantino J, Dignam J, Fisher B, Geyer C, Mamounas EP, Paik S, Redmond C, Swain S, Wickerham L, Wolmark N, Baum M, Jackson IM, Palmer MK, Perez E, Ingle JN, Suman VJ, Bengtsson NO, Emdin S, Jonsson H, Del Mastro L, Venturini M, Lythgoe JP, Swindell R, Kissin M, Erikstein B, Hannisdal E, Jacobsen AB, Varhaug JE, Erikstein B, Gundersen S, Hauer-Jensen M, Høst H, Jacobsen AB, Nissen-Meyer R, Blamey RW, Mitchell AK, Morgan DA, Robertson JF, Ueo H, Di Palma M, Mathé G, Misset JL, Levine M, 
Pritchard KI, Whelan T, Morimoto K, Sawa K, Takatsuka Y, Crossley E, Harris A, Talbot D, Taylor M, Martin AL, Roché H, Cocconi G, di Blasio B, Ivanov V, Paltuev R, Semiglazov V, Brockschmidt J, Cooper MR, Falkson CI, A’Hern R, Ashley S, Dowsett M, Makris A, Powles TJ, Smith IE, Yarnold JR, Gazet JC, Browne L, Graham P, Corcoran N, Deshpande N, di Martino L, Douglas P, Hacking A, Høst H, Lindtner A, Notter G, Bryant AJ, Ewing GH, Firth LA, Krushen-Kosloski JL, Nissen-Meyer R, Anderson H, Killander F, Malmström P, Rydén L, Arnesson LG, Carstensen J, Dufmats M, Fohlin H, Nordenskjöld B, Söderberg M, Carpenter JT, Murray N, Royle GT, Simmonds PD, Albain K, Barlow W, Crowley J, Hayes D, Gralow J, Green S, Hortobagyi G, Livingston R, Martino S, Osborne CK, Adolfsson J, Bergh J, Bondesson T, Celebioglu F, Dahlberg K, Fornander T, Fredriksson I, Frisell J, Göransson E, Iiristo M, Johansson U, Lenner E, Löfgren L, Nikolaidis P, Perbeck L, Rotstein S, Sandelin K, Skoog L, Svane G, af Trampe E, Wadström C, Castiglione M, Goldhirsch A, Maibach R, Senn HJ, Thürlimann B, Hakama M, Holli K, Isola J, Rouhento K, Saaristo R, Brenner H, Hercbergs A, Martin AL, Roché H, Yoshimoto M, Paterson AH, Pritchard KI, Fyles A, Meakin JW, Panzarella T, Pritchard KI, Bahi J, Reid M, Spittle M, Bishop H, Bundred NJ, Cuzick J, Ellis IO, Fentiman IS, Forbes JF, Forsyth S, George WD, Pinder SE, Sestak I, Deutsch GP, Gray R, Kwong DL, Pai VR, Peto R, Senanayake F, Boccardo F, Rubagotti A, Baum M, Forsyth S, Hackshaw A, Houghton J, Ledermann J, Monson K, Tobias JS, Carlomagno C, De Laurentiis M, De Placido S, Williams L, Hayes D, Pierce LJ, Broglio K, Buzdar AU, Love RR, Ahlgren J, Garmo H, Holmberg L, Liljegren G, Lindman H, Wärnberg F, Asmar L, Jones SE, Gluz O, Harbeck N, Liedtke C, Nitz U, Litton A, Wallgren A, Karlsson P, Linderholm BK, Chlebowski RT, Caffier H.

Bad Pharma: How Medicine is Broken, And How We Can Fix It