Published on October 3rd, 2014 | by Daniel Jolley
The 2nd Horseman: Quality Evidence
Our understanding of what makes for quality medical research has improved dramatically over the past three decades. We understand that research must be ethical, reproducible, and free of bias, so that we may draw accurate conclusions, and that confounders must be minimised and controlled for. We understand that prospective designs are best, and that large, blinded, randomized trials are king.
We can articulate that a study must be appropriately powered to answer the question we are asking – but also not over-powered, so that we do not waste resources and goodwill, or continue a study after the answer is already known.
To meet the needs of ‘quality medical research’, ambitious study designs have been developed: massive, multicenter randomized controlled trials; long-term cohort studies; nested case-control and case-cohort designs. Advanced biostatistical techniques, such as multiple logistic regression, are now commonly used, along with meta-analyses that combine tens or even hundreds of smaller studies into a single large analysis.
Surely (medical) truth is now knowable?
Yet despite the obvious improvement in the quality of many large, landmark trials, our sharpened appreciation of what makes a quality trial has highlighted how many, even today, fail to meet the required standard.
“It usually comes as a surprise … to learn that some (perhaps most) published articles belong in the bin, and should certainly not be used to inform practice.” — Trisha Greenhalgh, How to read a paper.
Medical meta-researcher Dr. John Ioannidis depressingly concludes that 90% of medical research is fundamentally flawed. In his landmark 2005 paper in PLoS Medicine, Ioannidis argued that 80% of non-randomized studies were wrong, as were 25% of randomized controlled studies. Even large, multicenter, randomized clinical trials were wrong 10% of the time.
“Simulations show that for most study designs and settings, it is more likely for a research claim to be false than true.” — Dr. John Ioannidis
To explore the extent of the problem further, Ioannidis investigated just under 50 of the most significant and highly regarded medical research findings from 1990 to 2003. Of the 45 that concluded their interventions were effective, 34 had had their hypotheses retested. Of these 34, over 40% (14) were subsequently shown to be incorrect or exaggerated. Forty percent of some of the most highly regarded, practice-changing medical evidence of recent decades subsequently disproven! 1
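Those proportions are worth pausing on. As a back-of-the-envelope check (using only the figures quoted above):

```python
# Figures as reported from Ioannidis' 2005 JAMA study (see footnote 1):
# of 45 findings claiming effective interventions, 34 were retested,
# and 14 of those were later contradicted or found to be exaggerated.
claimed_effective = 45
retested = 34
refuted_or_exaggerated = 14

share_refuted = refuted_or_exaggerated / retested
print(f"{share_refuted:.0%} of retested findings did not hold up")  # 41%
```

So "over 40%" is no rhetorical flourish: 14 of 34 retested claims is roughly 41%.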
Ioannidis then focused on how clinicians adjust their views when highly cited evidence is later refuted by better-quality research. Do we correct our misconceptions when quality, contrary evidence is available? Ioannidis’ team investigated this by examining the persistence of belief in several big-ticket 1990s errors: vitamin E’s supposed cardiovascular benefits, beta carotene’s anti-cancer effects, and oestrogen’s supposed protection against Alzheimer’s disease. 2 Although early observational epidemiological studies had supported these theories, all were subsequently refuted by large randomised controlled trials a decade later. Surely researchers had adjusted their understanding of the evidence?
Surprisingly, the disproven observational studies were still cited favourably in 50% or more of peer-reviewed publications – despite the well-established contrary evidence! It seems that even when better-quality conflicting evidence exists, the established belief persists independently of its merit.
What are we to do?
The quality of medical evidence is far poorer than we believe, even among those studies we perceive as most reliable. Most worryingly, even when bad evidence is subsequently disproven and corrected, the incorrect conclusions persist in the minds of doctors for decades.
We need to be both more critical and more questioning of research conclusions even as we incorporate new knowledge into our practice. At the same time, we must avoid becoming wedded to dogma, and be quick to change our decisions and care when the weight of evidence guides us.
- Ioannidis JP. Contradicted and initially stronger effects in highly cited clinical research. JAMA. 2005 Jul 13;294(2):218-28. ↩
- Tatsioni A, Bonitsis NG, Ioannidis JP. Persistence of contradicted claims in the literature. JAMA. 2007 Dec 5;298(21):2517-26. ↩