Avoiding bad science through better research design
Health policies and treatments should be based on solid evidence. But what if that evidence is wrong, and the resulting treatments and policies are therefore ineffective or even harmful? Professor Stephen Soumerai explains in this blog post.
Improving the design of research for health policies and treatments
What if policymakers, science journalists and even scientists couldn’t distinguish between the weak and trustworthy research studies that underpin our healthcare decisions?
Many studies of treatments and health policies fail to establish cause-and-effect relationships because they rely on flawed research designs. The result is a cycle of mistakes and corrections: early studies of new treatments tend to show dramatic positive health effects, which diminish or disappear as more rigorous studies are conducted.
Indeed, when research experts conduct systematic reviews of such studies, they typically exclude 50–75% of them because they do not meet the basic research-design standards required to produce reliable conclusions.
In many such studies, researchers attempt to statistically adjust the data to “correct” for irreconcilable differences between the intervention and control groups. Yet it is these very differences that often create the reported, but invalid, effects of the treatments or policies under study.
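This kind of confounding can be illustrated with a short, hypothetical simulation (not from the article): suppose an unmeasured factor, patient frailty, makes a person both more likely to receive a sedative and more likely to fracture a hip. Even if the drug itself has no effect at all, a naive comparison of treated and untreated patients will show a striking difference. All names and numbers below are illustrative assumptions.

```python
import random

random.seed(0)

# Hypothetical simulation: frailty (an unmeasured confounder) drives both
# who receives the sedative and who fractures a hip. The drug itself has
# NO causal effect on fractures in this model.
n = 100_000
treated_outcomes, control_outcomes = [], []
for _ in range(n):
    frailty = random.random()               # unobserved patient frailty (0 to 1)
    treated = random.random() < frailty     # frailer patients are sedated more often
    fracture = random.random() < 0.05 + 0.10 * frailty  # risk depends on frailty only
    (treated_outcomes if treated else control_outcomes).append(fracture)

rate_t = sum(treated_outcomes) / len(treated_outcomes)
rate_c = sum(control_outcomes) / len(control_outcomes)
print(f"fracture rate, treated:  {rate_t:.3f}")   # higher, despite no drug effect
print(f"fracture rate, control:  {rate_c:.3f}")
```

Running this, the treated group shows a markedly higher fracture rate even though the simulated drug is harmless. A weak observational design reports that gap as a drug effect; a stronger design (randomization, or an interrupted time series) would not.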
In this accessible, graphics-rich article recently released by the U.S. Centers for Disease Control and Prevention, we describe five case examples showing how some of the most common biases and flawed study designs affect research on important health policies and interventions, such as the comparative effectiveness of medical treatments, cost-containment policies and health information technologies.
Each case is followed by examples of weak study designs that cannot control for bias, more solid designs that can, and unintended clinical and policy consequences that can result from the seemingly dramatic reporting of poorly designed studies.
Flawed studies have dictated national influenza vaccination policies, led a generation of clinicians to mistakenly believe that specific sedatives cause hip fractures in the elderly, overstated the health benefits and cost savings of electronic health records, and grossly overstated the mortality benefits of hospital safety programs, resulting in billions of dollars spent on interventions with little demonstrated health benefit.
As Ross Koppel and I recently stated in an editorial in US News and World Report, “Our concern is that as a research community we are losing our humility and caution in the face of declining research funding, the need to publish and the need to show useful results. Maybe it’s getting harder and harder to admit that our so-called big data discoveries aren’t as powerful as we’d like them to be, or are, at best, uninterpretable.”
In our article, we provide a simple ranking of the ability of most research models to control for common biases. This ranking should help the public, policy makers, media and research trainees distinguish between biased and credible findings from health care studies.
Medical journal editors can also benefit from this design hierarchy by refusing to publish research based on study designs that fail even the minimum criteria for inclusion in a self-respecting systematic review of the evidence (something they often fail to do at present).
It is time to embrace the quality rather than the quantity of the studies published daily in our field. We are slowly losing credibility with the public, who find it hard to believe many clinical studies and health policies – especially when they seem to contradict other recent studies.
It is not too late to correct course. We hope that our simple illustrations of common biases and strong research designs will be useful to the large population of users of health research.