Congratulations. Your Study Went Nowhere.

Sep 24, 2018 · 133 comments
KRS (Brookfield, WI)
Let us discuss medical study design. In a double-blind study of a potent drug given for a short time, efficacy can easily be proven. If the agent is weak, we need a large group; otherwise we get many false positives and false negatives, and even meta-analysis will not solve the problem. Long-term studies of something like fish oil or a dietary component look at cardiovascular disease (CVD). Even with a potent agent like a statin, it took a decade or more, and multiple studies, to prove efficacy against CVD, because many major competing factors cause CVD and they change over time. How can we prove that fish oil, a minor factor, decreases CVD? Even with thousands of patients in two groups this is nearly impossible. Many fish oil studies are done with small numbers of patients for short periods, and they get both negative and positive results; even meta-analysis would not help. Two chemists studying Eskimos in 1972 came up with the theory that fish oil helps in heart disease, and the marketing industry turned that theory into a "fact" worth billions of dollars in profit. Many short-term studies were done, with opposing results. Amid all the confusion, the NIH funded a multimillion-dollar trial (the VITAL study) lasting only five years; it will probably conclude the supplement is ineffective. This is what happens when a prominent scientist publishes an article and the marketing industry takes over, with celebrities concurring. We can look at what happened with vaccines and autism.
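The weak-agent problem this comment describes can be sketched with a toy calculation. All numbers below are hypothetical, and the formula is the standard normal-approximation power calculation for a two-sample, two-sided test at alpha = 0.05, not anything from the article:

```python
# Toy power calculation (hypothetical numbers): detecting a weak effect
# with a modest sample is much harder than detecting a potent one.
import math

def power_two_sample(effect_size, n_per_group):
    """Approximate power to detect a standardized effect size (Cohen's d)
    in a two-arm trial with n_per_group patients per arm."""
    se = math.sqrt(2.0 / n_per_group)   # SE of the difference, in d units
    z = effect_size / se - 1.96         # distance past the significance cutoff
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # normal CDF

# A potent drug (d = 0.8) vs. a weak one (d = 0.1), 100 patients per arm:
print(round(power_two_sample(0.8, 100), 2))  # ~1.0: almost always detected
print(round(power_two_sample(0.1, 100), 2))  # ~0.11: mostly false negatives
```

With the weak agent, roughly nine in ten such trials come back "negative" even though the drug works, which is exactly why small, short studies of minor factors produce a confusing mix of results.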
Ranger (New York City)
The results of a cancer drug trial I participated in (Amgen 655) were never published. I accept that inconclusive or negative research results are all part of the work, and that I may have received the experimental drug or a placebo. But I don't accept that Amgen didn't publish the results. I searched and I contacted Amgen, but there was nothing. I was in the trial in 2009. I assume Amgen found the drug unsuitable and moved on. Patient perspective (mine!) as a cartoon here: http://cancerissofunny.blogspot.com/search?q=Clinical+trial&m=1
Old Yeller (SLC UT USA)
The NYTimes could do more to follow up on studies reported in the Science and Health sections. Summaries of studies are the lifeblood of the Science section, and I love to read them. But it is not enough to report the results of a study without a mechanism for reporting conflicting or debunking information. Reporters can't do it all. Fortunately there are many readers who are willing to help, and the NYTimes could leverage that enthusiasm to everyone's benefit. I suggest that when publishing an article on a study, the NYTimes put a link on the Science page for reader/researchers to cite other relevant studies.
CLC (San Diego)
You would think, in matters of life and death, that 1) the party testing a product for efficacy and harm would be a party other than the one that holds the patent on the product and stands to profit if the product is approved; 2) it would be illegal for drugmakers to pay the FDA to fast-track the review of their product; 3) clinical trial operators would be required to report every health problem (physical or psychiatric), including fever, rash, blood pressure changes and death, that befalls research participants during clinical trials, because it is inconceivable that they would only have to report the problems they believe might have been caused by their drug; 4) drugmakers would have to actively question research participants about health problems they have reason to expect their drug causes, because people might not report them unless asked; 5) the FDA would never knowingly approve a drug that can cause significant injury, pain, disease, violence, mental health symptoms or death; 6) when the FDA withdraws approval of a drug for a certain patient population because its own research proved the drug raised those patients' odds of dying within approximately three weeks by 60%, any doctor or hospital that prescribes that drug to patients in that population would be subject to steep penalties and/or criminal prosecution. Not one of those six common-sense provisions is in place. And the CDC wonders why life expectancy declined in 2016 and 2017.
David Hothem (Arlington, VA)
@CLC The CDC has pretty good data that the opioid crisis is a significant cause.
Dustin (Orange, CA)
Rather than requiring that all registered studies be published, the FDA could require that the study data be uploaded to the registration site. Once there, the data could be publicly available for scrutiny. Publication requires going through the peer-review process, so difficulties there become a potential excuse for not publishing. There should be no such barriers to simply uploading the data.
Rr (NY)
We researchers do embrace our negative findings and appreciate how they contribute to our research story. Unfortunately, grant review study sections, journal reviewers, and promotion and tenure committees do not share in this enthusiasm.
CLC (San Diego)
@Rr Drop Julian Assange a line.
Robin Harrison Hart (Eugene, OR)
Nope. I'll tell you why. This is basically the same attitude that always failed us in the past. You have failed; there is no optimism possible on the "True Path" of... Christianity, probably. Failing both your faith, and the scientific process, with this article. oIo
Nate Hilts (Honolulu, Hawaii)
Inspired by my epidemiology and medical sociology professors, who bemoaned the very things in this NYT article, I had always wanted to start a journal dedicated exclusively to studies where the results were not “significant” (in the statistical sense), the hypothesis turned out to be wrong, or for some other reason the research didn’t end up where the researchers had intended. I’d call it the File Drawer Journal, a nod to psychologist Robert Rosenthal, who noted in the 1970s that such research often goes no further than the researchers’ file drawers.
Tim Clair (Columbia MD)
I recommend Stuart Firestein's two short books, "Failure" and "Ignorance," for a great explanation of how we actually navigate the rocky road of science. He addresses directly the issue of negative results and their value.
VPS (Illinois)
Thank you for raising the awareness of a fundamental problem with the way medical research is done and reported. I would add the validity (double meaning intended) of the statistical analyses used on the data set to your list of concerns.
CG (NC)
Unfortunately, there are no rewards for null results in a study. It can't be published and therefore has no value to a researcher. Publish or perish and receiving grant dollars pressure researchers to stretch the truth, even when there is learning in negative findings. The system has to change before the outcomes will.
Rick Morty (U.S.A.)
@CG This is an attitude I never understand. Negative results are pathways that other researchers don't have to waste time and money exploring. And saying "it can't be published" is absolutely untrue. Even this article notes that negative results *are* currently being published, just not enough. For the truly pessimistic, there is even an entire journal dedicated to negative results: the appropriately named Journal of Negative Results.
Paul Kramer (Poconos)
The same goes for almost everything else in the health industry. Try Googling, for example, the pluses and minuses of a surgery you are considering, and you'll find little but accolades and "go for it!" Then go to public forums and discover what actual recipients have to say about the results; i.e., not so hot after all.
TNflash (TN)
One of the best discoveries of my lab career came from looking at the failed trials of another project. The rejects did not work for the purpose intended, but one of them worked great for another application. Remember the glue for the Post-it Note: it came from a failed experiment but became a billion-dollar business. Failures do matter, and they're not always bad.
Christine Johnson (NYC)
In 2000, a group of colleagues and I formed the online Journal of Negative Results - Ecology and Evolutionary Biology as a publishing outlet for 'negative' results. The purpose was to redress the tendency of journals to publish sexy articles that might lead to a biased, untrue representation of what happens in nature. We still plug away at this, funding and maintaining the publishing outlet ourselves. Other journals that followed ours and published negative results in the medical sciences flourished - we still have a ways to go. But the important point is non-results/negative results matter - good science matters. http://www.jnr-eeb.org/index.php/jnr
John (Port Clinton, OH)
Reinventing Six Sigma. Control groups are not isolated; everyone is exposed to similar external stimuli whose interactions affect the output. The industry's testing methodologies are 100 years out of date. The author is right: the people selling the drugs want to control the perception of the product. We need taxpayer-funded FDA research to conduct designed experiments that determine whether the effect of the drug was STATISTICALLY SIGNIFICANT. If random noise has a bigger impact on the results than the drug, don't waste your money. Don't forget to block the studies to verify the results are repeatable. And show me the data. I don't care about your interpretation; let me draw my own conclusions.
Mavis Johnson (New Mexico)
The most disturbing thing here is the passivity. There won't be any research on how this affected public health, ruined lives, added to the number of misdiagnoses, and even endangered children. The writer is a pediatrician, yet he makes no observations about children exposed to these drugs. Perhaps he has only treated the offspring of the entitled. The taboos here are all imposed by pharma and the editors in order to downplay the level of corruption, misleading information and insidious pharma marketing. Marketing these phony pharmaceuticals used to be against the law; now the ads run nightly and are cross-advertised by celebrities and content marketers on social media. The victims have been silenced; after all, with the expanded definition of mental illness, they might be crazy. The entitled don't have to live with the ramifications of this misleading, industry-funded science. They don't have to care for family members with brain damage or untreated mental disorders. No one counts the number of hospital admissions, suicides, and adverse events; the industry made sure no agency would track any of this. We are all being gaslighted: this truncated view of what would have been criminal behavior 30 years ago is just business as usual. Neoliberalism tells readers that anyone's despair is an opportunity for profit. They seem to be OK with drugging children in foster homes, and minority children in abject poverty.
Chelmian (Chicago, IL)
Actually all children suffer from bad drugs. Wealth doesn't make drugs function better.
Ben M. (PA)
"This doesn’t mean we should discount all results from medical trials." Yes, it does.
W.A. Spitzer (Faywood, NM)
@Ben M......You are conflating medical trial results that are reported to the FDA with medical trial results published in medical journals. They are not the same thing.
David Mellor (Charlottesville, VA)
Underreporting of negative results is a serious problem that undermines the credibility of all scientific research. There are many reasons this happens, but it basically boils down to biases we all share: novel results that show a difference make us shout "Eureka!", whereas null results lead us to puzzle, "Hmm, I wonder what went wrong," despite the fact that such results can be just as credible. There is a solution to this: peer review and decide whether or not to publish before results are known. Editors and peer reviewers can set criteria for final publication that are not dependent on the final outcome of the results (e.g., checks that the study was conducted to high standards), but otherwise all null results deserve to be published. These Registered Reports are currently published by over 130 journals: https://cos.io/rr Scientists and members of the public should demand these be offered by all scientific, peer-reviewed journals that publish hypothesis-testing research.
pontificatrix (CA)
Agreed in general, but I find it irritating that Dr. Carroll picks on antidepressants in particular. Antidepressants have been thoroughly analyzed by several meta-analyses that included unpublished data, and they appear to have an approximately 30% better effect than placebo. (Kirsch et al., PLoS Medicine 2008; Purgato et al., Cochrane 2014; Hieronymus et al., Molecular Psychiatry 2017; etc.) The overall effect size of antidepressants for depression is estimated at 0.33, which is more than twice the size of the effect of statins for lowering cholesterol or of aspirin for prevention of vascular disease, and yet nobody is running around saying statins and aspirin are ineffective. So, why all the hate for antidepressants?
richard (the west)
@pontificatrix The studies you've cited have a number of methodological and statistical issues themselves. Pharma itself has set up the widespread and growing disaffection with antidepressants by conducting an ongoing campaign which misleadingly implies that these compounds address well-understood neuro-physiological conditions. They clearly do not. I would be interested to see a well-designed study that documents the effect-size you've claimed. To my knowledge no such study exists for a single anti-depressant medication, let alone the entire class.
Kathryn Hollenbach (San Diego, CA)
It is important to recognize that not finding a significant difference is not the same as saying there is equality and there are many study flaws that can negate finding a significant difference. That said, nonsignificant findings should be reported. We need to encourage scientists to report them and journals should be more open to publishing them. They are, unfortunately, somehow not as exciting as significant findings and thus are overlooked by journal editors. When they are published, they are not reported by the press.
Burt Auerbach (Ventura, CA)
It is just as important, maybe even more important, to know what NOT to do as it is what to do. New surgical procedures are regularly presented at professional meetings with great enthusiasm and often stellar early results. What you rarely see, however, are follow-up studies a few years later showing that perhaps this procedure wasn't such a great idea after all.
Usok (Houston)
Hopefully these negative results will at least be presented verbally at conventions and/or professional meetings. At least people will know progress is being made on this particular experiment. One more thing: a PhD thesis is based not on positive experimental results but on a new experiment or approach that hasn't been tried or tested before. Some of these PhDs will eventually obtain positive results in their careers.
Benjamin T (Los Angeles)
I was surprised that this didn’t reference “Selective Publication of Antidepressant Trials and Its Influence on Apparent Efficacy,” by Turner et al. in the New England Journal of Medicine. The article is widely-cited and very important to the topic.
jpn (Albany, NY)
As a practicing physician, for years I have advocated for the establishment of a journal solely dedicated to negative outcomes. There is no shortage of high quality studies that are rejected simply because they failed to demonstrate a difference, and yet these results can have profound consequences. For example, if there is no difference between a cheap and an expensive drug, or if there is no difference in efficacy between two treatments but one has worse side effects. It is often just as helpful to know when there is no difference as it is to know when there is a difference.
Gregory D. MELLOYY (Paonia, CO USA)
@jpn The only way I can see to achieve a more principled environment is for us all to get active in situations where we can effectively resource systems that clean each other's shovel a bit. The only way I can see that resourcing persisting is to build guiding institutions that help consumers buy into what will really favor the goals and principles they value. It may take a bit of governing oversight to make sure the "shovel cleaning process" stays sound and civil. Yet I believe it is possible, and it would be very beneficial in the many areas where information clarity and integrity are missing from our decisions. We could rely more heavily on those who have gone before, potentially at high cost, whose lessons are otherwise lost to their posterity.
JoJo (Boston)
Interesting article. For decades I have advocated "Results-Blind Science Publishing" as a solution to some of the problems Dr. Carroll discusses, i.e., the decision to publish is based only on the judged importance of the research question and the soundness of the methodology of the study. Locascio, J. J. Results blind science publishing. Basic and Applied Social Psychology, 2017, Vol 39(5), 239-246.
ubique (New York)
A scientist who can’t comprehend the value of learning the existence of a negative has no place in the sciences. If we want more comprehensive research to be done, we have to stop trying to find financial shortcuts to the scientific method.
cheryl (yorktown)
This is not something I had considered, and it's not something the average medical news consumer would be able to ferret out on their own. It's hard enough to get behind headlines, to understand the published research, and to place any new results into a context of existing verified accounts. It IS something that medical reporters, especially those who are trying to interpret results for the public, should be attuned to and explain.
leonardeuler (OTB)
Epidemiology studies show psychological disorders afflict fifteen percent or less of the population, yet doctors, encouraged by drug companies, write prescriptions for 30% of the population. That means 15% is malpractice! Remove the profit motive now and nationalize the drug companies. We also need to do a better job training doctors.
Independent Citizen (Kansas)
I am intrigued by the criticism of this article by some commenters here in the comments section. Most seem to claim that not publishing negative results is common sense (because who is interested in them anyway?), but it makes sense to keep a record of all results for a particular drug. One commenter points out that not reporting negative (or all) results to the FDA is illegal, while publishing them is the journal editor's decision. That is a good point. But journal publications have great influence on prescribing doctors' perception of a drug's efficacy, and due to a snowball effect, its popularity goes up. For various reasons, doctors read not the results reported to the FDA but rather what is published in journals, or what has become popular due to publication bias. That is harmful to patients in the long run.
Kara Ben Nemsi (On the Orient Express)
A problem I have not seen discussed here is that 1. the medical literature is already growing at such a rate that it is almost impossible to keep up, except within a narrow field, and 2. it is so much easier to generate negative results, and boatloads of them, that the literature would simply be smothered by all that noise. Negative outcomes should be summarized on the NIH's clinical trials website, so that if someone has the same idea, they don't start the same futile attempts over and over again, but not in the general literature, unless they lead to a clear conclusion. Smothering the literature with low-information noise would have a worse effect, by reducing the probability of finding the key information leading to truly novel therapies amid all the mass of negative data.
Ana Luisa (Belgium)
@Kara Ben Nemsi What you're saying is that doctors in general just want to apply proven information rather than being interested in thinking about how a human body operates, because of course, if your real goal is to understand, "negative" outcomes are as important to know as positive ones. My experience, as a patient, is even worse: doctors don't read the "positive" literature either. How many patients were treated for years for a disease they didn't have, with treatments that only made things (much) worse, before finally meeting a doctor who wants to THINK before doing something, and as a consequence (1) is aware of all the new discoveries, and (2) discovers that the patient has a totally different illness, one first described only two years earlier? The excuse of those who don't think and are just looking for a box to put each "case" into? They don't have time to read the medical literature, because there are too many patients to see and treat.
Francis (Florida)
Your suggestions make complete sense. You forget, however, that Josef Mengele was not the last medical professional without a moral rudder. The profession, its publishers and its profiteers are also motivated by profit, by any means. The oath labelled Hippocratic seemingly contains "profit first; damn the consequences." Our profession and its trough are malignantly far beneath the ethics of the Great Healer. Today's NYT article on drugmakers by Dorothy Pear provides more evidence. "Physician, heal thyself" is a mantra which currently applies to all aspects of this once supposedly noble service.
Diana Senechal (Szolnok, Hungary)
Thank you, Dr. Carroll, for this bracing and refreshing article. One of your points at the very end needs more attention: "We can celebrate and elevate negative results, in both our arguments and reporting, as we do positive ones." As things stand right now, positive results are often regarded as "successes" and negative results as "failures." That is misleading; any careful and illuminating study should be treated as a success. Also, if the researchers themselves are hoping for a particular result (and do, in fact, obtain it), they should energetically question each step of their work and solicit criticism from others. A result with appropriate caveats and qualifications is far more helpful than a quick takeaway.
Audaz (US)
Negative results are often caused by poor research design. If you study the phenomenon ahead of time and observe the conditions under which it appears, you should not get negative results. Negative results are also often the result of lumping together a number of studies done under different conditions. So we get those studies that say anti-depressants don't work. Doctors and patients know they do. (Not unfortunately over long periods, but research isn't done over long periods.)
Ana Luisa (Belgium)
@Audaz "Patients know" isn't a scientifically valid argument at all, remember? Ever heard of the "placebo effect"? And then we're not even talking about horrible side effects yet. Conclusion: you cannot possibly discover a hypothesis that can be proven true if you don't first try out all options, which of course includes those that, once investigated, can be proven false. To imagine that you can "know" beforehand which hypothesis is false is the most anti-scientific claim I've ever heard.
lm (cambridge)
This reminds me of a similar bias by analogy that I have noticed in my personal experience with doctors : many of them don’t want to hear that their suggested diagnosis or treatment did not resolve the problem. They are happy and helpful when you report good results to treatment, but will turn on you and blame you when you don’t - instead of being intellectually stimulated and wanting to try and figure out the problem, as the best doctors do. Positive over negative patient feedback, one predetermined solution to a problem is how too many doctors approach healing.
cheryl (yorktown)
@lm Alas, unfortunately this does happen too often. There is often an assumption of non-compliance. With some populations, the negative reaction to a report of a failed treatment, no matter how subtle, actually stops patients from reporting failure. And this leads said doctor to believe that he has more success with a particular treatment than the facts would support. And thus he continues to use the treatment with absolute and unjustified confidence.
H Smith (Den)
Richard Feynman talked about “cargo cults” - cults in the Pacific Islands that imitated Air Force operations. They expected cargo planes to land, as if by magic, if they built a landing strip. Feynman compared much of science to cargo cults. Imitate good science work and you will get something valuable - science. But it's not real because the experiments were just an imitation of the real thing.
Sivaram Pochiraju (Hyderabad, India)
Apart from publication bias and outcome reporting bias, there is a bigger issue that's not mentioned here. Not only research students but also professors, or guides, badly want as many research publications as possible: the more publications one has, the more popular that person is in the scientific hierarchy. As such, a number of professors manipulate things so that they become the first author and the one who did all the hard work becomes the second author, when the reverse should normally be the order. It's also not clear why the FDA considered negative ten of the results that the researchers considered positive. Negative results should be published to make people aware of the real outcome of research, since it's the patients who ultimately suffer, or even die, in the event of consuming a wrong medicine.
Doctor (Iowa)
Research paradigm flaws: --Negative studies aren't published. --Bias of drug company funding. --Bias of study design to prove a researcher's perspective. --Poor study design leads to any desired perspective being demonstrated. --Improper incentive to publish or perish. --The high number of studies done make it increasingly likely that small statistical errors or anomalies will lead to a positive publishable finding. --Surgical studies do not take into account that every surgeon is different (different skill level, different technique, etc.); the only thing that a surgical study can show is the comparative results _for that surgeon_. --For-profit journals publish anything for profit. --Statistically-significant differences do not imply clinically-relevant differences. In the comparison of logical observation versus "evidence-based" data, logical observation has the stronger likelihood of leading to correct treatment.
W.A. Spitzer (Faywood, NM)
This article makes mistakes at two levels. First, in the case of clinical trials, all results are reported to the FDA. For drugs that have been approved, all of the clinical data is available to the public through the Freedom of Information Act. There is an important difference between not publishing negative results, which often makes good sense, and not reporting negative results to the FDA, which is illegal. Second, there is an assumption that the results of all clinical trials are equally valid. They are not. There are many ways to get a negative result from a clinical trial of a drug that is effective, and essentially no way to get a positive clinical trial result from a drug that is ineffective. Meaning that if you run ten clinical trials with a drug in which nine give negative results and one gives a positive result, the correct interpretation is that the drug is effective. Finally, the key to a published result should be: if you run the clinical trial in exactly the way described in the paper, you will get the result described in the paper. The important element is not whether any given trial is positive or negative, but rather, is it reproducible.
The Pooch (Wendell, MA)
@W.A. Spitzer There are lots of ways to get a positive result from ineffective or dangerous drugs: hiding the data on side-effects, mysterious early terminations of drug trials, focusing on surrogate markers instead of real outcomes, and outright falsification. Pharma industry does this stuff all the time.
H Smith (Den)
@W.A. Spitzer - This is not entirely about the FDA. All aspects of science have these problems.
W.A. Spitzer (Faywood, NM)
@The Pooch....All clinical trials are strictly regulated by the FDA. All clinical trial results are scrupulously reported to the FDA. I worked in research for a major pharmaceutical company and I know what is reported to the FDA. Your allegations are not only wrong, they are slanderous. Now if you want to talk about what the Pharmas select to report or not report in medical journals that is a different story. But it is rather important that you make that distinction.
William Smith (United States)
I listen to the Ultimate Health Podcast and it's full of conflicting information. One doctor says, "No grains". Another doctor says, "Meat is bad". Another says, "That's founded on bad science". Another says, "Plants hate people." So confusing...
Khaganadh Sommu (Saint Louis MO)
If only this were applied to politicians, political parties and politics in general too!
James Williams (Atlanta)
It might also help if Universities provided a path to tenure built around teaching and service that was open to at least a certain percentage of their faculty.
Frank (Colorado)
This is a tenet of critical thinking that has been overcome by "right" answer orientation. Eliminating the negatives from consideration saves time and money. Negative outcomes are valuable.
W.A. Spitzer (Faywood, NM)
@Frank.... "Negative outcomes are valuable." No, sometimes negative results are valuable. I am an organic chemist. I might run the same general synthetic reaction a dozen times, making slight adjustments each time. If I publish, most often I will only publish the best way to run the reaction. I should not be judged by the negative runs, but rather by another lab's ability to reproduce the result I published, assuming they carefully follow my published procedure.
Andy (Salt Lake City, Utah)
My understanding was that all the lesser evils stem from publication bias. Scientific journals won't publish negative results, so researchers design, report, and spin studies that aim for positive results. Researchers know they have a better chance at publication with positive results, so they choose research questions designed to produce positive results. Supply and demand. We shouldn't focus on the researchers; that's a red herring. They're doing what they need to do in order to survive. We should focus instead on why academia places so much emphasis on publication. Publication is a performance metric for researchers, and naturally, researchers want to do well on their performance evaluation. Journal publications are the gatekeepers. Convince journals to print negative results and scientists will stop producing positive results in proportion to the journals' acceptance rate. Finally, we need to recognize, as Jorge Cham has humorously noted, that researchers are sometimes bad at their job or simply lazy. There will always be bad publications out there. Check your sources and be well.
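The inflation this comment describes can be made concrete with a toy simulation; all trial sizes and effect sizes below are made up for illustration. When only trials that clear p < 0.05 in the favorable direction reach print, the published record overstates even a real but small effect:

```python
# Toy simulation (hypothetical numbers): selective publication of
# "positive" trials inflates the apparent effect in the literature.
import random
import statistics

random.seed(0)

def run_trial(true_effect, n=50):
    """Simulate one two-arm trial; return (observed_effect, significant)."""
    treat = [random.gauss(true_effect, 1.0) for _ in range(n)]
    ctrl = [random.gauss(0.0, 1.0) for _ in range(n)]
    diff = statistics.mean(treat) - statistics.mean(ctrl)
    se = (statistics.stdev(treat) ** 2 / n + statistics.stdev(ctrl) ** 2 / n) ** 0.5
    return diff, abs(diff / se) > 1.96  # two-sided test at alpha = 0.05

trials = [run_trial(true_effect=0.1) for _ in range(2000)]
all_effects = [d for d, _ in trials]
published = [d for d, sig in trials if sig and d > 0]  # only "wins" get published

print(round(statistics.mean(all_effects), 2))  # close to the true effect, 0.1
print(round(statistics.mean(published), 2))    # several times larger
```

The full set of trials averages out to the true effect; the "published" subset, filtered on significance in the favorable direction, looks several times stronger than the drug actually is.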
Vladek (NJ)
In the same way that very few movie studios release films in which the Hero _doesn't_ triumph over evil, so too, very few granting agencies/journals want to hear a prosaic scientific story about an intervention that doesn't work. Currently, academic scientists live and die by their "Success" ("Success" being simple, clear, easily explained stories that point towards curing of disease in the distant future). The whole system is oriented towards hype. And if you resist the urge to hype, your ability to do any science suffers, as your support will dwindle to a trickle at best. Cynical? Yup.
W.A. Spitzer (Faywood, NM)
@Vladek..."The whole system is oriented towards hype."....Just go ahead and try hyping the FDA some time.
H Smith (Den)
@W.A. Spitzer That is very good to read. Glad for the FDA.
Dave in Northridge (North Hollywood, CA)
Remember the genesis of the opioid crisis? A paper was published that claimed opioids didn't present an addiction problem, and citation bias set in almost immediately. This article is not about a theoretical problem; there are real consequences to what Dr. Carroll is reporting.
H Smith (Den)
@Dave in Northridge Yes, it was a poor-quality study run decades ago. But with that study, doctors and institutions had a free hand. And it was an easy way out. This is a major problem. Institutions can pick the science they like, right or wrong. It's more than medicine; think climate change and pollution.
Josh M (Washington, DC)
Good thing this type of bias never, repeat, never happens with vaccine trials. Well, hold on, maybe it does: "Here’s a case that typifies this problem and illustrates how beneficial it can be when critical findings get published. In 2005, Lone Simonsen, who was then with the National Institute of Allergy and Infectious Diseases, and her colleagues published a study in JAMA Internal Medicine showing that the flu vaccine prevented fewer deaths than expected in people over 65. “I had interesting conversations with vaccine people. They said, ‘What are you doing, Lone? You are ruining everything,’” recalls Dr. Simonsen, who is now a global public health researcher at George Washington University. Her work helped lead to the development of a more effective flu vaccine for older people, yet she felt ostracized. “I felt it personally, because I wasn’t really invited to meetings,” she says. “It took a good decade before it was no longer controversial.”" NYT 8-4-2018
Pete (Houston)
This topic is also addressed in the current issue of Science, 21 September 2018. The two articles, "A Recipe for Rigor" and "The Truth Squad" amplify and extend the information provided by Mr. Carroll.
rick baldwin (Hartford,CT USA)
"Studies" like polls are easily manipulated apparently. The lack of morality in today's world is appalling.
Ana Luisa (Belgium)
Being a researcher myself, I couldn't agree more. I've been taught for so long to publish only when I have positive results that I can't even imagine publishing negative ones, as if they would mean that my basic intuition and subsequent research hypothesis were false and, as a consequence, prove my utter incompetence as a researcher. In fact, all a negative result actually proves is that this could have been a research question leading to a positive outcome, so it was worth testing; and after testing, the knowledge acquired is that no, there's nothing to see here. The only way to have truly objective scientific articles is to finally start valuing research with no positive outcome as much as research where the investigation leads to evidence confirming the initial hypothesis.
Peg Graham (New York)
As someone who has been advocating for innovation in basic home medical equipment commonly used by older adults aging with mobility-related disabilities, I applaud your critique. If more research were done that revealed the depth and breadth of UNMET need of this population (via studies on interventions that did not work) we would have more data to drive innovations that DO work.
4Average Joe (usa)
I worked at an acute psychiatric state hospital day center when one of these "studies" came to use our clients. They had exclusion criteria: no chemical dependency, no aggressive behavior that had landed them in acute care, no homelessness, and the regularity to attend follow-ups. In short, only the very healthy, relatively stable schizophrenics were in the study, with remarkable and great results. I am a skeptic, and I think they could have gotten the same results with that group if they were given candy.
John O'Brien (London, UK)
The problem is that the whole concept of value and quality is mangled in research. Equating "positive" with good and "negative" with bad is not even good science. John Ioannidis's papers have helped by asking much-needed questions and highlighting the amount of waste in science generally.
Tricia (California)
Great piece. This is a result of a complete misunderstanding of what science is or can do. Science can rarely prove the positive, but can often rule out. Science is a continual exploration, with ongoing findings that frequently find contradictions in previous findings. We need to educate the populace about what science is and what it isn’t.
Eilat (New York)
@Tricia Totally agree. Science is the new religion/God in the West. Of some value, yes; a promising hope for mankind, yes; but still majorly flawed and not capable of providing all, much less any, definitive answers, because it is hopelessly tainted by human ego, arrogance, greed, bias, short-sightedness, and frailty. Yet try explaining this to the masses.
Ro M (Baltimore)
I work in drug development. FDA needs to catch up to EMA on disclosure regulation (and enforcement). The EMA has regulation for data transparency that includes public disclosure of the data results as well as patient lay summaries, both of which will be required for all trials and posted on their forthcoming EU portal. The average clinical trial participant is not going to the medical journals to understand what happened during their trial. Regulations are also likely forthcoming for individual patients to receive their individual results following trial participation. For trials in the US, there is a requirement to post aggregate results of all trials, though this has gone largely unenforced, on clinicaltrials.gov. Regardless, transparency is on the rise!
C Wolf (Virginia)
It's actually more complex. Many studies seek to isolate a single variable while hoping that randomization controls for all the other variables. The problem is they may not measure the variable's pre-post serum levels, therefore they're not sure what the variable's dose is doing (after all body size affects serum levels). Then other relevant variables may not be parametrically distributed. If you read all the bone research, for example, you can see work that finds 20+ variables affecting bone strength. How often do you see a disclaimer saying that in the research? Or you can look at injury research and see bone geometry, vit D insufficiency, calcium insufficiency, iron anemia, mileage, stride length, muscle imbalances, etc. all predicting injury rates. How does the coach or physician know how to reduce/treat injuries? The problem is that complex research designs cost money and there are more researchers than available multi-million dollar funding. And so it goes.
Billie Tanner (Battery Park, NYC)
Well, it’s been a billion years since I studied statistics, but here goes: what is not studied is as important as what is. Quick example. My podiatrist asked me if I had had any recent falls. I told him no, then queried, “Why do you ask?” He answered with a curt, “Because you’re over fifty!” I then asked, “Do you ask your younger patients that same question?” He told me that he did not. “Why not, Doctor?” His answer: “Because we don’t see it in younger people!” I thought a moment, then retorted, “Well, just because you don’t see it doesn’t mean that it doesn’t happen. I fell plenty of times in my teens and twenties off of my clogs, stilettos, platforms and even Earth shoes.” I went on: “I think that researchers have failed to associate ‘tumbles’ with ‘youth’ because they have not studied the association of ‘tumbles’ with ‘youth.’” I believe that many assumptions about today’s ‘old age’ are flawed because those assumptions were reached with yesterday’s data rather than with today’s. The doctor disagreed, reaffirming that the statistical data were firm in their conclusions. As I pulled on my boots, I recalled how a prominent professor of statistics from Yale had insisted (on CNN, no less!) that Clinton had four percentage points on Trump and that if Trump won, he’d eat a bug. Guess who was last seen “chowing down” on a cricket (a chocolate-covered one, at that!) during his post-election interview with Don Lemon? The good professor himself. His error? Flawed sampling. It happens!
andy b (hudson, fl.)
@Billie Tanner She won by over 2 percentage points. The statistics were correct within the margin of error but somehow we forgot to factor in that we are not a democracy when it comes to presidential elections. It had nothing to do with flawed sampling of voters. Plenty to do with the electoral college.
Pete (CT)
The FDA should also require drug companies to use a standard, objective system of reporting results, tailored to the type of drug being studied and using numerical scores for various positive and negative outcomes. This should then be required to be included with any published results.
W.A. Spitzer (Faywood, NM)
@Pete..."The FDA should also require drug companies to use a standard objective system of reporting result."....They do, and they always have. You are conflating not publishing negative results with not reporting negative results.
Ben (Austin)
It would be interesting to hear Dr. Carroll's review of the healthcare reporting on this paper. I regularly read articles with quotes from doctors who receive large amounts of remuneration from the companies whose products they are endorsing. There are also lots of articles written about the headline-grabbing results of studies done with tiny sample sizes or other serious flaws.
Abby Kurzman (Boston, MA)
There was an article on this subject on the SATs recently.
Grunchy (Alberta)
This story boils down to "don't be deceptive" in your research papers. The real reason is that all these studies are subject to confirmation studies, and a subsequent failure to replicate is what causes embarrassment down the line.
Cody McCall (tacoma)
If we removed greed-driven capitalism from the medical field, this wouldn't happen. These data biases are driven by greed. Remove the money incentives, remove the greed, focus on making better medical practices and products. I know, a pipe dream. But true.
Frank (South Orange)
One potential solution is for journals to demand copies of the study protocol, data collection forms, amendments, and IRB approval letters when an author submits a manuscript for publication. If the manuscript fails to address the primary endpoints listed in the protocol, the manuscript should be returned to the author for revision or be rejected outright. Some journals adhere to this policy. Others do not. This may be due in part to a lack of qualified or interested reviewers available to the journal. Journals themselves need to be more proactive in policing this issue. It's infinitely more difficult to fix the problem after the fact.
Eyal Shemesh (New York)
This is an excellent, excellent article and I am going to use it in my teaching of medical residents. Thank you very much.
Greg Maguire (La Jolla, CA)
The biases reported here point to a much more fundamental problem in clinical studies, namely the misuse and misunderstanding of statistics in these studies. The false positive risk (FPR), as discussed by professor David Colquhoun of University College London, is one fundamental statistical issue that is rarely considered in clinical trials. With P values between 0.01 and 0.05 there is a chance of at least 26% that your result is a false positive (Colquhoun, 2014). Because Aaron Carroll is a concerned physician, and an important voice, regarding these matters of clinical trial performance, I implore him to look at the work of Colquhoun (https://www.biorxiv.org/content/biorxiv/early/2017/10/25/144337.full.pdf). Clinical science is wasting much time and money on meaningless trials, endangering patients, and, equally important, fomenting a distrust of science. Many of us, including Dr. Carroll, know we should, and can, do much better.
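For readers who want the arithmetic behind that figure, here is a minimal sketch. It uses the simplified screening version of the calculation (Bayes' rule over power, significance level, and a prior), not Colquhoun's exact simulation of p-values near 0.05; the 10% prior on tested hypotheses being true is an illustrative assumption.

```python
def false_positive_risk(prior, power=0.8, alpha=0.05):
    """P(no real effect | result was 'significant'), by Bayes' rule."""
    false_pos = alpha * (1 - prior)  # significant results from true nulls
    true_pos = power * prior         # significant results from real effects
    return false_pos / (false_pos + true_pos)

# If only 10% of the hypotheses we test are actually true, a result
# significant at alpha = 0.05 with 80% power is wrong 36% of the time:
print(round(false_positive_risk(0.10), 2))  # 0.36
```

Even a 50/50 prior leaves a roughly 6% false positive risk, and lower power pushes all of these numbers up.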
Pete (Santa Cruz)
@Greg Maguire Great point...the link to Colquhoun's paper didn't work for me, but it's worth tracking down. Search for "David Colquhoun false positive risk" on www.biorxiv.org; worked for me.
Janet D (Portland, OR)
It should be noted that the main reason negative results aren’t published as frequently as positive ones is because there are many reasons why results may be negative, but usually only one way in which a result is positive. Obviously this is why we establish hypotheses and study designs that control for such ‘noise’, but it’s natural that unforeseen factors can obviate subtler outcomes that nonetheless are meaningful upon closer examination. The broader problem is perhaps that so much of the public, and perhaps even the medical community, lacks this understanding of science and statistics.
Brent Graham (Toronto)
I am Editor-in-Chief of the Journal of Hand Surgery. Even though we recognize that our Impact Factor, a measure of journal influence, is reduced by publishing negative results -- because the papers aren't cited -- we strongly encourage authors to send us submissions that replicate earlier findings and that report negative results. The reason we don't publish more is that authors don't submit them! They don't see merit (and see risk) in reporting the series of cases they did that didn't turn out the way they thought. In the future some of these results will come from large (and anonymous) administrative databases.
Dr. TLS (Austin Texas)
We vote ourselves into the worst healthcare in the developed world every chance we get. Enjoy the small government and deregulation we keep voting for. Relegating your healthcare to profit-driven, multinational corporations is foolish. We are killing the planet based on junk science; why not ourselves too?
Grunchy (Alberta)
@Dr. TLS There's some good psychological reasoning to suggest the fault of "fake news" lies precisely with everyone participating on social media. Vox did a story recently about it: https://youtu.be/wZSRxfHMr5s
Joseph Kennedy (Tuttlingen Germany)
Global Warming or Climate Change (!) seems to be an area where a tad more objectivity would be welcome, especially after reading criticism of "The Hockey Stick" phenomenon, which gave birth to the ‘Fake News’ machinations in our time.
as (Florida)
There is nothing new or useful in this piece. The problems have long been acknowledged across all empirical sciences and potential solutions proposed. There has been some progress (at least in my field, experimental psychology), but at the end of the day, the problems stem from human nature. The problem might be solved if academic promotion and journal publication rewarded discovery rather than significant results. It's in the hands of the gatekeepers.
Chris Judge (Bloomington IN)
A result is considered "negative" if it provides no evidence in favor of some hypothesis. However, that doesn't mean the hypothesis is false. By definition a "negative" statistical outcome provides zero information. But if the data set from such an experiment is combined with other data sets, one might obtain new information. For example, it is possible that one trial with 50 patients gives a positive result, another trial with 50 patients gives a negative result, and the combined data of 100 patients yield a negative result. It's not the negative result that should be published; it's the data set that should be published.
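The commenter's 50/50/100 scenario is easy to reproduce with toy numbers. Below is a sketch using a simple two-sample z-test for proportions; all counts are invented for illustration.

```python
import math

def z_prop(success_a, n_a, success_b, n_b):
    """Two-sample z-statistic for a difference in proportions (pooled SE)."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Two hypothetical 50-vs-50 trials of the same drug (responders per arm):
z1 = z_prop(22, 50, 11, 50)          # ~2.34: "positive" (z > 1.96)
z2 = z_prop(14, 50, 13, 50)          # ~0.23: "negative"
# Pooling the raw data (100 vs 100 patients) gives yet another answer:
z_pooled = z_prop(36, 100, 24, 100)  # ~1.85: "negative" overall
```

Which is the point: the verdicts "positive" and "negative" carry far less information than the underlying counts, and only sharing the data lets anyone pool them.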
Emergence (pdx)
Negative data are often useful. But equally valuable in scientific research and experimental design is awareness of uncontrolled variables, which in an ideal world are identified and controlled for. Investigators too frequently ignore, or are not even aware of, the variables that can impact their data. Therefore, when other investigators attempt to repeat experiments, the data often lose statistical significance or even turn negative instead of positive. For example, studies on the health effects of various diets often suffer from uncontrolled variables that can affect one's health even after things like age, gender and preexisting conditions are taken into account. The saying "ignorance is bliss" is particularly relevant to scientific research, especially in the biological sciences.
Alan Klein (Denver)
A wise medical school professor once told my class (in 1978) not to rely on the results of a newly published study for 10 years. So many studies have their results reversed or debunked.
joeshuren (Bouvet Island)
You overlook the False Discovery Rate phenomenon, and so don't report that most "positive" results are false anyway. The PLOS ONE review article uses the conventional mark of "statistically significant (P<0.05)" for positive results. But that level of statistical significance is not the same as accuracy. Realistic sensitivity and specificity of tests of less than 90% leads to more than half false positives, which of course cannot be reproduced in repeated randomized clinical trials. In clinical settings this inaccuracy leads to actual patient harm and means that drug companies misallocate billions of dollars. This is the primary reason Phase 3 trials fail (as well as inability to recruit participants) and tempts researchers to cook the numbers for interventions that work in other cases. See DrugBaron https://www.forbes.com/sites/davidgrainger/2015/01/29/why-too-many-clini... , David Colquhoun DOI:10.1098/rsos.140216 , and John P. A. Ioannidis, https://doi.org/10.1371/journal.pmed.0020124
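The "more than half false positives" arithmetic in this comment follows directly from Bayes' rule. A short sketch, treating a study as a diagnostic test with 90% sensitivity, 90% specificity, and an assumed 10% prevalence of true effects (all three numbers illustrative):

```python
def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value: P(effect is real | test says positive)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# At exactly 90%/90% and 10% prevalence, half of all positives are false:
print(ppv(0.90, 0.90, 0.10))  # 0.5 (up to floating-point rounding)

# Drop sensitivity and specificity below 90% and it gets worse:
print(ppv(0.80, 0.80, 0.10))  # ~0.31, i.e. most positives are false
```

So at realistic accuracies below 90%, a majority of "positive" findings are indeed expected to be false, exactly as the commenter states.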
LL (Switzerland)
A key reason for negative data not being published is that scientific journals don’t accept negative data for publication. This goes even for complex clinical trials based on prominent preclinical scientific rationales. It might be understandable for what are called ‘failed’ scientific trials – trials where methodological or operational shortcomings preclude a conclusive interpretation. We recently had the example of a negative Phase 2 clinical trial intended to be published. This was roundly rejected by the first journal approached. The second journal invited the submission, with a statement from the editor that publishing negative trial data is important. Once the peer review came in supporting publication, the editor changed his mind and rejected the publication based on the fact that the trial was negative. The study was then finally published in another journal after multiple revisions, about 3 years after study completion (in part also due to most study personnel having moved on to other activities). The prevailing view that scientists / companies don’t want to publish negative results is misguided: most journals won’t accept negative results for publication, plain and simple. The picture won’t change until the most central players in the publication business – journals – change their culture and policies.
ana (New York )
@LL You can (and should) post on bioRxiv: https://www.biorxiv.org/ It is searchable, open-access, and visited by many, many people. It saves time and money, and even gets your foot in the door of the for-profit journals.
Bang Ding Ow (27514)
@LL Dr. Carroll, an old meme: "truth will out." With the flood of social media, patients WILL make the results public. Period, full stop. Anyone who thinks otherwise is either a child, deluded/naive, or insane. Taxpayers are funding the vast majority of basic research. For them, I would demand that HHS require a final report on outcomes, to be made public. Period.
jd (Indy, IN)
The biggest problem the pharmaceutical industry has is that they're driven by profits. There would be no incentive to take a junk drug to market if they were owned and operated for the benefit of the people.
Paul Kafalenos (St. Louis)
Negative results are not non-results.
JC (Oregon)
This kind of bias is not limited to clinical trials. I would argue that the funding system/mechanism for medical research is the real reason behind it. NYT reported that cancer therapeutics are oversold because the very few positive outcomes were reported as miracles but the vast majority of failed cases were not reported. As a taxpayer, I really think that most of my tax dollars are wasted (on wars, fake medical breakthroughs, etc.). I actually think that Trump was right to cut NIH funding. We have been hijacked by the medical interest group for too long. We will never win the "war on cancer," simply because cancer is mostly a condition of older people. When can we finally accept the truth that we will all die one day? Due to our (genetic) differences, some will die earlier and some may die later. Again, based on the reports of NYT, the vast majority of medical spending is on some terminally ill patients. How stupid! Trump should really drain the swamp of medical research.
Fred (Up North)
The very recent example of the decline and fall of Brian Wansink, Cornell food psychologist, is a good example of all these biases. https://www.nature.com/articles/d41586-018-06802-6?error=cookies_not_sup...
Mtnman1963 (MD)
Try to build a career out of studies that had negative results. Your reform needs to start with funding agencies if you want to make any headway. Follow the money.
John Schubert, cyclingsavvy.org (Coopersburg, PA)
In the field of bicycle safety, most studies are run by cheerleaders for expensive infrastructure. (I think their bias is usually more ideological than financial, but the effect is no better.) The cheerleaders have been able to suppress knowledge of known crash types caused by the infrastructure. It takes a high level of denial to participate in this truth-shading. Bike crashes aren’t biochemistry. They’re pretty easy to understand. But if you refuse to analyze individual crash causes, wrap everything up in a package of statistics, omit key portions of the roadway (like, the intersections aren’t counted in your study zone) and put a picture of a green bike lane on the cover of your study, you can “prove” that a facility with a known mechanism for causing fatalities is “safe.” I can name quite a few people who have died because of this. — John Schubert
oogada (Boogada)
The controlling factor in all this is money. When journals are swallowed up by publishing conglomerates, their mission ceases to be dissemination of information and becomes profit via sensationalism and notoriety. It's not only money, it's the 'money attitude': the conviction that everything is better when it's run like a business, with heedless efficiency and little concern for anything but profit. So we get scientists in search of short-cuts like numbers of citations and prestige ratings of journals (they call it "influence"); grant-makers covering their derrieres by selecting big-name researchers, previous recipients or, in a show of anonymity and randomness, studies that sound a lot like ones that got noticed last cycle. There is one element of non-rigid corporatism afoot, though. When big mouths or big political guns pull rank and, through mockery or moral indignation ("What?! You can't study guns/marijuana/abortion/climate change!! It's unAmerican! NOT WITH MY TAX DOLLARS!"), push pressing questions, clinical challenges, and statistical anomalies aside, it has a big impact on scientists and funders. We differ on the concept of "bad science." Bad science isn't only misleading or incomplete. It is timid, it's committed to the current ruling paradigm, it's afraid to gore the oxen of the crusty old souls who typically run publication review boards and make up conference schedules. Bad science is concerned with the bottom line, not the future, not unexplored potential.
Steve (New York)
One of the dirty secrets of medicine is that pharmaceutical companies will rarely provide financial support for studies unless they are pretty certain that they will provide support for use of the their products. As an academic physician, I long ago learned that the studies that haven't been done often tell you as much as those that have. And isn't The Times tired yet of whipping drugs for mental illness? Readers would assume that these are the only drugs that have any controversy. Somehow in an Op-Ed on Sloan Kettering's questionable ethical practices on drugs for cancer, Marcia Angell managed to put in a denunciation of drugs for mental illness that had nothing to do with the Sloan Kettering story. Dr. Carroll seems to be participating in the same sort of misleading process he claims to be denouncing by picking and choosing what he wants to denounce.
T (Providence, RI)
Dr. Carroll casts a large spotlight on how various biases are unfortunately more prevalent than researchers would like to admit. Indeed, publishing negative as well as inconclusive results is much harder than publishing work with positive results. Our research group recently investigated this phenomenon in interventional Alzheimer's and mild cognitive impairment clinical trials. We found not only that there were more studies with positive results published among the 744 trials that we analyzed, but that a staggering 80% of these completed trials were not published. Trial nonpublication represents not only a waste of already scarce research resources but raises ethical issues for participants of the trial. A total of 66,655 participants were enrolled in unpublished, completed trials, and 18,246 participants were enrolled in unpublished, discontinued trials. 17% of the trials were also prematurely discontinued, and the majority of those had no known reason for discontinuation. As a result, a massive fund of information was never integrated into medical science and clinical practice. Link to the paper: https://www.trci.alzdem.com/article/S2352-8737(18)30013-1/fulltext Hopefully, more international initiatives like that of AllTrials (http://www.alltrials.net/) will help to have all past and present trials registered, and the full methods and results shared freely with all stakeholders.
Lois Addy (Lincolnshire UK)
This is absolutely spot on. My personal experience of ME/CFS is exactly as your article shows - I was diagnosed in 2007 with ME/CFS and offered a place on the PACE Trial, allocated randomly to the CBT arm. They said they had no magic bullet but could probably get me functioning better at the start. At the end they said, oh, we don't know why it's not worked for you, you tried so hard and did everything right. My biggest fear then was ending up in a wheelchair. In 2013 - suddenly I couldn't work, drive, climb stairs, walk, hold my own head and torso up or get a glass of water out of the tap. It was terrifying. The PACE People told me in 2007, don't worry that can't happen if you follow the protocols. It did. Now, in 2018 with support from family, and state and local services I'm able to stagger around the house and sit up and hold my own head up. So, progress. Is it progress I'd have recognised as quality of life back in 2007? No. Do I have a reasonable quality of life with my adjusted perceptions? where I now LONG to be able to use a wheelchair? YES!! The relevance to medical trials? Well the people who ran the Trial fell into most of the traps for researchers Aaron outlines in the article. And have been so influential that biomedical research wasn't funded, and made people worse. See Virology Blog and David Tuller's articles for the devastating global impact of the very things in Aaron's article how people are (mis)treated as a result as a direct result of the PACE Trial.
OneView (Boston)
It's easy to blame the medical industrial establishment, but in a "publish or perish" world, designing and executing (and spending money!) on studies that reveal... nothing... is a good way to not receive grant money and not be published in high-impact journals. The rot is systematic in how scientists are paid and rewarded... both by pharma and by universities.
The Pooch (Wendell, MA)
Let's not forget one of the biggest and most important negative results, the Women's Health Initiative. This was the largest and longest experimental trial on diet ever conducted, 40,000+ women, 7 years, testing the low fat/low sat fat guidelines. The results? Nothing. Nada. Zilch. The low fat/low sat fat diet produced no benefits whatsoever, not for obesity, not for heart disease, not for diabetes, not for cancer, not for mortality. This should have been enough to re-think the low fat guidelines, if not outright to reject the lipid hypothesis. But instead crickets all around -- the results were carefully ignored by all major nutritional authorities.
Jeremy Smillie (Australia)
If a pharmaceutical product is to be subsidised (purchased) by government, government should satisfy itself first that the product is what it says it is. Ethical, responsible government must independently reproduce the studies and prove the purported benefits before frittering away more taxpayer dollars.
Jerry (San Diego)
Researchers, especially university faculty, have strong incentives to publish (tenure, promotion, funding, prestige), but since publication bias favors positive results, there is pressure to publish only positive results. Until the faculty meritocracy is fixed on this issue, positive publication bias will persist.
Blackmamba (Il)
Science is the method by which we run double-blind, controlled experimental tests using the best currently available natural data, based upon the best currently available natural theoretical explanation. There is no right or wrong answer. Truth has no faith nor favor.
oogada (Boogada)
@Blackmamba "Truth has no faith nor favor," you say. Yes, but, like the Law, science has very little truck with the truth. A self-serving aside: not all science, not even the best science, is double-blind, controlled experimental tests (you left out randomized, by the way). It depends on your question, why you ask it, how you propose to study it, who wants to know the answer. There are sometimes better options. Thank you. There is, of course, no right or wrong answer. But there is the expected answer, in many ways the foundation of science. And there are funders. For them there is a right answer, and they often insist on getting it, or no funding for you. Even if they don't get it, as Dr. Carroll notes, they try to convince you they did. Science is a human institution, like religion, and it needs to be supervised, results need to be reviewed, conclusions need to be criticized, and scientists need to be watched carefully, just like Evangelical preachers. Self-interest never goes away; science has no ironclad answers. Just effort, and the willingness of good-faith commenters to keep going. Unlike Evangelical preachers, scientists have the grace to admit they don't, and can't, know "the truth." All they can do is answer, or study other researchers' answers to, important questions. There are those who take this beautiful aspect of science and weaponize it, as in "You don't know the truth, all you have is theories." Exactly; that's why science works.
Violet (Seattle)
What about studies that were pulled because the medication was shown to be ineffective? I was a subject in such a study last summer. The Phase 2 study was scuttled after a futility analysis showed the study medication was no better than placebo, and it was unfair to keep us subjects on an ineffective medication when there are meds that definitively work. Though I'm not a researcher, I'd like to know more about why the study medication was pulled, and I'd like to know what side effects other subjects may have experienced. We gave the use of our bodies over in the service of science, after all. Don't we deserve to know the story of the medication we ingested?
Edward Blau (WI)
There are two audiences for clinical trials. One is the FDA, which decides whether a drug or device should be approved, and the other is the medical community. The FDA does, I hope, receive the results of all of the trials. It is up to the editors of medical journals to decide what articles they will print, and unfortunately it is often up to the corporations sponsoring the trials to agree to allow the results to be sent for publication. As a reader of medical journals, I believe the alarm raised by the author may be a bit overdone. Just yesterday I read a negative result in the NEJM of a device to keep people alive after recovering from a heart attack, and last week three negative studies on taking aspirin to prevent death from heart attacks.
Lois Addy (Lincolnshire UK)
@Edward Blau I do take your point; however, what Aaron covers in his article is exactly what has happened on a global scale as a result of those biases not being controlled by the PACE Trial researchers here in the UK. The journalist David Tuller (who has written for the NYT in the past) has spent recent years looking into research bias that has had tangible negative effects for decades, publishing his progress on Virology Blog. There's a specific page on the blog that gives a roadmap to what he's covered. If research bias and its real-world effects on real people interest you, then I thoroughly recommend this body of work. It's a well-written, entertaining set of articles which cumulatively show that, for the PACE Trial at least (and presumably others that I don't know about), Aaron has not over-egged his pudding one jot!
MD (Ontario)
Agree with many of the points in this article- one "positive" trial in a sea of TRULY negative ones is more suggestive of a statistical fluke than of intrinsic drug efficacy. However, it's important to note that not all trials colloquially labelled "negative" actually are- many are "failed" trials. Many diseases are really hard to study (e.g., depression, chronic pain, influenza). Not only are illness severity and improvement over time difficult to measure reliably for these diseases, but there may be substantial regression to the mean (the "get better anyway" effect) to complicate matters. These features make it very difficult to design clinical trials with sufficient "assay sensitivity" to reveal any intrinsic drug efficacy. This, in turn, may lead to lots of "failed" trials in certain diseases (which don't actually imply ANYTHING about the drug's efficacy), but which are subsequently conflated with "negative" trials by those outside regulatory agencies.
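To make the "statistical fluke" point concrete, here is a toy simulation (all the numbers - trial count, arm size, effect - are illustrative assumptions, not drawn from any real drug program): if a drug truly does nothing, a run of 100 small trials tested at the conventional 5% significance level will still throw off a few "positives" purely by chance.

```python
import random

random.seed(1)

def run_null_trial(n_per_arm=50, z_crit=1.96):
    """Simulate one placebo-vs-drug trial where the drug does nothing.

    Both arms are drawn from the same unit-variance distribution, so any
    'significant' difference is pure sampling noise.
    """
    drug = [random.gauss(0, 1) for _ in range(n_per_arm)]
    placebo = [random.gauss(0, 1) for _ in range(n_per_arm)]
    mean_diff = sum(drug) / n_per_arm - sum(placebo) / n_per_arm
    # Standard error of a difference of two means with unit variance
    se = (2 / n_per_arm) ** 0.5
    return abs(mean_diff / se) > z_crit  # "significant" at roughly 5%

n_trials = 100
positives = sum(run_null_trial() for _ in range(n_trials))
print(f"{positives} of {n_trials} null trials came out 'positive'")
```

With a 5% false-positive rate you expect about five such "positives" per hundred null trials - which is exactly why one positive amid many negatives says little by itself.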
SAF93 (Boston, MA)
When it comes to clinical trials, the situation is probably even uglier than the author paints it. Clinical trials are sponsored by corporations that want to sell therapies, generating profits. The experiments are routinely designed in a biased fashion to emphasize anticipated benefits while minimizing anticipated harms. Post-marketing studies by independent researchers often reveal less benefit and more harm than pre-market clinical trials.
Geraldine (Sag Harbor, NY)
@SAF93 Realize that post-market analyses usually involve sicker patients with multiple diagnoses who are on multiple meds. Also, post-market reporting is not mandatory, so out of 1,000 positive outcomes- many of which may involve off-label prescribing- only those resulting in egregious harms are reported. The mechanism to report post-market adverse events is ridiculously complex and cumbersome, and none of the physicians want to do it unless it's something they just can't ignore. I do not disagree with your statement, but I think it's important to realize that real-world experiences do differ from controlled trials- but it's not a deliberate attempt at obfuscation.
SAF93 (Boston, MA)
@Geraldine There is an important distinction between post-market (Phase IV) studies of drug/devices vs. the FDA's adverse event reporting system, which as you note, may not capture many problems. Well-done Phase-IV trials usually assess outcomes similar to those of pre-market (Phase III) trials, using de-identified data from hospitals and insurers, in patient populations that are not pre-selected.
W.A. Spitzer (Faywood, NM)
@SAF93..."The experiments are routinely designed in a biased fashion to emphasize anticipated benefits while minimizing anticipated harms."....It sounds like you are describing phase two studies which are specifically designed to give a drug the best possible chance to work. But you are otherwise wrong. Clinical trials have to be approved by the FDA and are normally run by an independent clinical investigator. If they are carried out in a hospital setting the trials have to be reviewed and approved by a hospital board. There are often many hands other than the drug company involved in the design of a clinical trial, and that in itself can frequently be problematic.
David (San Francisco)
The problem is not limited to the medical fields. Publication bias exists in the other sciences and even in engineering. As a reviewer for several engineering journals, I see many more submitted papers claiming a positive result than a negative one. It's a quandary my colleagues and I have struggled with for years, and at least in engineering, money isn't as much of a driver. But usefulness is, and a negative result just doesn't seem as useful as a positive one, even though a lone positive result, seen without the unpublished negatives, can be meaningless.
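The publication-bias quandary can be sketched numerically (a toy model with assumed effect sizes and study counts, not data from any actual journal): when many groups study the same small true effect but only "significant" estimates get published, the published average badly overstates the truth.

```python
import random

random.seed(7)

TRUE_EFFECT = 0.1   # small real improvement, in standard-deviation units
N = 30              # samples per study (deliberately underpowered)
SE = (2 / N) ** 0.5  # standard error of each study's estimate

published, all_estimates = [], []
for _ in range(500):
    # Each study's estimate is the truth plus sampling noise
    estimate = random.gauss(TRUE_EFFECT, SE)
    all_estimates.append(estimate)
    if estimate / SE > 1.96:  # only "significant" results get published
        published.append(estimate)

mean_all = sum(all_estimates) / len(all_estimates)
mean_pub = sum(published) / len(published)
print(f"true effect: {TRUE_EFFECT}")
print(f"mean of all 500 studies:     {mean_all:.2f}")
print(f"mean of 'published' studies: {mean_pub:.2f}")
```

The unfiltered average tracks the true effect, while the "published" average is several times larger - the significance filter only lets through estimates that overshot.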
PM (Phila, PA)
The problem of not publishing is also a direct result of journals' lack of interest in giving space to negative results or to results demonstrating a lack of efficacy. The issue is therefore more complicated than what the author writes. It is true, as the author states, that we can always go to FDA records and get an idea of both positive and negative results. Yet another problem is that the statistical terms used confuse readers.
Pat (Somewhere)
A nice thought, but most research is not done to advance medical science and improve the human condition, but rather to develop something patentable and profitable. You don't make money and advance your career with negative results.
A (Scientist)
Bogus. There are plenty of scientists that have long, successful careers without ever holding a patent. It’s insulting to accuse the scientific community as a whole of placing greed above being genuinely curious and concerned for society
Pat (Somewhere)
@A Whether a scientist personally holds a patent is irrelevant. Nobody is "accusing" anyone of anything; merely pointing out the reality that research costs money, and those who pay for it want a return on their investment.
Thomas Zaslavsky (Binghamton, N.Y.)
@Pat, you are still wrong. You don't even know who "those who pay for it" are, much less what they want.
JVM (Binghamton, NY)
"Never destroy data" I was told on my very first job way back in 1964 at Data Device Corporation by its founder - Gerard G. Leeds. And he meant it. That was a policy and an order and wise philosophy. Our economy and our lives need it. In an ideal world researchers would not have their car payment, their spouse's birthday gift, or their children's school clothes budget depend on an outcome but rather on the quality of the designed and executed research - the data's validity. Ideally, science workers would worry about their work, not about their future. Just get that right. In a perfect world beyond war and want, infinite employment would be provided in the labs of science and the courts of justice. Work for all the different capabilities people have, and in matching proportion. Yes, "more accurate science". Both more, and more accurate science in a world just and fair for all would be a viable world. The Scientific Method, Judical Proceedure, and yes Journalistic Practice - Truth, Justice, and the American way for better longer living and an immortal civilization just getting started.
Paul (Brooklyn)
I agree with your report. Money is the major reason. If you look at ads on TV for drugs and hospitals you would swear that everybody that takes a drug or comes out of a hospital was instantly cured of a major disease. I know eight people that were treated in Sloan Cancer for serious cancer. One of them went into remission and is still living after 15 yrs. The other seven did not make it after two yrs. You don't hear about them.
Bruce Rozenblit (Kansas City, MO)
The reason non effective outcomes are not published on an equal basis is because those don't make the medical industrial complex any money. Medicine is a business and business exists for one reason which is to make money. The entire operation is designed to extract as much money from society as possible. With healthcare now exceeding 18% of GDP, I'd say they know exactly what they are doing. The more drugs and so called therapies that can be sold the more money they can make. The more they can charge for these products, the more money they can make. If everyone was healthy, they would all go broke. Their wealth is predicated on everyone else being sick. It's all about the money. Unless and until the profit motive is removed from medicine, society will be inundated with products that do next to nothing good and actually cause a lot of harm.
Bob (East Lansing)
@Bruce Rozenblit It's also about the money in publishing. No one wants to publish negative trials because no one wants to read them. Want your journal to move up? Publish blockbuster results and spin the heck out of them. That gets you in the national press.
Kara Ben Nemsi (On the Orient Express)
That's cynical and wrong. The main reason is simply that it is 10 times harder to convince a journal to publish a negative outcome, unless it was completely unexpected, and nobody at meetings is interested in hearing what did not work either. So why direct disproportionately more energy toward something with a lower payoff, when that time could be used to perform a new study with a potentially better outcome? The simplest solution is to ditch the albatross and move on.