Peer Review: The Worst Way to Judge Research, Except for All the Others

Nov 05, 2018 · 81 comments
Yuri Lazebnik (New Haven, CT)
“…we think that once a paper gets through peer review, it’s “truth.” We’d do better to accept that everything, even published research, needs to be reconsidered as new evidence comes to light, and subjected to more thorough post-publication review.” To help solve this problem, we are building a platform that will track, accumulate, and report confirming and refuting evidence for published scientific reports: scite.ai
Grover (Kentucky)
The role of editors in peer review is as important as that of the reviewers. An editor can send articles from her/his friends to less critical reviewers, and can express preferences in other ways. Reviewers themselves are often not objective in my experience, and are inclined to indulge their own egos rather than exercise objective judgement.
Total Socialist (USA)
Just like everything else in this country, it's a corrupt system.
JPM (Hays, KS)
As an editor of an international science journal myself, I would add the following. Good editing is just as important as good reviewing, and editors are asked to handle too many articles for what they are paid. It is therefore tempting to reject without review on whatever justification, just because seeking reviewers and evaluating reviews is such a slog. Even with a double-blind system, it is not hard for experienced authors to identify certain reviewers on the basis of the articles they are asked to cite (the reviewers' own) - an unethical practice that is becoming far too common.
SteveRR (CA)
Obviously the author is in error - the 'best' way to judge is via replication. The overwhelming driver of error is the total lack of replication, especially in the Social 'Sciences' - even for major and 'ground-breaking' research findings. Yeah - that is a lot of quotes - so you know where my sympathies lie.
lou (Georgia)
If you want to know why we are in this fix, look first at publish or perish. And if you want to know why medicine is so slow to adapt to new findings, look at the reasons that have been given here: lagging belief in published articles, too many of them to keep up with, established authors at the "right" places given a pass, and they will keep printing the same viewpoints, unchallenged by alternate views from people at the wrong places who are screened out. This definitely happens in STEM fields.
JM (NYC)
Has any journal ever considered employing* recent Post-Docs to be involved in any part of the reviewing process? They would be highly aware of both past & current research in their areas. I remember when I defended my dissertation I had a solid grasp of past research in my field as well as the direction the field was going. And, as any PhD candidate can attest, completing a scientific dissertation demands a critical eye to develop one's work, to discuss its limitations, and to recommend ways to advance the field. *employ => salaried position commensurate with education
Melissa Martin (California)
I am on the editorial board of a major social science journal. In the many years I've been doing reviews, none of my co-reviewers have ever been anything but courteous and respectful to the author(s), even when the paper is really substandard. Comments are designed to make the article publishable, even if it's somewhere else. And I've also been struck by the consistency among reviewers in evaluating an article's merits. Usually the paper falls apart well before the data analysis, suffering from a poor articulation of the research question and development of the theory. Some papers may be rejected for being inappropriate to the journal's mission, but this is usually caught by the editor before the paper goes out for peer review; that is the step where, I feel, getting it right has been more of a challenge. Our reviews are double-blind. The mix of reviewers and authors (increasingly from Europe and elsewhere) is diverse. So while it's not flawless, peer review can be done well. My pet peeve: the decline in acknowledgements stating 'this paper was greatly improved by the comments of three anonymous reviewers.'
Howard Johnson (NJ)
If we accept that most of the bias we're talking about is individual, not paradigmatic, then this is primarily an issue of reliability. Therefore, unreliable peer review cannot be considered valid; end of story! But let's take the issue one step further and look at journal publishing requirements, the controls on what even makes it to the peer review process. We have so few avenues for finding the real meaning of individual studies through journals because meaning is not in the requirements. W.V.O. Quine critiqued this type of reductive science back in 1951: "The dogma of reductionism survives in the supposition that each statement, taken in isolation from its fellows, can admit of confirmation or infirmation at all. My countersuggestion, issuing essentially from Carnap's doctrine of the physical world in the Aufbau, is that our statements about the external world face the tribunal of sense experience not individually but only as a corporate body. . . . My present suggestion is that it is nonsense, and the root of much nonsense, to speak of a linguistic component and a factual component in the truth of any individual statement. Taken collectively, science has its double dependence upon language and experience; but this duality is not significantly traceable into the statements of science taken one by one." (W.V.O. Quine, Two Dogmas of Empiricism, http://www.ditext.com/quine/quine.html) Is it any wonder that practitioners find science so confusing?
Virginia (Illinois)
I've become so disillusioned and disgusted with journal peer reviews that I've geared my writing toward books. I've had a few good reviews, with valuable suggestions and useful corrections of those little mistakes on dates, names, etc., that somehow always creep into complicated discussions. But most reviews have been uninformed, ignorant, baffling in ways that suggested the reviewers didn't actually read the piece, or ideologically driven in reflecting some clearly preferred faction on the paper's topic. And the point about training reviewers is well taken. One reviewer made a big issue of my not citing three scholars I'd never heard of; upon a Google search I found that all were junior scholars, twenty years my juniors, who had yet to publish even one book. Later I found that one of those three was the reviewer himself, plugging his own work! And the journal was somehow reputable ... But my special beef is the tilt toward accepting work by famous names, the problem suggested here by the test in which 89 percent of previously published articles were rejected after the authors' names and affiliations were changed. My favorite story in this regard was told to me by an éminence grise in African studies, whom I'll call "Big Name," whose tremendous research had defined his subfield. For a double-blind review of a journal article he submitted, he received a sniffy report to the effect of: "Weak study, omits the key data in this field, probably a graduate student, needs to read Big Name."
Larry Bednar (Portland, OR)
Regarding the current peer review system: 1) Results that confirm previous reports have a reduced chance of being published because they are less "newsworthy". But those confirmations provide real utility. 2) Research results rejected by one publication are often submitted/published elsewhere. Is this really good quality control? 3) Post-research review is inefficient quality control. The effort has already been expended, and the chance to adjust most protocols is long past. It's "downstream" correction - the boat was launched long ago, and advice to avoid that big rock isn't helpful at this point. Of course there is a lot of momentum - fundamental change would be complex and difficult. But consider just one alternative approach. Imagine that reviews were provided BEFORE the work is completed, and publication were guaranteed if the work is performed with acceptable quality, regardless of outcome. Some potential benefits: 1) Feedback would be received when most helpful - before funds and energy are expended on a flawed study concept. It's superior "upstream" quality control. 2) The issue of flawed research simply being published elsewhere would be reduced, because review input directly improves study design. 3) The advance promise to publish ANY results from well-executed studies would partly counteract tendencies to cherry-pick and publish only striking results. There ARE some ways to imagine improvements.
Moses (WA State)
There are obviously many people, trying to do right by patients and completely separate from money concerns, who depend on an accurate and honest appraisal of medical research. It must be a huge task for medical journal editors and staffs to assess all the facets of a proposed study entry to determine whether the design, the data collected and analyzed, the data presented, and the conclusions are valid. On a reader such as myself also falls the responsibility to weed through the mountains of data and to have a basic understanding of statistics, neither of which is a small task. Since medical school I have struggled with this, and medicine is increasingly complex. The economic biases can easily overwhelm the average reader.
Steve (New York)
As one of the editors of a medical journal, I can assure your readers that these are all issues very much on the minds of those who are responsible for deciding what gets published. Finding competent reviewers is an ongoing problem. As Dr. Carroll points out, there is little tangible reward for reviewers, most of whom are in academia and therefore usually overloaded with clinical, educational, and research responsibilities, all of which must be fulfilled for career advancement. Apart from the gratitude journals express in print at the end of the year when naming reviewers, there is often no other reward. As to payment: in most fields of medicine, journals would be hard put to offer payments large enough to make a difference to reviewers, and with the growing restrictions many journals place on advertising, finding the money wouldn't be easy. As to having more female reviewers, I am in agreement. However, many fields of medicine are still male-dominated, and it is hard enough finding reviewers in those fields without the further constraint of trying to balance the genders of reviewers.
Robert (USA)
"Peer" review is a good idea -- in general, and in theory. But it is problematic. Depending on the field and the context, it is sometimes misused to legitimize crap and mediocrity, perpetuate insular networks, coerce or marginalize nontenured faculty, etc. And as the writer points out, peer review can also be misused to stifle innovative ideas and paradigm breakthroughs while promoting trendy incoherence, etc. The imprimatur of peer review homogenizes significant variations in research quality. And all this is apart from the issue of academic fraud, which seems to have grown in recent decades, partly as a result of ego and wishful thinking, but also because people get sloppy and/or desperate when seeking the funding and the time they need to work on their research. The fact that peer review boards encounter a lot of substandard work means that those who submit such work are either deluded or oblivious. Others who make good faith efforts can use the peer review process to improve their work -- but only if they have enough time before P&T or promotion reviews.
Jonathan Katz (St. Louis)
As a practicing scientist (theoretical astrophysics), I see the problem as a failure of editors to do their job. This varies between scientific fields. An editor's job is to be the referee of the referee reports. Is the accept/reject recommendation supported by sensible arguments? A thinly disguised "I disagree or the author is my rival, reject" or "This confirms my belief or the author is my friend, accept" report should itself be rejected. An editor who rubber-stamps the referee's recommendation (sometimes, no editorial judgement appears to be involved) isn't doing his job and should quit (or be dismissed by the editorial board). Bad editing tolerates and leads to abusive refereeing. Responsible editing leads to constructive refereeing.
BCY123 (NY)
As a senior scientist who has reviewed hundreds of journal articles, grants, and books, I can tell you that it is a major dent in your time. Some papers are a delight to read; others not so much, as they are so poorly written that it is difficult to tell what the authors did and why. However, many of these disasters may contain some very interesting results, and it may be worth the effort to struggle through and then relay your impression to the editor that the paper is worthwhile but unreadable! It is hard to devise a one-size-fits-all method to address the various permutations that determine whether a paper merits acceptance or rejection. I can tell you that lately I have seen many papers that I can suss are not telling the whole truth. The data are too good! I know this from 40+ years in the lab. Once in a while things are perfect, but never in a series of many studies that might be incorporated into one report. Unfortunately, a large number of these papers come from certain countries. I have had this problem so frequently lately that I often decline to review these submissions after a quick look. These papers are a true drag on the review process and make many of us reduce the contribution of our time. I often see them published or submitted elsewhere with none of my concerns addressed.
DF (Orange County, NY)
In your lede you call the fake studies "ridiculous" but don't ever engage with the reality that, according to those journals, the premise of those studies was not ridiculous. The field being overworked or having gender imbalances did not have as much impact on their acceptance and publication as did their false conclusions that lined up with the far-left ideology dominating the social sciences and their accompanying journals. In other words, these studies were accepted more because of their outcomes - regardless of how mind-bendingly stupid they were - than for any of the reasons you laid out. Where's the article on how out of control this anti-white, anti-West, anti-male, anti-hierarchy thinking that dominates academia and social science journals has become? I'm progressive and left-leaning in most of my politics, but PC culture and extreme liberalism are the new fascism. Professors are losing their jobs all across the country because their students characterize challenging ideas as "thought violence" (literally, look it up) and protest them out of their careers. For a demographic that wants to be known for its tolerance and open-mindedness, they're some of the most intolerant and closed-minded people. Now that's irony.
Mark Edington (Hardwick, Mass.)
Overlooked in this article is another important initiative to improve the conduct of peer review -- the move toward creating agreed standards of peer review to be shared by scholarly publishers (we think we all mean the same thing when we say "double blind" or "single blind," for example, but in the absence of any standards no one actually knows), and to make transparent to readers just what form of review was applied to which object. In the humanities, it's fairly common to review a work both at the proposal stage and at the manuscript stage. These may be different forms of review, depending on the case, and on the judgment of the editors. More about this work at www.prtstandards.org
SAD SCIENTIST (Bethesda, MD)
I am a senior biomedical scientist. I have served on numerous editorial boards and have peer-reviewed countless research papers in my career. While I didn't get paid extra for the time I spent on this work, I used to enjoy the intellectual challenge. I felt it was a necessary and important contribution, and besides, I was learning something in the process. Not any more. At this point, most of the papers that come by my desk are simply a waste of my time. If not outright fraud, they are largely "efforts to deceive": small advances written in a way to make them seem larger than life, data that have been cherry-picked in an attempt to generate some marginal significance, not to mention findings that cannot possibly be true but might get a pass if reviewed by someone not familiar with a given field of knowledge. So, what happens next? In good faith, you, as the reviewer, politely point out one or more of the aforementioned problems. You would think that this would be the end of it, right? Wrong. The paper either comes zinging back to you more or less unchanged as "revised," or it comes zinging back "un-revised" because the authors simply ignored what you said and submitted it to another journal. In other words, scientific publication has become a racket - and NOT just at the for-profit, so-called predatory journals either. I am not sure exactly what the precise solution to this problem might be.
Nasty Curmudgeon fr. (Boulder Creek, Calif.)
Unfortunately, your comment makes the most sense in this day and age of Trump - fake-science accusations and counter-accusations. Sorry to see you go, but I guess since the world is gon'a sweat 'n boil (off) to a Mars-like end, you might as well relax before you retire.
SAD SCIENTIST (Bethesda, MD)
@Nasty Curmudgeon fr. Indeed!!! The sad truth is, rigorous scientific peer review is no longer in fashion. I don't know whether it has to do with gender politics, millennial sensibilities, fake news, or who is or is not currently the President, but at this point I'd rather spend my non-reimbursed personal time working on my novel :-). Thanks for your comment!!!
Paul Adams (Stony Brook)
The obvious answer is - review the reviewers! Everyone who regularly reviews should be peer-reviewed, primarily by assessing how often the papers they accepted or rejected went on to be cited (assuming that most rejected papers do eventually get published elsewhere, as seems to be the case). Also, reviewers should be asked to estimate how often the papers they review are likely to be cited, e.g. ten years from now; this estimate should contribute to editors' decisions.
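A minimal sketch of how such reviewer scoring could work, using entirely hypothetical records; a real system would also need to normalize for field-specific citation rates:

```python
# Sketch: score reviewers by how well their citation forecasts matched
# actual outcomes. All reviewer names, predictions, and citation counts
# below are made up for illustration.
from statistics import mean

reviews = [
    # (reviewer, decision, predicted_citations, actual_citations_10yr)
    ("R1", "accept", 50, 12),
    ("R1", "reject", 5, 40),   # rejected paper, later published and well cited
    ("R2", "accept", 20, 25),
    ("R2", "reject", 2, 1),
]

def reviewer_scores(records):
    """Mean absolute forecast error per reviewer; lower means better calibrated."""
    errors = {}
    for reviewer, _decision, predicted, actual in records:
        errors.setdefault(reviewer, []).append(abs(predicted - actual))
    return {r: mean(errs) for r, errs in errors.items()}

print(reviewer_scores(reviews))  # {'R1': 36.5, 'R2': 3.0}
```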
PictureBook (Non Local)
I prefer to get people obsessed with a problem that might not have an immediate solution rather than force them to experience the pain of cognitive dissonance. The great thinkers all had great rivalries: Aristotle and Plato, Newton and Hooke, Newton and Leibniz, Einstein and Bohr, Peanut Butter and Jelly. A lot of the missing magic in science is due to the idea that we have to get along and get it right. Most of the time we have to bury the other viewpoint in overwhelming evidence. It should be a rivalry that pushes each side to uncover the truth, even if someone has to play Devil's advocate. Publishing is communication, and that can be accomplished in better ways than libraries spending their limited funding cutting fat checks to publishers for articles most people will not read. Change will not happen until funding is no longer associated with the bandwagon fallacy of impact factor and the number of citations an author receives. You have been peer-reviewed, Dr. Carroll; I expect you to step up your sociology game.
Full Name (Location)
An important topic, but Aaron spent way more time on gender issues than is warranted, and no time at all on issues such as Chinese editors using the system to help their networks of allies. Aaron was obviously driven by his PC attitude, but for this problem to be addressed, we need to start by being honest. Aaron's article fails at this.
Dr.Abe (Ft Myers)
The REAL problem is that the majority of published articles are NOT readable. Basic English is often lacking. I believe an ENGLISH major should be part of any team publishing research. If the English major cannot understand the basic content, few others will. If few understand the article, fewer will benefit from its content. If the authors cannot explain their work, I question whether they really understand it themselves. All this makes reviewers' jobs more difficult, and fraud is more easily passed through the pipeline. Clarity should be the foundation of all worthwhile research.
MJ Cho (Las Vegas, NV)
So long as this mentality of "publish or perish" prevails, the troubling issue will remain. Ask yourself how many papers Albert Einstein published. What about Henry Cavendish?
Horace Dewey (NYC)
So, here's another problem w/ peer review that should be kept in mind by those who participate in the process. Many with PhDs do work that crosses several disciplines. However, many of those whose work fits in the interdisciplinary category would be the first to tell you that -- when it comes to serious expertise -- the field they have most fully mastered is, say, Swedish studies. Dr. Doe might have done some serious work in veterinary dentistry, and even been noticed for a study on how Swedish vets treat cavities in primates, but for that work she learned only the most rudimentary facts about the science of animal oral hygiene. Dr. Doe may be the world's greatest expert on the cultural differences in these practices, but she knows only the basics of the chemistry involved. The journal of veterinary dentistry then receives an article about the chemistry of canine plaque and -- with the best of intentions -- is happy to send Dr. Doe the article for peer review. Maybe you can see where this is headed. If the peer review system works as it should, Dr. Doe will respond and explain that she is not the right person to do the review, but that her colleague Dr. Roe is immersed in the kind of research dealt with in the paper. On the other hand, Dr. Doe might do the review even though she knows she is not the best choice. The problem: many papers are not sent to an appropriate reviewer, because assigning a specific paper takes a level of expertise that is itself rare.
Skinny hipster (World)
Keep repeating that "peer review" is better than the alternatives until the reproducibility of scientific research is 0% and then we'll wrap up the whole scientific endeavor as another dead end like psychoanalysis or astrology. Just look here (https://pubpeer.com/publications/D569C47E7BE09AD9D238BA526E06CA#) for the latest shining example of the peer review filter as operated by a publication like Nature. It's time to give alternatives a chance. Pre-publication on arxiv and bioarxiv. Open review on pubpeer or alternatives. If it passes muster there, on to publication where it is added irreversibly to the record (if citation is any indication of that). And enough of for profit publishers. Publications cost are near nothing these days, funding agencies should take it upon themselves to host community-supported journals. For instance, see the letter (http://www.jmlr.org/statement.html) with which 40 prominent ML researcher resigned from the board of a for profit journal and joined an open alternative which is thriving 20 years later.
Marc (Baton Rouge)
@Skinny hipster - Open review is not an answer. People don't have time to read the 'peer-reviewed' literature, let alone the effluent beforehand! I say this as a retired science editor with a decade in the trenches, as well as a mere peer reviewer for some 30 years before that. Most of the material my journal received (ca. 80%) was NOT sent out for peer review because the quality of the writing and the science was so bad. I wouldn't want to waste my reviewers' time with such dross. Even decent published work wastes a reader's time today. When I started out in the late 1970s, editors and reviewers actually edited. As an exercise in my class, we cut down PUBLISHED papers by 30-60% on a regular basis. Multiply this by the hundreds of papers one needs to read, and it constitutes theft of time. I could go on. But I think that today's problems with peer review are just another symptom of the emphasis on "productivity" (quantity) and not quality.
Tam Hunt (Hawai‘i)
Great piece. I’ve found that reviewers are often grumpy and overly critical, making points that are often demonstrably wrong. Since I work in areas that are a bit iconoclastic I would suggest that reviewers approach reviews with a more open mind, while of course maintaining rigor.
Donald Johnson (Colorado)
I used to read peer-reviewed medical journals. But after I employed a few PhDs and learned from them that a lot of peer-reviewed articles are based on flawed models, research designs and methodology, I became very wary. What has really destroyed my confidence in peer-reviewed articles is the corruption of "science" in the climate change industry. I've read enough articles and comments about the lack of reliable and sufficient climate change data, the lack of credible computer models and the lack of auditing that assures data aren't manipulated to view all peer-reviewed articles skeptically. Further, so-called science publications have become so political when it comes to climate change that they have no credibility on anything they publish. Too many journals are so determined to sell the unproved climate change story that they've ruined their reputations and the reputations of all scientists. To me, that is a terrible tragedy. It takes only a few bad apples to destroy an industry's reputation, and science is in trouble because of its scholarly cheaters, biases and political campaigns. I suggest readers check out retractionwatch.com, which tracks "retractions as a window into the scientific process." I am a retired editor, publisher and owner of health care and other business and consumer publications.
What'sNew (Amsterdam, The Netherlands)
@Donald Johnson You "used to read peer reviewed medical journals" and you "are a retired editor, publisher and owner of health care and other business publications": so I assume that you have medical background. The essence of the greenhouse effect is simple enough and has been known for two centuries. CO2 in the atmosphere permits sunlight to pass but it blocks heat radiation into the cold universe. Increasing CO2 wil block more of this heat radiation. Consequently the planet warms up and, as anyone can see, at an alarming rate. The model is simple and can be modified. This is the work of basic scientists, not of physicians. I know of many basic scientists working in the medical field, but know no physicians working in basic science. The greenhouse effect is indeed ä terrible "tragedy".
Donald Johnson (Colorado)
@What'sNew I am neither a scientist nor an M.D. I am the former editor of Modern Healthcare (10 years) and Health Care Strategic Management (20 years), and a co-creator and editor for 15 years of the Managed Care Digest statistical reports on HMOs, PPOs and providers. And as a lifelong professional patient (not a hypochondriac - a stage 4 kidney disease patient), I have a personal interest in medical science. I have been interested in the peer review process and its problems for decades. I really don't care whether scientists think I'm a legit participant in this debate. But I know that you have serious problems, because important consumers of your work doubt your competency and integrity. This is confirmed by most comments in this thread. I hope the academic scientific establishment will be ousted and replaced by open publishing, paid reviewers and studies based on good data and project designs. It doesn't seem likely at this point, which is too bad. Good luck.
s K (Long Island)
The problem of improper review is magnified manifold in the social sciences. The author of this article is looking at STEM, where the issue is really minor compared to newer fields in the social sciences, where writings that accept the orthodoxy go unchallenged and anything challenging the orthodoxy never gets published.
Skinny hipster (World)
@s K I beg to disagree. Prof. Ioannidis of Stanford wrote a paper under the title "Why Most Published Research Findings Are False" talking about the biomedical sciences, which is just one entry point into what is by now a large discussion. For an example from engineering, there is controversy over the reproducibility of results in Machine Learning and Reinforcement Learning in particular. I agree with you not all fields are equally affected and some seem to have hit headwinds before others, but claiming an exemption for all STEM fields is not consistent with what I see.
Greg Gerner (Wake Forest, NC)
"Everybody loves good news about their bad habits." Think about it.
Blackmamba (Il)
See the Bible and the Quran; "The Structure of Scientific Revolutions" by Thomas S. Kuhn; the theories of quantum mechanics, relativity, incompleteness, dark energy and dark matter. "God created man in his own image. And man returned the favor." - George Bernard Shaw
Ivan (Memphis, TN)
The idea of reviewers as gate-keepers is old-fashioned and should be scrapped. Everything should be allowed to be published, and it should all be online (no need to kill more trees). The part where good reviewers can still serve an important role is the critique of the published work. Let reviewers and authors debate the strengths and weaknesses of the work in public - after it has been published. Then allow revisions to be posted to strengthen and clarify the publication in later versions. In the modern world there is no justification for the year-long delays caused by the current system.
Full Name (Location)
@Ivan If you think about this for even a few minutes, you realize this is unworkable. How do you find information you're looking for with millions of random publications around? Do you have to wait until someone with expertise and lots of time on their hands decides to write comments? It would be impossible to do science or make funding decisions. Science is too complex for overly simplistic ideas like this. It sounds a lot like a libertarian's blind faith in capitalism. Let the market decide. Completely unworkable.
Diogenes (Sinope)
@Full Name, Take a look at arxiv.org, which already does a pretty good job of organizing new research in a large number of the physical, mathematical, and computational sciences. It's certainly no worse than the smorgasbord of results one sees in the table of contents of most journals. Electronic search across multiple publications and publishers is the only realistic approach in this day & age, and existing publishers are resistant to pooling resources when it comes to search tools that would make their publications compete with those of other publishers. I like @Ivan's idea re: publish and disseminate first, review later. The challenge will be to figure out how to referee the referees; they definitely cannot be anonymous. Otherwise we risk having the whole enterprise devolve into the flame wars and personal attacks one sees on far too many "public commentary" sites.
What'sNew (Amsterdam, The Netherlands)
@Ivan Reviewers are needed because writers often do not see their own errors. I myself often did not/do not see my own errors. Of course, reviewers may be wrong. But in general, I find reviewers considerate, patient, friendly, with good manners. In cases where I argued my case, I have overcome resistance. The reviewing system constitutes a filter. Calling names such as 'old-fashioned' does not help. Keep what is good and improve what is bad, if you can. I only see improvements in the system. Reviewing is much faster than it used to be (my recent experience is months). After acceptance publishers often ask me for my comment on the overall process.
rjon (Mahomet, Ilinois)
Peer review, in all fields, is often good and sometimes bad, with all sorts of in-betweens. Peer review is simply a means of getting scholars to communicate, requiring that they be honest in what they have to say. They almost always are. But the result isn't always objectivity, nor is objectivity always even the goal. Objectivity in science these days generally means the assumption of a dead, material, molecular universe, and whether scientific scholarship kowtows to such a view is often the aim of a review; this has legitimacy in some scientific circles, not all. There are other ends in other forms of peer review. Daston and Galison, in Objectivity, for example, cite other goals of peer review: whether something is "true-to-nature," or whether an article exhibits "trained judgment." I will add that review may result even in a "one off" recommendation to publish in any number of non-scientific fields where human judgment and experience are paramount - typically we call such fields "the humanities." The pressures that degrade peer review are several, but they include the illegitimate claim that scientific review is the standard for all review, the need to pad vitae for promotion and tenure decision-making in a bureaucratic academic environment, and the rise of a so-called celebrity "star system" among often less-than-scholarly academics, as well as a decline in scholarly communication standards documented some years ago by Irving Horowitz.
LT (Durham, NC)
The peer review system needs a better way to shuttle papers to qualified, junior researchers. As a grad student and post doc, I never got requests to review papers. For those unfamiliar with the process, during submission, it is common practice to suggest qualified reviewers to the editor when you submit your work. However, as mentioned in the article, recommending a reviewer who is a very busy "top name" in the field means that they only have so much bandwidth to review papers on top of all of their other responsibilities. What actually happens in some cases (which I believe is technically in breach of the journals' confidentiality policies) is PIs will pawn off reviews onto their grad students and postdocs, and then submit them under their own name after some light editing. There should be a formal channel whereby a senior researcher can transfer the reviewing role to a grad student or post doc and mentor them on the review process. It's a win-win: junior researchers get experience reviewing, and let's be honest, sometimes they are better equipped to critique the details than the PI. They are the ones in the lab every day, after all... As for the comment: "Too often, we think that once a paper gets through peer review, it’s “truth.”" I don't know who "we" is in this case. I was trained, and train all of my mentees, to approach every article with a healthy level of skepticism. More first-hand experience with peer review would nurture this skepticism.
JM (NYC)
@LT Highly agree: PhD candidates whose dissertation proposals have been approved, as well as post-docs, can both assist with and benefit from involvement in reviewing. They are the ones most in touch with the field from all the work they have done to develop and complete their dissertations. And exposure to potential publications can hone skills in learning both how to and how not to write and put together a journal article.
Dana S (Long Beach, CA)
A part of why we academics should embrace voluntary peer reviewing, in addition to its being an expected aspect of service to our field, is that we can stay abreast of the most cutting-edge research. Reviewing enables scholars to read the most current work and ideas, given the lengthy time it takes for articles to get into press. My doctoral advisor used to train me through mock peer reviews. She would give me an article to read and review, and then critique and discuss my review. This or some similar kind of skills development could be embedded in all doctoral programs. Finally, we know too much about unconscious bias at this point to have any non-anonymous peer review processes; those should be revamped immediately. My field's journals are all anonymous; I cannot imagine having a completely bias-free review knowing the author names and institutions. Names that indicate gender or ethnicity, affiliations that are unfamiliar or loaded, institutions that do or don't have presumed prestige - all are ripe for discrimination.
Full Name (Location)
@Dana S And as the author states, the gender bias appears to favor women. Are you sure you want to give that up? Without anonymous reviews, there is no way to have even the chance for a fair review. Scientists are very thin-skinned, and if someone reviews you badly, you will get them back when you review something of theirs. No junior professor would dare criticize a senior colleague.
Htb (Los angeles)
Yes, peer review can sometimes be biased. And let's be a little more frank about one of the ways in which it can become biased: reviewer cartels. Researchers are rarely so crude as to send each other emails that say: "If you give my papers favorable reviews, then I'll give your papers favorable reviews." But researchers can and do collude in more subtle ways, often without fully realizing themselves what they are doing. Many researchers draw over and over from the same well of "recommended reviewers" when they submit their papers. It would be interesting to have hard data on the symmetries between authorship and reviewership: how often does the reviewer of a given paper ask an author of that paper to serve as a reviewer on papers of their own? Do journals keep in-house records of these symmetries? Perhaps journals should take active measures against such mutual backscratching. For example, a confidential industry-wide database could be formed to track author-reviewer symmetries, and used as a tool for implementing policies that break such symmetries. Discovery is not well served when the peer review system fosters the formation of echo chambers, in which small groups of reviewers give mutually favorable reviews of one another's papers.
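A minimal sketch of the symmetry check such a confidential database could run; the record format and all names here are hypothetical:

```python
# Sketch: flag reciprocal author-reviewer pairs from a journal's in-house
# records. Record format and names are made up for illustration.
records = [
    # (set of authors of a submitted paper, assigned reviewer)
    ({"Alice", "Bob"}, "Carol"),
    ({"Carol"}, "Alice"),   # Alice reviews Carol after Carol reviewed Alice
    ({"Dave"}, "Erin"),
]

def reciprocal_pairs(records):
    """Return pairs (x, y) where x reviewed a paper by y AND y reviewed a paper by x."""
    edges = set()  # directed (reviewer, author) edges
    for authors, reviewer in records:
        for author in authors:
            edges.add((reviewer, author))
    # keep an edge only if the reverse edge also exists
    return {tuple(sorted(edge)) for edge in edges if edge[::-1] in edges}

print(reciprocal_pairs(records))  # {('Alice', 'Carol')}
```

Pairs flagged this way are not proof of collusion, but they would tell an editor where to look.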
Full Name (Location)
@Htb YES! That is absolutely true. The first thing to start with is to publish the name of the editor with the paper. That would short-circuit some of it if an editor knew he/she would be publicly associated with a paper of a buddy, or with a paper that wasn't really worthy of publication.
Frued (North Carolina)
If peer-reviewed research leads to grant funding from the government, and the research is later determined to have been poorly reviewed, the scientists involved should reimburse the government.
RC (MN)
Peer review worked well until the past 3 or 4 decades, during which scientific research transitioned from a primarily intellectual activity to a profit-driven business. Combined with the massive over-training of PhDs and post-docs during this period, as well as the expansion of programs at marginal institutions that want in on the money, scientists now spend most of their time competing for grants and building their research groups in order to attain money, power, and positions. The outcome for peer review (particularly at "top" journals) is that it is now dominated by politics. It can't be fixed without addressing how money distorts the judgement of a scientific career.
Chris (SW PA)
Researchers are career-driven because it pays off. Most organizations reward researchers for the number of publications they produce. This dilutes the actual quality of the papers and helps to overwhelm the review system. Researchers are no different from most people in that they have their own best interests in mind, not some high-minded desire to seek truth. Truthfully, in the specific areas I am familiar with, most papers really don't add to our understanding of anything. If you published only the papers that are actually meaningful, 90% of publications would be rejected. The same can be said for patents, to an even larger extent; I would estimate that 99% of patents are not worth anything. The real issue to me is how poorly the leadership of universities, national labs and private research organizations understand actual science and which work is actually meaningful. They set up systems that reward high numbers of publications when in fact they should reward quality and impact, but they don't know what is impactful. There is also a lot of loyalty to old businesses with great political and monetary influence, so that cutting-edge, world-changing research is suppressed to protect the status quo. That is a very human characteristic; it's called greed. Lastly, politicians shovel money for political influence, and those who receive money for dubious research must publish to justify their awards. There is always a journal willing to play this game.
RBS (Little River, CA)
@Chris I agree with your point about the proportion of published papers that add no real value to a field. As an editor I lived for those few papers of real value. The frustration of dealing with a steady stream of mediocre papers, reviews that took months to complete, and most of my weekends spent in my office eventually led me to resign after years in that pursuit. Also, you can be mildly ostracized at conferences when your rejection rate is above 75%. The Chinese have also been flooding the market with new journals of dubious value, whose content has to be sorted through by conscientious scientists.
Janet W. (New York, NY)
Dr. Carroll is speaking specifically of peer review in medical & other science journals. I don't recall having heard of fabricated evidence or similar scholarly plagues in peer-reviewed humanities journals. Probably there have been some, but they don't make the headlines because humanities articles aren't life-or-death issues - except in academia's "publish or perish" sense. The humanities and social sciences, however, are guilty of a lot of gobbledegook - the use of lingo for other scholars in the know. Or not in the know: scholars whose head-scratching is a personal embarrassment at being outside the cool world of indecipherable language & murky ideas. Those sorts of high-falutin' stupidities have been mocked, & deservedly so. As long as an in-group has control over its own expertise or academic discipline, it will favor its own internal forms of communicating knowledge. Music theory is for the few; in most cases it has to be that way, otherwise the field can't progress. All academic scholarship, whatever the discipline, should be subject to some sort of peer review, whether institutionalized or less formal. Books, plays, films, sports, food, and even elections are subject to review by critics (another group of experts). That seems to be about the best we can do.
Dan Styer (Wakeman, OH)
@Janet W. says "I don't recall having heard of fabricated evidence or similar scholarly plagues in peer-reviewed humanities journals." And this is proof that Janet W. didn't bother clicking on Aaron Carroll's first link. That link speaks specifically and solely of plagues in peer-reviewed humanities journals. To quote from the link: "Something has gone wrong in the university — especially in certain fields within the humanities."
Brian E Davies (Mount Pleasant, SC)
'we think that once a paper gets through peer review, it’s “truth.”' An inherent problem is that there is hardly ever any money to support repeating the work that went into the paper. In experimental science similar work may be done and the new results may cast doubt on the conclusions in the cited paper. In those sciences depending on field work (ecology, geology etc.) I cannot imagine any circumstances where a repeat of the original survey would be funded. The result - we have to 'trust' the original conclusions.
thisisme (Virginia)
As a researcher, I'm constantly dismayed at the poor stats training graduate students undergo. Many of my peers took the 1 (or 2) mandatory stats classes required by our department, but that's it. They design experiments with no thought about whether their data will actually answer their questions. As a reviewer, this is very clear when I read manuscripts. Wrong analyses are used, assumptions aren't tested to see whether a particular analysis can be used, etc. I once said a paper should be accepted only with major modifications because the authors did not test whether the assumptions had been met for a linear regression to be used. Two of the other reviewers said the paper was fine, and it was ultimately accepted without the authors ever having to check their assumptions. There are different types of reviewers, and if journals want good reviews, they're going to need to provide incentives for both types. One type is subject-matter experts: individuals who understand the concepts, theoretical and applied, and can place them within the larger discipline. The other is methodological experts - statisticians, experimentalists, etc. - who can assess whether what was done actually answers the problem at hand. Often, in my opinion, these don't go hand in hand. Just because you have a theoretically sound paper doesn't mean what was done (e.g., survey, observations, stats) was correct. We need better training overall.
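For readers unfamiliar with what "checking the assumptions" involves here, a minimal sketch on simulated data of two standard diagnostics for ordinary least squares: the Shapiro-Wilk test for normality of residuals and the Breusch-Pagan test for constant variance (the simulated data deliberately violate the latter):

```python
# Sketch: fit OLS, then test its assumptions before trusting the fit.
import numpy as np
import statsmodels.api as sm
from scipy.stats import shapiro
from statsmodels.stats.diagnostic import het_breuschpagan

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 200)
y = 2.0 * x + rng.normal(0, 1 + 0.5 * x)  # noise grows with x: heteroscedastic

X = sm.add_constant(x)
fit = sm.OLS(y, X).fit()

_, shapiro_p = shapiro(fit.resid)            # normality of residuals
_, bp_p, _, _ = het_breuschpagan(fit.resid, X)  # constant variance

print(f"Shapiro-Wilk p = {shapiro_p:.3f} (low -> non-normal residuals)")
print(f"Breusch-Pagan p = {bp_p:.3f} (low -> heteroscedasticity; rethink the model)")
```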
KAO (Sioux Falls, SD)
@thisisme As a current grad student, I absolutely agree. We need more training in stats, scientific writing, presentations, and how to communicate with a non-scientific audience. It's not enough to take classes and do our research anymore. We're not being adequately prepared for careers, and oftentimes we're getting away with poor science.
MarkDFW (Dallas)
As Dr. Carroll correctly points out, peer review is the worst system except for all the others. One thing that has changed in the past 1-2 decades -- "premier" or "glam" journals often demand so much data and so many different techniques that no single reviewer has the expertise (or the time, actually) for a thorough critical review. Editors do their best, but certain segments of the paper make it through without an expert actually analyzing it. One solution would be to move away from the current model of journals wanting to publish issues full of "blockbuster" papers. And that change must ultimately be mandated by funding agencies and tenure/promotion committees.
Doug (Minnesota)
If the work submitted is not cookie-cutter similar to earlier research, then one wonders where the reviewing experts come from. Expertise is often narrow and can discount different approaches because they fall outside a frame of reference. As someone who is asked to review, I am amazed at the number of times I have to decline because I lack the substantive, theoretical, or methodological expertise. Perhaps there should be less publish-or-perish pressure and more incentive to do science carefully.
Dan Styer (Wakeman, OH)
According to Dr. Carroll: A significant improvement would require a change in attitude. Too often, we think that once a paper gets through peer review, it's "truth." We do? I don't know about Dr. Carroll, but I don't. I follow the typical attitude that all science is tentative. (A Google search on "all science is tentative" comes up with 40,900,000 hits. A Google search on "peer review produces truth" comes up with 5,120,000 hits, and the first few take pains to state that peer review DOESN'T ensure correctness.)
MBB (Nyc)
@Dan Styer I completely agree. Issues about the imperfections of peer review aside, published scientific information is always evolving and being tested as new information, techniques and technologies become available. What is reported in a scientific paper, no matter how well done, are the results of that particular study, which may hold up over time or may not. For example, in lung cancer, tyrosine kinase inhibitors were initially shown to have mixed or even negative results. It wasn't until the cases that did respond were more carefully evaluated that it was determined that adenocarcinomas with EGFR mutations were responsive to these drugs. Then it was determined that some EGFR mutations responded more readily than others, and now there is the issue of how best to combat resistance mutations and the evaluation of other factors that may contribute to the effectiveness of these drugs. Each publication is just a step toward a better understanding of disease. It is seldom if ever the final "truth".
Dan Styer (Wakeman, OH)
More on the prevalence of knowledge of peer review's imperfections: According to Press, Teukolsky, Vetterling, and Flannery, "Numerical Recipes: The Art of Scientific Computing": "If all scientific papers whose results were in doubt because of bad [random number generators] were to disappear from library shelves, there would be a gap on each shelf about as big as your fist." If I only published what I knew to be perfect, I would never publish at all.
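The canonical offender behind that warning is easy to demonstrate. RANDU, a generator in wide use in the 1960s and 70s, satisfies x[k+2] = 6*x[k+1] - 9*x[k] (mod 2**31) exactly, so every consecutive triple of its outputs falls on one of just 15 planes in 3-D space:

```python
# RANDU: x <- 65539 * x mod 2**31. Because 65539 = 2**16 + 3, squaring the
# multiplier mod 2**31 gives 6*65539 - 9, which forces the lattice structure
# checked below.
def randu(seed, count):
    x, out = seed, []
    for _ in range(count):
        x = (65539 * x) % 2**31
        out.append(x)
    return out

xs = randu(1, 1000)
violations = sum((xs[i + 2] - 6 * xs[i + 1] + 9 * xs[i]) % 2**31 != 0
                 for i in range(len(xs) - 2))
print("triples off the RANDU planes:", violations)  # 0: every triple obeys it
```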
tom (midwest)
Concur. The peer review process works if there is intellectual rigor in the process. I have been a reviewer, but my wife is the reviewer in the family. Our science careers had a lot to do with it, but her 30+ years of experience as a reviewer make her very good at her job. She is scrupulous about not reviewing publications outside her expertise. She has been editor or associate editor of various journals and is very discerning about any attempt to submit fake studies (she has caught a few). We never were paid, but we consider it both an honor and a duty to do the very best job of review, to ensure that the best-quality material gets published. As to hiding the names of authors: many scientists in our field have distinctive writing styles that make it fairly simple to determine the author. But then again, that ability only comes with years of experience.
kant (Colorado)
As an editor, I see two problems with the peer review system. 1. It is difficult to find experts in the field to review a manuscript. They are either too busy or do not feel the rewards for reviewing are worth their time and effort. Since reviewing papers does not advance one's career, it often feels like a waste of time; the well-established experts do not need any boost from reviewing anyway, and can spend their time doing rewarded research and publishing. As a result, we often end up with second-tier scientists reviewing manuscripts. That is not good. 2. The peer review system is biased toward consensus-driven research. Excellent but off-the-mainstream findings are often rejected for non-conformity with consensus views. The system discourages curiosity-driven research and reinforces mainstream research. Being rejected by journals may be indicative of poor-quality research or of excellent out-of-the-box research; it is hard to determine which. How best to improve the system? 1. Provide incentives for reviewers, especially senior experts in the field. Spending one's valuable time on reviews must be rewarded monetarily and/or otherwise. This will improve the effectiveness of peer review. 2. Journals should publish unfunded research without charging the usual publication fees, if the reviewers and the editor think it is of high quality. 3. Keep the identity of authors secret. This will curb groups of scientists approving one another's papers.
Steve (New York)
@kant I can't speak for other fields, but most of the major medical journals do not charge publication fees. In fact, those that do mostly publish inferior papers, which the authors essentially pay to have published in order to pad their CVs.
Donald Johnson (Colorado)
@kant How many research funders require: 1. A budget for highly trained and credible statisticians who can design surveys, computer models and product reviews? 2. A budget for five paid, trained reviewers who are recognized experts and reviewers in their specialties? 3. A budget for science editors who are paid and given the power to make articles readable for consumers as well as for the two or three scientific readers of the studies? 4. A budget for publishing successive versions of articles on the internet, along with comments by readers, until the research and resulting article can be "published"? 5. A budget for two replication projects designed to test the findings of a published article, whose findings are then published as described above? 6. A commitment by the institution that employs the researcher and his team to grant promotion credits only when the above types of research are funded and completed successfully, including when the research results in disappointing findings? 7. A budget for promoting the findings of the researchers after they have achieved final publication status for their work?
KB (Virginia)
Quality research should be simple in design, conclusive in result, and important in question. While peer review has all of the listed flaws, turning this job over to computers loses the level of judgment needed to evaluate quality. Further, algorithms are only as good as their input; until we can specifically recognize the hallmarks of good research, we can't know what machines need to learn in order to recognize quality.
CT (New York, NY)
NIH and NSF should mandate that federally funded graduate programs develop a peer review training class and have senior (post-qualifying-exam) grad students review some quota of papers in their area of expertise (e.g., 10) in high-quality open-access journals as a criterion for graduation.
Scientist (United States)
Another consideration here is that there is a large swath of researchers, often publishing in popular areas (e.g., medicine), who have extremely poor training in statistics. Those researchers are also reviewers. They are often impressed by large effects driven by small sample sizes, or by fancy-sounding but inappropriate models, as are some of the writers here. I discussed the poor quantitative training with the head of our MD/PhD program, wondering if we could recruit undergrads with more quantitative backgrounds but perhaps lower GPAs, and it was clear the national rankings discouraged this. Some professional editors at glam journals, who are the real gatekeepers, have similar limitations that prevent them from appreciating the strength of statistical evidence in some studies. As a reviewer, I've seen such a study rejected (another glam journal picked it up) and tried to "educate" the editor, to no avail. Learn stats if you want to do science, kids.
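The "large effects driven by small sample sizes" trap is easy to simulate: condition on p < 0.05 and the surviving effect estimates are inflated, worst at small n. A quick sketch with illustrative numbers only (true standardized effect fixed at 0.2):

```python
# Simulate many two-group studies with a small true effect; average the
# effect estimates among those that reach p < 0.05 (the "winner's curse").
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
TRUE_D = 0.2  # true standardized mean difference

for n in (20, 200):  # participants per group
    significant_effects = []
    for _ in range(2000):
        a = rng.normal(TRUE_D, 1, n)
        b = rng.normal(0.0, 1, n)
        if ttest_ind(a, b).pvalue < 0.05:
            significant_effects.append(a.mean() - b.mean())
    print(f"n={n}: mean effect among significant studies = "
          f"{np.mean(significant_effects):.2f} (truth: {TRUE_D})")
```

At n=20 per group, the studies that reach significance overstate the effect severalfold; at n=200, the inflation is much smaller.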
WSB (Manhattan)
@Scientist But poor stats may lead to publication, promotion, and grants for further research. If it's publish or perish, some people will use invalid stats and methods to get published.
Steve (New York)
@Scientist Most journals have statisticians ready to review and answer any questions regarding statistics that editors and reviewers may have, so in the real world this isn't a major problem.
Jaque (Champaign, Illinois)
With over 40 years of academic research and publishing, I have come to the conclusion that no review process is perfect, in spite of the best intentions of all involved. The main problem for a reviewer is the sheer volume of past work, which no human can read in an entire lifetime. This is where AI and machine learning can help. There is already some effort at IBM on medical-literature-based machine learning and diagnosis that is far better than human experts.
Scientist (United States)
Except for bias, which is widespread, this description seems more representative of the situation in medicine than of most of science. Training in peer review is part of every PhD program I know (I also train MD/PhDs). It's a critical part of helping junior researchers learn how to write good papers, sift through evidence, and give constructive feedback. I ask them to review with me from day 1 (with the editor's permission). We also, as a lab, informally review papers and preprints just about every week. NIH fellowships like to see some training in this too. Preprints are the norm, not the exception, in most scientific fields now; I do not know a journal that does not allow them. I regularly give and receive feedback on preprints, as do my colleagues. The real problem is persistent bias, which is only solved through careful adherence to protocols that evaluate papers on their own merit. I teach my lab to answer set questions as they read a paper, just as when you interview someone. Double-blind peer review really doesn't work once you know the field, attend conferences, and read preprints.
Greg Latiak (Amherst Island, Ontario)
As observed by others, these journals need to make a profit to thrive, and one way is to publish material whose themes will be accepted by their audience. The downside is that work challenging those ideas is often not well received and so is discouraged. Advertiser support should also not be ignored: papers challenging advertisers' products will not be well received. So orthodoxy is reinforced and radical new ideas are censored. This issue goes far beyond the fields of scientific and medical research and can be seen in almost every field of endeavor. The idea of peer review is still useful, but its implementation in the world of profit-driven publication has side effects that need to be acknowledged.
Christopher (Australia)
Journals are for profit, some journals are more okay than others, and reviewing comes down to unpaid labour. Science just isn't something we care about enough to make it better.
Philip (New York, NY)
@Christopher Not all journals are for-profit. But your points are well taken. We don't value scientific research as much as we should.
bro (houston)
@Philip All journals in my field are for profit. They are either published by companies with the sole purpose of making a profit for the owner(s) or by scientific societies that use the profit from their journal to fund other operations.
tom (midwest)
@Christopher Only a few journals in our field of study are for-profit, and almost all are published by truly non-profit societies and associations that merely want the journal to break even. It depends on the field of study.
Michael (St Petersburg, FL)
With falsification and fabrication being uncovered regularly, the integrity of scientific research is being threatened. The immediate step should be for editors to screen all papers prior to review with AI software that detects textual and statistical anomalies. Long term, the NIH should conduct independent verification studies of findings from studies with important health implications.
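One concrete, automatable screen of that sort is the "GRIM" test proposed by Brown and Heathers, which checks whether a mean reported for integer-valued data (e.g., Likert responses) is arithmetically possible given the stated sample size. A minimal sketch, with made-up example values:

```python
# GRIM-style check: for integer data, a sample mean must equal k/n for some
# integer total k, so a reported mean that no such k can produce is an anomaly.
def grim_consistent(reported_mean: float, n: int, decimals: int = 2) -> bool:
    """True if some integer total k makes k/n round to the reported mean."""
    k = round(reported_mean * n)  # nearest achievable integer total
    return round(k / n, decimals) == round(reported_mean, decimals)

print(grim_consistent(5.19, 28))  # False: no integer total / 28 rounds to 5.19
print(grim_consistent(5.18, 28))  # True: 145 / 28 = 5.1786 -> 5.18
```

A screen like this only flags candidates for human follow-up; a failed check can reflect a typo as easily as fabrication.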
Philip (New York, NY)
@Michael Why stop there? Why not completely turn the peer review process over to the computers? Humans are clearly the problem. Eliminate them from the process and you'll suddenly have perfect peer review!
Janet W Reid (Trumansburg NY)
Now, now ... programs to detect plagiarism (or the somewhat less heinous habit of lifting phrases from other papers — often done by non-native speakers) are certainly useful. And perhaps software to point out statistical anomalies. Beyond that, AI cannot replace human judgment. Folks, keep in mind, your editor and reviewers are truly your best friends.