
On Quilt Plots, and the need for editorial consistency

There has been quite some furore over a recent paper in PLOS ONE that presents “Quilt Plots”, and Neil Saunders, amongst others, has already blogged about this.  There are also quite a few choice comments on the journal’s website.  (I also missed this hilarious post by Jonathan Eisen)

The major issue seems to be that Quilt Plots are, in fact, heatmaps; they’ve been around for some time; creating them is fairly simple; and therefore this doesn’t represent anything novel.

On the publication itself

I have a “live and let live” attitude to publications, and so I am not going to call for retraction as some have.  If the authors want to publish this as a paper, I say good luck to them.  I understand very well the pressures on young scientists to publish, and adding to the list of peer-reviewed papers is always an attractive proposition.

The one thing I do want to say is that, as scientists, the papers we publish are a reflection of ourselves, and other people will judge our quality as scientists on the quality of our publications.  I’d just like to urge the authors of the Quilt Plots paper to reflect on this for a few moments.  Is “Quilt Plots” what you want to be known for?

On PLOS ONE and editorial control

My issues with PLOS ONE are well documented, and I think this episode backs up my opinion that PLOS ONE does not manage to achieve any level of editorial consistency.  The key policy is here, and the key sentence appears to be:

Utility. The tool must be of use to the community and must present a proven advantage over existing alternatives, where applicable. Recapitulation of existing methods, software, or databases is not useful and will not be considered for publication.

As I mentioned in another blog post, we had a paper rejected from PLOS ONE specifically because the reviewer and editor didn’t think our software represented an advantage over existing alternatives.  In short, they thought we should just use MG-RAST.  They were wrong – there were things our software did that MG-RAST didn’t, and vice versa.  In fact, our software had a very different purpose to MG-RAST, and so I was rather unsure why the reviewers thought this was an important comparison.  I pointed all of this out to PLOS ONE and didn’t get anywhere, so we published it in Frontiers in Bioinformatics and Computational Biology.

I’m not bitter about this; I am really happy to have a paper in FrontiersIn, and actually quite glad I don’t have a paper in PLOS ONE.

But my point is this – if Quilt Plots are a sufficient advance over heatmaps to warrant publication in PLOS ONE, then our paper should not have been rejected.  There is no consistency in the enforcement of the policy, and a total lack of editorial control.  This is not good and I can only see a downward spiral for PLOS ONE unless this changes.  At the heart of academic publishing surely sits the basic philosophy that all papers and authors will be treated fairly and equally.  I don’t think PLOS ONE are even getting close.

On opening the floodgates

If Quilt Plots represent a sufficient advance over heatmaps to warrant publication in PLOS ONE, then I’m afraid this may open the floodgates to 100s of new methods papers that represent, at best, small advances in bioinformatics.  When reviewers/editors point out the “Utility” section of the PLOS ONE editorial policy, then all authors need to do is point to the Quilt Plots paper.  If the editor doesn’t allow small advances to be published, then PLOS ONE will be accused of being unfair.  Arguments such as “If Quilt Plots are allowed to be published, then so should my code to make bar charts slightly better” will be put forward.  How can the editors refuse, given the Quilt Plots paper?


I want to muse for a while on Altmetrics.  The Quilt Plots paper has been tweeted about 264 times, and as far as I can tell, that’s a large number compared to other papers.  I’m guessing that the Altmetrics score for this article is very high.  Just think about that the next time you’re plugging Altmetrics as a better way of measuring research impact.


I wish the best of success to the authors of Quilt Plots, I really do, and I’d say that if you really want this paper to remain out there, then good luck to you, ignore what everyone else is saying and stick to your guns.  It’s the only way.

However, for me, my own personal opinion is that no-one comes out of this looking good – not the authors, not the journal, and not the field of bioinformatics.  There are still tons of biologists out there who do not see bioinformatics as a real science, who think we just click a few buttons and answers magically appear, who see us as support staff rather than research staff, something to be used not valued.  In that context, I don’t think this paper really does our field any favours.  This makes me sad.  Ho hum.


  1. The last para hits the nail on the head, on a much deeper level than is evident in the publishing game!

  2. Good points, well made. This lack of editorial consistency likely derives from the absence of a traditional journal editorial structure, including an Editor-in-Chief who is ultimately accountable for balance and fairness. Instead, the PLOS ONE model appears to be a loose federation of reviewers and associates, something of a quilt itself. Add to this the mind-boggling number of articles circulating through the system. A solution may be to stream submitted content into silos by subject and give editorial authority to academically recognised section editors for each.

  3. Make no mistake about it, academic editors are assigned based on their expertise, and so in the majority of cases the paper will be “handled” by an expert in a field relevant to the paper.

    The lack of consistency may be a result of the size and structure of the journal, but if they cannot deliver on the basic qualities of fairness and equality, one has to question why they exist and whether we should publish with them.

  4. All good, well-argued points.

    I have some sympathy for the authors too. They’ve written some code, found it useful and decided that it constitutes a least-publishable unit. It’s just that they gravely misjudged how least-publishable it is. Their latest comment indicates that they’re just a bit clueless when it comes to software distribution. I’m trying to be constructive and non-personal in my comments.

    The failure here is with the editors and reviewers, for not following their own guidelines.

  5. I was trying to get the authors to think whether they actually want to be associated with this paper or not; it’s becoming quite controversial, and certainly if I spotted it in an interviewees list of papers, they’d be facing some serious questions.

    If this had come out as a blog post, I think everyone would have been really happy.

    PLOS ONE are hugely responsible for this, though, and I do wonder what on Earth the editors were thinking.

  6. From what I can tell, PLoS One editors think only one thing: another paper is another chunk of cash. I’ve seen no evidence for meaningful peer review in PLoS One. The other PLoS journals have standards, but PLoS one is a trashbin for unpublishable papers.

  7. I wonder how this is different from any other article published that someone somewhere disagrees with or thinks is useless? There are hundreds of elsevier, npg and springer journals that publish stuff that makes little to no impact. I see no evidence that this is better or worse.

  8. @gasstationwithoutpumps I can only assume that you do not know how PLOS One works. As pointed out by Lksdf and biomickwatson, PLOS One functions with a large number of editors who accept or reject papers based on peer-review. I am one of these.

    We are not paid (unlike editors in most journals – here at least it’s open access and non-profit), and I have never – never – had the slightest hint that I should accept more papers, neither for financial nor for other reasons. I have also verified with the PLOS One offices that waivers are given systematically to those who ask for them, conditional on answering some minimal inquiry. I accept more than half the submissions that I manage because I subscribe to the philosophy of PLOS One, that a published result is better than a hidden result. The most frequent fate of papers which I manage is one to two rounds of major revision, which generally improve the paper dramatically. Often that includes claims being toned down. I’m fine with publishing papers which are not revolutionary and do not pretend to be.

    For the record, I have posted this discussion and others concerning this now infamous paper on the PLOS One editors discussion board, where it is being discussed. At no point in the discussion are financial issues even obliquely evoked.

    Also for the record, this paper is (justly in my opinion) criticized for containing insufficient novelty and not releasing the code (that will hopefully be corrected). It is not being criticized for being erroneous, which is a lot better in my opinion than the problems associated with quite a few “higher profile” journals.

  9. I think the push for ‘editorial consistency’ is what got us into this Science/Nature/Cell mess. The papers should be judged by the actual content and reputation of authors, and not reputation of journals. If an author publishes dumb papers too many times, we do not need to read and cite his future work any more.

    That was the traditional approach in physics, and people trusted the authors. New authors making big claims had to build reputation over years. It was also out of necessity, because math of some theoretical papers took weeks to months to sort out. So, it was impossible to work through the math of every paper. Science/Nature/Cell changed that respect toward ‘reputed researcher’ to ‘reputed journals’, which is ultimately bad for researchers.

  10. For me, and for you, this would have been nothing more than a blog post. You going to submit your blog posts to PLOS ONE now?

  11. Hi Marc, I’m sure I’m not alone in being curious as to how the discussion goes!

  12. And when I said editorial consistency, I didn’t mean in terms of quality, I mean that one paper should be treated with the same fairness and objectivity as any other. Fairness and objectivity sit at the very heart of peer review. My point is that PLOS ONE do not apply their policies fairly and consistently.

  13. Never going to happen, because PLOS One has so many editors that it is impossible to coordinate them. I had a similarly frustrating experience with the journals, and that led me to start the blog. The main purpose of starting it, and also of taking the time to find the “Best Bioinformatics Contribution of 2012”, “2013”, etc., has been to neutralize the journals, and more importantly to give weight to places like arXiv. If the community takes an active role in discussing useful papers, then it will be of little importance whether a paper ultimately got published in PLOS One or PNAS or Gigascience or wherever.

  14. > You going to submit your blog posts to PLOS ONE now?

    I want to get paid for writing (or at least not lose money), and want government out of this business altogether. Paying $1000/article is not profitable unless the government covers one’s @$$.

  15. So, the crucial point is consistency across editors. This was becoming a huge problem at one of the Society-driven journals that I am involved with (Genetics and G3, but that is not the point of my comment). Two measures are currently in place to achieve as much consistency as possible: 1) Below the editor-in-chief there are senior editors who are responsible for assigning the handling editors for each manuscript and who each have their own area. There are meetings (virtual and in-person) among the SEs and EIC where case studies are discussed. 2) Peer-editing: we have created a culture and an online system where it is very common to involve other editors in the decision-making process, either when deciding whether to send a paper for review or when the referee decisions are in. This will not make the overall result perfect, but at least more consistent.
    Furthermore, the decision to accept the offer of becoming an editor should not be taken lightly. You will not only be scrutinised by your peers on what you publish (as pointed out by Mick earlier), your decisions as an editor will also be under close scrutiny.

  16. Well if we’re giving up on fairness and objectivity, then that makes me very sad

  17. This is a great comment DJ, and for me, the key point is that the journal you describe is striving for editorial consistency; PLOS ONE needs to do more in this space.

  18. Being paid to write would be nice, yes 😉

  19. Did you read my blog post? 🙂

    To summarise my major points:

    1) the authors need to think about whether they want to be associated with this

    2) the paper does not satisfy PLOS ONE’s editorial policies on novelty.

    3) PLOS ONE does not apply its policies fairly and consistently

    I’m happy for people to publish their stuff, but this paper should have been a blog post and no more.

  20. Hi, sorry for the delay in getting back to you.
    The discussion is I suppose confidential in its details, but basically (1) the paper does not fulfill conditions for retraction, even if we can agree it might have been better not to publish it, and (2) there was some miscommunication between the editorial office and the editor, and they are going to try to improve this in the future.


© 2018 Opiniomics
