There has been quite some furore over a recent paper in PLOS ONE that presents “Quilt Plots”, and Neil Saunders, amongst others, has already blogged about this. There are also quite a few choice comments on the journal’s website. (I also missed this hilarious post by Jonathan Eisen.)
The major issue seems to be that Quilt Plots are, in fact, heatmaps; they’ve been around for some time; creating them is fairly simple; and therefore this doesn’t represent anything novel.
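To illustrate just how simple creating a heatmap is: here is a minimal sketch in Python with matplotlib (the Quilt Plots paper itself used R, where `heatmap()` is a one-liner; the matrix, labels and colour map here are all made-up for illustration).

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt
import numpy as np

# Any numeric matrix will do; here, random values standing in for
# expression data (8 "genes" x 10 "samples" -- purely illustrative).
rng = np.random.default_rng(42)
data = rng.random((8, 10))

fig, ax = plt.subplots()
im = ax.imshow(data, cmap="viridis")  # the heatmap itself: one call
ax.set_xlabel("samples")
ax.set_ylabel("genes")
fig.colorbar(im, label="value")
fig.savefig("heatmap.png")
```

That is the entire method: colour a matrix by value and label the axes. The R equivalent (`heatmap(as.matrix(data))`) is shorter still, which is precisely why people objected to it being presented as a novel tool.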
On the publication itself
I have a “live and let live” attitude to publications, and so I am not going to call for retraction as some have. If the authors want to publish this as a paper, I say good luck to them. I understand very well the pressures on young scientists to publish, and adding to the list of peer-reviewed papers is always an attractive proposition.
The one thing I do want to say is that, as scientists, the papers we publish are a reflection of ourselves, and other people will judge our quality as scientists on the quality of our publications. I’d just like to urge the authors of the Quilt Plots paper to reflect on this for a few moments. Is “Quilt Plots” what you want to be known for?
On PLOS ONE and editorial control
My issues with PLOS ONE are well documented, and I think this episode backs up my opinion that PLOS ONE does not manage to achieve any level of editorial consistency. The key policy is here, and the key sentence appears to be:
Utility. The tool must be of use to the community and must present a proven advantage over existing alternatives, where applicable. Recapitulation of existing methods, software, or databases is not useful and will not be considered for publication.
As I mentioned in another blog post, we had a paper rejected from PLOS ONE specifically because the reviewer and editor didn’t think our software represented an advantage over existing alternatives. In short, they thought we should just use MG-RAST. They were wrong – there were things our software did that MG-RAST didn’t, and vice versa. In fact, our software had a very different purpose to MG-RAST, and so I was rather unsure why the reviewers thought this was an important comparison. I pointed all of this out to PLOS ONE and didn’t get anywhere, so we published it in Frontiers in Bioinformatics and Computational Biology.
I’m not bitter about this; I am really happy to have a paper in FrontiersIn, and actually quite glad I don’t have a paper in PLOS ONE.
But my point is this – if Quilt Plots are a sufficient advance over heatmaps to warrant publication in PLOS ONE, then our paper should not have been rejected. There is no consistency in the enforcement of the policy, and a total lack of editorial control. This is not good and I can only see a downward spiral for PLOS ONE unless this changes. At the heart of academic publishing surely sits the basic philosophy that all papers and authors will be treated fairly and equally. I don’t think PLOS ONE are even getting close.
On opening the floodgates
If Quilt Plots represent a sufficient advance over heatmaps to warrant publication in PLOS ONE, then I’m afraid this may open the floodgates to hundreds of new methods papers that represent, at best, small advances in bioinformatics. When reviewers/editors point out the “Utility” section of the PLOS ONE editorial policy, all authors will need to do is point to the Quilt Plots paper. If the editor doesn’t allow small advances to be published, then PLOS ONE will be accused of being unfair. Arguments such as “If Quilt Plots are allowed to be published, then so should my code to make bar charts slightly better” will be put forward. How can the editors refuse, given the Quilt Plots paper?
I want to muse for a while on Altmetrics. The Quilt Plots paper has been tweeted about 264 times, and as far as I can tell, that’s a large number compared to other papers. I’m guessing that the Altmetrics score for this article is very high. Just think about that the next time you’re plugging Altmetrics as a better way of measuring research impact.
I wish the best of success to the authors of Quilt Plots, I really do, and I’d say that if you really want this paper to remain out there, then good luck to you, ignore what everyone else is saying and stick to your guns. It’s the only way.
However, my own personal opinion is that no-one comes out of this looking good – not the authors, not the journal, and not the field of bioinformatics. There are still plenty of biologists out there who do not see bioinformatics as a real science, who think we just click a few buttons and answers magically appear, who see us as support staff rather than research staff – something to be used, not valued. In that context, I don’t think this paper does our field any favours. This makes me sad. Ho hum.