I love the process of Peer Review where you get judged by a group of unaccountable and usually anonymous "experts" who have all the power and to whom you have no avenue of response. So this is my response.
After an Appeal from
the author I was asked to adjudicate on the decision from Editor 1. I was able to review the prior reviews, as well as the author’s
Appeal document and a revised version of their manuscript.
I have read the paper carefully and I agree with Editor 1 that the paper
doesn’t meet the quality criteria we expect from a paper at Journal B. The paper
has clear methodological issues that need to be addressed. In addition, the
paper is sloppily prepared and the figures are poor.
1. The argument that FastTree can’t be used for bootstrap is absurd. If
FastTree is good enough for point estimates then it is good enough for
bootstrap. The bootstrap simply assesses the amount of variation expected under
the given inference method. It can be done with any tree reconstruction method,
however flawed that method may be. If you don’t trust FastTree then you should
use a different inference method. (Note that the FastTree paper explicitly
states that FastTree can be used for bootstrap.)
2. The methods are strange. You analyze two separate data sets (H5N8 sequences
vs. H5 sequences/N8 sequences) with entirely different methods. I’m not sure
these are even comparable. Importantly, this is never properly explained or justified.
3. The figures are largely impenetrable. It is standard procedure in
phylogenetic analyses to collapse and/or color branches, as well as label
groups of sequences rather than individual sequences, so that the key features
of the tree are clearly visible. Even though this paper's conclusions hinge
entirely on the shape of the phylogenetic trees that were obtained, you haven't
put much effort at all into producing proper, high-quality tree figures.
Moreover, you labeled each tip with the full strain name, which makes them
largely illegible (in particular when the font is small, e.g. Fig. 2).
4. The paper is sloppily written. To give just one example, in the revision,
you added the sentence "These trees are in good agreement with the much
more detailed and rigorous coalescent analysis carried out previously."
Importantly, there is no reference. I have no idea which previous analysis you
refer to. Similar issues permeate the paper. This paper simply isn't written in
such a way that readers can understand what exactly was done and why.
Most importantly, it is the author’s responsibility to prepare a compelling,
well-prepared manuscript. The work you have done may very well be (mostly) technically
correct, but it is not presented in a way that makes a useful contribution
to the field.
So let me see:
1) You do not bootstrap in FastTree; you bootstrap with seqboot and consense from Phylip, as FastTree has no bootstrap function built in. So if we are playing pedantry, I am right and Editor 2 is wrong. Why do I want to know the variation under an approximate method? It was created for large sequence trees, for speed not accuracy. The key to the paper is identifying whether the H5N8 hemagglutinin and neuraminidase genes are part of a single cluster or multiple clusters. The bootstrap tells you NOTHING about this, because it just tells you the variability of the method, nothing about the biology and the results. These same multiple clusters are identified in both the neuraminidase and hemagglutinin trees. These are independent samples, as opposed to the covariate sites tested in a bootstrap, so if they agree it is very unlikely the trees are wrong. Bootstraps add nothing other than a sop to the phylogenetics community, who cannot live without them.
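For readers unsure what the bootstrap actually resamples, here is a minimal sketch (a hypothetical toy alignment, not the paper's data) of how a single bootstrap pseudo-replicate is generated: columns of the alignment are drawn with replacement, so every replicate is built from the very same covarying sites. It never brings in independent data the way a second gene segment does.

```python
import random

def bootstrap_replicate(alignment, rng=random.Random(42)):
    """Resample alignment columns with replacement: one bootstrap pseudo-replicate.

    `alignment` is a list of equal-length sequence strings (toy data below).
    """
    n_cols = len(alignment[0])
    # Draw n_cols column indices with replacement
    cols = [rng.randrange(n_cols) for _ in range(n_cols)]
    return ["".join(seq[c] for c in cols) for seq in alignment]

# Toy alignment: 3 taxa, 8 sites (purely illustrative, not H5N8 data)
aln = ["ACGTACGT",
       "ACGAACGA",
       "TCGTACGT"]

rep = bootstrap_replicate(aln)

# A replicate always has the same dimensions as the original...
assert len(rep) == len(aln) and all(len(s) == len(aln[0]) for s in rep)
# ...and each resampled column is an exact copy of some original column,
# so replicates can only reshuffle the sites already in hand.
originals = {tuple(s[c] for s in aln) for c in range(len(aln[0]))}
assert all(tuple(s[c] for s in rep) in originals for c in range(len(rep[0])))
```

That is the point of the argument above: a tree rebuilt from such replicates can only tell you how stable the method is on this one data set, whereas agreement between the separately inherited hemagglutinin and neuraminidase trees is evidence from genuinely independent samples.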
2) I explained the methods: they both use ML GTR + gamma + I, just in different programs, FastTree for the big alignment and MEGA for the small alignments. Editor 2 had actually told me to use FastTree in a previous paper where these H5 and N8 trees were needed as supplementary data. In that case it was to confirm that my Bayesian tree did not contain recombinations, and it was in doing that that I found the result reported here. So I am quite stunned that he has forgotten that this is the method he actually suggested.
3) The figures are vector graphics; is it beyond the wit of an editor to zoom a figure? Even if they are in need of editing, THIS IS NEVER A REASON FOR REJECTING A PAPER. For revisions, yes, but rejection? You have to be joking. Each tip is labelled in that way because the BIOLOGY is what matters, i.e. the location and date of the sequences. Were I to shorten them to database identifiers, this would make interpreting the trees impossible in terms of biology, and it would also be impossible to see that the trees show a coherent pattern in terms of date and location. The problem is we get so carried away with algorithms that we no longer look at the data. If a cluster is all from New Jersey in 1989 then that is a reasonable cluster, in good agreement with geography and chronology.
4) I object to the word "sloppily". It is not a term that should ever enter an editor's response. You may say the paper is missing a reference at point A and point B, but "sloppily" is pure hyperbole and not worthy of a good editor. It is not a word I would ever use. Still more, it is not justification for anything beyond major revisions.
I have been an editor for 4 years and have edited 120 papers, and I would never write such a response or make such a decision without proper justification, especially given the final part about the work being mostly technically correct. Not being presented in a way that makes a useful contribution is NOT a CRITERION for rejection; at most it is grounds for major revisions, especially at initial submission. We do not judge on significance; we judge on it being SCIENCE.