From Editor 1:
Andrew incorrectly interprets the rejection as being based on the idea that the sequences are in the public domain and therefore not interesting to reevaluate. This is not the case. I have many papers based on such assembled data sets. The point the reviewers have made, though, is that the manuscript is not clear on what advances are made based on these analyses.
Reviewer: Given that almost all of the data for these analyses came from published studies, it was not clear if there were differences between these results and those studies, and if so, was this due to using different datasets or different analyses from those studies?
Response: I am confused as to why using data from a public dataset cannot be original. If this were true, no papers based on the human genome project would be valid after the initial publication.

Andrew is missing the point. It is not that he used publicly available data; it is that he has not articulated what new insights he has gained from combining and re-analyzing these data.
The reviewers also had issues with some of the methods. Andrew excuses these because they are 'comparable in how they were produced' in previous studies (a poor criterion for method choice!). He excuses the lack of bootstrap values because he used the FastTree approach, which is a poor approach to phylogeny estimation and an even poorer excuse for not getting bootstrap values. To provide a tree without bootstrap values or posterior probabilities is simply poor science. It is providing a point estimate with no confidence interval or variance estimate. There are fast bootstrap approaches, even in a maximum likelihood framework, as well as Bayesian approaches, that should have been explored.
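The editor's statistical point here (a point estimate alone says nothing about how stable the result is) can be illustrated with a minimal nonparametric bootstrap sketch. This is a generic toy example, not a reconstruction of either party's actual phylogenetic analysis; the data values and statistic are invented for illustration.

```python
import random

random.seed(0)

# Toy observations standing in for any per-site statistic (hypothetical data).
data = [2.1, 2.4, 1.9, 2.6, 2.3, 2.0, 2.8, 2.2, 2.5, 1.8]

# The point estimate alone: no sense of its uncertainty.
point_estimate = sum(data) / len(data)

# Nonparametric bootstrap: resample with replacement, recompute the statistic.
reps = 1000
boot = []
for _ in range(reps):
    sample = [random.choice(data) for _ in data]
    boot.append(sum(sample) / len(sample))
boot.sort()

# A 95% percentile interval quantifies the variability the editor asked for.
lo, hi = boot[int(0.025 * reps)], boot[int(0.975 * reps)]
print(f"estimate={point_estimate:.2f}, 95% CI=({lo:.2f}, {hi:.2f})")
```

In phylogenetics the same resampling idea is applied to alignment columns, and the "statistic" is the estimated tree, with support recorded per branch; but the underlying logic of attaching a variance estimate to a point estimate is the same.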
Clearly the reviewers were confused because of the writing of the paper. Andrew tries to clarify a few things in the response letter and edits to the manuscript, but this only confirms the fact that the reviewers were confused by the previous version. This revised draft is still rushed and only addresses a few of the concerns (not many of the methodological concerns). So I stand behind the rejection decision. That said, I would be happy to have Andrew resubmit a new version that addresses the concerns in a more robust way. But if he is unwilling to put the time in to do a proper phylogenetic analysis and articulate in the manuscript what and how he is doing such analyses, then I don’t want to waste reviewers’ time with it again.
Only one reviewer was confused: his poor PhD student. The first author states that he understood it clearly, and even his own PhD student says that it is written well! I chose the methods not to compare to other studies, as Editor 1 states, but for internal consistency. That is most definitely good science and a fair test. It is actually fundamental to experimental design. As a statistician I am well aware of point estimates and confidence intervals, but I am also aware of the limits of bootstraps and bootstrap sampling.
So I sent a strongly worded reply to the Editor in Chief and asked for an appeal. I suspected that it would be passed to Editor 2, with whom I have had previous dealings; I had told the Editor in Chief that I never wanted Editor 2 to touch any paper I submitted to the journal again, because he is a pedant. Foolish me, I forgot to remind the Editor in Chief of this.
I greatly like the Editor in Chief. I suspect he has to deal with a large number of irate authors, but I think he would have fewer if his editors took their job seriously and not personally.