September 29, 2022

EIC Update: American Naturalist policy on data and code archiving

Daniel Bolnick, Editor-In-Chief

September 5, 2022

In the past year, the Editorial Board of The American Naturalist has implemented several new policies aimed at improving the reproducibility and openness of the science that we publish. Starting in 2011, the journal required authors to publicly archive data on DataDryad or an equivalent version-controlled public repository; options include Figshare, Dataverse, Zenodo, and Dryad. This requirement has been crucial in enabling re-analyses of data for meta-analyses and other follow-up research, and in detecting some cases of error or misconduct. However, compliance with that data archiving policy has been less than optimal. A review of Dryad repositories from past years revealed that many archives are incomplete (missing key data) or uninterpretable because they lack sufficient metadata documentation. These findings are summarized in a recent AmNat Editor’s blog post (a guest post by lead Data Editor Bob Montgomerie) and in Roche et al. 2014, PLoS Biology (https://doi.org/10.1371/journal.pbio.1001779). In response to these problems, the Editorial Board has initiated several new policies.

1)    We now require that authors archive both data and any code used to generate the results reported in the paper. Data may include empirical measurements, outputs of simulations, and collections of previously published data assembled into new dataframes for analysis. Code may include statistical analyses (e.g., in R or Python), scripts summarizing statistical commands in proprietary software (e.g., JMP), simulations, or Mathematica notebooks. You can find The American Naturalist’s guidelines for data and code archiving here: https://comments.amnat.org/2021/12/guidelines-for-archiving-code-with-data.html. Dryad also publishes a list of best practices (https://datadryad.org/stash/best_practices) that you may wish to consult, though we do not require that authors use Dryad specifically.

2)    The data and code archive must be created prior to manuscript submission, and a private URL key must be provided to the journal upon submission. For instance, DataDryad allows the creation of a private archive that you can share exclusively with the Editors and reviewers, and which becomes public upon publication. This allows reviewers and editors to check the contents of the data for completeness, to evaluate the technical accuracy of the code used in analyses, and, potentially, to rerun analyses, all while keeping the repository private. Do note, however, that Dryad charges authors a fee once the article is published (AmNat used to cover this fee, but the expense recently became untenable). We encourage, but do not require, reviewers to examine the data and code. We ask reviewers and Data Editors not to comment on the elegance of the code; we care only that it works to generate the results reported in the manuscript.

3)    Data and code archives must be accompanied by a README file that clearly describes the contents of the archive. Variable names should be explained (units, etc.) so that readers can determine which variables were used in the analyses reported in the paper. If there are multiple data files, the relationship between them should be clearly stated (e.g., which column allows information in different files to be merged appropriately). If there are multiple code files, the README should state the order in which they should be run and what each one does. (A minimal example README appears after this list.)

4)    The data archive will be checked during the review process by a new cadre of Data Editors. This is a team of colleagues who evaluate whether the provided data files are complete and in csv format, and whether the code runs and is clearly annotated. A Data Editor report is provided to the authors of all provisionally accepted manuscripts. Any data archiving weaknesses identified by a Data Editor must be addressed prior to final acceptance.
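As a concrete illustration of the README requirement in point (3), a minimal README for a hypothetical archive might look like the following. The file names, variable names, and units are invented purely for illustration; this is not a required template.

README for: [Author et al., manuscript title]

Data files:
  fish_morphology.csv  -- one row per fish. Columns: fish_id (unique identifier), lake (lake name; use this column to merge with lake_conditions.csv), mass_g (wet mass, grams), length_mm (standard length, millimeters).
  lake_conditions.csv  -- one row per lake. Columns: lake (lake name), temperature_C (mean summer water temperature, degrees Celsius).

Code files (run in numbered order):
  01_clean_data.py  -- reads the two csv files, merges them on the "lake" column, and writes cleaned_data.csv.
  02_fit_models.py  -- fits the models reported in Table 1.
  03_figure1.py     -- recreates Figure 1 from cleaned_data.csv.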

 

We will not penalize authors who, voluntarily and in good faith, find and correct data or code errors after publication. We believe that corrections should be encouraged when warranted. This is not a new policy, but is important to reiterate.

 

While we are on the topic of reproducible science, here are some pointers for authors who use code in their data analyses and simulations. Even professional programmers make errors that affect the performance of the code they write. The question is: what can authors do to minimize the resulting risk of incorrect inferences, and what can journals do to help minimize such mistakes?

 

1.     Programming habits for authors. There are many good sources of advice on how authors can become better coders. For example, Wilson et al. (2014, PLoS Biology, https://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.1001745; 2017, PLoS Computational Biology, https://journals.plos.org/ploscompbiol/article/file?id=10.1371/journal.pcbi.1005510&type=printable) outline a set of recommendations. To highlight a few:

a.     Annotate thoroughly, especially focusing on documenting the purpose of each step, not just the mechanics.

b.     Use pair programming. Work in teams on code, preferably as you write.

c.      If this is not a collaboration, find a colleague to check your code with you, and return the favor for their project.

d.     After doing an analysis, close and reopen the software and re-run the code to make sure you get the same result.

e.     Generate simulated data for which you know the true pattern. Run it through your code and make sure it produces the correct results. This gives you more confidence that the inferences derived from your real data may be correct. (A minimal sketch of such a check appears after this list.)

f.     Don’t overwrite original data; create new versions as you modify data tables, so that the process can be traced. There are many ways to version your data files, and doing so is highly recommended rather than simply giving each file a new name and number.

g.     Make bite-sized sections of code, each for its own task, preferably ordered in the sequence presented in the publication. It is also a good practice to have a unique data file and code script file for each figure, so readers can readily recreate each figure on their own.
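To make points (e) and (f) concrete, here is a minimal sketch in Python (the same idea applies in R or any other language). The variable names, file names, and effect sizes are hypothetical, and the simple least-squares regression stands in for whatever analysis a given paper actually uses:

# sanity_check.py -- validate an analysis against simulated data with a known answer
# (hypothetical example; names and values are placeholders)
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=42)   # fixed seed so the check is reproducible

# Point (e): simulate data in which the true relationship is known.
true_intercept, true_slope, noise_sd = 2.0, 0.5, 1.0
temperature = rng.uniform(10, 30, size=200)
mass = true_intercept + true_slope * temperature + rng.normal(0, noise_sd, size=200)
simulated = pd.DataFrame({"temperature_C": temperature, "mass_g": mass})

# Run the simulated data through the same analysis used on the real data
# (here, an ordinary least-squares fit).
slope, intercept = np.polyfit(simulated["temperature_C"], simulated["mass_g"], deg=1)

# Confirm that the analysis recovers the known parameters (within sampling error).
assert abs(slope - true_slope) < 0.1, f"estimated slope {slope:.3f} is far from the true value"
assert abs(intercept - true_intercept) < 1.0, f"estimated intercept {intercept:.3f} is far from the true value"

# Point (f): never overwrite the original data file; write any modified table
# to a new, clearly named version instead.
simulated.to_csv("simulated_data_v1.csv", index=False)
print(f"Recovered slope = {slope:.3f} (true value = {true_slope})")

Re-running a check like this from a clean session (point d) before archiving also helps confirm that the script is self-contained and that its results do not depend on objects left over in memory.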

These, and other author best practices, are explained in our optional checklist: https://www.amnat.org/announcements/MS-Checklist.html

 

 

2.     The role of a journal. Given the risk that data analysis code does not reproduce the reported results, or violates statistical principles and assumptions, it is very tempting for journals to initiate a code review procedure. However, this would be a daunting task, and reviewers would be far less likely to accept review requests. Moreover, there is a high likelihood that a particular reviewer simply will not have the expertise to evaluate a particular piece of code, as not everyone ‘speaks’ every programming language. In addition, some author-generated software is highly resource-intensive, requiring computer cluster time or very long runs that cannot readily be duplicated in a time- and cost-effective way.

Periodically at The American Naturalist we do get reviewers (or Associate Editors) asking to see code so that they can check that a given step was done correctly. Often this is in the authors’ best interest: reviewers frequently raise questions about analytical methods that are not described in sufficient detail. Did the authors transform their data? Did they use Type I, II, or III sums of squares? Often authors have done the analyses appropriately but have not described their work in enough detail in the text. Reviewers then get concerned, raise questions, and become more critical of the paper. With code available, a reviewer has the opportunity (though not the obligation) to check the code directly and answer their own question, potentially avoiding a misunderstanding that could derail a paper’s prospects.

Providing code for reviewers to examine should have two benefits. First, it gives reviewers the option of examining the code for errors, or of clarifying steps that were not clearly described in the manuscript text. Second, and perhaps more importantly, it should motivate authors to check and annotate their code more carefully before submission. We expect that the prospect of having a stranger check one’s code will encourage authors to be careful. This self-policing should help identify errors pre-emptively.