December 1, 2021

Guidelines for Archiving Code with Data

The following is a cross-post from the Editor's blog of The American Naturalist, developed with input from various volunteers (credited below).




Daniel I. Bolnick, Roger Schürch, Daniel Vedder, Max Reuter, Leron Perez, Robert Montgomerie

Starting January 1, 2022, The American Naturalist will require that any analysis and simulation code (R scripts, Matlab scripts, Mathematica notebooks) used to generate reported results be archived in a public repository (we specifically prefer Dryad, see below). This has been our recommendation for a couple of years, and author compliance has been very common. As part of our commitment to Open and Reproducible Science, we are transitioning to make this a requirement. The following document, developed with input from a variety of volunteers, is intended to be a relatively basic guide to help authors comply with this new requirement.




The fundamental question you should ask yourself is, “If a reader downloads my data and code, will my scripts be comprehensible, and will they run to completion and yield the same results on their computer?” Any computer code used to generate scientific results should be readily usable by reviewers and readers. Sharing this information is vital because it promotes: (i) appropriate interpretation of results, (ii) checking the validity of analyses and conclusions, (iii) future data synthesis, (iv) replication, and (v) use as a teaching tool for anyone learning to do such analyses themselves. Shared code also provides greater confidence in results.

The recommendations below are designed to help authors conduct a final check when finishing a research project, before submitting a manuscript for publication. In our experience, you will find it easier to build reusable code and data if you adhere to these recommendations from the start of your research project. We separately list requirements and recommendations in each category below.




  Great template available here:




 Prepare a single README file with important information about your data repository as a whole (code and data files). Text (.txt or .rtf) and Markdown (.md) README files are readable by a wide variety of free and open-source software tools, and so have greater longevity. The README file should simply be called README.txt (or .rtf or .md). That file should contain, in the following order:

Citation to the publication associated with the datasets and code 

Author names, contact details

A brief summary of what the study is about 

 Identify who was responsible for collecting the data and writing the code.

List of all folders and files by name, and a brief description of their contents. For each data file, list all variables (e.g., columns) with a clear description of each variable (e.g., units)

Information about versions of packages and software used (including operating system) and dependencies (if these are not installed by the script itself). An easy way to get this information is to use sessionInfo() in R, or 'pip list --format freeze' in Python.
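For Python users, the equivalent of R's sessionInfo() can be assembled with the standard library alone. The sketch below (the function name session_info is our own choice, not a standard API) prints the Python version, operating system, and installed package versions in a form that can be pasted into the README:

```python
import sys
import platform
import importlib.metadata

def session_info():
    """Return a text summary of the Python environment for a README."""
    pkgs = sorted(
        f"{dist.metadata['Name']}=={dist.version}"
        for dist in importlib.metadata.distributions()
        if dist.metadata["Name"]  # skip any distribution with broken metadata
    )
    header = f"Python {sys.version.split()[0]} on {platform.platform()}"
    return "\n".join([header] + pkgs)

print(session_info())
```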


RECOMMENDED (for inclusion in the README file):

  Provide workflow instructions for users to run the software (e.g., explain the project workflow, and any configuration parameters of your software)

 Use informative names for folders and files (e.g., “code”, “data”, “outputs”)

  Provide license information, such as Creative Commons open-source license language granting readers the right to reuse code. This is not necessary for DRYAD repositories, as you choose licensing terms when submitting your files.

 If applicable, list funding sources used to generate the archived data, and include information about permits (collection, animal care, human research). This is not necessary for DRYAD repositories, as this information is also recorded when submitting your files.

  Link to or equivalent methods repositories where applicable



  Scripts should start by loading required packages, then importing raw data from files archived in your data repository.

  Use relative paths to files and folders (e.g. avoid setwd() with an absolute path in R), so other users can replicate your data input steps on their own computers. 
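As an illustration of the two points above, here is a minimal Python sketch that loads packages first and then reads data via a relative path; the file name data/measurements.csv and the function load_rows are hypothetical:

```python
import csv
from pathlib import Path

# A relative path works on any machine that downloads the repository;
# an absolute path such as "C:/Users/alice/project/data/measurements.csv"
# would only work on the original author's computer.
DATA_FILE = Path("data") / "measurements.csv"  # hypothetical file name

def load_rows(path):
    """Read a .csv data file into a list of dicts, one per observation (row)."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))
```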

 Annotate your code with comments indicating the purpose of each set of commands (i.e., the “why”). If the functioning of the code (i.e., the “how”) is unclear, strongly consider re-writing it to be clearer and simpler. In-line comments can provide specific details about a particular command.

 Annotate code to indicate how commands correspond to figure numbers, table numbers, or subheadings of results within the manuscript.

  If you are adapting other researchers’ published code for your own purposes, acknowledge and cite the sources you are using. Likewise, cite the authors of packages that you use in your published article.



  Test code before uploading to your repository, ideally on a pristine machine without any packages installed, but at least using a new session.

  Use informative names for input files, variables, and functions (and describe them in the README file).

  Any data manipulations (merging, sorting, transforming, filtering) should be done in your script, for fully transparent documentation of any changes to the data.

  Organise your code by splitting it into logical sections, such as importing and cleaning data, transformations, analysis, and generating graphics and tables. Sections can be separate script files run in order (as explained in your README), blocks of code within one script that are separated by clear breaks (e.g., comment lines, #--------------), or a series of function calls (which can facilitate reuse of code).

  Label code sections with headers that match the figure number, table number, or text subheading of the paper.
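A sketch of how a script might be split into labeled sections written as functions, with headers matching the paper; the section names, variables, and figure number are all hypothetical (Python shown, but the same structure works in R or Matlab):

```python
## 1. Import and clean data ------------------------------------------
def import_and_clean(raw_rows):
    """Drop observations with missing mass values."""
    return [r for r in raw_rows if r["mass_g"] is not None]

## 2. Transformations ------------------------------------------------
def transform(rows):
    """Convert mass from grams to kilograms."""
    return [{**r, "mass_kg": r["mass_g"] / 1000} for r in rows]

## 3. Figure 1: mean body mass ---------------------------------------
def figure_1_mean_mass(rows):
    """Summary statistic plotted in Figure 1 of the (hypothetical) paper."""
    return sum(r["mass_kg"] for r in rows) / len(rows)
```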

  Omit extraneous code not used for generating the results of your publication, or place any such code in a Coda at the end of your script.

  Where useful, save and deposit intermediate steps as their own files. In particular, if your scripts include computationally intensive steps, it can be helpful to provide their output as an extra file, giving users an alternative entry point for re-running your code.

  If your code contains any stochastic process (e.g., random number generation, bootstrap re-sampling), set a random number seed at least once at the start of the script or, better, for each random sampling task. This will allow other users to reproduce your exact results.

  Include clear error messages as annotations in your code that explain what might go wrong (e.g., if the user gave a text input where a numeric input was expected) and what the effect of the error or warning is.
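A short Python sketch of an informative error message; the function parse_mass and the column name mass_g are hypothetical:

```python
def parse_mass(value):
    """Convert a raw mass entry (string) to a float, with a clear error."""
    try:
        return float(value)
    except ValueError:
        # Explain what went wrong and where to look for the cause.
        raise ValueError(
            f"Expected a numeric mass in grams but got {value!r}; "
            "check the 'mass_g' column for stray text or units."
        ) from None
```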



Checklist for preparing data to upload to DRYAD (or other repository)


Repository contents 


  All data used to generate a published result should be included in the repository. For papers with multiple experiments or sets of observations, this may mean more than one data file.

 Save each file with a short, meaningful file name and extension (see DRYAD recommendations here).

 Prepare a README text file to accompany the data. Our recommendation is to put this in the single README file described above. For complex repositories where this README becomes unmanageably long, you may opt to create a set of separate README files, with one master README for the overall repository and more specific README files for code and for data. But our preference is one README. The README file(s) should provide a brief overall description of each data file’s contents and a list of all variable names with explanations (e.g., units). This should allow a new reader to understand what the entries in each column mean and relate this information to the Methods and Results of your paper. Alternatively, this may be a “Codebook” file in table format, with each variable as a row and columns providing variable names (as used in the file), descriptions (e.g., for axis labels), units, etc.

 Save the README file(s) as text (.txt) or Markdown (.md) files.

 Save all of the data files as comma-separated values (.csv) files. If your data are in Excel spreadsheets you are welcome to submit those as well (to be able to use colour coding and provide additional information, such as formulae), but each worksheet of data should also be saved as a separate .csv file.


 We recommend also archiving any digital material used to generate data (e.g., photos, sound recordings, videos), but this may require too much storage space for some repository sites. At a minimum, upload a few example files illustrating the nature of the material and the range of outcomes. We recognize that some projects entail too much raw data to archive every photo, video, etc., in its original state.

Data file contents and formatting  


 Archived files should include the raw data that you used when you first began analyses, not group means or other summary statistics; for convenience, summary statistics can be provided in a separate file, or generated by code archived with the data.

 Identify each variable (column names) with a short name. Variable names should preferably be <10 characters long and not contain any spaces or special characters that could interfere with reading the data and running analysis code. Use an underscore (e.g., wing_length) or camel case (e.g., WingLength) to distinguish words if you think that is needed.


 Omit variables not analyzed in your code.

 A common data structure is one in which every observation is a row and every variable is a column.

 Each column should contain only one data type (e.g., do not mix numerical values and comments or categorical scores in a single column).

  Use “NA” or equivalent to indicate missing data (and specify what you use in the README file).
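For instance, a Python script can convert the "NA" token to a proper missing value at import time; the column names and the helper read_with_na below are hypothetical (R's read.csv does this automatically via its na.strings argument):

```python
import csv
import io

# "NA" marks missing data, as documented in the README; column names
# here are hypothetical.
RAW = "wing_length,mass_g\n12.3,NA\n11.8,4.2\n"

def read_with_na(text, na_token="NA"):
    """Parse csv text, mapping the NA token to None and numbers to float."""
    rows = []
    for row in csv.DictReader(io.StringIO(text)):
        rows.append({k: (None if v == na_token else float(v))
                     for k, v in row.items()})
    return rows
```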



 Upload your data and code to a curated, version-controlled repository (e.g., DRYAD, zenodo). Your own GitHub account (or other privately or agency controlled website) does not qualify as a public archive because you control access and might take down the data at a later date.

 Provide all the metadata and information requested by the repository, even if this is optional and redundant with information contained in the README files. Metadata makes your archived material easier to find and understand.

 From the repository, get a private URL and provide this on submission of your manuscript so that editors and reviewers can access your archive before your data are made public.


 Prepare your data, code, and README files before or during manuscript preparation (analysis and writing).



More detailed guides to reproducible code principles can be found here:

Documenting Python Code: A Complete Guide -

A guide to reproducible code in Ecology and Evolution, British Ecological Society: 

Dokta tools for building code repositories:

Version management for python projects:

Principles of Software Development - an Introduction for Computational Scientists, with an associated code inspection checklist

Style Guide for Data Files

 See the Google R style guide and the Tidyverse style guide for more information

Google style guide for Python:




The American Naturalist requests that authors use DRYAD or zenodo for their archives when possible. 

· DRYAD and zenodo are curated, meaning there is some initial checking by DRYAD for completeness and consistency in both the data files and the metadata. DRYAD requires some compliance before it will allow a submission.

· We find it much easier and more convenient to download repositories from DRYAD/zenodo than to search the manuscript, etc., for the files or repository.

· Files on DRYAD/zenodo cannot be arbitrarily deleted or changed by authors or others after publication. DRYAD will allow changes if a good case can be made; all changes are documented and all versions are retained.

· DRYAD is free for Am Nat authors; we have a good working relationship with them, and they take our suggestions for improvement seriously.

· Editors, reviewers, and authors will all become familiar with the workings of DRYAD.

· DRYAD and zenodo are now linked: DRYAD is for data files and zenodo for everything else (code, PDFs, etc.). You need only upload all files to DRYAD and they will separate your archive into the appropriate parts. As you will see, your DRYAD repository provides a link to the files on zenodo and vice versa.

New American Naturalist editorial staff

Since Trish Morse retired in October 2020, Owen Cook has heroically managed all parts of The American Naturalist's editorial office solo. This includes checking incoming manuscripts for compliance, contacting Associate Editors picked by the Editors to handle a paper, contacting reviewers for the AEs, sending reminders about overdue reviews and recommendation letters, responding to author queries, checking manuscript files and giving feedback on figure formatting, and much, much more. He has truly been doing two jobs for the past year. Thanks, Owen!

Luckily we now have been able to recruit two part-time helpers to assist Owen. Please extend a warm welcome to:

Alex Yu:

Alex (she/her/hers) is a Master of Humanities (MAPH) student at UChicago, currently working on her creative writing & translation thesis; in her spare time she is a travel writer, poet, and vlogger.

Evan Williams

Evan Williams (he/him/his) is a poet and essayist studying at the University of Chicago, where he writes on topics ranging from surrealism to masculinity to the Svalbard Global Seed Vault.

January 21, 2021

 Editor's Note:

Since fall 2020, Robert Montgomerie has been leading a group of nascent Data Editors in designing a framework for monitoring compliance with data-sharing requirements at The American Naturalist. This entails both setting up policies for a future board of Data Editors, whose job will be to evaluate the compliance of manuscripts' data and metadata before acceptance, and evaluating where problems have arisen in the past. What follows is a brief summary from Bob Montgomerie of his findings on 2020 publications' compliance.  -Dan Bolnick

Data Transparency 2020

Bob Montgomerie


For the past decade, authors of papers published in The American Naturalist have been required to make their raw data publicly available (Whitlock et al. 2010), either as an online supplement or in a recognized public data repository. The American Naturalist was one of the first journals to make this commitment to reproducibility and transparency, and in the intervening 11 years many biology journals have followed suit. Despite this requirement, however, compliance has often been spotty (Roche et al. 2015), with data being incomplete, unintelligible, inconsistent, or non-existent, though by 2020 all papers in Am Nat made some data available to readers.


The myriad forms of, and problems with, data associated with papers are hardly surprising, as journals rarely, if ever, provide guidelines for authors. For that reason, The American Naturalist now has a specific set of guidelines for providing data, much like the usual guidelines to authors for manuscript style, and a small team of data editors to help authors comply. Our policies and procedures for data will undoubtedly evolve in the coming months, as our goal is to help authors make their data as transparent as possible while also saving time for both authors and downstream users of those datasets.


To provide a summary of the current state of data available for Am Nat papers, I looked at 100 papers published in 2020 (issues 1-5). By my count, 78 of those papers analyzed data that I expected to be made available (i.e., they were not purely analytical theory or synthetic reviews). The good news is that all but six of those papers made their raw data available: three of those six had embargoed their data for a reasonable period, and the other three had not yet made their data available, which we immediately rectified. The not-so-good news, and this applies to all journals that I use regularly, is that those data are too often incomplete or inscrutably difficult to understand (see graph).


The biggest issue, and the easiest to resolve, is that only about 25% of those papers with data are what we would now call ‘complete’, in that they provide useful information about the datasets and variables provided. In trying to use data from a variety of journals in my statistics courses over the past decade, I often found that it would take me hours or even days to replicate analyses, too often involving correspondence with the authors to figure out cryptic variable names and complex data structures.


The 75 data repositories that I looked at are remarkably diverse, involving 5 different repository platforms, anywhere from one to hundreds of files, some 34 different file types, and total repository sizes ranging from 20 KB to >13 GB. Anyone who has tried to open VisiCalc files from 1981, as I have, will appreciate the usefulness of simple file structures that will remain accessible for years to come as the landscape of data-handling software evolves.


The survey of papers published in 2020 provides a baseline to gauge our progress in making data associated with Am Nat papers useful and transparent, and our research optimally reproducible. We will revisit this sort of analysis in a year’s time and we welcome your comments and suggestions.



Roche DG, Kruuk LEB, Lanfear R, Binning SA. 2015. Public data archiving in ecology and evolution: how well are we doing? PLoS Biology 13(11): e1002295. PMID: 26556502.


Whitlock MC, McPeek MA, Rausher MD, Rieseberg L, Moore AJ. 2010. Data archiving. American Naturalist 175: 145-146.