December 1, 2021

Guidelines for Archiving Code with Data

The following is a cross-post from the Editor's blog of The American Naturalist, developed with input from various volunteers (credited below).




Daniel I. Bolnick, Roger Schürch, Daniel Vedder, Max Reuter, Leron Perez, Robert Montgomerie

Starting January 1, 2022, The American Naturalist will require that any analysis and simulation code (R scripts, Matlab scripts, Mathematica notebooks) used to generate reported results be archived in a public repository (e.g., Dryad, FigShare, DataVerse, GitHub archived on Zenodo). This has been our recommendation for a couple of years, and most authors have already been complying. As part of our commitment to Open and Reproducible Science, we are now making this a requirement. The following document, developed with input from a variety of volunteers, is intended to be a relatively basic guide to help authors comply with this new requirement.




The fundamental question you should ask yourself is, “If a reader downloads my data and code, will my scripts be comprehensible, and will they run to completion and yield the same results on their computer?” Any computer code used to generate scientific results should be readily usable by reviewers or readers. Sharing this information is vital because it promotes: (i) the appropriate interpretation of results, (ii) checking the validity of analyses and conclusions, (iii) future data synthesis, (iv) replication, and (v) its use as a teaching tool for anyone learning to do analyses themselves. Shared code provides greater confidence in results.

The recommendations below are designed to help authors conduct a final check when finishing a research project, before submitting a manuscript for publication. In our experience, you will find it easier to build reusable code and data if you adhere to these recommendations from the start of your research project. We list requirements and recommendations separately in each category below.




  Great template available here:




 Prepare a single README file with important information about your data repository as a whole (code and data files). Text (.txt or .rtf) and Markdown (.md) README files are readable by a wide variety of free and open-source software tools, and so have greater longevity. The README file should simply be called README.txt (or .rtf or .md). That file should contain, in the following order:

Citation to the publication associated with the datasets and code 

Author names, contact details

A brief summary of what the study is about 

 Identify who is responsible for collecting data and writing code.

List of all folders and files by name, and a brief description of their contents. For each data file, list all variables (e.g., columns) with a clear description of each variable (e.g., units)

Information about versions of packages and software used (including operating system) and dependencies (if these are not installed by the script itself). An easy way to get this information is to use sessionInfo() in R, or 'pip list --format freeze' in Python.
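As an illustration (a minimal Python sketch; `sessionInfo()` is the R equivalent), this version information can be generated programmatically and pasted into the README:

```python
# Collect Python, OS, and installed-package versions for the README.
# A minimal sketch; adapt to your own project's toolchain.
import platform
import sys
from importlib import metadata


def environment_report():
    lines = [f"Python {sys.version.split()[0]} on {platform.platform()}"]
    # Equivalent to `pip list --format freeze`: one "name==version" per package.
    dists = sorted(metadata.distributions(),
                   key=lambda d: (d.metadata["Name"] or "").lower())
    for dist in dists:
        lines.append(f"{dist.metadata['Name']}=={dist.version}")
    return "\n".join(lines)


if __name__ == "__main__":
    print(environment_report())
```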


RECOMMENDED (for inclusion in the README file):

  Provide workflow instructions for users to run the software (e.g., explain the project workflow, and any configuration parameters of your software)

 Use informative names for folders and files (e.g., “code”, “data”, “outputs”)

  Provide license information, such as Creative Commons open-source license language granting readers the right to reuse code. This is not necessary for Dryad repositories, as you choose licensing standards when submitting your files.

 If applicable, list funding sources used to generate the archived data, and include information about permits (collection, animal care, human research). This is not necessary for Dryad repositories, as this information is also recorded when submitting your files.

  Link to or equivalent methods repositories where applicable



  Scripts should start by loading required packages, then importing raw data from files archived in your data repository.

  Use relative paths to files and folders (e.g. avoid setwd() with an absolute path in R), so other users can replicate your data input steps on their own computers. 
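For instance, in Python (folder and file names here are purely illustrative), paths can be built relative to the repository root rather than hard-coded to one machine:

```python
# Use paths relative to the repository root (the directory from which
# the script is run), never an absolute path such as
# /Users/alice/project/data that will not exist on a reader's machine.
from pathlib import Path

DATA_DIR = Path("data")      # hypothetical folder names
OUT_DIR = Path("outputs")

measurements_file = DATA_DIR / "measurements.csv"  # illustrative file name
print(measurements_file)
assert not measurements_file.is_absolute()
```

The same principle applies in R: avoid `setwd()` with an absolute path and build paths relative to the project root instead.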

 Annotate your code with comments explaining the purpose of each set of commands (the “why”). If how the code works (the “how”) is unclear, strongly consider rewriting it to be clearer or simpler. In-line comments can provide specific details about a particular command.

 Annotate code to indicate how commands correspond to figure numbers, table numbers, or subheadings of results within the manuscript.

  If you are adapting other researchers’ published code for your own purposes, acknowledge and cite the sources you are using. Likewise, cite the authors of packages that you use in your published article.



  Test code before uploading to your repository, ideally on a pristine machine without any packages installed, but at least using a new session.

  Use informative names for input files, variables, and functions (and describe them in the README file).

  Any data manipulations (merging, sorting, transforming, filtering) should be done in your script, for fully transparent documentation of any changes to the data.

  Organise your code by splitting it into logical sections, such as importing and cleaning data, transformations, analyses, and graphics and tables. Sections can be separate script files run in order (as explained in your README), blocks of code within one script separated by clear breaks (e.g., comment lines, #--------------), or a series of function calls (which can facilitate reuse of code).
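The last option, a series of function calls, might look like this minimal sketch (function names and placeholder data are hypothetical):

```python
# Each analysis section is one function; the main block runs them
# in order, mirroring the sections described in the README.

def import_and_clean():
    """Section 1: read raw CSVs, drop malformed rows (placeholder data)."""
    return [1.0, 2.0, 3.0]

def transform(data):
    """Section 2: derived variables, e.g. a simple rescaling."""
    return [x * 2 for x in data]

def analyse(data):
    """Section 3: the statistic reported in the Results."""
    return sum(data) / len(data)

def report(result):
    """Section 4: figures and tables (Fig. 1, Table 2, ...)."""
    print(f"Mean of transformed values = {result}")

if __name__ == "__main__":
    report(analyse(transform(import_and_clean())))
```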

  Label code sections with headers that match the figure number, table number, or text subheading of the paper.

  Omit extraneous code not used for generating the results of your publication, or place any such code in a Coda at the end of your script.

  Where useful, save and deposit intermediate steps as their own files. Particularly, if your scripts include computationally intensive steps, it can be helpful to provide their output as an extra file as an alternative entry point to re-running your code. 

  If your code contains any stochastic process (e.g., random number generation, bootstrap re-sampling), set a random number seed at least once at the start of the script or, better, for each random sampling task. This will allow other users to reproduce your exact results.
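A brief sketch in Python (in R, `set.seed()` serves the same purpose); the seed value itself is arbitrary but must be fixed and documented:

```python
# Seed the random number generator at the start of the script so that
# bootstrap resamples are reproducible by other users.
import random

random.seed(20220101)  # arbitrary, fixed, and documented

data = [4.2, 5.1, 3.8, 6.0, 4.9]  # placeholder observations
# One bootstrap resample: draw len(data) values with replacement.
resample = random.choices(data, k=len(data))
print(resample)
```

Re-running the script from the top yields the identical resample, which is exactly what a reader replicating your results needs.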

  Include clear error messages as annotations in your code that explain what might go wrong (e.g., if the user gave a text input where a numeric input was expected) and what the effect of the error or warning is.
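For example (a hypothetical validation function), an input check can fail early with a message stating both the problem and its consequence:

```python
# Validate user input early; the error message explains what was
# expected, what was received, and what would otherwise go wrong.
def set_sample_size(n):
    if not isinstance(n, int) or n <= 0:
        raise ValueError(
            f"sample size must be a positive integer, got {n!r}; "
            "without it the resampling loop cannot run"
        )
    return n

print(set_sample_size(100))
```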



Checklist for preparing data to upload to DRYAD (or other repository)


Repository contents 


  All data used to generate a published result should be included in the repository. For papers with multiple experiments or sets of observations, this may mean more than one data file.

 Save each file with a short, meaningful file name and extension (see DRYAD recommendations here).

 Prepare a README text file to accompany the data. Our recommendation is to put this in the single README file described above. For complex repositories where this readme becomes unmanageably long, you may opt to create a set of separate README files for the overall repository, with one master README and more specific README files for code and for data. But, our preference is one README. The README file(s) should provide a brief overall description of each data file’s contents, and a list of all variable names with explanation (e.g. units). This should allow a new reader to understand what the entries in each column mean and relate this information to the Methods and Results of your paper. Alternatively, this may be a “Codebook” file in a table format with each variable as a row and column providing variable names (in the file), descriptions (e.g. for axis labels), units, etc. 

 Save the README file(s) as text (.txt) or Markdown (.md) files.

 Save all of the data files as comma-separated values (.csv) files. If your data are in Excel spreadsheets, you are welcome to submit those as well (to be able to use colour coding and provide additional information, such as formulae), but each worksheet of data should also be saved as a separate .csv file.


 We recommend also archiving any digital material used to generate data (e.g., photos, sound recordings, videos, etc), but this may require too much storage space for some repository sites. At a minimum, upload a few example files illustrating the nature of the material and a range of outcomes. We recognize that some projects entail too much raw data to archive all the photos / videos / etc in their original state.

Data file contents and formatting  


 Archived files should include the raw data that you used when you first began analyses, not group means or other summary statistics; for convenience, summary statistics can be provided in a separate file, or generated by code archived with the data.

 Identify each variable (column names) with a short name. Variable names should preferably be <10 characters long and not contain any spaces or special characters that could interfere with reading the data and running analysis code. Use an underscore (e.g., wing_length) or camel case (e.g., WingLength) to distinguish words if needed.


 Omit variables not analyzed in your code.

 Structure your data so that every observation is a row and every variable is a column.

 Each column should contain only one data type (e.g., do not mix numerical values and comments or categorical scores in a single column).

  Use “NA” or equivalent to indicate missing data (and specify what you use in the README file)
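As a small illustration in Python (the inline text stands in for a real .csv file), the “NA” code declared in the README can be mapped to a proper missing value on import:

```python
# Read a CSV in which missing data are coded as "NA", converting them
# to Python's None; the inline string stands in for a real data file.
import csv
import io

raw = "id,wing_length\n1,12.3\n2,NA\n3,11.8\n"

wing_lengths = []
for row in csv.DictReader(io.StringIO(raw)):
    value = row["wing_length"]
    wing_lengths.append(None if value == "NA" else float(value))
print(wing_lengths)
```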



 Upload your data and code to a curated, version-controlled repository (e.g., Dryad, Zenodo). Your own GitHub account (or another privately or agency-controlled website) does not qualify as a public archive, because you control access and might take down the data at a later date.

 Provide all the metadata and information requested by the repository, even if this is optional and redundant with information contained in the README files. Metadata makes your archived material easier to find and understand.

 From the repository, get a private URL and provide this on submission of your manuscript so that editors and reviewers can access your archive before your data are made public.


 Prepare your data, code, and README files before or during manuscript preparation (analysis and writing).

➤ If you use a GitHub repository and generate an archive on Zenodo, we recommend removing ALL extraneous files except the core dataset, code, and README file, so as not to clutter the archive in ways that make it harder for readers to understand which files to use.



More detailed guides to reproducible code principles can be found here:

Documenting Python Code: A Complete Guide -

A guide to reproducible code in Ecology and Evolution, British Ecological Society: 

Dokta tools for building code repositories:

Version management for python projects:

Principles of Software Development - an Introduction for Computational Scientists, with an associated code inspection checklist

Style Guide for Data Files

 See the Google R style guide and the Tidyverse style guide for more information

Google style guide for Python:




The American Naturalist requests that authors use Dryad or Zenodo for their archives when possible.

·  Dryad and Zenodo are curated, which means there is some initial checking for completeness and consistency in both the data files and the metadata. Dryad requires some compliance before allowing a submission.

·  We find it much easier and more convenient to download repositories from Dryad or Zenodo than to search the manuscript and supplements for the files or repository.

·  Files on Dryad and Zenodo cannot be arbitrarily deleted or changed by authors or others after publication. Dryad will allow changes if a good case can be made; all changes are documented and all versions are retained.

·  Dryad is free for Am Nat authors, we have a good working relationship with them, and they take our suggestions for improvement seriously.

·  Editors, reviewers, and authors will all become familiar with the workings of Dryad.

·  Dryad and Zenodo are now linked: Dryad is for data files and Zenodo for everything else (code, PDFs, etc.). You need only upload all your files to Dryad, and they will separate your archive into the appropriate parts. Your Dryad repository provides a link to the files on Zenodo and vice versa.

New American Naturalist editorial staff

Since Trish Morse retired in October 2020, Owen Cook has heroically managed all of the parts of The American Naturalist's editorial office solo. This includes checking incoming manuscripts for compliance, contacting Associate Editors picked by the Editors to handle a paper, and contacting reviewers for the AEs, sending reminders about overdue reviews and recommendation letters, responding to author queries, checking manuscript files and giving feedback on figure formatting, and much much more. He has truly been doing two jobs for the past year. Thanks Owen!

Luckily we now have been able to recruit two part-time helpers to assist Owen. Please extend a warm welcome to:

Alex Yu:

Alex (she/her/hers) is a Master of Humanities (MAPH) student at UChicago, currently working on her creative writing & translation thesis; in her spare time she is a travel writer, poet, and vlogger.

Evan Williams
Evan Williams (he/him/his) is a poet and essayist studying at the University of Chicago where he writes on topics ranging from surrealism to masculinity to the Svalbard Global Seed Vault.  

January 21, 2021

 Editor's Note:

Since fall 2020, Robert Montgomerie has been leading a group of nascent Data Editors in the task of designing a framework for monitoring compliance with data-sharing requirements at The American Naturalist. This entails both setting up policies for a future board of Data Editors, whose job will be to evaluate the compliance of manuscripts' data and metadata before acceptance, and evaluating where problems have arisen in the past. What follows is a brief summary from Bob Montgomerie of his findings looking back at compliance among 2020 publications.  -Dan Bolnick

Data Transparency 2020

Bob Montgomerie


For the past decade, authors of papers published in The American Naturalist have been required to make their raw data publicly available (Whitlock et al. 2010), either as an online supplement or in a recognized public data repository. The American Naturalist was one of the first journals to make this commitment to reproducibility and transparency, and in the intervening 11 years many biology journals have followed suit. Despite this requirement, however, compliance has been spotty (Roche et al. 2015), with data too often incomplete, unintelligible, inconsistent, or non-existent, though by 2020 all papers in Am Nat made some data available to readers.


The myriad forms of, and problems with, data associated with papers are hardly surprising, as journals rarely, if ever, provide guidelines for authors. For that reason, The American Naturalist now has a specific set of guidelines for providing data, much like the usual guidelines to authors for manuscript style, and a small team of data editors to help authors comply. Our policies and procedures for data will undoubtedly evolve in the coming months, as our goal is to help authors make their data as transparent as possible, while also saving time for both authors and downstream users of those datasets.


To provide a summary of the current state of data available for Am Nat papers, I looked at 100 papers published in 2020 (issues 1-5). By my count, 78 of those papers analyzed data that I expected to be made available (i.e., they were not purely analytical theory or synthetic reviews). The good news is that all but six of those papers made their raw data available: three had embargoed their data for a reasonable period, and three others had not yet made their data available, which we immediately rectified. The not-so-good news, and this applies to all journals that I use regularly, is that those data are too often incomplete, or inscrutably difficult to understand (see graph).


The biggest issue, and an easy one to resolve, is that only about 25% of those papers with data are what we would now call ‘complete’, in that they provide useful information about the datasets and variables. In trying to use data from a variety of journals in my statistics courses over the past decade, I often found that it would take me hours or even days to replicate analyses, too often involving correspondence with the authors to figure out cryptic variable names and complex data structures.


Those 75 data repositories that I looked at are remarkably diverse, involving 5 different repository sites, from one to hundreds of files each, some 34 different file types, and total repository sizes ranging from 20 KB to >13 GB. Anyone who has tried to open VisiCalc files from 1981, as I have, will appreciate the usefulness of simple file structures that will remain accessible for years to come as the landscape of data-handling software evolves.


The survey of papers published in 2020 provides a baseline to gauge our progress in making data associated with Am Nat papers useful and transparent, and our research optimally reproducible. We will revisit this sort of analysis in a year’s time and we welcome your comments and suggestions.



Roche DG, Kruuk LEB, Lanfear R, Binning SA. 2015. Public data archiving in ecology and evolution: how well are we doing? PLOS Biology 13(11): e1002295. PMID: 26556502


Whitlock MC, McPeek MA, Rausher MD, Rieseberg LH, Moore AJ. 2010. Data archiving. The American Naturalist 175: 145-146.

January 17, 2021

Call for Special Topics paper submissions

 Nature, Data, and Power: How hegemonies shape biological knowledge

The American Naturalist calls for proposals of manuscripts that address how systems of power and oppression have shaped theory and practice in organismal biology (including but not limited to behavior, ecology, evolution, and genetics). Social relations of power, such as white supremacy, colonialism, misogyny, cissexism, ableism, and heteronormativity, have long shaped scientific understandings of the world. Investments in the maintenance of social hierarchies have manifested at the structural, institutional, and personal levels, whether overtly or implicitly, intentionally or not, at all stages of the scientific process. They influence the kinds of questions scientists ask, the formation of scientific expertise and networks of knowledge production, and research outcomes themselves. In this Special Section, we will assemble papers that investigate the cultural, social, and political foundations of the theories and practices of contemporary organismal biology.

Papers should be written for an audience of biology researchers, and should both identify problems within current theories and practices, and make suggestions on how we can transform our thinking and produce more just science. Such contributions are aligned with Am Nat’s mission to “pose new and significant problems, introduce novel subjects, develop conceptual unification, and change the way people think.” We seek submissions from authors of varied disciplinary and interdisciplinary backgrounds in the social sciences, humanities, and natural sciences. We particularly encourage cross-disciplinary collaborations.

Proposal and manuscript review will be managed by a cross-disciplinary editorial team. Following proposal review, we will invite authors to submit full manuscripts. An invitation to submit a full manuscript does not guarantee publication in The American Naturalist. Publication charges will be waived for full manuscripts included in this Special Section.

Submission process/timeline: 

Please submit a proposal (500 words maximum) describing your paper idea and why you think it would be a good fit for this Special Section, with the subject line “Nature, Data, and Power Special Section”, by February 15, 2021. Invitations for full papers will be issued by March 15, 2021. The deadline for full manuscripts will be June 15, 2021. Anticipated publication of the section is before July 2022.

Papers will be handled by a special Editorial team, in consultation with the Editor-In-Chief (Daniel Bolnick):

Nancy Chen, Department of Biology, University of Rochester

Vince Formica, Department of Biology, Swarthmore College

Ambika Kamath, Department of Environmental Science, Policy, and Management, University of California Berkeley

María Rebolleda-Gómez, Department of Ecology and Evolutionary Biology, University of California Irvine

Banu Subramaniam, Department of Women, Gender, Sexuality Studies, University of Massachusetts Amherst

Beans Velocci, Department of History, Yale University

Ashton Wesner, Department of History, University of California Berkeley

If you have any questions, please contact

January 4, 2021

Registering complaints or concerns about published papers

The American Naturalist would like to clarify its procedures for handling comments on previously published work, and offer new protections for researchers who have valid reasons for maintaining their anonymity while commenting on previously published work.

For decades, scientific journals such as The American Naturalist have had an established protocol for handling criticisms of already-published papers. Readers are able to submit Comments that clearly document and justify their concerns about a published paper. These may identify factual errors (including mistakes in analyses, code, etc), flaws in experimental designs, or disagreements about interpretation of results or context. Such Comments are reviewed, and the authors of the original paper are given a chance to review the Comment. If the complaint is found to have merit (even if the original authors disagree), then the complaint is accepted for publication. The authors have an option to publish a rebuttal if they disagree with the points raised in the Comment.  If the authors acknowledge the merits of the critique, then they may leave the Comment unanswered, or they may submit a Correction that updates the paper with more correct information such as new statistical results or mathematical analyses (which also gets reviewed).  In extreme cases, if the Comment identifies demonstrable errors that significantly undermine the conclusions of the original paper, the original authors and/or Editorial Board may opt for a retraction instead. 

Note that papers can receive Comments for a whole range of reasons, from minor errors that don't change the main message, to fundamental mistakes of factual presentation (e.g., incorrect statistical results), suspect patterns in data, or critiques of interpretation or context. There is a gradation from minor flaws to major ones, and from flaws supported by demonstrable evidence to matters of interpretation. Not all warrant Comments, and certainly not all need Correction or Retraction even when the critiques are true, depending on the magnitude of the problems and how compelling the evidence for them is.

We should emphasize that a reader's first action, on finding something that is unclear or appears wrong, should ideally be to contact the author for clarification. This might resolve the problem without involving the journal. Or, it may induce the authors to publish a Correction (or, in principle, even a retraction). Writing a Comment to the journal should be a plan-B option when direct communication with the authors has failed to resolve the issue, or if the critics have a valid reason to avoid direct contact (e.g., fear of retaliation). I am aware of at least one case where not only did an author (Bob Holt) agree with a Comment's critique, he joined the effort and ultimately became a co-author on the Comment, moving the field forward in the process. I know of cases where authors self-retracted after critics contacted them before the journal. Indeed, we recently published a Correction to a paper from the 1980s that the original authors submitted after a colleague found an error.

In recent years, the Editors of this journal have been receiving criticisms of papers through unofficial channels. These include direct emails to the Editors, tweets, or PubPeer posts. Until now, our policy has been to take these complaints seriously and conduct a full evaluation to identify whether the concerns have merit. This has resulted in multiple committees being formed, and has been a significant drain on the energies of the Editorial Board as a whole, but has indeed identified both some genuine problems, and some cases where there is a simple difference of opinion. 

The problem with this approach, of responding fully to informal critiques, is growing clearer to us on the Editorial Board. First and most troubling, it becomes plain that an individual with a personal vendetta against an author could use such informal critiques to recruit the journal as an unwitting tool for bullying the author, whether or not the critiques hold water or are severe enough to ultimately require action. Second, it simply is not possible for the Editorial Board, with their other responsibilities to the journal, to fully monitor all possible social media venues where people post criticisms of papers (PubPeer, Twitter, etc.). Rather than assuming that the journal has seen a criticism posted elsewhere, we prefer that critics submit Comments to the journal, where the criticism can be fully vetted through a standard review process. This takes time, but is the traditionally accepted means of evaluating scientific arguments. Third, tweets and PubPeer posts are also often used for the milder goal of carrying on an honest and open scientific debate on a topic of honest disagreement, or about minor errors that may not warrant the effort of a Correction. If we initiated a multi-person investigatory committee for every such disagreement, the journal would collapse under the weight of re-evaluating past work and become a partisan in adjudicating honest debates. Let's face it: when was the last time you were in a journal club reading a new paper and nobody had a question that couldn't be clearly resolved, nobody had a quibble with interpretation, and nobody had a suggestion for a better experimental design? Therefore, we need a mechanism for distinguishing between which debates are best left alone and those that require investigation and possible corrective action. That mechanism is the same as it has long been: the submission of a Comment manuscript.
Fourth, submitting informal critiques to the journal (e.g., by alerting us to a PubPeer post, or emailing an informally written diatribe) shunts the work of checking the paper onto the journal's editorial board, when the critic is often better placed (by virtue of their expertise) to write a careful and complete criticism. It is akin to telling someone else to write a Comment for you. As a case in point, some recent committee reports have turned out to be many times longer than the original email or PubPeer complaints, and that work has fallen on already-overburdened volunteer Associate Editors, which is not sustainable. We rely in part on our community of readers to identify problems that were missed in review (and let's face it, peer review isn't perfect), and the mechanism for doing this is through Comments.

Therefore, it is the policy of the Editors of The American Naturalist that, henceforth, we will expect criticisms of published papers to be written as Comments and submitted to us through Editorial Manager, to be subject to review through our editorial software, which appropriately archives all steps in the process. These Comments need not be long, but they must effectively document errors in the published paper's data, analysis, or interpretation. I will emphasize that the Comment's author(s) will be known to the Editor-In-Chief handling the complaint, but that the review process can henceforth be double-blind to ensure the critics' anonymity. In extreme cases where a Comment author is afraid to have their name listed on the final publication, the Editorial Board will consider requests for anonymity on the publication itself, on a case-by-case basis. It is not our desire to make all Comments anonymous by default (we encourage openness once things are published), but neither will we refuse a request for anonymity when it is accompanied by a clear justification to the Editor-In-Chief. There may be situations where a Comment is not the best course of action, and readers may of course contact the Editor to inquire.

The primary exception is that readers may notify the Editor-In-Chief (Daniel Bolnick) by email if they find that an article published since 2010 does not provide complete and publicly available data (e.g., a Dryad repository) sufficient to recreate the analyses. Compliance with data sharing has been imperfect (based on a survey of data archives posted for AmNat papers in 2020, many of which are incomplete or have unclear metadata). Readers are encouraged to directly contact authors first to request correction of flawed data archives. But when authors are unresponsive, the journal may step in to encourage correction. When we become aware of missing or incomplete data archives (or unusably unclear metadata), we will contact the author to request their completion on Dryad (or another repository). If the authors have not fixed the data archive within a reasonable window of time, or if they are unable to do so, having lost the original data files, then the journal would as a rule publish an Editorial Expression of Concern noting that the data for the paper are missing. This applies only to papers published after data archiving became a condition for publication (i.e., 2010 and after). Authors who are habitual offenders may see their current submissions delayed while we check their data archives more thoroughly. And, as noted in a previous Editors' blog post on data archiving, we encourage authors to create data archives prior to submission, for the journal to evaluate during the review process. On Dryad, the pre-review archiving process is free, and you can generate a private URL to provide to the journal.

To close this blog post, I wish to briefly consider some of the motives for why people have been using informal venues for registering criticisms, at least as far as I can discern. 

First, many posts on PubPeer, Twitter, and other venues are not meant for journals. Rather, they are aimed at establishing a constructive but critical conversation between authors and readers. This is a healthy and valuable element of scientific dialogue, though when done in a less constructive tone it becomes stressful for authors and can devolve into a one-sided process. Such posts are not meant for our consideration. Comments, Corrections (or Retractions) are not meant for carrying on simple conversations, but for identifying and resolving errors of fact. This, again, is one reason why journals should not be expected to react to all PubPeer posts.

Second, the next best rationale I have come across is that the critic of a paper is afraid of professional or personal retribution, and so wishes to remain anonymous. Our process of anonymous review of Comments, and openness (in principle) to well-justified requests for anonymity upon publication, should serve to alleviate this concern and thus encourage people to use the established route of Comments.

Third, in at least one case, the critic openly acknowledged that they simply did not wish to take the time to write a formal Comment to be reviewed and published. As noted above, this simply offloads the work onto others (our busy Associate Editors) who may not be as expert in the details of the subject. This is unfair to our editorial board, as it amounts to saying that the problem isn't important enough for the critic's time, but is important enough to consume the editorial board's time. It is true that the editorial board (especially the editors) have a greater obligation to ensure the quality of the work published in their journal, but ultimately we rely on the community to help with this (as we do with submissions, and reviews).

Fourth, some critics seek rapid dissemination of their criticisms and prefer not to wait for a lengthy review process. This rapid-science viewpoint has merit, but it risks damaging authors' reputations and imposing severe stress before the validity of the criticisms has been evaluated. Placing attacks in public before they are verified is indeed fast, but it can do great damage when the attacks ultimately prove to be misplaced. I have seen specific cases where statistical criticisms posted on PubPeer were later found to be incorrect; a review process helps protect authors' reputations from flawed criticism.

Finally, there are cases where critics mistrust the institutional process. I have heard directly of cases where critics contacted journals and were rebuffed or ignored. Readers sometimes believe (rightly or wrongly) that journals are more interested in protecting their own reputations, and their authors', and so have motives to sweep criticisms under the carpet. I am shocked by many of these stories. It saddens me to think that we Editors are mistrusted when we go to such efforts to ensure the quality of our published work. I will admit that Editors may hesitate to pursue aggressive steps toward retraction for fear of legal action by affected authors. More often, in my experience, a critic feels ignored when they submit a complaint and the journal finds it insufficiently severe to merit corrective action (either misplaced, insufficiently documented, or simply not a substantive enough change to the paper's core message). But having been involved in retractions, in corrections, and in decisions to do neither, I can say that my view from behind the curtain has taught me that journals' decisions (at least AmNat's, and a few others I've watched from the sidelines) are made carefully and thoughtfully, with great effort and good intentions, even when the critics (or I) don't fully agree with the outcome.

I will also acknowledge that evaluating criticisms is a large drain on time already stretched by regular journal functions. Personally, in 2020 I spent far more time writing R code to analyze other people's potentially flawed datasets than analyzing data of my own. I wrote more words this year on investigations into past publications than on my own papers or grants. But it is an Editor's job to ensure the quality of work published in their journal. The journal's reputation is bolstered not by sweeping problems out of sight, but by being proactive about correcting known problems. We Editors, however, need the community's participation in this process, by following the procedures the journal has set down for evaluating problems. Ultimately, it is the journal that publishes a paper, and so it is the journal that has the leverage to pursue Corrections, Retractions, or Expressions of Concern. This means that if critics really want their concerns to lead to corrective action, they ultimately do need to work through journals, which means using journals' established means of handling complaints.

None of this is meant to deter Comments or criticism of published work. Science advances by self-criticism and self-correction. We should never shy away from fixing what is wrong (when it is clearly wrong, and important enough to warrant the effort; nobody would publish a Correction for a grammatical mistake, for instance). But procedures for doing so exist, and I'd like to see them used more, in preference to backchannel approaches.