After 10 months of our experiment in modified double blind, it seemed like a good time to provide a bit of an update on how it's going.
To recap, the journal committed to redacting author information in the peer review system and removing the title page from PDFs, but has left it up to the authors to decide how far they want to anonymize their papers beyond that. If authors have chosen ways to reveal themselves, we decided that, at this stage, double blind is essentially a service to authors and we wouldn't police that. Authors can also choose to opt out entirely, identifying themselves on the title page.
We've had to make a lot of new policies and judgment calls as unexpected twists developed. I think most of those arose because the community isn't used to the concept--and there are many ways in which authors may have already identified themselves before submitting--e.g., preprints, Figshare or GitHub links, submissions to single-blind journals first. In addition, double blind seemed to make reviewers oddly chatty about the act of reviewing, at least at first. It's gotten easier as people have gotten used to it. And we're hoping it will get easier still, since several more journals are talking about trying our version of modified double blind or even fully compulsory double blind (in which neither reviewers nor authors can choose to reveal themselves).
As the process settled down, we were able to provide a clearer set of instructions on the webpage about the ways authors might not be anonymous. In the three months since that change, 16% of papers have had authors opt out. Most did so because they felt their identities were too obvious to pretend otherwise. A few are simply against the idea that there might be a problem that double blind would fix.
One of the questions we had was whether reviewers would be more likely to turn down review requests if the identity of the authors was unknown, but the percentage who agree to review has stayed essentially the same. Only a few reviewers have explicitly refused because of double blind review. Some felt not knowing would lower the quality of the review. Some disagreed with the premise that there might be implicit bias.
The most common objection to double-blind review in science is that author identities are almost always guessable, making it a pointless exercise. Not all the guesses reported to us, however, have been accurate. Sometimes the lab was guessed but the lead author was not, so minimizing unconscious bias still seems to have value for that lead author. As one person summed up in a discussion of double blind on Twitter, “There’s a difference between guessing and knowing—I think that gap even if small is important.”
It's a bit too early to say whether the experiment has made a difference in what papers get accepted, but at least one editor thinks there's one noticeable difference: well-known people may be getting much tougher reviews.