September 21, 2017

Impact Factors and Original Research

Short Version:

Review articles increase Journal Impact Factor because authors cite them instead of the original research that underlies them. Too much of the original research used, for example, in meta-analyses gets listed in online-only appendixes or author-supplied PDFs. Web of Science can't find either, and Google Scholar can't find author-supplied PDFs. In a quest to increase credit for the authors of original research, Am Nat is now printing all the references from appendixes in the main Literature Cited list and will be encouraging authors to consider citing the original research.

Long Version: 

As anyone who has followed our Journal Impact Factor (JIF) knows, it's been on a bit of a roller coaster. Since 2003, I've been trying to puzzle out, first, how the impact factor works and, second, why ours bounces around. All along, our journal's Editors have insisted that we won't change anything editorially with the JIF in mind. It was obvious early on that review articles and methods articles boost a journal's JIF. Indeed, in a list of the 100 most cited articles, methods papers dominated. So the Editors agreed that we would be consistent and publish the same conceptually driven original research that has always defined The American Naturalist.

We do publish some papers with methods in them, of course (Felsenstein 1985 being our most-cited paper, for example), as well as papers with a review component in them—in particular, synthesis papers—but they have to be conceptual and provide new insights that move research forward. So, mysteriously (to me), over the years our editorial course has been steady while our impact factor has not. As a result, periodically, I’ve poked around in the JIF mystery.

I would look at Web of Science and try to do what they say they do. So, for the number that came out in 2017, the JIF supposedly took the citations made in 2016 of papers published in 2015 and 2014. I could capture the trend but got nowhere near the same values. Then I bumped into this blog entry that explained that the JIF is not based on Web of Science:
“The counting method used in the [Journal Citation Reports] is much less strenuous than the Web of Science, and relies just on the name of the journal (and its variants) and the year of citation. The JCR doesn’t attempt to match a specific source document with a specific target document, like in the Web of Science. It just adds up all of the times a journal receives citations in a given year.”
I still don't know why the denominators never make sense, but this helps explain why my tallies from scrolling through Web of Science never added up.
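In rough terms, the formula described above can be sketched as follows. This is a minimal illustration with invented numbers, not real American Naturalist counts or Clarivate's actual pipeline:

```python
# Illustrative sketch of the stated JIF formula, with hypothetical numbers.
# The JIF released in year Y counts citations made in year Y-1 to items
# published in years Y-2 and Y-3, divided by the number of "citable items"
# published in those two years.

citations_2016_to_2014_2015 = 900  # hypothetical: cites made in 2016
citable_items_2014_2015 = 300      # hypothetical: papers in the window

jif_2017_release = citations_2016_to_2014_2015 / citable_items_2014_2015
print(jif_2017_release)  # 3.0
```

The mismatch described above comes from the numerator: the JCR counts any citation naming the journal and year, without matching it to a specific article the way Web of Science does.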

When looking at individual articles, it's clear that most articles get a few citations in the short window that the JIF looks at, with a few high flyers that lift the mean. So, with just a few such papers added into the mix, the JIF increases a lot. As these papers appear in (or fall out of) the 2-year window of time that defines the JIF, the impact factor may jump up or drop down. Here's a blog post about how this looked for some journals in chemistry.
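The arithmetic behind this is simple: because the JIF is a mean, a single high flyer can swamp a whole journal's worth of typical papers. A sketch with made-up citation counts:

```python
# How one high flyer moves the mean that the JIF reports.
# Hypothetical citation counts for a journal's papers in the 2-year window:
typical = [0, 1, 1, 2, 2, 3, 0, 1, 2, 1]  # most papers: a few cites each

mean_without = sum(typical) / len(typical)
print(mean_without)  # 1.3

# Now add a single review-style paper that draws 120 cites:
with_flyer = typical + [120]
mean_with = sum(with_flyer) / len(with_flyer)
print(round(mean_with, 1))  # 12.1
```

One paper out of eleven lifts the mean roughly tenfold, which is why a journal's JIF can jump or crash as such papers enter or leave the window. (The median, by contrast, would barely move.)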

Another blog that looked at the problems with JIF calculations is Brian McGill’s post here:

As a result, I started looking at our two highest years, and sure enough, there was a very highly cited paper lifting our average—it was an introduction to an ASN Vice Presidential symposium. So, though the Editors don't publish review papers, the invited ASN papers can act that way. As it turned out, in the two years we dropped sharply, we were missing a VP symposium introduction, and there happened to be no ASN addresses. Since then, the VP introductions and the ASN addresses have returned on schedule, and our impact factor has gone back up.

Though the high-flying introduction made a big difference, all the VP introductions get a healthy array of cites. This phenomenon is described, along with an explanation, in a post where Andrew Hendry analyzes why some of his papers seem over-cited and finds they are introductory papers:
Another choice for an over-cited paper might be the introduction we wrote to a Philosophical Transactions of the Royal Society special issue on Eco-Evolutionary Dynamics. The introduction simply pointed out that evolution could be rapid and that evolution could influence ecological process, before it then summarized the papers in the special issue. Again, nothing wrong with the paper, but a summary of papers in a special issue is hardly cause for (soon) 300+ citations, nor is that typical of such a summary. …. This is fine, but excellent papers that treat eco-evolutionary dynamics as a formal research subject, rather than a talking point, are out there and should be cited more. Indeed, several papers in that special issue are precisely on that point, and yet our introduction is cited more. Similar to this example of over-citation, I could also nominate the introduction to another special issue (in Functional Ecology) – which is my fourth most cited paper (437 citations). 
Why are these “OK, but not that amazing” papers so highly cited? My guess is that two main factors come into play. The first is that these papers had very good “fill in the box” titles. For instance, our PTRSB paper is the only one in the literature with Eco-Evolutionary Dynamics being the sole words in the title. Thus, any paper writing about eco-evolutionary dynamics can use this citation to “fill in the citation box” after their first sentence on the topic. You know the one, that sentence where you first write “Eco-evolutionary dynamics is a (hot or important or exciting or developing) research topic (REF HERE)” The Functional Ecology introduction has much the same pithy “fill in the box” title (Evolution on Ecological Time Scales) and, now that I look again, so too does the Conservation Biology paper (Evolutionary Response to Climate Change.) The second inflation factor is likely that citations beget citations. When “filling in the box”, authors tend to cite papers that other authors used to fill in the same box – perhaps partly because they feel safe in doing so, even if they haven’t read the paper. (In fact, I will bet that few people who cite the above papers have actually read them.) One might say these are “lazy citations” – where you don’t have to read anything but can still show you know the field by citing the common-cited papers.
He summarizes a point I was coming to realize: it's not just that review papers lift impact factors; they take cites away from the original research. I saw that demonstrated in one of my forays into Web of Science. I was following a paper that happened to be published early in the year. It was getting quite a few cites in that same year, the year that didn't count toward the JIF. I checked it the next two years, but in the years that did count toward the JIF, the article got zero cites. It had been co-opted by a review paper that cited it. Another demonstration came when a reviewer wrote in asking how soon the paper he'd just reviewed would be published; he was writing a review paper and wanted to cite it. So a paper's ability to be counted was getting co-opted before it was even in production.

It became increasingly clear that the drive to lift JIFs with review papers has meant that the authors of original research are not getting their due. Therefore, we are philosophically committed to encouraging authors to cite original sources for results and ideas and will do what we can to get cites counted by Web of Science and Google Scholar.

--Trish (Managing Editor)

Updated to add another view of how the Impact Factor is damaging scholarship:

September 5, 2017

The Debate at Asilomar--21st Century Naturalist Meeting, January 2014

One of the goals of the ASN 21st Century Naturalists Meeting was to experiment with different formats, one of which was the evening debate organized by ASN President Trevor Price, who told the debaters--NO COMPROMISE:

The proposition was “This house believes that species richness on continents is dominated by ecological limits.”

Proponent: Dan Rabosky. Seconded by: Allen Hurlbert.
Opponent: Luke Harmon. Seconded by: Susan Harrison.
Organized by Trevor Price, ASN President 2014.
As it was described in the program:
To what extent does regional and local diversity depend mostly on time and diversification rate (Wallace's old hypothesis for the latitudinal gradient), or is instead close to an ecological carrying capacity? These issues have recently become much more focused given our improved understanding of biological diversity through time and earth's history, notably paleoclimate. Nevertheless we are far from resolution, and researchers still have strong views. 
In this debate, Dan Rabosky and Allen Hurlbert will present the case for ecological regulation, while Luke Harmon and Susan Harrison will argue the non-equilibrium case. The format will roughly follow that of the famous Oxford University debates. Dan will present a prepared 20-minute summary, followed similarly by Luke. Then, there will be opportunity for alternating rebuttals from either side. While flexible, we expect the first rebuttal to last up to 20 minutes from each side, with a second response of up to 10 minutes again from each side. Following this, questions will be thrown open to the audience; each question can be addressed to one or other side, or both, but both sides will be given the opportunity to respond. This is an innovation for the ASN, and if successful we hope to refine the format in future meetings.
Dan Rabosky is Assistant Professor at the University of Michigan. He has worked on Australian lizards and comparative methods, and is well known for his investigations of diversity-dependence in the pattern of lineage splitting in phylogenies. Allen Hurlbert is Assistant Professor at the University of North Carolina, whose research explores broad-scale patterns of diversity and community structure, with an emphasis on North American birds. Luke Harmon is Associate Professor at the University of Idaho. He has worked on Anolis lizards and comparative methods, including evaluation of the correspondence between morphological diversification and lineage diversification, and causes of disparities in clade richness across vertebrates. Susan Harrison is Professor at the University of California, Davis. She works on regional, historical, and local drivers of plant richness, focusing especially on the flora of California.

It was indeed lively and there were zombies...

Thanks to Luke Harmon, there is a video of the debate posted on YouTube:

The participants have also agreed to turn their arguments into papers, so the debate will appear in The American Naturalist with an introduction by Trevor Price later this year. As Allen describes it in the interview (link below),
In terms of thoroughness, it seems we will be putting out two companion papers based on the debate that all share the same subheadings, thus providing a detailed point-counterpoint. In that way, it’s less important that the actual debate cover every topic or resolve any one issue, and the point of the debate can be focused on conveying to the audience the general areas of disagreement.
Thanks to Jeremy Fox, there's a great interview with the participants at Dynamic Ecology, "Resolved: debates at scientific meetings are a good thing"

As they say, it was a trial run, so some aspects were a bit rocky, but everyone I spoke with agreed that the debate format was a great idea.

Updated: The debate is in the journal at


Rabosky and Hurlbert

Harmon and Harrison