September 21, 2017

Impact Factors and Original Research

Short Version:

Review articles increase a journal’s Impact Factor because authors cite them instead of the original research that underlies them. Too much of the original research used, for example, in meta-analyses gets listed only in online-only appendixes or author-supplied PDFs. Web of Science can’t find either; Google Scholar can’t find author-supplied PDFs. In a quest to increase credit for the authors of original research, Am Nat will now print the references cited in appendixes in the main Literature Cited list and will be encouraging authors to consider citing the original research.

Long Version: 

As anyone who has followed our Journal Impact Factor (JIF) knows, it’s been on a bit of a roller coaster. Since 2003, I’ve been trying to puzzle out, first, how the impact factor works and, second, why ours bounces around. All along, our journal’s Editors have insisted that we won’t change anything editorially with the JIF in mind. It was obvious early on that review articles and methods articles boost a journal’s JIF. Indeed, in a list of the 100 most cited articles, methods papers dominated (http://www.nature.com/news/the-top-100-papers-1.16224). So the Editors agreed that we would be consistent and publish the same conceptually driven original research that has always defined The American Naturalist.

We do publish some papers with methods in them, of course (Felsenstein 1985 being our most-cited paper, for example), as well as papers with a review component in them—in particular, synthesis papers—but they have to be conceptual and provide new insights that move research forward. So, mysteriously (to me), over the years our editorial course has been steady while our impact factor has not. As a result, periodically, I’ve poked around in the JIF mystery.

I would look at Web of Science and try to do what they say they do. So, for the number that came out in 2017, the JIF supposedly took the citations made in 2016 to papers published in 2015 and 2014, divided by the number of citable items published in those two years. I could capture the trend but got nowhere near the same values. Then I bumped into this blog entry, which explained that the JIF is not based on Web of Science:
“The counting method used in the [Journal Citation Reports] is much less strenuous than the Web of Science, and relies just on the name of the journal (and its variants) and the year of citation. The JCR doesn’t attempt to match a specific source document with a specific target document, like in the Web of Science. It just adds up all of the times a journal receives citations in a given year.” https://scholarlykitchen.sspnet.org/2016/04/12/on-moose-and-medians-or-why-we-are-stuck-with-the-impact-factor/
I still don’t know why the denominators never make any sense, but this helps explain why my tallies from scrolling through Web of Science never add up to the published number.
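
For readers who want to see the mechanics, here is a minimal sketch of the two-year JIF arithmetic as it is usually described. The counts below are invented for illustration only; they are not real Am Nat numbers.

    # Minimal sketch of the two-year Journal Impact Factor, with made-up numbers.
    # JIF for year Y = (citations received in year Y to items published in Y-1 and Y-2)
    #                  divided by (citable items published in Y-1 and Y-2).
    def two_year_jif(cites_in_year, citable_items):
        return cites_in_year / citable_items

    # Hypothetical example for a 2016 JIF (the number released in 2017):
    cites_2016_to_2014_2015 = 850    # all 2016 citations to papers from 2014 and 2015
    citable_items_2014_2015 = 220    # articles and reviews published in 2014 and 2015
    print(round(two_year_jif(cites_2016_to_2014_2015, citable_items_2014_2015), 2))  # 3.86

As the quotation above notes, the numerator is tallied at the journal level rather than matched paper by paper, which is part of why an article-by-article count never reconciles.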

When looking at individual articles, it’s clear that most articles get only a few citations in the short window that the JIF looks at, with a few high flyers that lift the mean. So adding just a few such papers into the mix increases the JIF a lot. As these papers appear in (or fall out of) the 2-year window of time that defines the JIF, the impact factor may jump up or drop down. Here’s a blog post about how this looked for some journals in chemistry: https://stuartcantrill.com/2015/12/10/chemistry-journal-citation-distributions/

Another look at the problems with JIF calculations is Brian McGill’s post here: https://dynamicecology.wordpress.com/2016/06/21/impact-factors-are-means-and-therefore-very-noisy/
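
To illustrate how noisy a mean can be, here is a toy example with invented citation counts (not data from any real journal): a single high flyer can lift the mean dramatically while the median barely budges.

    # Toy illustration: citation counts for 20 hypothetical papers in a JIF window.
    from statistics import mean, median

    typical = [0, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, 5, 5, 6, 6, 7, 8, 9]   # 19 ordinary papers
    print(mean(typical), median(typical))                  # mean ~3.9, median 4

    with_high_flyer = typical + [120]                      # add one highly cited introduction
    print(mean(with_high_flyer), median(with_high_flyer))  # mean ~9.8, median 4.0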

As a result, I started looking at our two highest years, and sure enough, there was a very highly cited paper lifting our average: an introduction to an ASN Vice Presidential symposium. So, though the Editors don’t publish review papers, the invited ASN papers can act that way. As it turned out, in the two years when our JIF sharply dropped, we were missing a VP symposium introduction, and there happened to be no ASN addresses either. Since then, the VP introductions and the ASN addresses have returned on schedule, and our impact factor has gone back up.

Though the high-flying introduction made a big difference, all the VP introductions get a healthy array of cites. This phenomenon, along with an explanation, is described in this post: http://ecoevoevoeco.blogspot.ca/2017/03/over-citation-my-papers-that-should.html where Andrew Hendry analyzes why some of his papers seem over-cited and finds that they are introductory papers:
Another choice for an over-cited paper might be the introduction we wrote to a Philosophical Transactions of the Royal Society special issue on Eco-Evolutionary Dynamics. The introduction simply pointed out that evolution could be rapid and that evolution could influence ecological process, before it then summarized the papers in the special issue. Again, nothing wrong with the paper, but a summary of papers in a special issue is hardly cause for (soon) 300+ citations, nor is that typical of such a summary. …. This is fine, but excellent papers that treat eco-evolutionary dynamics as a formal research subject, rather than a talking point, are out there and should be cited more. Indeed, several papers in that special issue are precisely on that point, and yet our introduction is cited more. Similar to this example of over-citation, I could also nominate the introduction to another special issue (in Functional Ecology) – which is my fourth most cited paper (437 citations). 
Why are these “OK, but not that amazing” papers so highly cited? My guess is that two main factors come into play. The first is that these papers had very good “fill in the box” titles. For instance, our PTRSB paper is the only one in the literature with Eco-Evolutionary Dynamics being the sole words in the title. Thus, any paper writing about eco-evolutionary dynamics can use this citation to “fill in the citation box” after their first sentence on the topic. You know the one, that sentence where you first write “Eco-evolutionary dynamics is a (hot or important or exciting or developing) research topic (REF HERE)” The Functional Ecology introduction has much the same pithy “fill in the box” title (Evolution on Ecological Time Scales) and, now that I look again, so too does the Conservation Biology paper (Evolutionary Response to Climate Change.) The second inflation factor is likely that citations beget citations. When “filling in the box”, authors tend to cite papers that other authors used to fill in the same box – perhaps partly because they feel safe in doing so, even if they haven’t read the paper. (In fact, I will bet that few people who cite the above papers have actually read them.) One might say these are “lazy citations” – where you don’t have to read anything but can still show you know the field by citing the common-cited papers.
He summarizes a point I was coming to realize: it’s not just that review papers lift impact factors; review papers take cites away from the original research. I had that demonstrated in one of my forays into Web of Science. I was following a paper that happened to be published early in the year. It was getting quite a few cites in that same year, the year that didn’t count toward the JIF. When I checked it over the next two years, the years that did count toward the JIF, the article got zero cites. It had been co-opted by a review paper that cited it. Another demonstration came when a reviewer wrote in asking how soon the paper he’d just reviewed would be published; he was writing a review paper and wanted to cite it. So a paper’s ability to be counted was getting co-opted before it was even in Production.

It became increasingly clear that the drive to lift JIFs with review papers has meant that the authors of original research are not getting their due. Therefore, we are philosophically committed to encouraging authors to cite original sources for results and ideas and will do what we can to get cites counted by Web of Science and Google Scholar.

--Trish (Managing Editor)

Updated to add another view of how the Impact Factor is damaging scholarship:
http://blogs.lse.ac.uk/impactofsocialsciences/2017/09/19/clickbait-and-impact-how-academia-has-been-hacked/

3 comments:

  1. I say publish more review papers. If there is a demand, fulfill the supply rather than taking such an elitist approach.

    Replies
    1. But people don't want to *read* review papers, they just want to use them as a citation to support sentences they already wrote without a specific study in mind. If you cite an original research paper to support your statement you (usually) have to actually know exactly what's in the article. Cite a review about the general area covered in your statement and you can be confident something in there will be relevant, so you don't actually have to read past the abstract. We all do it. Good for American Naturalist for publishing actual, y'know, research.

    2. There is absolutely a demand for review papers, but it is important to acknowledge the articles that present the original empirical research. I've been involved with several meta-analyses, and while they are important research, it has always bothered me a bit that I couldn't include citations to the primary papers the data came from in the main document. While it's not a huge issue if your meta-analysis is small, when you start analyzing data from 50+ papers it becomes very cumbersome. As someone who loves doing meta-analyses, I am very supportive of including citations to all the original research articles that data are obtained from in the main manuscript, with the obvious caveat that these types of citations shouldn't count towards manuscript word limits - the original authors deserve due credit!

      Given that citations can be (a very poor form of) currency in academia, this is a good way to help ensure the original manuscripts are cited and the authors get due credit. I hope other journals follow suit.
