An aspect of publishing that is critically important to publishers and authors alike is the metric that measures a journal’s significance in its field. The impact factor has long reigned as the preeminent yardstick of a journal’s relevance and importance. It captures the average number of citations that articles the journal published over the previous two years received in a given year (see the formula after the list below). Authors often choose which journal to submit a manuscript to based on its impact factor, and because of this, editors are motivated to make sure the number rises each year. However, the metric has several inherent flaws. To name a few:
- Self-citation can artificially inflate a journal’s impact factor.
- Because review articles tend to be cited more often than primary research, journals that publish a high proportion of reviews will likely have a higher impact factor than journals that publish a mix of research articles, letters/communications, and reviews.
- Impact factors cannot accurately be compared across disciplines: while natural scientists predominantly turn to journals to publish their work, researchers in the humanities and social sciences often publish books rather than journal articles.
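For concreteness, the standard two-year impact factor for a year $Y$ is computed as:

$$
\mathrm{IF}_Y = \frac{\text{citations received in year } Y \text{ by items published in years } Y{-}1 \text{ and } Y{-}2}{\text{number of citable items published in years } Y{-}1 \text{ and } Y{-}2}
$$

So if a journal published 100 citable items across the two prior years and those items were cited 250 times this year (hypothetical numbers, purely for illustration), its impact factor would be $250 / 100 = 2.5$.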
Considering these drawbacks, other measures of a journal’s importance have been created that are not based solely on citations. One of these is Altmetrics (“alternative metrics”), which uses a donut graphic to show how many mentions an article receives from sources such as social media posts, blogs, Wikipedia, and news articles. However, Altmetrics is intended to complement metrics like the impact factor, not replace them; it reflects the attention and “buzz” that an article receives. It is important to note, however, that buzz can be bad: if an article goes viral on Twitter because of ethical misconduct or poor methodology, the donut will still only reflect the number of mentions, with no caveat that the mentions exist because the article is flawed.
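To make the counting concrete, here is a minimal sketch of how a weighted mention tally of this kind could work. The source categories, weights, and `mentions` dictionary are all made up for illustration; the actual Altmetric score uses its own proprietary weighting.

```python
# Sketch of a weighted "attention score" for one article.
# The source weights and mention counts here are illustrative,
# not Altmetric's actual values.

SOURCE_WEIGHTS = {
    "news": 8.0,          # hypothetical: news coverage counts most
    "blog": 5.0,
    "wikipedia": 3.0,
    "social_media": 1.0,
}

def attention_score(mentions: dict[str, int]) -> float:
    """Sum mention counts per source, scaled by that source's weight."""
    return sum(SOURCE_WEIGHTS.get(source, 0.0) * count
               for source, count in mentions.items())

# Example: an article with 2 news stories, 1 blog post, and 40 social posts.
print(attention_score({"news": 2, "blog": 1, "social_media": 40}))  # 61.0
```

Note that the tally is sentiment-blind: forty critical posts raise the score exactly as much as forty enthusiastic ones, which is the caveat raised above.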
A third metric, introduced by the Center for Open Science, is the TOP Factor (“TOP” stands for Transparency and Openness Promotion). The Center for Open Science argues that the impact factor is failing as the gold standard of metrics because it encourages misconduct, and that it is overused because there are no other good options. Rather than measuring citations, the TOP Factor scores journals on “the steps that a journal is taking to implement open science practices, practices that are based on the core principles of the scientific community.” In other words, this metric measures the quality of a journal’s policies rather than how often its articles are cited.
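As a rough illustration of the difference, a policy-based score can be sketched as a rubric sum rather than a citation count. The policy areas and the 0–3 scale below are simplified placeholders, not the actual TOP Factor rubric.

```python
# Sketch of a policy-rubric score: each open-science policy area is graded
# 0 (no policy) to 3 (policy required and enforced), and the journal's score
# is the sum. The areas and grades below are illustrative placeholders.

POLICY_AREAS = [
    "data_transparency",
    "code_transparency",
    "preregistration",
    "replication",
]

def policy_score(grades: dict[str, int]) -> int:
    """Sum the 0-3 grade for each policy area; missing areas count as 0."""
    return sum(min(max(grades.get(area, 0), 0), 3) for area in POLICY_AREAS)

# Example: a journal that mandates data sharing (3), encourages code
# sharing (1), and has no stance on the rest.
print(policy_score({"data_transparency": 3, "code_transparency": 1}))  # 4
```

The design point is that nothing in such a score depends on citation behavior, so it cannot be gamed by self-citation or by loading an issue with review articles.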
Even with the emergence of other metrics, it’s clear that the impact factor isn’t going anywhere anytime soon. However, there could be a shift away from citation-based metrics and toward metrics based on transparency and journal quality as initiatives like Plan S put more and more emphasis on open access.
What are your thoughts on the different metrics models? Do you think one should be used over another? Let us know in the comments below!