In Search of the Perfect Metric: Comparing Three Models

An aspect of publishing that is critically important to publishers and authors alike is the metric used to measure a journal’s significance in its field. The impact factor has long reigned as the preeminent yardstick of a journal’s relevance and importance. It captures the average number of citations received in a given year by the articles a journal published over the previous two years (a calculation sketched after the list below). Authors often choose which journal to submit a manuscript to based on its impact factor, and because of this, editors are motivated to make sure the number increases each year. However, the metric has several inherent flaws. To name a few:

  • Self-citation can artificially inflate a journal’s impact factor.
  • Because reviews tend to be cited more, journals that publish a high proportion of review articles will likely have a higher impact factor than journals that publish a mix of research articles, letters/communications, and reviews.
  • Impact factors cannot accurately be compared across disciplines: while natural scientists predominantly turn to journals to publish their work, researchers in the humanities and social sciences often publish books rather than journal articles.
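
For readers curious about the arithmetic, here is a minimal sketch of the two-year calculation in Python; the journal, citation counts, and article counts are invented purely for illustration.

```python
# Minimal sketch of a two-year impact factor calculation.
# All numbers below are hypothetical, for illustration only.

def impact_factor(citations: int, citable_items: int) -> float:
    """Citations received this year to articles published in the previous
    two years, divided by the number of citable items published in those
    two years."""
    return citations / citable_items

# Hypothetical journal: 400 citations in 2024 to articles from 2022-2023,
# which together comprised 160 citable items.
print(impact_factor(400, 160))  # 2.5
```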

Given these drawbacks, other measures of a journal’s importance have been created that are not based solely on citations. One of these is Altmetrics (“alternative metrics”), which uses a donut graphic to indicate how many mentions an article receives from sources such as social media posts, blogs, Wikipedia, and news outlets. However, Altmetrics is intended to complement metrics like the impact factor, not replace them; it reflects the attention and “buzz” that a particular article generates. It is worth noting, though, that buzz can be bad: if an article goes viral on Twitter because of ethical misconduct or poor methodology, the donut will still only reflect the number of mentions, with no caveat that the attention stems from the article’s flaws.

A third metric, introduced by the Center for Open Science, is the TOP Factor (“TOP” stands for Transparency and Openness Promotion). The Center for Open Science argues that the impact factor is failing as the gold standard of metrics because it encourages misconduct, and that it remains overused only because there are no better options. Rather than counting citations, the TOP Factor scores journals on “the steps that a journal is taking to implement open science practices, practices that are based on the core principles of the scientific community.” In other words, this metric measures the quality of a journal’s policies rather than how often its articles are cited.

Even with the emergence of other metrics, it’s clear that the impact factor isn’t going anywhere anytime soon. However, there could be a shift away from citation-based metrics and toward metrics based on transparency and journal quality as initiatives like Plan S put more and more emphasis on open access.

What are your thoughts on the different metrics models? Do you think one should be used over another? Let us know in the comments below!

