In Search of the Perfect Metric: Comparing Three Models

An aspect of publishing that is critically important to publishers and authors alike is the metric that measures a journal’s significance in its field. The impact factor has long reigned as the preeminent yardstick for a journal’s relevance and importance. For a given year, it is calculated as the number of citations received that year by articles the journal published in the previous two years, divided by the number of citable items published in those two years. Authors often choose where to submit their manuscripts based on impact factor, so editors are motivated to make sure this number increases each year. However, the metric has several inherent flaws. To name a few:

  • Self-citation can artificially inflate a journal’s impact factor.
  • Because reviews tend to be cited more, journals that publish a high proportion of review articles will likely have a higher impact factor than journals that publish a mix of research articles, letters/communications, and reviews.
  • Impact factors cannot accurately be compared across disciplines: while natural scientists predominantly turn to journals to publish their work, researchers in the humanities and social sciences often publish books rather than journal articles.
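To make the calculation behind the impact factor concrete, here is a minimal sketch in Python. The journal figures below are invented for illustration; real impact factors are computed by Clarivate from Web of Science citation data.

```python
def impact_factor(citations_this_year: int, citable_items_prev_two_years: int) -> float:
    """Two-year impact factor: citations received this year to articles
    published in the previous two years, divided by the number of
    citable items published in those two years."""
    return citations_this_year / citable_items_prev_two_years

# Hypothetical journal: 1,200 citations in 2023 to its 2021-2022 articles,
# of which there were 400 citable items.
print(round(impact_factor(1200, 400), 1))  # → 3.0
```

Note that the same arithmetic also shows how the flaws above arise: a handful of heavily cited review articles, or a burst of self-citation, raises the numerator without any change in the journal’s typical article.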

Considering these drawbacks, other measures of a journal’s importance have been created that are not based solely on citations. One of these is Altmetrics (“alternative metrics”), which uses a donut graphic to indicate how many mentions an article gets from sources such as social media posts, blogs, Wikipedia, and news articles. However, Altmetrics is intended to complement metrics like the impact factor, not replace them; it reflects the attention and “buzz” that a certain article receives. It is important to note, however, that buzz can be bad: if an article goes viral on Twitter for ethical misconduct or poor methodology, the donut reflects only the number of mentions, with no caveat that those mentions exist because the article is flawed.

A third metric, introduced by the Center for Open Science, is the TOP Factor (“TOP” stands for Transparency and Openness Promotion). The Center for Open Science argues that the impact factor is failing as the gold standard of metrics because it encourages misconduct and is overused for lack of better options. Rather than measuring citations, the TOP Factor scores journals on “the steps that a journal is taking to implement open science practices, practices that are based on the core principles of the scientific community.” With this metric, the quality of the journal’s policies is measured rather than how often its articles are cited.

Even with the emergence of other metrics, it’s clear that the impact factor isn’t going anywhere anytime soon. However, there could be a shift away from citation-based metrics and toward metrics based on transparency and journal quality as initiatives like Plan S put more and more emphasis on open access.

What are your thoughts on the different metrics models? Do you think one should be used over another? Let us know in the comments below!


The Technica Advantage

At Technica Editorial, we believe that great teams cannot function in silos, which is why every member of our staff is cross-trained in editorial support and production. We train our employees from the ground up so they can see how each role fits into the larger publishing process. This strategy means Technica is uniquely positioned to identify opportunities to improve and streamline your workflow. Because we invest in creating leaders, you get more than remote support — you get a partner.