In Search of the Perfect Metric: Comparing Three Models

Few aspects of publishing matter more to publishers and authors alike than the metric used to measure a journal’s significance in its field. The impact factor has long reigned as the preeminent yardstick of a journal’s relevance and importance. It is the number of citations received in a given year by articles the journal published in the previous two years, divided by the number of citable items it published in those two years (a simple worked example follows the list below). Authors often choose the journal to submit their manuscripts to based on impact factor, so editors are motivated to make sure this number increases each year. However, the metric has several inherent flaws. To name a few:

  • Self-citation can artificially inflate a journal’s impact factor.
  • Because review articles tend to be cited more often than original research, journals that publish a high proportion of reviews will likely have a higher impact factor than journals that publish a mix of research articles, letters/communications, and reviews.
  • Impact factors cannot be compared accurately across disciplines: while natural scientists predominantly publish their work in journals, researchers in the humanities and social sciences often publish books rather than journal articles.

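To make the calculation concrete, here is a minimal sketch in Python. The function name and the citation figures are hypothetical, chosen purely for illustration; the formula itself is the standard two-year impact factor described above.

    def impact_factor(citations: int, citable_items: int) -> float:
        """Two-year journal impact factor for year Y.

        citations: citations received in year Y by items the journal
            published in years Y-1 and Y-2.
        citable_items: number of citable items (research articles,
            reviews) the journal published in years Y-1 and Y-2.
        """
        return citations / citable_items

    # Hypothetical journal: 480 citations in 2024 to the 150 citable
    # items it published in 2022-2023 gives an impact factor of 3.2.
    print(impact_factor(480, 150))  # 3.2

Note what this simple ratio rewards: a journal publishing a small number of highly cited reviews can post a larger figure than one publishing many solid research articles, which is exactly the distortion noted in the list above.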
Considering these drawbacks, other measures of a journal’s importance have been created that are not based solely on citations. One of these is altmetrics (“alternative metrics”), best known through the Altmetric donut, a graphic that shows how many mentions an article receives from sources such as social media posts, blogs, Wikipedia, and news outlets. Altmetrics are intended to complement metrics like the impact factor, not replace them: they reflect the attention and “buzz” that a given article receives. It is important to note, however, that buzz can be bad. If an article goes viral on Twitter because of ethical misconduct or poor methodology, the donut will simply tally the mentions, with no caveat that the attention exists because the article is flawed.

A third metric, introduced by the Center for Open Science, is the TOP Factor (“TOP” stands for Transparency and Openness Promotion). The Center for Open Science argues that the impact factor is failing as the gold standard of metrics because it encourages misconduct, and that it remains overused only because there have been no better options. Rather than counting citations, the TOP Factor scores journals on “the steps that a journal is taking to implement open science practices, practices that are based on the core principles of the scientific community.” In other words, this metric measures the quality of a journal’s policies rather than how often its articles are cited.

Even with the emergence of these alternatives, it’s clear that the impact factor isn’t going anywhere anytime soon. However, as initiatives like Plan S place ever more emphasis on open access, we may see a shift away from citation-based metrics and toward metrics grounded in transparency and journal quality.

What are your thoughts on the different metrics models? Do you think one should be used over another? Let us know in the comments below!
