In Search of the Perfect Metric: Comparing Three Models

A critically important aspect of publishing for publishers and authors alike is the metric that measures a journal’s significance in its field. The impact factor has long reigned as the preeminent yardstick of a journal’s relevance and importance: it is the average number of citations received in a given year by the articles a journal published in the previous two years. Authors often choose which journal to submit their manuscript to based on impact factor, and because of this, editors are motivated to make sure the number increases each year. However, the metric has several inherent flaws. To name a few:

  • Self-citation can artificially inflate a journal’s impact factor.
  • Because reviews tend to be cited more, journals that publish a high proportion of review articles will likely have a higher impact factor than journals that publish a mix of research articles, letters/communications, and reviews.
  • Impact factors cannot meaningfully be compared across disciplines: while natural scientists predominantly publish their work in journals, researchers in the humanities and social sciences often publish books rather than journal articles.
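For concreteness, the two-year impact factor is simply a ratio: citations received this year to articles from the previous two years, divided by the number of citable items published in those two years. A minimal sketch, with illustrative numbers only:

```python
def impact_factor(citations_this_year: int, citable_items_prev_two_years: int) -> float:
    """Two-year impact factor: citations received in the current year
    to articles published in the previous two years, divided by the
    number of citable items published in those two years."""
    return citations_this_year / citable_items_prev_two_years

# Hypothetical journal: its 2022-2023 articles drew 1,200 citations
# in 2024, from 400 citable items published in 2022-2023.
print(round(impact_factor(1200, 400), 1))  # 3.0
```

Note how the formula exposes the self-citation flaw above: every self-citation adds to the numerator just like any other citation.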

Considering these drawbacks, other measures of a journal’s importance have been created that are not based solely on citations. One of these is Altmetrics (“alternative metrics”), which uses a donut graphic to indicate how many mentions an article receives from sources such as social media posts, blogs, Wikipedia, and news articles. However, Altmetrics is intended to complement metrics like the impact factor, not replace them; it reflects the attention and “buzz” that a given article receives. It is important to note, however, that buzz can be bad: if an article goes viral on Twitter for ethical misconduct or poor methodology, the donut reflects only the raw number of mentions, with no caveat that those mentions exist because the article is flawed.
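The underlying arithmetic is essentially a weighted count of mentions by source type. The weights below are hypothetical, not Altmetric’s actual scheme, but the sketch shows why the number is sentiment-blind:

```python
# Hypothetical source weights for illustration only; Altmetric's
# real weighting scheme differs.
WEIGHTS = {"news": 8, "blog": 5, "wikipedia": 3, "social": 1}

def attention_score(mentions: dict) -> int:
    """Weighted count of mentions by source type. Counts volume only:
    a critical mention scores exactly the same as a positive one."""
    return sum(WEIGHTS.get(source, 1) * count for source, count in mentions.items())

# Two news stories, one blog post, and 40 social media posts --
# whether celebrating the article or calling for its retraction.
print(attention_score({"news": 2, "blog": 1, "social": 40}))  # 61
```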

A third metric, introduced by the Center for Open Science, is the TOP Factor (“TOP” stands for Transparency and Openness Promotion). The Center for Open Science argues that the impact factor is failing as the gold standard of metrics because it encourages misconduct, and that it is overused only because there are no other good options. Rather than measuring citations, the TOP Factor scores journals on “the steps that a journal is taking to implement open science practices, practices that are based on the core principles of the scientific community.” With this metric, what is measured is the quality of the journal’s policies rather than how often its articles are cited.
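In other words, the TOP Factor is a policy rubric: a journal is rated on each TOP standard (data transparency, code transparency, preregistration, and so on) on a 0–3 scale of increasing stringency, and the scores are summed. The standard names and scores below are illustrative, not a rating of any real journal:

```python
# Hypothetical ratings for an example journal. Under the TOP
# guidelines, each standard is scored 0 (not implemented)
# through 3 (most stringent policy).
top_scores = {
    "data transparency": 2,
    "code transparency": 2,
    "data citation": 1,
    "preregistration of studies": 1,
}

top_factor = sum(top_scores.values())
print(top_factor)  # 6
```

Note that nothing in this sum depends on citation counts, which is exactly the point.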

Even with the emergence of other metrics, it’s clear that the impact factor isn’t going anywhere anytime soon. However, there could be a shift away from citation-based metrics and toward metrics based on transparency and journal quality as initiatives like Plan S put more and more emphasis on open access.

What are your thoughts on the different metrics models? Do you think one should be used over another? Let us know in the comments below!


The Technica Advantage

At Technica Editorial, we believe that great teams cannot function in silos, which is why every member of our staff is cross-trained in editorial support and production. We train our employees from the ground up so they can see how each role fits into the larger publishing process. This strategy means Technica is uniquely positioned to identify opportunities to improve and streamline your workflow. Because we invest in creating leaders, you get more than remote support — you get a partner.