Scholarly journal editors have always had some degree of difficulty securing peer reviewers to evaluate submissions for publication suitability. But in recent years, this phenomenon—known as “peer reviewer fatigue”—has been steadily increasing, compromising both review quality and timeliness.
The culprits behind this uptick can be divided into three main categories. First off, there’s supply and demand—although the number of scholarly journal submissions is going up, the number of qualified reviewers isn’t going up with it. Then, there’s the issue of reviewers’ busy schedules. As staffing challenges mount, qualified reviewers are being forced to spend more time in their day jobs, leaving less and less time to devote to reviewing.

And finally, editors are also finding themselves stretched thinner and thinner, giving them fewer opportunities to form professional or social connections with qualified reviewers—which, in turn, makes it harder for them to quickly narrow down the list of possible candidates.
More Submissions—and Not Enough Reviewers to Keep Up
Scholarly journals are currently seeing submissions in record numbers. This is occurring for a variety of reasons, including increased pressure worldwide on authors to get work published, increased intensity and competitiveness in the academic sector, and the rise of newer, more specialized scientific fields.
But while the number of submissions is going up, the number of qualified peer reviewers isn’t going up with it—at least, not quickly enough to keep pace. In fact, as more specialized fields emerge, some types of manuscripts are actually seeing the number of competent reviewers plummet, since knowledgeable experts in such new subject areas are few and far between. This issue of supply and demand is contributing significantly to peer reviewer fatigue.
Staffing Challenges: The Reviewers’ End
The current economic environment is also playing a role in the rise of peer reviewer fatigue, because it’s creating more responsibilities for reviewers in their workplaces—and thus, less time to spend on reviews. Most qualified scholarly reviewers are working in academia and/or industry—both of which are fields that are especially feeling the pinch of budgetary cuts.
So, the employees in these fields—who often do double duty as scholarly reviewers—are working in understaffed spaces, doing multiple people’s jobs on their own. They simply do not have the mental energy to properly review a manuscript in their spare time, which is growing more and more scarce.
Adding to this dilemma is the issue of incentives for peer reviewers. Ethically speaking, there is generally no financial compensation for the peer review process; it’s a voluntary action that is done solely for the purpose of maintaining a sound scholarly publishing landscape. But as both submission volume and other reviewer responsibilities continue to skyrocket, it’s becoming more difficult to find would-be reviewers who are willing to invest time and energy without some kind of tangible reward—and understandably so.
Staffing Challenges: The Editors’ End
Editors, too, whether in academia or in industry, are facing staffing challenges in their respective fields. Currently, economic conditions mean fewer staff members are available for teaching, laboratory work, and research.
As a result, most editors are being forced to put many more hours into duties outside their editorial responsibilities, leaving substantially less time to find and contact reviewers. Moreover, they are unable to devote the necessary time and energy to determining which reviewers are most suitable for which manuscripts. This leads to an increasing number of invitations being sent to reviewers who simply aren’t a good fit—so, they have little choice but to decline. This forces the editors to go back to the drawing board, which drags out the process even more and creates a tiresome environment.
In addition, one of the unwritten rules of peer review is that a diverse set of reviewers should be chosen. That is, in order to minimize potential bias, a manuscript should be evaluated by multiple reviewers with different academic backgrounds. The pressure on editors to make this happen has been mounting even more in recent years—again, in part due to expertise areas becoming more specialized. While perhaps an effective way to ensure the manuscript judgment process remains sound, this practice inevitably creates extra hurdles, and thus more peer reviewer fatigue, for the editors.
The COVID pandemic, and its aftermath, has also likely played a role in new challenges for editors. When COVID first hit, in-person meetings—where editors could meet and greet potential new reviewers—were not an option for about two years. Although some organizations tried to make up for this loss with virtual meetings, these simply could not replicate the feeling of community that comes with face-to-face, in-person contact.
Unfortunately, even once the pandemic subsided, in-person meetings never regained the popularity they had enjoyed in pre-COVID days. This has led to ongoing troubles for editors looking to build and maintain connections with reviewers, contributing significantly to peer reviewer fatigue.
Is AI the Answer?
…Plenty of commentators out there have pointed to AI in peer review as a quick fix for this ever-growing problem—but is it really?
AI tools are changing the way research is conducted, and thus the way reviews are handled. For instance, they have the ability to screen papers for relevance and thus eliminate the bulk of papers not suitable for review, which saves editors time in choosing reviewers. They can also screen submissions for keywords and phrases, which saves reviewers time in writing their evaluations. In addition, AI algorithms can match reviewers with suitable manuscripts, cutting down on the amount of energy journal editorial boards must spend doing so.
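For readers curious what that matching step looks like under the hood, here is a deliberately simplified sketch of the basic idea—scoring reviewers by how many of a manuscript’s keywords appear in each reviewer’s listed expertise. Real editorial AI tools are far more sophisticated (and, as discussed below, even they can miss context); all names and data here are hypothetical.

```python
# Toy illustration of keyword-based reviewer matching.
# All reviewer names and keyword lists below are hypothetical examples.

def match_score(manuscript_keywords, reviewer_expertise):
    """Fraction of the manuscript's keywords found in the reviewer's expertise list."""
    overlap = set(manuscript_keywords) & set(reviewer_expertise)
    return len(overlap) / len(manuscript_keywords)

def rank_reviewers(manuscript_keywords, reviewers):
    """Return reviewer names sorted from best keyword match to worst."""
    return sorted(
        reviewers,
        key=lambda name: match_score(manuscript_keywords, reviewers[name]),
        reverse=True,
    )

manuscript = ["child development", "non-verbal communication", "facial expressions"]
reviewers = {
    "Reviewer A": ["sign language", "child development"],
    "Reviewer B": ["non-verbal communication", "facial expressions", "child development"],
}

ranked = rank_reviewers(manuscript, reviewers)
# Reviewer B matches all three keywords; Reviewer A matches only one.
```

Note that a purely keyword-driven score like this has no notion of what the words mean—which is exactly the kind of limitation the next paragraphs describe.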
But there are multiple caveats. First, AI technology is evolving too quickly for most editors and reviewers to keep up. And it can be equally difficult for editors and reviewers to keep up with different journals’ policies on the use of AI for reviewing purposes, which are constantly changing. Authors also are using AI increasingly often to aid in writing their papers. This means journal policies about AI usage during peer review will necessarily need to be altered to address this ever-growing phenomenon—and trying to remain aware of the latest policies can create substantial barriers for editors and reviewers alike.
Furthermore, AI screening tools are inherently imperfect, because they can’t understand context the way the human brain can. For example, an AI screening process might pair a reviewer with a manuscript because the manuscript is about non-verbal communication in childhood development, and the reviewer is listed online as an expert in matters related to non-verbal communication. However, the reviewer’s real area of expertise is sign language in childhood development, whereas the manuscript is actually about other non-verbal means of communication, such as body language and facial expressions. These are, of course, two entirely different areas of subject matter expertise—and this is the type of nuance that AI, at least for now, won’t always pick up.
So, although AI screening tools might be effective in supplementing the reviewer selection and completion processes, they aren’t a magic bullet for peer reviewer fatigue. Minimizing reviewer burnout has always been—and will always be—a multi-step process, requiring editors, reviewers, and authors to all be conscious of each other’s time and resources.
…At first glance, scholarly peer review might seem like a solitary undertaking. But doing it efficiently simply isn’t a one-person job—it takes teamwork!
By Anne Brenner
Anne is an Assistant Managing Editor at Technica Editorial
