AI Is Everywhere
The publishing realm—including, but certainly not limited to, scholarly publishing—is morphing before our eyes. Among many other changes, that means substantial change to the peer-review system authors have grown to know over the past several decades. Peer review just isn't the same peer review it was a generation ago—and authors will have to accept this reality if they don't want to get left behind.
One of the biggest changes in peer review (and, really, one of the biggest changes in publishing in general) is the influence of AI. This is, right now, an extremely contentious and controversial issue. Some believe it’s beneficial, while others maintain that it does much more harm than help—and the argument could most certainly be made either way.

The Benefits
Expediting the Process
Most publishers and authors would like the peer-review process to move as quickly and efficiently as possible. But when journals receive a deluge of submissions, it can be tough for editors to keep up. And there will inevitably be situations where peer reviewers run into snags and their reviews become significantly overdue—it's just the nature of the beast.
But the use of AI might at least provide a starting point for mitigating this problem. The right AI tools can quickly assess the quality of a manuscript, determining within minutes whether it's even worth sending to human peer reviewers at all. And if the answer is "no," the editor can then notify the author that the manuscript will not be considered further. This saves time for everyone—not only the editors and reviewers evaluating the manuscript, but also the author, who isn't left with false hope of the work being published in that journal.
Dataset Analysis
Oftentimes, reviews of scholarly works require analyzing large amounts of data—a task that is difficult and time-consuming even for the most skilled analysts. A human reviewer can get bogged down in this process, possibly even needing help from colleagues. That slows things down, creating a barrier to the quick turnaround times most authors and editors are aiming for.
AI, meanwhile, can be trained to analyze huge amounts of data accurately and efficiently in mere minutes. It can also analyze publication trends in a particular field (SEO keywords, for example), helping an editor determine how marketable a work will be, or how an author might improve that marketability during the revision process. A human reviewer would have to conduct tedious research over a long period of time to analyze those same trends. This makes AI invaluable for those on all sides of the process: editors, authors, and the reviewers themselves.
Reducing Bias Risks
If a human reviewer evaluating an author's manuscript or book chapter knows the author's identity, chances are there will be some level of bias in the evaluation. Often, even when reviewers aren't supposed to know who the author is, they can figure it out fairly easily—this happens frequently in scientific and medical specialty areas in particular, simply because those communities are so small.
With AI, though, that particular lack of objectivity doesn't exist. A computer has no personal stake in the outcome and carries none of the partiality a human reviewer brings, knowingly or not. In some ways, this eliminates a potential source of bias that could muddy the waters during peer review, creating a fairer, more consistent playing field for all authors.
…In other ways, though, AI actually creates more room for bias, not less—more on that in the next section.
The Drawbacks
Bias—It’s a Double-Edged Sword
As previously mentioned, AI can, in some ways, curb human biases in the peer-review process. In other ways, though, it can actually make bias worse. AI algorithms often assume that certain words or phrases mean certain things, when the truth of the matter is more complex.
Take, for example, a manuscript or book chapter written in a deliberately casual, conversational voice, so that a number of its sentences aren't technically grammatically correct. A human reviewer would likely recognize this and evaluate it accordingly. An AI tool, on the other hand, might automatically brand the manuscript or chapter as "poor quality." The editor might then reject it without looking into the real reasons for the unconventional grammar or sentence structure, missing out on work that is actually quite good.
So, at the very least, an editor should look into why AI labeled a particular submission as unworthy of further consideration. Realistically, though, not all editors will take the time to do that—and that's bad news for hard-working authors.
The Complexity of the Human Mind
Traditionally, scholarly editors and publishers have invited the top minds in their journals' fields, such as science and medicine, to review authors' submissions. These experts have extensive education and experience, making them ideal candidates to evaluate the work of authors who may be more up-and-coming—and theirs is the sort of worldly knowledge that AI simply cannot replace.
Sure, AI can be trained to recognize certain facts in a particular field. The problem, though, is that it can't always put those facts into the appropriate context the way the human mind can. By moving away from human reviewers in favor of an AI-based approach, editors risk getting only surface-level review remarks, as opposed to the deeper dive that scholarly works typically need.
Confidentiality Breaches
Typically, the review process for authors' scholarly work involves some degree of confidentiality. For a "single-blind" journal, the reviewer knows the identity of the author, but the author does not know the identity of the reviewer. For a "double-blind" journal, the reviewer's and the author's identities are, at least in theory, unknown to each other. Either way, confidentiality is maintained to one extent or another.
But when AI enters the process, it can be very difficult—if not impossible—to maintain confidentiality. To assist in the review process, an AI tool must first be given the author's work to analyze. For many types of works under review—grant applications and proposals in particular—this will inevitably include the author's personal information. And unfortunately, with AI, there is simply no way to guarantee that private data won't be disseminated elsewhere.
There are, of course, steps that editors and publishers can take to mitigate this risk. These might include, for example, prohibiting AI tools that are particularly susceptible to information leaks, or installing electronic controls designed to tighten security around AI technology.
Even so, when it comes to AI and technology in general, completely eliminating the possibility of a data breach is impossible. And if privacy violations do occur, they can create serious trust issues between authors and editors within the scholarly publishing community.
Final Thoughts
The use of AI in peer review has the potential to make a valuable contribution to the publishing field. Still, there are major concerns about AI eventually replacing human reviewers entirely—and the negative ramifications of that possibility are very real.
This means it's crucial for editors and publishers never to forgo human involvement in the review process entirely in favor of an AI-only model. Yes, going fully automated is tempting as a time-saver, but it's simply asking for trouble. A mix of human and automated reviewing components, on the other hand, will still expedite the process while preserving the ethics of how peer review was meant to operate.
By Anne Brenner
Anne is an Assistant Managing Editor at Technica Editorial
