Introduction
For generations, peer review has been central to the process scholarly authors go through to get their work published. And until recently, that peer review has consisted exclusively of human eyes looking at a submission.
Now, with the rise of artificial intelligence (AI), the landscape is changing—it’s no longer just about human evaluation. Some journals are shifting to a model that includes AI in the peer-review process. Specific AI tools, such as Enago Read, Consensus, and Journal Article Peer Review Assistant, are already starting to pop up.

This is a drastic change that is understandably controversial. In fact, many journal editors haven’t implemented this model yet because they are apprehensive about possible negative consequences. Plenty of journals still have policies outright prohibiting the use of AI in reviewing—and imposing penalties on those who do not disclose that they have done so.
Slowly but surely, though, AI is making its way into peer review. Experts predict it will become an increasingly important part of the peer-review process.
And despite some editors’ reservations, experts also agree that AI in peer review isn’t a totally negative development. In many ways, it creates advantages for authors. Admittedly, it does also have the potential to cause major problems, but in Part I of this two-part series, we will focus on the positives that authors can enjoy.
Since AI in peer review is so new, its reputation among authors hasn’t been set in stone just yet. Still, most authors will agree that there are three main benefits of AI in reviewing:
- Ethical fairness
- Lack of bias
- Speed and efficiency
The Ethics Factor
Depending on their experience and knowledge of the literature, a human reviewer might well be able to tell whether an author is plagiarizing or falsifying results. It is not guaranteed, though; it depends entirely on the specific person who ends up being asked to review.
That’s where AI comes in. The technology can flag plagiarism or suspect data in a matter of seconds. This quickly weeds out the authors who are using dishonest methods to try to get their “work” into print, and makes room for those who are doing honest research.
Many AI tools currently also focus on the more mundane tasks, such as language and grammar checking. This gives reviewers much-needed time to focus on the ethical issues of a paper instead of getting bogged down in these types of details. A common complaint among many seasoned reviewers has been that they lack time to evaluate a paper’s ethical and scientific value because they are too busy dealing with poorly written sentence structure. But with AI, they can completely bypass that step, focusing instead on the heart of the matter.
No Room for Bias
During a “single-blind” review process, while the author does not know the identity of the reviewer, the reverse isn’t true—the reviewer does know who the author is. In theory, this should not impact the reviewer’s professional judgment of a book or manuscript. But in practice, sometimes bias is inevitable, particularly if the reviewer is already acquainted with the author or their reputation. In the science realm, where most areas are highly specialized, this is not uncommon.
The “double-blind” review process, where the reviewer and author are ostensibly unknown to each other, was designed to eliminate this problem. The reality is, though, that even with the best efforts to disguise identities, the reviewer might still be able to figure out who wrote the work being reviewed—again, these communities can be too small to hide everything. So reviewer bias might creep into the picture, even in a double-blind publication.
But with AI, because the evaluations are machine-generated, this kind of reviewer bias simply doesn’t come into play. A computer isn’t going to have preconceived notions about a particular author the way a human might.
The Need for Speed
Any seasoned reviewer will tell you that reviewing isn’t a task that can be completed easily or quickly—thoroughly and appropriately reading/evaluating work takes time, effort, and energy. This can negatively impact authors in multiple ways.
For one thing, a reviewer with 10+ papers on their desk might simply have too full a plate to take on anything else at a given time. If that reviewer happens to be the most knowledgeable in their field, this could keep an author from getting the right set of eyes on their paper, putting them at an unfair disadvantage compared to other authors who were in the right place at the right time.
Or let’s say a reviewer is overloaded, facing burnout, and stops reviewing assigned work as thoroughly due to fatigue; it’s a human flaw that happens to the best of us. Again, some authors might be shortchanged when this occurs, despite having put major energy into their manuscript.
In these types of cases, the unfortunate consequence is that finding reviewers often becomes extremely difficult for editors. When that happens, a manuscript can spend weeks, or even months, in limbo—this, understandably, makes many authors impatient and frustrated.
With AI, all of this becomes irrelevant. Human fatigue is simply not a factor, nor are limits on how many papers a reviewer or editor can handle. Everyone is on the same playing field, regardless of when their work entered the submission pile. And the review process is much less likely to drag out, reducing aggravation for authors and editors alike.
Being Responsible
Most experts agree that AI will likely never completely replace human peer review. However, as time goes on, it will likely take a more prominent role. This necessitates peer reviewers educating themselves on best practices for AI use.
When a peer reviewer decides to use AI as part of the peer-review process, they must first do their due diligence to make sure they are doing so ethically. This includes carefully reviewing the publication’s AI policies to ensure compliance, and keeping track of the specific AI tools used so that they can be disclosed in the report to the authors. Then, when writing that report, the reviewer should be completely transparent about exactly how those tools were used.
Looking Ahead
Like most technology developments, AI can be a productive tool when it’s used in the spirit for which it was intended—but when abused, unfortunately, it can have the opposite effect. Part II of this series will focus on the pitfalls authors must be watching for when it comes to the implementation of AI in the peer-review process.
Check out Part II of this series here.
By Anne Brenner
Anne is a Managing Editor at Technica Editorial