The Future of AI in Peer Review, Part II: The Pitfalls

In the first of this two-part series, we explored how the use of artificial intelligence (AI) during the peer review process can be helpful and productive for authors who are looking to submit their work to peer-reviewed journals. We delved into the benefits that authors can enjoy in terms of ethical fairness, lack of bias, and speed/efficiency.

Unfortunately, like most new technological developments, AI in peer review also has an ugly flip side. This second installment will provide an overview of the potential pitfalls authors might encounter if AI assists in peer reviewing their work, as well as what they can do to avoid falling victim.


The Good, the Bad, and the Ugly

Because AI in peer review is still new, all the pros and cons have yet to be clearly established and evaluated. It is simply impossible to know all the possible problems that might arise from it. Over the next few years, this will likely become clearer, based on both anecdotal evidence and overall trends.

For now, though, the consensus is that there are—at the very least—three major hurdles authors must overcome if their work is reviewed with the help of AI:

  1. Information leakage/breached confidentiality
  2. Lack of appropriate expertise
  3. Vagueness in feedback

Confidentiality Matters

When an author submits work to a journal, they do so with the understanding that their sensitive research data will be handled with care by all parties, including (but not limited to) reviewers. And if the reviewer is a human who is experienced in the peer-review realm, they will most likely know how to keep private information under wraps.

This is, however, not so with AI. Because AI tools can't necessarily discern what's public from what's not, it becomes all too easy for sensitive data to be leaked. Many AI tools store submitted text on external servers, and some may even use it to train future models. If, for instance, the author made a breakthrough discovery that was meant to be disclosed first in the paper, an AI tool could surface that finding elsewhere, leaving plenty of room for the discovery to be attributed to someone else or distorted into something it's not. This could negatively, and possibly irreversibly, impact not only the author's specific manuscript but also their reputation.

And it goes beyond just research data. AI tools are sometimes able to store graphics from a manuscript, even long after the peer review process is complete. This makes authors vulnerable to their work being plagiarized, months or even years down the road. AI, after all, can write entire falsified papers, so it should be no surprise that it can create a breeding ground for authors' work to be inappropriately copied before it even goes to print.

The Expertise Just Isn’t There

At least in theory, human peer reviewers are experienced, reputable professionals in their fields. This means that they should have a deep understanding of the manuscript’s topic, along with reliable knowledge about how (or if) the research will advance the field at hand.

Unfortunately, this kind of human expertise just isn't something that can be replicated with any type of technology. Yes, AI tools can be trained to pick out certain words or phrases that likely indicate the important elements of a good manuscript, such as novelty and impact. Likewise, they can be programmed to weed out manuscripts whose wording points to a lack of those elements. But there is simply no such thing as an AI tool that gets it right every time.

AI tools also can't verify mathematical calculations the way humans can, unless they have access to the raw data that went into those calculations. In many cases, the raw data isn't included in an author's manuscript and must be accessed separately. This makes it nearly impossible for any equations or formulas to be verified if AI is the sole review method being used.

Unless there is at least one human set of eyes checking the AI's work, the consequence tends to be either stellar papers that slip through the cracks or mediocre papers that advance far further in the process than they deserve. Both are results that peer review was designed to help avoid, not promote.

When It’s Just Too Vague

As advanced as AI is becoming, it’s still obvious to many authors when it’s being used in place of human help. AI will often use nonspecific language that does little (if anything) to improve the quality of the work.

Multiple problems can arise from this. An author might immediately recognize the peer-review feedback as AI-generated, which defeats the purpose of using AI in the first place. Even if the author doesn't recognize the feedback as AI-generated, they still won't have much to work with when making revisions. And if the revision, once submitted, is again reviewed primarily with AI tools, the author may never receive the constructive criticism needed to make significant progress.

The result could be, again, a manuscript with outstanding ideas that never makes it to publication simply because it never received enough quality feedback, or, perhaps even worse, a low-quality manuscript that does make it to print because no one bothered to double-check the vague, AI-generated critiques.

Which Tools to Use?

With so many different AI options at the disposal of peer reviewers (Enago Read, Consensus AI, and Journal Article Peer Review Assistant, just to name a few)—and so much room for such serious problems to develop—it can be tough for editors to know which AI tools are most effective while causing the least amount of potential trouble.

For this reason, it's becoming essential for editors to research the available tools and use the combination that maximizes efficiency while minimizing risk. For instance, one tool might be more effective at identifying plagiarism, while another might be geared more toward making the article structure easier for readers to follow. To cover all bases, a reviewer might want to use both, and this means authors should familiarize themselves with all the programs that might be assisting in the peer review of their work.

Final Thoughts

Right now, the publishing community seems to be strongly divided about the use of AI in peer review. Some believe it's the tool that will fix long-standing problems associated with human peer review, while others insist that it creates far more problems than it solves. Still others take the stance that it depends on the situation: some peer-review circumstances lend themselves more to AI use than others. While many journals are adopting guidelines about how AI use in peer review should be regulated (if it's permitted at all), it could be quite some time before those guidelines become standardized.

In the meantime, though, any authors who don’t consider the possibility of AI being used to assist in their manuscripts’ peer review—and educate themselves accordingly—are only doing themselves a disservice. Turning a blind eye to the new playing field of peer review won’t change the fact that AI, like it or not, is here to stay!

Haven’t read Part I of this series? Click Here!

By Anne Brenner
Anne is a Managing Editor at Technica Editorial

