With the open science movement growing in popularity over the last decade, preprints have become an ever-larger part of the publishing industry. In some ways, they have also become an ever-larger problem for it. ISMTE’s recent global event discussed preprints’ growing impact on the publishing industry in a panel titled “Research Integrity and Ethics (Part II) – preprints and misconduct.”
For those who don’t know, preprints are essentially unpublished manuscripts that have not gone through the peer review process. Authors upload their manuscripts to preprint servers to disseminate their research as quickly as possible. Richard Sever, the cofounder of bioRxiv and medRxiv (two of the largest preprint servers), made it clear that preprints are essentially a “service” to authors and should not be seen as publications. While many preprints do eventually go through peer review and get published, the final article frequently differs substantially from the initial preprint. Sever explained that preprints posted to bioRxiv and medRxiv go through a screening process to determine that the manuscript is science-based, contains actual research, does not spout obviously dangerous rhetoric, and is not plagiarized. However, the servers do not verify the accuracy of the research on display, asking instead that readers take what they read with a grain of salt. The Russian proverb “trust, but verify” is the name of the game when it comes to these preprint manuscripts, and most scientific researchers understand this.
However, the same cannot always be said for the media at large. Lacking the context of what a preprint is and how the servers work, media outlets outside the academic publishing world have picked up research in preprints and presented it as peer-reviewed science. This has been particularly prevalent during the pandemic, as many authors posted their COVID-19 studies as preprints to keep discourse moving with a constantly changing scientific landscape, especially in the early days of COVID infections. More than 19,000 manuscripts about COVID-19 were shared in the first four months of the pandemic, and one-third of them were preprints. Multiple preprints with faulty data littered the servers and were picked up by both mainstream and social media: articles arguing that COVID-19 was manufactured from HIV, articles exaggerating the effectiveness of ivermectin in treating COVID-19 symptoms, and articles presenting an unfounded link between vaccines and myocarditis all saw the light of day.
Michele Avissar-Whiting, the former editor-in-chief at Research Square, notes that all of these preprints were criticized by the medical community within days and removed from the servers to halt the spread of misinformation. By then, however, the damage was done: the claims were already in the discourse and being spread by non-experts. It’s a problem that all the experts on the panel agree will continue despite increased regulations and procedures. Sever notes that his servers use automated tools as much as possible to verify author credentials and research in order to cut down on potentially faulty submissions. Yet you can only do your best to build a better mousetrap. A hungry mouse (or an unethical researcher) will, at the end of the day, find a way to get the cheese.
By: Chris Moffitt
Chris is a Managing Editor at Technica Editorial