How Do You Spot a Fake Citation When the Government Can’t?

As AI becomes more accessible and user-friendly, more and more authors are starting to use it to assist with various aspects of the writing process. Despite watching The Terminator more times than I can count, even I’ve started experimenting with ChatGPT to see how it can alleviate stress and make my life easier. In past blog posts, Technica has highlighted both the perks and the pitfalls of using AI in peer review and scholarly research, but one of the biggest pitfalls is becoming increasingly concerning: fake citations.

Fake Citations in Publishing

                As noted repeatedly, AI chatbots and large language models (LLMs) are only as smart as the research available to them when aggregating searches. And sometimes, when in doubt, an LLM will simply make something up. When LLMs like ChatGPT generate fake references, it’s usually due to a process known as “hallucination,” in which a tool can’t find an exact match for a citation in its dataset and fabricates one based on similar references. It’s one of the most problematic aspects of LLMs, and it is infiltrating almost every subject and discipline of publishing.

                In April of 2025, a paper was published in the journal World of Media titled “Monitoring the development of community radio: A comprehensive bibliometric analysis.” One of the reviewers received a notification about the acceptance on her Google Scholar account but remembered that she had recommended rejecting the submission because she suspected it was written by AI. Why did she recommend rejection? Simple. The manuscript cited an article allegedly authored by the reviewer herself. Except the reviewer knew her own work and knew that the cited article was fake. It was one of many references ultimately determined to be fabricated. Following an inquiry, the manuscript was removed from the journal’s online page. The authors of this particular manuscript did note that AI was used “to assist with certain aspects of the manuscript, such as grammar and style only. We used standard bibliometric tools for data collection and management, as well as basic proofreading tools.”

                These authors aren’t the only ones throwing around fake citations. An expert (or at least self-proclaimed expert) on misinformation and social media landed himself in hot water after citing fabricated references during testimony in a significant court case. Stanford professor Dr. Jeff Hancock supplied legal documents as part of his testimony on Minnesota’s law on the “Use of Deep Fake Technology to Influence an Election” that included citations of research that did not exist. Like the previous authors, Hancock admitted that he used ChatGPT to help generate the references for his testimony documents.

                And not even government officials are immune to including fake citations in their research and studies. The Make America Healthy Again report released in May of 2025 by Health and Human Services Secretary Robert F. Kennedy Jr. was heralded by the Trump administration as the new “gold standard” on childhood disease. The problem? The report included several citations to studies – many focusing on ultra-processed foods, pesticides, prescription drugs, and childhood vaccines – that were seemingly fabricated. Nonprofit news publication NOTUS noted not just false citations but also incorrect formatting in the references, missing authors, and incorrect issue numbers for certain citations. Following NOTUS’ report, a new version was generated and released to the public, with the fake citations removed and replaced with sources that do appear to be real and many of the formatting concerns resolved. HHS spokesperson Andrew Nixon responded to the controversy in an email stating: “Minor citation and formatting errors have been corrected, but the substance of the MAHA report remains the same.”

                So, what can be done to catch these fake citations before they end up in published documents or testimony? It should be obvious, but authors should be double-checking their research and verifying references before submitting any documents for consideration, whether to a peer-reviewed journal or, especially, as an official government document. This is especially true for any research conducted using AI chatbots or LLMs. Tools like Thrix can help authors format their bibliographies in multiple styles, but many also have cross-referencing features that search databases of articles to determine whether a citation is correct and valid.
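The cross-referencing features described above generally boil down to one idea: look the claimed citation up in a scholarly index and see whether anything closely matches. Below is a minimal sketch of that idea using the public Crossref REST API; the 0.9 fuzzy-match threshold and the `title_matches`/`lookup_titles` helpers are illustrative assumptions, not a description of how any particular commercial tool works.

```python
import difflib
import json
import urllib.parse
import urllib.request

def title_matches(claimed, candidates, threshold=0.9):
    """True if any indexed title closely resembles the claimed title.

    Uses a simple fuzzy comparison; the 0.9 threshold is an
    illustrative assumption, not a standard value.
    """
    claimed = claimed.lower().strip()
    return any(
        difflib.SequenceMatcher(None, claimed, t.lower().strip()).ratio() >= threshold
        for t in candidates
    )

def lookup_titles(claimed_title, rows=5):
    """Ask the public Crossref API for works whose bibliographic
    metadata resembles the claimed title; return candidate titles."""
    query = urllib.parse.urlencode(
        {"query.bibliographic": claimed_title, "rows": rows}
    )
    with urllib.request.urlopen(f"https://api.crossref.org/works?{query}") as resp:
        items = json.load(resp)["message"]["items"]
    # Each Crossref work record stores its title(s) as a list.
    return [t for item in items for t in item.get("title", [])]

# Sketch of the check: if nothing in the index matches, flag the
# reference for manual review rather than declaring it fake outright.
# candidates = lookup_titles(claimed_title)
# if not title_matches(claimed_title, candidates):
#     print("Suspect citation - verify by hand")
```

A no-match result is only a flag, not proof of fabrication: legitimate references can be missing from any one index, which is why the final check should still be a human one.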

                For publishers, the motto of “trust, but verify” is becoming the norm. Edifix is an AI tool that offers authors and publishers a way “to meet the demands of both speed and quality by incorporating automated bibliographic reference processing into your publishing workflow.” The tool claims to be able to check both style requirements and accuracy. Other journals and publishers are getting even more creative and using similarity checkers in reverse to vet reference citations. By using tools like iThenticate, journals can check whether references have been cited in other papers as a way of verifying reference accuracy, although this check can be misleading depending on the database used by the software and the publication dates of the references.

                The key takeaway here is that anyone who engages with ChatGPT and other LLMs needs to be aware of the potential for fabricated references. Authors should take the initiative and double-check their own work to make sure everything is verified, whether by hand (preferable) or through a reference verification tool. And publishers need to be prepared with their own safeguards to ensure they are not publishing and perpetuating “fake news.”

By Chris Moffitt
Chris is a Managing Editor at Technica Editorial

You May Also Be Interested In

Plagiarism and ChatGPT: What Every Author Needs to Know

For most authors, using ChatGPT might almost seem like it’s not even a choice anymore—it’s practically mandatory. It cuts down significantly on the amount of time that it takes to complete a book project—meaning those who don’t use it are going to fall seriously...

Ethics in Peer Review: Avoiding Conflict of Interest

The peer review process, by nature, is designed to be free of conflict of interest—that is, the reviewers should be unbiased when it comes to the authors whose work they are evaluating. True objectivity, however, can be difficult to obtain, particularly if the review...

The Technica Advantage

At Technica Editorial, we believe that great teams cannot function in silos, which is why every member of our staff is cross-trained in editorial support and production. We train our employees from the ground up so they can see how each role fits into the larger publishing process. This strategy means Technica is uniquely positioned to identify opportunities to improve and streamline your workflow. Because we invest in creating leaders, you get more than remote support — you get a partner.