Plagiarism and ChatGPT: What Every Author Needs to Know

For many authors, using ChatGPT hardly feels like a choice anymore; it's practically mandatory. It can dramatically cut the time it takes to complete a book project, which means those who don't use it risk falling seriously behind when it comes to getting their work published.

Fighting Plagiarism

There are, understandably, plenty of skeptics out there. But when it’s used correctly, ChatGPT really doesn’t have to be a negative development in the book-writing world; it can instead be a valuable tool.

Still, this comes with plenty of caveats. And one of the biggest obstacles for authors is making sure they’re not plagiarizing—whether on purpose or not—when they type their own ideas into the ChatGPT box.

Under OpenAI's terms of use, a ChatGPT user owns the content ChatGPT creates for them, even on the free version (though that content is not technically public domain). This means, generally speaking, that content can be used freely. So, in and of itself, using ChatGPT doesn't constitute passing off anyone else's work as your own.

The potential issue, though, lies with the types of information authors obtain through ChatGPT. That information can, and will, stir up legal trouble if the wrong person (or organization) spots their ideas presented as your own in print, without proper citation and attribution.

Whether that happens depends on three major factors. The first is exactly what kind of material you're typing into the box. The second is whether that material includes the proper names of real people. And finally, there's the question of whether an author is using ChatGPT to conduct research for them, as opposed to using it as a supplementary tool alongside their own research.

What Kind of Content Is Going Into the Box?

If an author wants to have a certain type of voice—casual and friendly instead of formal, for example—there’s nothing wrong with writing out their ideas, plugging them into ChatGPT, and asking it, “Make this as casual-sounding as possible.”

If, on the other hand, an author wants their work to sound more like another specific work on similar subject matter, that's where things can get messy.

Let’s say an author plugs their own passage into ChatGPT, and then plugs in another passage written by a different author, one who is known for a casual writing style. Asking ChatGPT to make the first author's work sound more like the second author's voice is plagiarism.

It might not seem like it at first, because the author is still asking ChatGPT to preserve their own ideas and thoughts. The problem, though, is that the author is branding someone else's style of expressing those thoughts as their own, and that crosses the line between what is and is not acceptable.

The Name Game

Just by mentioning a public figure's name, even if it's not directly in the context of imitating that person, an author is running the risk of plagiarizing. That's because when ChatGPT sees a commonly recognized name, the model draws on the vast amount of that person's published writing and recorded speech it absorbed during training. This means that person's voice can end up in an author's work, with or without the intent of this happening.

Maybe an up-and-coming author is curious about what an established, well-known author might think about their work—someone like Stephen King or J.K. Rowling. That author might type their paragraph into ChatGPT and ask, “What would Stephen King or J.K. Rowling say about this?”

At that point, ChatGPT will draw on the wealth of publicly available quotes from those people, whether from their published works (like books) or their spoken words (like speeches, magazine interviews, or podcast appearances).

And their style of writing and/or speaking will seep into the author's original paragraph, even though that wasn't necessarily what the author was trying to make happen. So, the moral of the story is: whenever you use a person's name, for any reason, proceed with caution.

The Research Factor

Many authors, particularly scientific authors, are finding ChatGPT's "deep research" feature useful. To use it, authors can attach documents built from research they have conducted, such as graphs or spreadsheets, and ask the model to analyze those results in particular ways.

This can most certainly cut down on the amount of time authors would otherwise be spending on grunt work, such as figuring out mathematical formulas to interpret results from experiments they have conducted. By allowing AI to complete these types of steps for them, authors are streamlining the process—and that’s not only fair, but encouraged!

What’s not fair, and what should be strongly discouraged, is asking ChatGPT to actually conduct the research component instead of authors doing it themselves. There’s a huge difference, for instance, between asking ChatGPT to analyze results of one’s own experiments versus asking it to crawl the web in search of experiments others have done—and then listing those experiments as one’s own in the published product.

But there's often a fine line between the two, and things can get a little fuzzy in determining which is which. So, here's a good rule of thumb: if you, as the author, took the time to actually perform an experiment, whether physical (such as working with chemicals in a laboratory) or verbal (such as conducting surveys), you're most likely fine asking ChatGPT to analyze the meaning of the results you obtained.

However, if you're having to ask ChatGPT what the results themselves were, that's another story. In that case, you haven't conducted an experiment of your own, and you're teetering on the plagiarism line.

Final Thoughts

…Guess what? I used ChatGPT to help me figure out a title for this blog post!

After completing the blog post, which I wrote 100% on my own, with no help whatsoever from ChatGPT (or any other form of AI, for that matter), I typed it into ChatGPT with the following request: “Help me come up with a title for this blog post that uses the word ‘plagiarism.’”

Well, ChatGPT immediately gave me numerous options that I thought were good fits, and “Plagiarism and ChatGPT: What Every Author Needs to Know” seemed like an especially great one: succinct, to the point, and covering all of the major bases.

But there are several key takeaways here. First off, I used ChatGPT only for the title, and only after the rest of the post was finished. Moreover, this blog post is my voice and my thoughts, not AI-generated content with zero reflection of my own writing style and tone.

And finally, I'm being open and transparent about the fact that I did, indeed, use ChatGPT, something authors should always do as well, typically in the acknowledgments section. Because without giving credit where credit's due, it's really no different from passing off another human's ideas, whether big or small, as your own.

Here’s the bottom line: If you’re using ChatGPT to augment your original ideas, that’s fine—in fact, that’s exactly what it was made for. But authors must think about whether they might instead be turning someone else’s ideas into their own, whether it’s deliberate or not (and in most cases, it’s not).

…It’s a fine line, and one that authors are figuring out more and more every day!

By Anne Brenner
Anne is an Assistant Managing Editor at Technica Editorial
