On March 16, 2023, ACES, The Society for Editing, hosted a webinar on ChatGPT, calling on the expertise of Samantha Enslen (former ACES board member and writer) and Corinne Jorgenson, COO of Redpoint AI Engineering and Consulting. Enslen and Jorgenson gave a general, succinct overview of ChatGPT (and language networks in general) and explained its benefits and drawbacks for both copywriting and copyediting.
The dates of this webinar and subsequent blog post are important because AI tools, including ChatGPT (as well as OpenAI’s GPT-4, Google’s LaMDA, Meta’s OPT, etc.), are constantly changing and learning, being updated and “improved” by their creators and the people who use them. The information in this post could quickly become outdated. But we have to start somewhere.
What is ChatGPT? ChatGPT stands for Chat Generative Pre-trained Transformer, and it is a free online tool. It was trained on enormous amounts of internet content – any and all kinds – learning the patterns and relationships of words so that it can generate its own content based on what already exists. ChatGPT and similar AI tools feed on billions of words, “reading” everything from recipes to manifestos to marketing blogs. By learning these language patterns, it can respond to prompts with new content derived from what it has consumed. It doesn’t necessarily copy and paste from existing content but recombines and modifies it to fit the prompt.
ChatGPT is a language model built on deep learning neural networks, an architecture loosely inspired by the human brain, that can process and produce language. It does not know the meaning of the words, but it mimics word patterns so well that it appears to know how to answer.
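The pattern-mimicking described above can be illustrated with a deliberately tiny sketch: a “most likely next word” counter in Python. To be clear, this is only an illustration of learning word patterns from text, not ChatGPT’s actual architecture (which is a transformer neural network trained on billions of words); the sample corpus and function names here are invented for the example.

```python
from collections import Counter, defaultdict

# A toy corpus. A real model trains on billions of words, not one sentence.
corpus = (
    "the editor reviews the draft and the editor returns the draft "
    "with comments and the writer revises the draft"
).split()

# Count which word tends to follow which (word-pair patterns).
followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None if unseen."""
    if word not in followers:
        return None
    return followers[word].most_common(1)[0][0]

print(predict_next("the"))  # → draft
```

The sketch has “learned” that “draft” usually follows “the” in its corpus, without knowing what a draft is. That is the sense in which a language model recognizes patterns rather than meaning.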
Enslen demonstrated using ChatGPT to both create and edit content. You can ask ChatGPT questions, such as “what is copyediting?” or give it tasks, such as “write a formal description of Technica Editorial’s service options,” or even give it passages to edit.
ChatGPT learns from content all over the internet, including pirated or copyrighted material as well as misinformation and disinformation. Its learning is largely unsupervised, so we can’t fully control what it is or is not learning, though users can give it feedback to adjust its answers.
ChatGPT can be prompted to give different answers and edit its answers by sending the original prompt again, asking it to adjust its answer for tone, or giving a “thumbs up” or “thumbs down” to the answer. Through its interactions with users, ChatGPT is learning and collecting more patterns to recognize and use.
In terms of writing, ChatGPT does a reliable job of creating comprehensible content. Of course, it has its limits. ChatGPT can synthesize research you provide, but it cannot do (reliable) research for you. It is better at producing ideas and outlines than trustworthy, printer-ready copy. While ChatGPT could provide a first draft, that draft would need to be fact-checked, checked for plagiarism, and revised.
ChatGPT’s content can include plagiarism, factual inaccuracies (sometimes even citing completely made-up books and articles as sources), and outdated information. Not to mention that, without the human element, it is often redundant and lacks personality. The material is rote, simply because it is all based on what already exists. None of it is entirely new (though this is up for debate). Many publishers, especially of academic journals, have come to terms with AI tools such as ChatGPT contributing to papers, but the human authors must take responsibility for that content, including any factual inaccuracies, plagiarism, and generally bad writing.
In terms of editing, ChatGPT is adept at “lay” editing – clarifying confusing passages and enhancing readability – but it does not meet professional standards: it has gaps in its knowledge of mechanics and editorial styles, and its answers are inconsistent. While it might not necessarily know the difference between slang and technical language, it knows the patterns of when one would be used over the other and can be prompted to change tone.
Enslen pointed out that ChatGPT was not error-free when editing and didn’t know specific editorial styles. There are, however, other sites that can fill gaps that ChatGPT leaves, such as Thrix for reference formatting and Grammarly for editing. These tools aren’t perfect and often require a readthrough and confirmation of edits, but they’re getting better at their job by the day.
AI tools, however, are still missing the “human factor.” Humans bring new findings, fresh quotations and phrases, emotion, and unique, authentic perspectives to writing in a way that ChatGPT cannot. As for the rhetorical appeals of ethos, pathos, and logos, ChatGPT doesn’t have a full handle on any of the three without help from humans.
At this point, the possibilities of ChatGPT can be very daunting, especially for those in the fields of editorial work, such as copywriting and copyediting. We have to act against our instinct to shun these tools — they’re not going away, so we can’t ignore them as a factor in our work. We can, instead, stay informed and updated on what these tools can do, where and who is using them, what their strengths and weaknesses are, and how to use them without misusing them.
We don’t want to get left behind out of fear of the unknown or unregulated. A participant of the webinar likened the fears of ChatGPT to the fear of Wikipedia in the early 2000s. Wikipedia certainly hasn’t gone away, and while it has its problems, many rely on it as a source for basic information — a starting point rather than the end-all-be-all on a topic. The same could be hoped for ChatGPT. While ChatGPT has already brought forth many changes, and many are sure to come, those changes don’t all have to be bad.
By: Anali North Martin
Anali is a Senior Editor at Technica Editorial