Smash the Plagiarism Machine
The writing community’s hatred of generative AI
—By Liza Gavrilova
Earlier this year, OpenAI’s CEO, Sam Altman, announced on X that he and his team had been working on a version of ChatGPT that is “good at creative writing.” He posted the short story the AI had apparently written, a tale about two people interwoven with the machine’s own musings on being a machine. “This is the first time I have been really struck by something written by AI,” wrote Altman.
Despite what some might see as a great accomplishment (a machine apparently learning to write the way humans do), the reception was poor. Countless replies called the story soulless “slop,” unskilled and devoid of emotional depth. “This is absolutely terrible, Sam,” said user @nic_carter.
In a world that wants us to use AI in every part of our lives, the idea of using it for creative writing seems only to provoke anger and disgust. AI writing is openly mocked, and AI writers are ridiculed. Using AI in the creative process can ruin a writer’s career and reputation, even if it is used only sparingly. Some writers have even said they would rather die than use generative AI in their work.
But generative AI seems so useful. It can help a writer when they get stuck, brainstorming new ideas and generating new passages to get a story rolling again. It can help with the menial task of editing and revising. It can even generate a whole story from a general idea, if someone doesn’t have the skill to write but still wants to get the story out there. AI makes a writer’s life easier, so why do so many writers seem to abhor it?
The world of the LLM
When we think about generative AI text tools like ChatGPT, we are usually referring to a class of machine learning models called large language models, or LLMs. These models are built to understand and generate text based on patterns learned from huge datasets of human-written text. An LLM “writes” by predicting the most statistically likely next word, one token at a time.
To do this, LLMs are trained on large datasets of pre-existing written text: books, articles, and even social media posts. It takes billions upon billions of words to teach an LLM language conventions such as grammar and sentence structure. The algorithm breaks each piece of text into small chunks of words and characters called “tokens” and converts them into a numerical form it can process. It then passes these tokens through layer upon layer of the neural network, including components called self-attention layers.
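The pipeline described above can be sketched with a toy example. The snippet below is a deliberate simplification, not how production LLMs actually tokenize or predict: it splits text on whitespace into crude “tokens,” maps each token to a numeric ID, and counts which token most often follows another, a bare-bones stand-in for statistical next-word prediction.

```python
from collections import Counter, defaultdict

# A tiny, made-up training "corpus" for illustration only.
text = ("the dog began to bark . the dog saw the tree . "
        "the dog ran home .")

# Crude tokenization: split on whitespace and assign each
# distinct token a numeric ID, since models work on numbers.
tokens = text.split()
vocab = {tok: i for i, tok in enumerate(dict.fromkeys(tokens))}
ids = [vocab[tok] for tok in tokens]

# Count which token follows which (a bigram frequency table,
# standing in for an LLM's learned next-token statistics).
follows = defaultdict(Counter)
for cur, nxt in zip(tokens, tokens[1:]):
    follows[cur][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next token."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # the most frequent follower of "the"
```

A real LLM replaces the frequency table with a neural network holding billions of parameters, but the underlying idea is the same: given what came before, emit the statistically likeliest continuation.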
Self-attention is the bread and butter of the LLM. It allows the model to find the connections and relations between different tokens, building an understanding of written text. “The goal in this process is for the model to learn semantic associations between words,” writes Cole Stryker in an IBM article on LLMs. “Words like ‘bark’ and ‘dog’ appear closer together in vector space in an essay about dogs than ‘bark’ and ‘tree’ would, based on the surrounding dog-related words in the essay.” By repeating this process over countless texts, the LLM slowly builds a statistical understanding of human language: a map of words and symbols that it can follow to mimic a human.
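Stryker’s “bark” and “dog” example can be made concrete with another toy sketch (again a simplification of my own, not IBM’s or any vendor’s actual method): build each word’s vector from raw co-occurrence counts within a small window, then compare vectors with cosine similarity. In a dog-themed passage, “bark” lands measurably closer to “dog” than to an unrelated word.

```python
import math
from collections import Counter, defaultdict

# A tiny dog-themed corpus, invented for illustration.
corpus = ("the dog likes to bark . the dog will bark at the mailman . "
          "a loud bark woke the dog .").split()

# Build co-occurrence vectors: for each word, count its neighbors
# within a window of 2 tokens on either side.
WINDOW = 2
vectors = defaultdict(Counter)
for i, word in enumerate(corpus):
    for j in range(max(0, i - WINDOW), min(len(corpus), i + WINDOW + 1)):
        if j != i:
            vectors[word][corpus[j]] += 1

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[k] * v[k] for k in u)
    norm = math.sqrt(sum(x * x for x in u.values())) * \
           math.sqrt(sum(x * x for x in v.values()))
    return dot / norm

# In this dog-heavy text, "bark" sits closer to "dog" than to "mailman".
print(cosine(vectors["bark"], vectors["dog"]))
print(cosine(vectors["bark"], vectors["mailman"]))
```

Real models learn dense embeddings through training rather than by counting, but the geometric intuition is the same: related words cluster together in vector space.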
The dirty generative process
For an LLM to write a good piece of text, it needs to be trained on lots of well-written human text, which raises the question: where does this data come from? Often, not through legal means. Recently, Meta came under fire for allegedly scraping millions of published books from book-pirating websites to train its AI model without the authors’ consent. OpenAI has allegedly done the same. That’s billions of published words, written by real human beings, that ChatGPT draws from to generate stories like Altman’s. “The AI is essentially reading a bunch of books and spitting out the average of them,” says Elliot Patterson, a writer and recent UM graduate. “It is illegally reading all these copyrighted works, which brings harm to authors.”
And writers are the biggest victims. Authors whose work is scraped without permission get no credit or pay for their stolen words, while AI companies rake in the profit. “I don’t want to share my work and then have it be stolen,” says Xander Phanes, an independent author based in Southeast Michigan. “It kind of borders on plagiarism.” Beyond being a plagiarism machine, generative AI has weaseled its way into the publishing industry like an invasive species. AI-written books bloat marketplaces like Amazon, where scammers also use AI to churn out summaries and knockoffs with titles similar to real books and sell them to confused shoppers. It’s created a hostile environment, one that does not care about the welfare of real-life authors.
Beyond its habit of plagiarism, generative AI carries immense environmental consequences. AI data centers guzzle energy and fresh water. By one estimate, a single 100-word email generated by AI consumes roughly a bottle’s worth of water and electricity “equal to powering 14 LED light bulbs for an hour.” Data centers have drawn down groundwater in Atlanta suburbs and been blamed for a surge of respiratory illness in Memphis. The technology’s immense energy consumption also releases excessive greenhouse gases, accelerating the Earth’s already rapid march toward climate disaster.
So, when considering these impacts, is using AI to do creative writing worth it? For many, it isn’t. “I definitely do not think it’s worth the environmental cost,” says Patterson. “That is probably the biggest reason I am against AI. The amount of water it takes to send even one query is so insane to comprehend that I definitely think it’s not worth the cost.”
Additionally, Phanes points out that the cost of corporate AI use is vastly different from that of personal use. “It goes back to the question of whether it’s worth recycling in your home when corporations undo a year’s worth of recycling in maybe an hour or two?” says Phanes. Would the pollution caused by one 500-word short story compare to the pollution released from a data center every day? Still, even one fewer query means that much less pollution in the world. “I personally believe it’s still worth being mindful of our environmental footprint,” says Phanes, “whether it’s AI or otherwise.”
Not putting in the effort
But what if we fixed these issues? What if AI were sustainable and all of its training content ethically sourced? Then AI could really become that go-to companion, helping writers skip the frustrating and menial tasks of writing. But what would we lose in the process?
David Ward, a creative writing lecturer at the U-M English department, says that there is no part of him that would want to generate the content of his writing, especially the challenging parts. “So much of any art is the iterative development over time where you’re making a thing and revising and developing it and refining your technique,” Ward says. “It’s a long, slow, painful process, and that’s why it makes it special.”
The sustained effort a writer puts into a work is what makes it worthwhile. Knowing that some of that process was outsourced to generative AI makes the art feel less genuine. “AI use absolutely turns me off from reading someone’s work,” says Patterson. “I’m giving an author money because I see something in their craft that I really want to read. If I wanted to read something written by an AI, I could just type that into ChatGPT myself.”
In general, Patterson views using AI as a shortcut to doing the actual work. “It’s not really you doing the hobby if you are just asking a computer to do it for you,” says Patterson. “It makes me think of them less as a person and a writer.”
There is a sort of desperation in leaning on generative AI tools for creative work. “There’s generally a discomfort with being bad at something,” says Ward. “But there’s also a pleasure that people get from not being good at something and then watching yourself get closer and closer to being able to make the kind of art you want to make. It’s one of the deepest pleasures of making creative work.”
Can AI actually make art?
At the core of any discussion of AI art is the question of whether AI is even capable of making it. Sure, it can generate text that looks like prose, but is what it’s doing truly art?
“There’s a lot of weirdness in the language that we use to talk about these programs,” says Ward. “Calling it artificial intelligence blurs the line unnecessarily.” If the machine were really intelligent, it would approach creative work with its own perspective, something that Ward believes would make people actually interested in its art. Without it, the work seems pointless. “No one wants to read what autocomplete has to say about anything because there’s no perspective there,” Ward remarks. “It’s missing genuine human emotions,” says Patterson. “The author’s experience and voice are what adds to so many stories. An AI can’t feel emotions and add that element to it.”
Phanes is skeptical of whether AI can even understand the core principles of writing. “Most art and writing is some sort of symbolism,” they remark. “The whole point of making art and writing is to convert a message or mirror something through a symbol. AI just kind of brings stuff together and it may masquerade as a symbol, but there’s no meaning behind it. It’s an empty shell.”
Look to the future
At the end of the day, AI creative writing is still mediocre. And even if it someday learned to write as well as a person, that doesn’t mean human writing, and society’s preference for it, will cease to exist. “My sense is that people want to read stuff and watch stuff that is intentional communication,” says Ward. “If you tell someone that this was made with AI, then there’s an immediate skepticism or an immediate recoiling. There’s a sense that ‘Why would I care if nothing composed it?’”
While AI-generated writing may not satisfy a reader’s desire for depth and meaning, it does satisfy companies’ desire for profit. Generative AI can churn out hundreds of novels in the time it takes a person to write one, and sell hundreds more before an author sells a single copy. “It seems to be the case that publishing companies just want authors who can churn out books that are easily consumable and rapidly mass produced,” says Phanes. “It doesn’t leave much room for folks who want to spend a bit more time with their work before sharing it.”
In the face of this attack on their livelihood, writers have been fighting back. This summer, in light of the AI book-scraping scandal, a group of authors wrote an open letter asking American publishers to stand with them against AI: to refuse to publish AI-generated books and to pledge not to replace employees with AI. “We call on publishers to take a public stand for their authors against the theft of our art and the debased AI work that profits from that theft,” the letter states.
But the world is dead set on making AI the next technological wonder. With government support and a lack of regulation, AI can keep destroying the art of writing with little resistance. “I just wish that there was an easier way to voice our opinions,” says Phanes. “That there was an easier way to exert some control. To have some sort of boundaries when it comes to AI.”
Feature Photo courtesy of Emiliano Vittoriosi on Unsplash
