What Memoir Scandals Tell Us about Two LLM Writing Scandals
On James Frey, Amy Griffin, Shy Girl, and LLM book reviews.
Housekeeping
Hello all,
I’m excited to be joining the contributors of Josh Riedel’s very cool Thirty Cabins project, a collection of flash stories inspired by the 1918 book Sunset’s Cabin Plan Book. The project seems like a lot of fun and features many great writers, including several here on Substack like a. natasha joukovsky, Ed Park, and Rachel Khong. More info here.
Subscription Incentive… Extended!
For March, I’ve been running an annual subscription incentive where I’ll mail a signed copy of one of my novels or co-edited anthologies to any reader who signs up for an annual subscription (or extends their existing one) to Counter Craft. This worked really well, and I got many nice notes from readers. I also have some copies left. So, I’m going to extend it through April while supplies last. Just shoot me a message when you sign up with your address and which book you’d like. (And if you signed up in March and didn’t message me at the time, just shoot me a message now!) This is limited to readers in the US though, given postage costs.
Lastly, I’ve been neglecting my “Principles of Plotting” series. A few readers have nicely written me about how they enjoyed the first entry and hoped it continued. I’ve drafted most of the next few posts so I’m going to hold myself accountable here and promise to put the next entry up in April.
Scandals, Scams, and More Scandals
In my timelines, most of the literary discourse has been occupied with debates about the Shy Girl AI scandal, in which Hachette cancelled the republication of an initially self-pubbed book by Mia Ballard over alleged AI use. The author’s defense seems to be that she paid an acquaintance to edit her book, that person rewrote it with AI, and thus she shouldn’t be penalized.
(AI questions aside, I find it hard to sympathize with an author who noticed someone else had “changed a lot of the wordings” throughout a manuscript and then… published it anyway without even doing a final pass to see if they agreed with the edits. If the author isn’t responsible for the writing in their book, well, what are we even doing here?)
As I drafted this post, another literary AI story broke. The Wrap reported that the New York Times cut ties with a freelance book reviewer for submitting a review that included language taken from a review by The Guardian. The reviewer, Alex Preston, admitted to using LLMs to write the review. “I took responsibility immediately and apologized to The New York Times,” Preston said. The similarities between the reviews read exactly like the copy-and-paste-then-tweak-a-few-words plagiarism that high school teachers and college professors have dealt with for generations. (Hat tip to Blake Lefray.)
Some passages are so similar that I wonder whether they were LLM-generated or copied and pasted by the author into an LLM for revision. It could be either. Prompting ChatGPT to generate a review of the book also seems to pull language from existing reviews, so the reviewer could have copied and pasted from an LLM and thereby introduced the plagiarized text.
UPDATE 04/01: Sam Leith interviewed Preston here, who is apologetic and claims the LLM inserted the plagiarized text.
The short version is that I had written a draft review of the book, but it was under length and I was rushing badly and drowning slightly. I made the stupid decision to use an AI tool to help expand and smooth it, with instructions about US spelling and house style at the NYT which I always get wrong - Mr so and so etc.
I looked at how it had tidied up the end of the review but didn’t realise that it had also dropped in language from Christobel Kent’s Guardian review. I was rushed and stupid and I’m so sorry. That is the heart of it.
Despite this, some people are willing to defend Ballard or Preston, though often in a backhanded way. “Genre books [and/or book reviews] are all formulaic and boring, so what does it matter if people automate them?” Well, one reason might be that readers and institutions still care so using LLMs without disclosure might tank your career.
Remixing the Memoir Scandal Arguments
A common defense of Mia Ballard I’ve seen—primarily on Threads, if you want to search for those takes—could be summed up this way: It’s racist to single out AI use by Mia Ballard (a black woman) because James Frey (a white man) recently published a book using AI and no one critiqued him for it. Racism in publishing is quite real, to be clear. But I’m not sure the comparison holds up. First, James Frey was certainly critiqued for that book, which was mostly panned. Second, the book was not published by Hachette or any of the Big Five publishing houses. And in any event, Frey was open about his LLM use, unlike Preston and Ballard.
But reading the various defenses of Ballard (and now Preston) did have me thinking about James Frey in a different way. There are some parallels with Frey’s original memoir scandal and the ensuing debate about what, if any, standards should exist in non-fiction writing.
A quick recap: James Frey published two memoirs in the mid-00s—A Million Little Pieces and My Friend Leonard—that turned out to be almost comically fabricated. Frey wasn’t fudging a few details but inventing things whole cloth. For example, in reality he spent a few hours in police custody while in his memoir he pretended to have spent three months in prison. That scandal culminated in Oprah, who had chosen A Million Little Pieces for her book club, roasting him in a televised interview. (Oprah later apologized for being so harsh.) Frey’s publisher at the time offered refunds to offended readers.
(Frey actually did try to sell A Million Little Pieces as a novel first, but editors passed. The book only sold because of the claim to truth. Frey was unwilling or unable to improve the novel, or else to rewrite it as a true memoir.)
Despite the fabrications, Frey had his defenders. These included some notable academics and non-fiction writers like David Shields (“of course [Frey] made things up. Who doesn’t?”) and John D’Agata (“I never really understood why people think that nonfiction’s job is to give them information.”) These and others settled on an argument quite similar to defenses of the LLM writers. Who cares if a book is fraudulent? Didn’t you hear about “the death of the author”? That’s supposed to mean you can’t judge a text by anything except how you feel. Plus, we can never prove exactly how much [a memoir is fabricated / a text used LLMs] so everyone should just get over it.
While most people misunderstand Barthes’s “The Death of the Author,” there is something to this argument. All memoirs are fictionalized to some degree. No one can perfectly recall past conversations or scenes—our memories are so faulty that even the most shocking events, like 9/11, become fabricated in our brains—and so writers invariably fictionalize them. And most of us know that memoirists may fudge timelines, combine people, or otherwise tweak the details of events for the sake of clarity or storytelling. It’s also true we can never prove exactly how much of a memoir was fictionalized and many fraudulent memoirs have likely been published without being caught.
Similarly, there is something to the argument that all writing borrows from other sources and many texts are formulaic, so why care if a text was written by LLMs? And yes, we can’t prove exactly how much LLM use went into a text, and many LLM-generated texts have likely been published without being caught.
At the same time, most people do think there is a line. Most people do want some standards. That the exact line or precise metric is disputed doesn’t mean that readers simply cease to care. Partly, that’s because most people do not like to be tricked. James Frey and Mia Ballard would likely have been in the clear if they openly said their work was fictionalized and LLM-assisted, respectively.
But also, some things clearly do cross even nebulous lines.
The Other Big New Literary Scandal
There’s a new fraudulent memoir scandal that I haven’t seen discussed much in my circles. Amy Griffin is being sued over allegations that her bestselling memoir The Tell was both fabricated and stolen. I wrote a bit about the book last year in a piece on the “book club industrial complex,” after the New York Times published an article casting doubt on the authenticity of the memoir, including its allegations of sexual abuse by a teacher. The new lawsuit raises a darker possibility: the abuse was real, but experienced by someone else whom Griffin ripped off. Griffin’s classmate alleges that Griffin and/or her ghostwriter Sam Lansky not only stole the classmate’s traumatic experiences but also engaged a third party to dig into her story and steal even more details:
In the lawsuit, Ms. Doe claims that she met with Ms. Griffin in 2019, at the author’s invitation, at a coffee shop in California and discussed growing up in Amarillo. The suit also says that in 2022, Ms. Doe was contacted by someone claiming to be a talent agent and producer who “expressed an interest in using her ‘life story’” for a film or television show. During several subsequent conversations, Ms. Doe revealed her middle school sexual abuse. When Ms. Doe asked for a contract, the purported agent cut off contact, according to the lawsuit.
Ms. Doe claims that the information she shared was then used in “The Tell.”
If truth doesn’t matter and the only thing to care about is the book itself, is it fine to steal someone else’s life story? I suspect the intellectual argument has limits and many would consider stealing another person’s trauma to be a step too far. Employing someone to gaslight an abused woman to steal her story for your memoir is pretty vile behavior, if true. It is certainly far worse than anything Ballard or Preston did with LLMs, and for that matter worse than what Frey did.
But the connection between all these scandals is that, despite academic arguments about why no one should care… readers do care. Most of us do consider factors beyond the decontextualized art itself when judging a work. This might be the genre label (is it fiction or non-fiction?) or the production method (is this a painting or a photograph?) or the time period (is this an innovative early work or a derivative later imitation?) or any number of other factors.
One of those factors seems to be “did the author actually write the book themselves?”
The Muddy Battlefield of AI Writing Debates
Arguments about LLM use in literature (and art/entertainment more broadly) tend to fall into two polar opposite camps. AI skeptics predict that LLMs themselves will be a fad and, at least in the arts, we will look back on any use of AI with embarrassment. The AI believers predict that AI use will only increase, its ubiquity will be inevitable, and today’s fretting over it is what we will look back on in embarrassment. For the believers, LLMs are simply the future and looking askance at their use will seem as silly as looking down on authors for using computers instead of typewriters or doing research on the internet. Even many AI haters believe a version of this argument. They may not like that LLMs are here, but they think their ascendance and acceptance is inevitable.
I’m going to stake out my position in the middle. I don’t think LLM use will become entirely rejected or blindly accepted without limits. I think we’re in for a lot of confusing, muddy, fraught debates about what is acceptable use and what isn’t, when the use is legitimate and when it is fraudulent. I expect people to stake out different positions and the debate to remain a live one—as it remains for non-fiction—for the foreseeable future.
The rebuttal I hear to the above is that because LLM use is hard to detect (“and the models will only get better”) people will simply give up caring about questions like “did the author actually write this book?” But as the memoir scandals show, questions of authenticity do not disappear just because violations are hard to detect and the standards are debated.
We could draw parallels to debates in fields outside of literature. When does performance-enhancing drug use cross the line into athletic fraudulence? The same drugs athletes use to cheat are used in medical treatments. Which treatments and medications are a matter of health, and which are abuse? What happens when regular people use treatments banned by sports leagues? These are constantly fought-over topics—and ones that inspired my science-fiction novel The Body Scout, which is set in a futuristic baseball league—and the lines and metrics are nebulous and shifting.
These debates are messy, yes, but life is messy.
I expect LLM use to end up on similarly and endlessly contested ground. Some uses of LLMs, like spell check or research, will likely be accepted by most of the population (though some will remain opposed to any use). Other uses, such as calling a text you did nothing but prompt your own work, I expect will never be considered legitimate by the general population (though some will stake out the extreme position there too).
Then there is the muddy middle ground, where I expect the debate to remain confusing and perhaps illuminating. For example, using fragments of LLM text within your work—especially if disclosed—seems artistically legitimate to me. But what’s the line? 10% LLM text? 25%? 50%? Or when does spitballing with an LLM about plot and character turn the work into something other than yours?
These questions aren’t just being debated among LLM skeptics or AI haters. They’re also fraught within the “AI artist community,” where there are frequent accusations of “stolen prompts” and “plagiarized” LLM outputs. Here on Substack, The Republic of Letters interviewed an author who openly writes in collaboration with LLMs. One thing in particular stood out to me. Despite the bombastic title—“I Write With AI. Deal With It.”—the LLM-assisted author, Chad Rye, felt some uses of LLMs were illegitimate: “I believe that once you start letting the AI write or come up with places, characters, story beats, etc. then its not really yours.”
The point is that what is considered acceptable LLM use is fraught even among LLM lovers. Why would we expect that to change any time soon, especially among a general public that is largely skeptical of AI?
Since I’m predicting these questions will remain unsettled for some time, I won’t pretend to settle them here. For myself, I’ve developed a simple rubric that I think holds up well. At least for now. The short version is that you should disclose any use of LLMs that you would disclose if that function were performed by something other than an LLM. E.g., you are not expected to disclose using Microsoft Word for spellchecking or Google for research. Nor are you expected to disclose batting around ideas with a friend or receiving minor copyedits from a colleague (though it is good practice to thank them in the acknowledgments). On the other hand, you are expected to disclose or cite appropriated text, whether it is quotations in an academic essay or the source material of an erasure poem. If you’re using any text from an LLM, I think it is good practice to disclose it. And, as the above scandals show, smart for your career.
At least as things stand now…
My new novel Metallic Realms is out in stores! Reviews have called the book “brilliant” (Esquire), “riveting” (Publishers Weekly), “hilariously clever” (Elle), “a total blast” (Chicago Tribune), “unrelentingly smart and inventive” (Locus), and “just plain wonderful” (Booklist). My previous books are the science fiction noir novel The Body Scout and the genre-bending story collection Upright Beasts. If you enjoy this newsletter, perhaps you’ll enjoy one or more of those books too.







