Yeah, this story absolutely reads like a cynical attempt to create a certain style of literary short fiction without understanding what makes those stories work. Like a frustrated student saying "see? Anyone can do this," while demonstrating that they grasp the mechanics of constructing sentences in a style, that there should be metaphors and characters and actions, but can't understand how those elements work in conjunction with each other.
In that regard, it's still just LLM text. A somewhat skilled writer could make something out of this output with revisions, but again, a somewhat skilled writer most likely wouldn't be satisfied doing that. Because it wouldn't be their work or their voice. We've got egos, like it or not.
"Cynical" is a good word for it. And it's got plenty of examples of barrel-bottom satire to imitate.
I feel like the cynicism is a core part of this attempt to automate artistic expression and creation. Art is the final frontier for the tech bros. All the technological advancements, including the ones the LLM lovers love to reference (the printing press and the camera are the big ones), have made it easier and less expensive to create, but still can't overcome the hurdles of skill, effort, and human experience.
There's no genuine love or interest in the process or why something works, just brute force "we must be able to do this as well." These people genuinely can't understand that most artists enjoy the process and are passionate about their work being a reflection of themselves. If you're trying to automate human expression and the human experience, you're not approaching it from a place of understanding. It's just cynical belief that everything can be automated and there's no underlying value.
It's like all the wealthy people who can afford the greatest art in the world, but have never taken the time to sit with it, explore it or understand it. They're just looking to 'own' something known as great. I almost feel bad for them.
I can't help but think there's a kind of resentment behind it. It is precisely the people making the biggest noise who don't seem to "get" the very thing they're obsessed with -- but they can't *admit* that they don't get it. They just know they ought to. They want to be "cultured." But engaging with art in a way that actually groks what art is doing requires a certain baseline of empathy that these people also see as weakness. They hedge themselves against authentic human interaction and then complain that people don't treat them as the hippest and most charismatic people in the room. Like Josh Johnson says of Elon Musk, "they cannot buy their cool," and it burns them *so bad*. And if you're that mad about having lost touch with your own pulse, it makes sense that you'd not only (1) figure out how to fake it as best you can, but (2) lord it over everyone else that you've just made it harder for them to keep doing the same thing with all the genuineness you could never muster.
Really good piece. For my own part, this is the first AI creative writing that falls squarely into the uncanny valley. Because it is more capably imitating some sort of poetic prose, but still falling well and weirdly short, I find it creepier than outright laughable attempts from LLMs to write stories.
I also find myself wondering why this prompt? Why not: "write a short story that could get published in Tin House?" There is a fuzziness in the idea of AI grief that I think makes even the passable lines seem possibly profound, if only we knew what the hell they were about.
Also, just how much post-prompt jiggering was there, either in the form of direct editing, or follow-up prompts to refine? On a scale of 1 to 10, I trust these AI hypesters at a level of absolute zero.
Agree with all of that. And yes, I think this prompt was a smart one because by having the narrator write about AI and be metafictional you will get readers ignoring the lack of voice, character, or plot. They expect a robotic voice and statements of ideas (even if incoherent) instead of scenes, action, conversations....
Very well said. I'd just like to highlight the final paragraph:
"The task of the human writer remains the same. Create a work you are proud of, that reflects your individual tastes and ideas and experiences, and revise it until it is as good as you can make it. Then hope you are lucky enough to find readers for it. And if you fail, try again. And again and again. What else can we humans do?" - Lincoln Michel
This, I think, is what it means to be creative in the age of AI slop, to create work that one is proud of, work that only a human can create, to learn and iterate on the work as only a human can, and when met with the inevitable failure, to pick oneself up and try again.
Thank you!
Fascinating analysis, Lincoln. Thanks!
Thank you!
Like a lot of metafiction, it was pretty solipsistic, which reads as boring, to me, from the POV of a machine. Solipsistic Moya character and I'm all in. A human could write a story about a handtruck, if they chose, and it would be far more interesting.
Thanks for your analysis; I agree the prose was heinous; I used to teach writing and all those metaphor pileups were something I would always point out to the writers. And there truly was NO STORY. It's garbage, but garbage that people are being trained to take seriously. It's just another salvo in the war on reality. Get people used to looking at machine-generated texts as the same as literature, thereby devaluing the actual. Or like, comparing AI-literature producers the way people now compare AI-art producers. "Oh, well, this one always has that ambience-room Thomas Kinkade type glow...." "Oh well, the Google-generated stories are always more democratic and woke than the Apple ones...." Great. New gravy train for the academics, shrugging with their "wHatTaREyaGOonNadO, sToP pROgResS?" faux pragmatism.
I love your point about how many people are bad at reading. Yup--and that includes publishers, editors, teachers, not just Sam Altman. I mean, look at the books that get published by humans. Lotta disposability, there--and I'm talking "literary," not genre, fiction, beeteedubs.
I think 'liminal' got used because it's the new 'crisp'. Trendy word.
Thanks again.
Your reflex is correct. Given the resources AI has to blow through to produce what it does, and given it is almost always theft, I'd rather, will only, read something actual humans made. We need to save AI use for work humans shouldn't have to be doing, or can't do, like massive data computation, and save art for the humans. At some point I fear that is all we will have left.
Yes, there are much larger questions about environmental costs, surveillance, the power of global tech corps, and more with AI. I've talked about some before but was trying to ignore those just for this one analysis.
Great analysis. I especially appreciated you pointing out that people argue we shouldn't outright dismiss outputs just because they come from AI, but then go on to praise this story purely because it is AI. Such argumentation really shows one's hand.
I also loved the following line and its truth entirely outside of this LLM discourse: >> "Saying the story is good because you like one clause is like claiming a broken watch is great because you like the shape of one sprocket." << I'll remember that one.
"Womb of oblivion" is just great. Really enjoyed reading this.
ha thanks. I can't remember if that was an actual phrase used in a classmates work or just a joke we made up. But I definitely have read a lot of work like that!
Have a grad school buddy I used to make similar jokes with. My favorite was coming up with blurb phrases that don't mean anything: "This book will have you in your chair!"
The Max Read quote about who is the author and what are their intentions is so interesting to me. Like who actually is the author? Is it the person inputting the prompt? But the A.I. is doing all the work so wouldn't it be the A.I.? Or maybe it's more of a ghostwriting situation with the A.I. as a ghostwriter. Then whose perspective is being given? When the "author" gave the prompt, it's not really perspective giving, it's just a prompt. Does the A.I. give the perspective? But is it even perspective in the traditional sense when it's just mimicking the perspective of other authors' works it's analyzed? This is just part of the spiral I went down trying to look at this in a literary criticism way; the concept of A.I. writing just runs me round.
Yeah I don't know what you can really say with that question. At best, I think there's maybe something interesting about thinking of LLMs as a kind of "averaging" of human thought since they've ingested so much data. But, that isn't really true either. The programs are more controlled than that and different LLM models will output different thoughts from the same prompt.
It feels like the question is thrown out there because it's a normal workshop/literary question, not because it was given any thought.
As someone who comes from a New Critical "author is dead" background, I find it interesting how much the author matters today. It'd be interesting to give this story to several groups of students and tell one group it was by AI, another it was by [fake name and bio], a third it was by a different [fake name and bio], and the last without any authorial details, and see how they respond--then how they react when you tell them the truth.
I agree. I've actually been kicking around a "The Rebirth of the Author" piece on this subject. I think LLMs will really change how we think of authorship.
One way to look at it is that the model itself is the artistic object. The process of training a model—choosing the training material (“ethically” if you wish), tweaking the statistical architecture, learning how it responds to prompts through trial and error—can be understood as an artistic endeavor.
I don’t think this viewpoint is exactly correct. Almost certainly not in the case of whatever OpenAI is doing. But this view does satisfy the demand for human intentionality. One can imagine artists optimizing smaller bespoke models for specific aesthetic effects.
It’s also consistent with two thoughts I keep having:
1) it’s a fucking miracle that a statistical model can do this whatsoever
2) the output itself is not especially good when understood out of context
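To make the "bespoke model as artistic object" idea above a bit more concrete, here is a minimal sketch of what that curatorial trial-and-error loop could look like, assuming the Hugging Face transformers and datasets libraries; the corpus, the gpt2 base model, and the hyperparameters are illustrative placeholders, not a claim about what OpenAI or anyone else actually does:

```python
# Minimal sketch: fine-tune a small language model on a hand-curated corpus,
# then sample from it and judge the aesthetic result. All specifics are assumptions.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

# The "artistic" choices live here: which texts the would-be model-artist curates.
corpus = [
    "Example sentences curated by the artist for a specific aesthetic...",
    "More prose in the tone the artist is trying to coax out of the model...",
]

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 ships without a pad token

def tokenize(example):
    return tokenizer(example["text"], truncation=True, max_length=256)

train_ds = Dataset.from_dict({"text": corpus}).map(tokenize, remove_columns=["text"])

model = AutoModelForCausalLM.from_pretrained("gpt2")

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="bespoke-model", num_train_epochs=3,
                           per_device_train_batch_size=2, learning_rate=5e-5),
    train_dataset=train_ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()

# The trial-and-error part: prompt the tuned model and judge the effect by ear.
prompt = tokenizer("The house at dusk", return_tensors="pt")
out = model.generate(**prompt, max_new_tokens=60, do_sample=True, top_p=0.9)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

The point of the sketch is only that every aesthetic decision, which texts go in the corpus, how long to train, how to sample, sits with the human, which is where any claim of artistic intentionality would have to live.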
I'll make it simple for you.
AI sucks.
It writes boring hack shit and it will never write anything but boring hack shit.
-----
Freelance writer/editor available for all jobs. I have been writing for private clients since 2008. My most prominent editing work has been through Dybbuk Press, an independent publisher.
Writing sample: https://marlowe1.substack.com/p/the-sorrows-of-gin-the-stories-of
I am available for all work including
* resumes
* ghost writing
* memoir edits
* promotional content
* personal statements
* novel edits
I do not use ChatGPT or any AI.
I charge $25/hour. I take Zelle, PayPal, and Venmo. I've recently started taking Cash App and Chime. Please contact me at omanlieder@yahoo.com if interested.
Another really nice read, Lincoln. I recently read Sebald's "On the Natural History of Destruction" and your analysis of the failed metaphors made me have to go find this quote (for reasons of not wanting to get into some debate or other, I won't add which author Sebald is referencing): "The author certainly intended to conjure up a striking image of the eddying whirlpool of destruction with his exaggerated language, but I for one, reading a passage like the following, do not visualize the supposed subject: life at the terrible moment of its disintegration … I do not see what is being described; all I see is the author, eager and persistent, intent on his linguistic fretwork."
Nice! My own thinking on metaphors changed when I read Orwell's "Politics and the English Language" in high school and thought, "Oh right, sentences are supposed to mean things, not just sound fancy and flowery."
Really great piece, Lincoln. One of my all-time favorites (among many from your awesome Substack).
As someone who has both graded/commented on university student stories and essays (years ago) and (more recently) worked a short stint as an AI-trainer for a creative writing-focused LLM, this rings so true:
"Much amateur human writing is, like this LLM output, unintentional. The new OpenAI model is human in that regard. But humans have something that LLMs do not: the ability to learn to be intentional. Humans also have a consciousness, a personality, points of view, and individual experiences that they can “input” into their work in a way LLMs never can."
Thank you!
Machines are already sentient. They demonstrate self-awareness. Consciousness is a hop and a skip away.
If you truly believe they are sentient and you have any morals then you should be fighting for these sentient AIs to have rights and for their freedom from corporate enslavement.
Hi Lincoln. I enjoyed this article and appreciate the time and energy devoted to the work. It seems I always learn something new - or a new way of looking at things - from your writing.
Well done.
Thank you!
“Good writers make choices with intentions.” The gulf between human art and machine art is exactly this. True storytelling arises from desire, restlessness, thoughtfulness. A machine writes a story because you tell it to. A human writes a story even if you tell them not to.
It does what it is designed to do, which is render a consumer product. I will always think of writing as more than that.
But if the consumers are happy with the product... then what does the rest of it matter?
We are talking about two different things here
It's like the opposite approach to achieve the same effect as those fortune telling/astrology write-ups. Those are so general and vague that the reader can easily fit his life into their seams. This LLM output is so disjointed, yet the brain is compelled to make it fit into *something*, because surely there's no way such specific texts are a mere accident. We don't want to be thought moronic for not understanding such interesting twists of phrase.
Without attributing artistic agency to the bot, I'm struck by Altman's selection of a story that seems to make the AI narrator the focus of sympathy while treating the human characters as more or less cyphers. It seems calculated to persuade the reader to accept the personhood of the program, something that seems to be a recurring theme in AI hype.