I remain somewhat skeptical that LLMs will ever be capable of producing great literary works. One reason is that since mid-2024, most LLM progress has come from an area called reinforcement learning, an algorithmic paradigm that excels in topics with verifiable answers (like math and coding) but struggles in activities with poorly defined notions of good/bad (like literary fiction). Another reason is that most art only has meaning within the context of human relations. I think there’s almost no appetite for custom fiction tailored to individual preferences, as some in SF predict. This is related to your “100x readers” point.
And yet: this technology has repeatedly exceeded my expectations of what I believed it was fundamentally capable of doing. Computer scientists invented the underlying statistical method in 2017. In 2020, it could hardly write a coherent paragraph. In 2022, it was only capable of producing Reddit-ass novelty pastiche. In early 2025, it's fully capable of writing replacement-level genre fiction. (If you haven't played around with these things in a while, I do recommend checking back in; they are much, much better than they were even a few months ago, let alone in 2023.)
This is a disorienting pace of change. Until I see evidence that progress is slowing down, my default expectation is that LLMs will continue to get better at writing. I still think they top out somewhere around “compelling sci-fi novelist” or “generic Iowa workshop-core”. But I’m way less sure of this view than I was six months ago.
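The verifiable/unverifiable split in the reinforcement-learning point above can be made concrete with a toy sketch. Everything here is invented for illustration (the arithmetic problem, the guessing "policy"); the point is that in a verifiable domain the reward function is just an equality check, which is precisely what literary fiction lacks:

```python
import random

def grade(problem, answer):
    """Verifiable reward: exact equality with a known ground truth."""
    return 1.0 if answer == problem["truth"] else 0.0

problem = {"question": "17 * 3", "truth": 51}

# A stand-in "policy" that just guesses; RL would shift probability mass
# toward answers that earn reward. For literary fiction there is no
# grade() function to climb, which is the crux of the limitation.
rng = random.Random(0)
guesses = [rng.randint(40, 60) for _ in range(10)]
rewards = [grade(problem, g) for g in guesses]
print(f"reward signal: {rewards}")
```

Any domain where you can write `grade()` (unit tests for code, symbolic checks for math) gets this training signal for free; "is this good prose?" does not reduce to such a check.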
This is the most brilliant examination of AI that I've read thus far. I've been feeling sick of technology myself lately, especially my smartphone. Even as I type this using the AI provided by Substack to complete the words I type, I wonder how this will affect the future of college term papers and doctoral theses.
Another example of the inexorable failure is 3D films. Sound was so revolutionary that well-acted silent movies were replaced by stiffly voiced talkies, with people clustered round the microphone. Colour was so revolutionary that audiences ate it up. 3D has been pushed since the 50s, but it didn't transform the experience; I barely notice whether a film is 3D. I am one of the 25% of people who can't benefit from 3D.
Yeah I think that's a good example too. Sometimes even when something is a clear *technical* improvement, it isn't necessarily an improvement for the user. Seems pretty clear to me that 3D films will never be the norm as many predicted, not until some new technology (better VR headsets? I dunno) comes along.
People can read the 3D nature of reality in a 2D film perfectly well. Just as there are conventions in the theatre which we simply accept.
There are definitely some writers who are messing around with LLMs in their work, they are just obscure and not in mainstream publishing. I think future artists who aren't as defensive about AI as our generation will be more open to messing around with them. The early, non-corporate models can actually write some literary prose if you know what you're doing. When GPT-2 came out in 2019, I had no idea what I was doing, but I downloaded it to my local machine and fed it text from the Bible, Flannery O'Connor stories, and Ray by Barry Hannah. It output a lot of wonderful sentences that made no sense. I put two of the paragraphs into my unpublished manuscript.
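For anyone curious what "feed it a corpus, get wonderful nonsense back" looks like mechanically, here is a toy stand-in. A real GPT-2 is a neural network and vastly more capable; this word-level Markov chain (with a made-up corpus, not the commenter's actual texts) only illustrates the basic loop of building statistics from source text and sampling continuations:

```python
import random
from collections import defaultdict

def build_chain(text, order=2):
    """Map each pair of consecutive words to the words seen to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def sample(chain, length=12, seed=0):
    """Start from a random word pair and extend it one sampled word at a time."""
    rng = random.Random(seed)
    out = list(rng.choice(list(chain.keys())))
    for _ in range(length):
        nxt = chain.get(tuple(out[-2:]))  # assumes order=2 chains
        if not nxt:
            break
        out.append(rng.choice(nxt))
    return " ".join(out)

corpus = ("In the beginning the river was dark and the river was wide "
          "and the preacher walked into the dark water and the water rose.")
print(sample(build_chain(corpus)))
```

Even at this crude level you get grammatical-ish fragments that drift into nonsense, which is roughly the charm the commenter describes, dialed down by several orders of magnitude.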
There are also mainstream writers messing with LLMs, and several heavily publicized AI-authored or AI-collaborated books have been published. My skepticism isn't that LLMs can be used to help produce books. I'm sure they can. My skepticism is that anything has been demonstrated showing they produce *new* types of literature. Everything I've seen has been either not very readable, or readable but no different than anything else out there. They might help some writers, but there's no evidence yet that this is something writers writ large must embrace.
Agree to disagree! This just tells me that we are reading different things and neither of our opinions is a definitive judgement. FWIW I've not read any mainstream book in over a decade, nor have I read an LLM-generated novel. But the young artists are fucking with it and it's not going away, no matter how I feel about it. Picking a sentence out of an LLM is no different than eavesdropping on a public conversation and putting the sentence into your novel.
We can definitely agree to disagree. Though for the record I agree with your last sentence. Artistically, spitballing titles with ChatGPT is no different than spitballing with a friend. Plucking an overheard sentence on the street is no different artistically than taking one from an LLM. (Ethically, one might make an argument.) But again, I want to see something NEW not just another way to do the same thing.
LLMs by definition aren't going to make anything new. If they do, it's because a human told them to. I think that using an AI is a really good way to break writer's block. Having an AI write a couple of paragraphs is a great way to become irritated enough to write something better.
You are right to be sceptical about leaps to a new level.
Sam, you are onto something w/r/t base models. Almost everyone’s experience with LLMs is with the public chatbot interfaces—instances of the models that are completely lobotomized and write only in HR-ified corporatespeak. They’re much, much weirder under the hood.
You might be interested in the work of an account called @repligate on twitter. They are part of a group of weirdos who are making genuinely new forms of literary text-art by “jailbreaking” the models. I’m not sure it’s revolutionary but it’s definitely interesting.
Hmm. When I’m reading, I trust that what’s on the page originated in the mind of a fellow human being. Incorporating AI destabilizes the experience for me. If your manuscript becomes a book, would you include an Author’s Note to say “two paragraphs were written by AI”? Or would you resist doing that—and why? Because you want the credit maybe, or you suspect as I do that reading is an act of trust which is broken by AI? Or some other reason I am not seeing?
No, I wouldn't include that note. Because it's art. But we are talking about an unpublished manuscript here, not something that anybody is reading.
You can read this "poem" I wrote with the help of AI (there is your one and only disclaimer...maybe I'm using AI to write this comment, you will never knowwwww). I wrote the sentences and used AI to change the spacing. It was like running the text through a script, no different than the first writers to play with a word processor: https://www.sammcalilly.com/writing/2025/01/31/botspamming.html
"Because it's art" -- Most artists would be fine telling you about their tools and who helped with an insight. "I wrote this book using Word and a Merriam-Webster dictionary. When I wanted a synonym because I'd said 'vituperative' twice, I used thesaurus dotcom. I want to thank my friend Gary who suggested switching the order of chapters 5 and 6, and my editor Enid who made me add hope to the ending." So far I have encountered no advocates for AI who are willing to add, "Parts were written by AI."
I would find it strange if a writer told me they used Word and a dictionary within the context of a particular work. I have no problem divulging when necessary, like I did here because it's relevant to this conversation. But who cares about the context if the work speaks for itself (maybe that's why it's an "unpublished" manuscript). I'm sure you've included an autocorrected or suggested word in something you've written, which is an application of AI in your writing. Use whatever tools you need, it's art, there are no rules unless you want them. I'm not interested in AI trigger warnings so I'm not gonna follow that rule.
It's just hard for me to understand why a writer who is supposed to love language (otherwise why write?) wouldn't find novelty or some sort of value in a machine that can generate infinite sentences.
The true trickiness with this kind of conversation is that "used AI" can mean anything from "asked for some spellchecking" to "the program generated the entire work itself with only minimal prompting." Anything close to the latter side of the scale seems more like collaborating, and I would expect a writer to explain if some other human had written say 10% of the text in a book. But I would not expect anyone to feel obligated to divulge having some other readers on a text or using a proofreader to catch errors.
I love language. Very much. For me, that joy is in generating my own sentences. That's part of the art.
Also, LLMs have been trained on stolen writing, so using them to continue writing and publishing is unethical. (I am not commenting on your unpublished manuscript, since it's unpublished.)
Ethically, the people should definitely own the means of production for AI. This is related to what Kropotkin has to say about invention: you don't have any one invention without building upon the prior ideas of other people, so it only makes sense that society collectively benefits from this (i.e., having LLMs freely available).
"Stealing the work" doesn't mean the technology is inherently bad. This is ultimately a problem of capitalism, under which it's famously impossible to consume anything ethically. So the ethical point isn't valid in my opinion.
But again, goes back to art, you know that phrase about how artists steal...
I think you are wrong but in an interesting way! Speaking as a mathematician who is not a genAI expert but does love a good vector embedding.....
Posited: This is a technology whose impacts won't come from better models, but from better human understanding of how to use it in non-trivial ways. And that takes time. (And, totally, there's a lot of triviality coming.) Like, we have not gotten our heads around what it means to be able to write so as to specify a point of view, and then interact with a writer grounded in just that point of view, on demand and all night if we want. It's a supremely weird power, but unless you believe that co-writing is always worse than solo writing, there's great potential here once it's understood. Don't you find that feedback, even if it's only partially baked, is highly stimulating for going new places and discovering what you didn't know you believed?
What I'm inclined to try is developing a kind of commonplace book vector database, into which go the ideas that make me vibrate, and then with the points of view that can be interpolated from the space supported by those vectors, enter into conversation. My bet is that you're correct it won't be the people using LLMs to write by balloon, covering great aimless swaths of ground, to do something innovative, but the ones who figure out how to turn their obsessions into little characters that argue back, a menagerie of steelmanning debaters and authorial collaborators slash opponents providing food for reconsideration. Your own Age of Wonders-style bot café, inspiring & infuriating us to go further than the parts we contain within ourselves could go alone.
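A minimal sketch of that commonplace-book idea, under heavy simplification: a real system would use learned sentence embeddings and a proper vector store, whereas this toy uses bag-of-words counts and cosine similarity, and the entries and query are invented. The shape of the workflow is the same: store the ideas that make you vibrate, retrieve the nearest ones, and hand them to an LLM as a persona's grounding.

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words vector; a real system would use learned embeddings."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class CommonplaceBook:
    """Store ideas; retrieve the ones nearest a query to ground a persona."""
    def __init__(self):
        self.entries = []  # (text, vector) pairs

    def add(self, text):
        self.entries.append((text, embed(text)))

    def nearest(self, query, k=2):
        q = embed(query)
        ranked = sorted(self.entries, key=lambda e: cosine(q, e[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

book = CommonplaceBook()
book.add("Invention always builds on the prior ideas of other people.")
book.add("The joy of writing is generating your own sentences.")
book.add("3D films never transformed the moviegoing experience.")

# The retrieved passages would be pasted into an LLM prompt as the
# bot-character's "point of view" before the argument begins.
print(book.nearest("who owns the ideas behind an invention?", k=1))
```

The interesting design question is the one the comment raises: whether interpolating between stored obsessions produces a debater worth arguing with, or just a mirror.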
I think these are interesting ideas, and more what I'm looking for! I like the idea of arguing with bots of your characters to perhaps better understand them. Maybe that would produce new work. Although let me first say that I don't think there is anything wrong artistically (again, ethical questions aside) with getting feedback from ChatGPT on a manuscript or anything like that. ChatGPT can probably replace a lot of human work, to varying degrees of skill. Proofreading. Editing. Spitballing ideas. Even co-writing. But on the face of it, I just don't see how that is revolutionary. Perhaps I'm overthinking it, and such tools will allow very introverted people who aren't comfortable sharing their work with other people to get better feedback? Or maybe it would help speed the development of younger writers?
I will say that personally, I would love one of these AI companies to create a better proofreading and copyediting tool. Like a program trained to help flag weasel words, repeated (similar) phrases, unintended POV shifts, and the like on top of just quicker spell and grammar checking. Basically, help me skip the uncreative and mostly mechanical parts rather than the creative parts.
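Some of that mechanical flagging doesn't even need AI. As a rough illustration, a few lines of Python can already catch weasel words and repeated phrases; the weasel list and the draft below are made up, and a real copyediting tool would be far more sophisticated (part-of-speech awareness, POV tracking, and so on):

```python
import re
from collections import Counter

WEASELS = {"very", "really", "quite", "rather", "somewhat", "just", "basically"}

def flag_weasels(text):
    """Return each weasel word found in the text, with its count."""
    words = re.findall(r"[a-z']+", text.lower())
    return {w: c for w, c in Counter(words).items() if w in WEASELS}

def repeated_phrases(text, n=3, min_count=2):
    """Find word n-grams that occur at least min_count times."""
    words = re.findall(r"[a-z']+", text.lower())
    grams = Counter(tuple(words[i:i + n]) for i in range(len(words) - n + 1))
    return {" ".join(g): c for g, c in grams.items() if c >= min_count}

draft = ("It was really very cold. He walked down the long road. "
         "She walked down the long road too, really.")
print(flag_weasels(draft))      # {'really': 2, 'very': 1}
print(repeated_phrases(draft))  # {'walked down the': 2, 'down the long': 2, 'the long road': 2}
```

Where an LLM could plausibly add value is the judgment calls above this level: distinguishing an intentional echo from an accidental one, or catching a POV slip that no pattern match can see.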
> The interactive chatbot novel idea harkens back to early text-based video games, and there is probably a reason we abandoned those.
You say this, but there are actually a *lot* of really excellent text-based video games that have come out in the past decade or so (google "Twine" - but also *absolutely* check out Emily Short's "Counterfeit Monkey"), and the first really notable GenAI project was probably AI Dungeon, which is nothing BUT a text-based RPG generated on the fly by GPT models.
(It is also funny to dismiss text-based video games in the same article where you correctly note that books did not necessarily benefit from throwing in visuals, music, video, etc! Not hypocritical - they're not the same medium - just funny.)
You're right, the wording there is off. I meant only that there is probably a reason that video games with graphics took over the vast majority of the market.
Hearing that people reinvented the concept of a visual novel from first principles in response to the rise of ereaders is really amusing. Not only have visual novels never come close to displacing books (I say this as someone who spent half an hour last night installing beloved visual novels onto my new laptop), they also existed for literally *decades* before the "invention" of the enhanced ebook.
Ha sorry I didn't mean to imply Visual Novels were inspired by ebooks in any way. Just when I've made the point that enhanced ebooks didn't take off, I often get people replying "Visual Novels are basically the same thing." You're right to point out they predate all this though!
Oh I didn't think you were suggesting that they were inspired by ebooks, sorry that my wording suggested that. The fact they were around for so long beforehand just made it especially amusing that a format which is essentially a less developed version of the same idea was being presented as The New Big Thing for a time. Particularly with this idea of displacement; if "a book, but with images and audiovisual files integrated into the experience" was going to displace regular books, it would have done so at some point in those previous decades.
Definitely AI has been really useful for students who want to cheat. Which screwed me since most of my income since 2008 has been charging these lazy college students to write their papers for them.
Beyond that, it's boring garbage that should be trashed and hopefully will just die already.
https://marlowe1.substack.com/p/announcing-my-retirement-from-academic
What AI can do, and is doing, is to wipe out smut as something humans are writing. An AI can easily write exactly the sort of story a reader wants, on demand, as long as the story isn't very sophisticated. The on-demand and specificity aspects are the key thing.
I'm surprised that generative AI hasn't been used in video games on a large scale yet. It seems obvious, but I think that the resources that already go into games haven't been reallocated yet.
Overall, AI hasn't changed much for authors, yet. My attitude is that competition with other humans was so rough already, that AI isn't that much worse. Also, if you can't write better than an AI, boo hoo you shouldn't be writing. See my first point.
Yes, we're thinking along the same lines. The on-demand aspect is a real feature of GenAI, though I agree the obvious result of that is simply replacing writers in certain categories. Smut, yeah. Fan fiction too perhaps. If you want to see a specific thing in fan fiction, why wade through AO3 tags when you can just get ChatGPT to create the story you want for you?
And definitely agree genAI in video games makes a lot of sense. Especially as video games get increasingly sprawling with more and more side characters you need dialogue for.
GenAI is being experimented with in games and it's... not great. Yes, you can ask anyone anything, but why would you? It still needs authorial control and intent to be engaging. That might come, but it's certainly not in the current demos.
I loved The Age of Wonder. The chapters on gas and Frankenstein were also... wonderful.
I like this perspective a lot, and it has me thinking that the areas where LLMs are likely to have a bigger disruptive effect are in the arts that users engage with more passively, like video/TV and music. There have been a number of stories recently about how companies like Netflix are looking for shows that viewers can “follow” while doing something else, usually on another screen, and while I haven’t seen anything about music streamers doing that, I’m sure they are. If all you need as a producer of content is something to drone in the background, then you don’t need something as good as talented humans. You just need something good enough, and an LLM can probably get you most of the way there, sad to say.
Yeah, I also expect the biggest impact will be in areas where it saves a bunch of money. Hollywood films can cut corners with AI and save millions. Books are really pretty cheap. You can hire an AI-level ghostwriter for like 10k.
Regarding music streamers, I've seen some articles about Spotify basically generating imitation music that they control to keep the royalties. See: https://www.honest-broker.com/p/the-ugly-truth-about-spotify-is-finally
Hollywood writing is amazingly bad for the amount they spend on productions. I don't understand it, I think it's networking and studio meddling. Writing is cheap, and is the one thing that is under the complete creative control of the producers. I'm sure I'm not alone in wondering about this.
I'm also sure the writers' union will fight AI writing to a standstill for a while.
I was a bit vague there, but was thinking more that GenAI image and video generation could save Hollywood films a lot of money. I think we've already seen that in some movies with AI generated posters on sets or the Brutalist using genAI to make drawings of buildings that real artists built into models. Agree writing is relatively cheap even in Hollywood.
All I can say is that this perspective gives me hope that not all writing and writers will be lost to AI, so thank you!
I predict large-scale theft. AI can't come up with a story, only rewrite a story. But it first needs the story.
I remain somewhat skeptical that LLMs will ever be capable of producing great literary works. One reason is that since mid-2024, most LLM progress has come from an area called reinforcement learning, an algorithmic paradigm that excels in topics with verifiable answers (like math and coding) but struggles in activities with poorly defined notions of good/bad (like literary fiction). Another reason is that most art only has meaning within the context of human relations. I think there’s almost no appetite for custom fiction tailored to individual preferences, as some in SF predict. This is related to your “100x readers” point.
And yet: this technology has repeatedly exceeded my expectations of what I believed it was fundamentally capable of doing. Computer scientists invented the underlying statistical method in 2017. In 2020, it could hardly write a coherent paragraph. In 2022, it was only capable of producing Reddit-ass novelty pastiche. In early 2025, it's fully capable of writing replacement-level genre fiction. (If you haven't played around with these things in a while, I do recommend checking back in—they are much, much better than they were even a few months ago, let alone in 2023.)
This is a disorienting pace of change. Until I see evidence that progress is slowing down, my default expectation is that LLMs will continue to get better at writing. I still think they top out somewhere around “compelling sci-fi novelist” or “generic Iowa workshop-core”. But I’m way less sure of this view than I was six months ago.
This is the most brilliant examination of AI that I've read thus far. I've been feeling sick of technology myself lately, especially my smartphone—even as I type this using the AI Substack provides to complete my words—and I wonder how all this will affect the future of college term papers and doctoral theses.