I’m an anthology series editor (my anthology series is Best Microfiction) and the Clarkesworld story is sobering for those of us who read hundreds of submissions each year. We can no longer know for sure what is or isn’t robot-written... and I’m sorry to say that creative writers are training it to write fiction that simulates their particular style so well it’s often impossible to tell apart. I’ve been surprised that other editors aren’t out there talking about this. Thank you so much for saying what you have here about the nightmare at large, and for putting it right. It’s just incredible that corporations are expecting us to gobble this crap up and feel some kind of utopian gratitude.
Oof yeah I think it's going to cause a lot of problems. I've been dealing with it a lot in undergrad classes and it's easier for me to see ways around it in that setting (more quizzes and in-class writing for example) than in a submission setting. Not sure what editors are going to do...
Academic here too (biochemist) and we've shifted to a variety of in-class assessments this year to circumvent some of this, but for next semester I'm actually wanting to get students to use the tools and then research where/what factual errors have been written into the output. I'm hoping that will at least be a lesson to help students learn to not take responses as gospel. We'll see, though.
PS great article, thank you.
> We can no longer know for sure what is or isn’t robot written
If you can't tell, neither will the readers. In which case, why does it matter?
Passing off another's work as your own is plagiarism, and I think that counts for chatbots.
Editors often can't tell if a story has been plagiarized by more traditional methods either, without detection tools. But I'd say it still matters.
Chatbots are just a tool.
Sure, and copy and pasting in a word processing program is just a tool. They can both enable plagiarism. Doesn't mean everyone who uses them is plagiarizing of course.
But they are a tool that could be taken too far!! That’s the point! This is just the beginning! Why be lazy and use a chatbot anyway?? I have much more respect for a writer who did the hard work and grappled with the process than one who had to use an AI program! Don’t be lazy!
It does to this editor. It’s an ethics issue.
> It does to this editor.
In which case "this editor" will soon become irrelevant.
And hopefully you will too!
Ethics
Excellent close read, I like how you bring in just a sliver of sarcasm here and there without becoming annoying.
A ghoulish side note. Remember the inventor of the Segway? Didn’t he die by directing one of those fucking things over a cliff while using it?
It was actually the guy who later bought the company, not the inventor, but that did happen! https://en.wikipedia.org/wiki/Jimi_Heselden
Oh, man. Sad.
AI is the latest result of the uber-capitalism, anti-human, monetize-everything trend that has taken over society in my lifetime. It is the saddest future imaginable. I’m grateful I will be dead before life as lived by humans is a thing of the past.
I love to write. There is no pleasure in having a machine write for me.
Great writing and ideas, thanks.
It's going to get rid of journalism and most writers and artists, and personally I can't wait.
Just because the models are trained on vast amounts of data/creative work doesn't make the achievement of these creations any less interesting or profound. Maybe the Chat-bots shouldn't have been made available and turned into a commercial product in the way they have, but to argue that there is no achievement here seems kind of amazing. As a sci fi writer aren't you more preoccupied with how precarious it's going to be writing any sci fi about the human future with the nature and extent of ai advances so unclear? The spamming of submissions is bad and sad, but surely the less interesting question here.
Given how bound up American sci fi is with the atom age and the space race, it seems very interesting how ai might affect the genre.
I didn't say there is "no achievement"? There are some impressive things they do, certainly, especially in image generation. (I find the image generators a lot more impressive than the text generation. Perhaps because we are familiar with surreal visual art, the "hallucinations" / errors can add interesting effects in images, but in text they just add incoherency or factual errors.) I also think they're overpraised in certain ways and a lot of it is branding. These things aren't intelligences or sentient, but the branding of "AI" makes people impressed in a way they wouldn't be if you just said "a computer program did X." You're already seeing companies rush to label things "AI" that have existed for a long time. Spotify has an "AI DJ" to autogenerate playlists now even though Spotify has done that for a long time. Etc.
I think there's a reason science fiction writers, like Ted Chiang, are especially skeptical of this technology though. But I'm not sure what you mean about it being precarious to write about the human future with AI being unclear? That seems like a better time than ever to be a science fiction writer, no?
If you write near-future sci-fi with no AI, it might read like sci-fi with rotary telephones. And even speculation about what an AI might be like will probably be wrong in the time it takes to get the book written.
Also: no one talks about how tricky it is to really say what makes us so much more sentient or intelligent than a chat bot. Especially when they improve so rapidly.
Yeah I think that's always just a danger of science fiction. No one ever predicts the future accurately. Hard to predict that, say, the USSR will fall or that flying cars go nowhere but smart phones are invented.
But I think that's also a good lesson in not assuming that AI will necessarily keep improving rapidly. There's quite a good chance it will stall out (at least for some period of time), so you might look just as foolish imagining a 2030 with Data from Star Trek androids you know?
More evidence of California trying to colonise the world. Good luck with that.
Just as I’m getting into my career as a writer ... this! How can I turn the future of AI around so that I can make it work for me in a productive ethical and creative way?
AI is FOMO at its best
Thanks I finally understand what’s the strike all about
Sigh.
Look, I get that "don't believe the hype" is a good approach to reading about new technology, and that we should be wary of the Wired Magazine techno-libertarian-utopian THIS CHANGES EVERYTHING mindset. But the actual bias we should be aware of in writing about A.I. is, to steal a phrase from TVTropes, "most writers are writers." And thus, we are inundated with articles wherein journalists say A.I. is bad because it might obsolete journalists. I know a lot of artists who will scream your ear off about how Stable Diffusion is EVIL PLAGIARISM BAD because THE EVIL CAPITALISTS will use it to steal their jobs [sic!]... and then laugh, laugh, laugh, at the funny memes where the Pope is wearing Balenciaga, or share videos where Trump and Obama are playing Amogus over voice chat, because those ostensibly don't threaten their livelihoods. That the deepfake technology which lets you press a button and generate an audio track of Amtrak Joe doing the scat from "Freak on a Leash" can be (nay, WILL BE) used to disinform and propagandize voters come 2024 is a secondary concern to them, vs. "will I get hired to create a Nicktoon?"
If you want to see this bias in action, look no further than this very Substack! From Lincoln two months ago in "The Library of Blather" --
"I should probably wrap up here by saying LLMs and other “A.I.” programs will certainly have their uses. It’s easy to think of many potential benefits. But most of them—more accurate Google translate, better word processing grammar checks, cleaner automatic transcriptions of audio recordings—are merely improvements on existing functions."
Who could possibly object to "cleaner automatic transcriptions of audio recordings" as an unambiguous social good? How about every single stenographer in the USA? Autocaption tech maxed out at ~90% accuracy a decade ago, which sounds great, but is actually unusable for most purposes. (That last sentence is 20 words long - replace two of them chosen at random. Does the sentence still make sense? That's 90% accuracy for you.) This has not stopped many companies from angling to replace court reporters with a business model that is, basically, "put an Alexa in a courtroom and send its output to some ESL speaker in the Philippines making $2 an hour." This is quite bad for stenographers (for the reasons Lincoln lays out in THIS piece) - but speech-to-text tech is only on your average writer or journalist's radar as a tool for making a Ctrl+F-able copy of an interview you recorded on your iPhone, so the decimation of stenography as a profession doesn't register for them.
(Or maybe it does, and they don't care? After all, journalism is a CALLING, and writing is an ART, but stenography is mindless secretarial work, allegedly. It's neither mindless nor secretarial, actually - imagine playing piano and coding simultaneously - but since the average stenographer is a woman over the age of 40, people assume it must be low-skill work. Media professionals seemed pretty ambivalent re: automation when they thought it was only coming for factories and the retail sector.)
Yes, yes, of course a Substack about writing is going to focus on how A.I. affects writing - but the myopia of mainstream A.I. reporting is real and bad and obscuring the actual effects A.I. is going to have, good and bad, on society.
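(A toy sketch of the 90%-accuracy thought experiment above, not from the original comment: replace 2 of a 20-word sentence's words at random and see how usable the result is. The `garble` function and the `[WRONG]` placeholder are illustrative inventions, not a real captioning metric.)

```python
import random

# A 20-word sentence, paraphrasing the comment's example.
sentence = ("Autocaption tech maxed out at ninety percent accuracy a decade ago "
            "which sounds great but is unusable for most purposes").split()
assert len(sentence) == 20

def garble(words, n_errors=2, seed=None):
    """Simulate word-level errors: replace n_errors randomly chosen
    words with a wrong token, mimicking ~90% accuracy on 20 words."""
    rng = random.Random(seed)
    out = list(words)
    for i in rng.sample(range(len(out)), n_errors):
        out[i] = "[WRONG]"
    return " ".join(out)

garbled = garble(sentence, seed=1)
```

Even two wrong words in twenty is often enough to flip or destroy the meaning of a sentence, which is the commenter's point about why 90% accuracy is unusable for court reporting.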
I don't think we disagree about much here! I think janky "A.I." programs replacing translators, say, will have a lot of bad results. Your examples are good ones. I wouldn't say any of those things you quoted are unambiguous goods; I said they were things that might have their uses, as opposed to the other features I was blogging about that don't seem useful at all. A more accurate transcription of an interview I've conducted is useful to me, as the current ones take forever to clean up. (Not because I think stenography isn't a real skill or stenographers are mostly older women but because I'm not paid enough to hire a stenographer, so the alternative is doing it myself.) Google translate having more accurate translations of any website a reporter might need to research could be useful for reporters. Etc.
That doesn't make AI translation an unambiguous good certainly. There was a story recently you probably saw of a woman being turned away for asylum because AI translators messed up: https://restofworld.org/2023/ai-translation-errors-afghan-refugees-asylum/
And you bring up a good point that what is useful for someone can ruin other people's careers. And the larger effects of automation of jobs in a world where social safety nets are ever more shredded will lead to lots of bad outcomes.
And yet, you wrote and are distributing this essay on a seamless electronic network of interconnected computers that was the domain of a few eccentric nerds 30 years ago. My point here is that while sometimes tech hype under-delivers, other times it massively over-delivers and completely restructures society in the process.
Sure! I don't think anyone would argue technology doesn't change society.
Although often it's a lot slower than the predictions and is implemented in more dystopian ways. The internet has done plenty of good and plenty of bad things. I definitely don't think the utopian visions have panned out. Even the idea it would strengthen democracy seems questionable in our age of authoritarian dictators and misinformation campaigns.
I think I'd also say that tech advancements tend to move through sectors. It's possible information tech and Silicon Valley is starting to plateau and the next big leaps might be in, I dunno, bioengineering and genetics? We'll see!
> Even the idea it would strengthen democracy seems questionable in our age of authoritarian dictators and misinformation campaigns.
Well, "misinformation campaign" is what the authoritarian dictators call truthful information that thanks to the power of the internet they can't suppress.
Just today Twitter announced it was suppressing opposition accounts at the behest of Turkey's authoritarian president, and China has done a great job of suppressing information. These things are tools, and those in power know how to wield them imho.
Most countries don't have the will or ability to duplicate China's great firewall, and the other attempts to suppress information haven't been particularly effective.
On a technical level yes, but misinformation is rife. Are people as a whole better informed than 30 years ago? I don't know. But it certainly doesn't seem obvious that they are.
But probably more importantly, the internet era has seen a dramatic increase in inequality and the threat of climate change grow ever worse. The internet itself is ever more controlled by a few large corporations. Etc.
> the threat of climate change grow ever worse
Thanks for providing an example of misinformation.
The climate change danger was reported on over a century ago. The internet is a latecomer to misinformation trotted out for decades in print by ....... writers.