You should tell people if you use AI. We can then choose to take our time, attention, and dollars elsewhere.
Yes, I think this is what most people will do (myself included, for most uses).
Great essay, and I agree on many points. What do you think about translation (and specifically, self-translation, to nullify the IP issues) using AI? It is not a big jump to assume that in the very near future, most translations will be done with the help of AI (in the worst case, with AI as translator and a human as an editor), and we would have to figure out the ethics of that. So, if one translates their own text to another language using GenAI, is it, in your opinion, much different than using other translation apps, like Google Translate? And is it much different than using a dictionary?
These are interesting questions for sure. I think an author has control of their work, including the right to translate it. I do think that good translation is an art form and involves a lot of knowledge of style, context, culture, and such that I'm not sure LLMs can ever achieve. (Just read multiple translations of the same book to see that.)
I'm not a translator myself, so the question of what tools a translator can use while translating and still receive credit as the translator is something I'd want translators to weigh in on.
Yes, that's fair. I agree that translation is a nuanced art, of course, but a common tool translators use is a word-for-word initial draft that they then edit into the final version. I am trying to understand (and I have no idea myself) what the implications are of using GenAI as a resource for that word-for-word draft.
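Concretely, the workflow I have in mind looks something like the sketch below. This is purely hypothetical on my part: the OpenAI Python client, the model name, and the `literal_draft` helper are all placeholders for whatever tool you'd actually use. The point is that the model is asked for a deliberately literal draft, and the real translation work happens in revision.

```python
# Hypothetical sketch of the "word-for-word draft" workflow described above:
# ask an LLM for a deliberately literal translation, then edit by hand.
# Assumes the official OpenAI Python client and an OPENAI_API_KEY in the env;
# any LLM would do.
from openai import OpenAI

client = OpenAI()

def literal_draft(text: str, target_language: str) -> str:
    """Request a literal, word-for-word draft translation to rework by hand."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": f"Translate into {target_language} as literally as "
                        "possible. Do not polish the style; this is a raw "
                        "draft a human translator will rework."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

draft = literal_draft("El sol se escondió detrás de las montañas.", "English")
print(draft)  # raw draft only; the actual translation happens in revision
```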
Personally, I wouldn't see any *ethical* issue with using AI to translate your own work, but, currently, I *definitely* wouldn't trust it to produce a decent translation.
I guess that would depend on your knowledge of the two languages.
I only really speak one, but I would want a native speaker or human translator to check anything AI did, 'cause my experience with AI translation of even short posts has been extremely disappointing. I definitely wouldn't want to put my name to a complete work that came across as poorly written in the other language.
As a rule, anything AI produces can never be the final version. You would always have to double-check, and you would always have to edit.
But that is also true if you hire a human translator. I had a lot of experience with hiring technical translators several years ago (before the AI option was available), and every word had to be checked every time.
But as I already wrote on a slightly different topic, checking is a much easier and faster job than doing the job itself.
I still think the biggest threat is theft. Since AI steals writing and represents it as its own, those using AI won't even know they are stealing. But if I'm the copyright holder, I don't care who or what you used to steal my work. I'm coming after you.
Your observation about AI-generated novels flooding Amazon Kindle would seem to devalue that particular platform / route-to-market even more.
Treating it as you would a human sounds good in terms of apportioning credit. But it seems like a bad idea to anthropomorphize something that is definitely not human. It also feeds into Silicon Valley PR and gives legitimacy to the billions in valuation.
I can't disagree with that rebuttal. There is probably a way to reword this, since I think the same rules apply to other programs. (E.g., I don't think using Word's spelling and grammar check is much different for this question than using ChatGPT.)
This is an interesting take. I've been afraid to use AI at all due to the backlash in the author community.
AI cannot be used ethically until it is trained ethically.
💯, co-sign!
As mentioned in the note that inspired the post, I co-sign this rule of thumb. It actually feels obvious to me.
Really enjoying METALLIC REALMS! It was awfully nice of you to write a book tailored precisely to my interests.
Thank you!
Just to add an example of AI being used generatively in a novel: American Abductions by Mauro Javier Cardenas features a 'Semantic Carrington Generator' that responds to prompts with sentences from Leonora Carrington's fiction. Apparently Cardenas programmed such a chatbot and interpolated its outputs to introduce an 'automatic writing' type of influence into his chapter-length sentences. One chapter dramatizes the programming and debugging of the chatbot, and it listens in and intrudes at other points throughout the novel (i.e. Cardenas would feed it bits of his prose and then include whatever Carrington quotation it spat out as though it were responding to something a character had just spoken aloud), but it's a very limited sort of chatbot being used in a very clear manner.
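For anyone curious about the mechanics, here is a minimal sketch of how a quote-retrieval bot in that spirit could be built. To be clear, this is my own guess at the general shape, not Cardenas's actual code: index the source sentences, then answer each prompt with the most similar one.

```python
# Hypothetical sketch of a quote-retrieval bot in the spirit of the
# 'Semantic Carrington Generator' described above -- NOT Cardenas's
# actual implementation. Given a prompt, it returns the corpus
# sentence most similar to it by TF-IDF cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Stand-in sentences; the real bot drew on Leonora Carrington's fiction.
corpus = [
    "The garden was full of mechanical birds.",
    "She spoke to the horse as though it were a judge.",
    "Supper arrived on a plate of cold moonlight.",
]

vectorizer = TfidfVectorizer()
corpus_vectors = vectorizer.fit_transform(corpus)

def respond(prompt: str) -> str:
    """Return the corpus sentence most similar to the prompt."""
    prompt_vector = vectorizer.transform([prompt])
    scores = cosine_similarity(prompt_vector, corpus_vectors)[0]
    return corpus[scores.argmax()]

print(respond("Tell me about the birds in the garden"))
```

Cardenas's version presumably did something richer, but even crude matching like this produces the "responding to something a character just said" effect described above.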
Very cool!
The question is... must we use it? Has not the role of artists in society been underscored by protest? Does it not only cheapen the art, but also groom a cheaper reader, one insensitive to the nuances of human expression?
The problem with this proposal is that treating AI like a human would mean paying it like a human, particularly for art creation. People use AI to save money and thus destroy the livelihoods of other creatives, which I can’t ever think is OK. (Aside from weakening your own creative abilities and critical thinking skills.) And the idea that you can use AI for research if you check it carefully is just silly; if you have the knowledge and skills to check it, why would you use it? People use it specifically because they do not have the area knowledge or research skills and thus cannot effectively check it. The only way to do reliable research is… to do research.
I largely agree with you but I'm not saying treat AI as a human as much as saying treat AI text as text generated by anything or anyone else. Including a paragraph of another author's text without credit is the same as including a paragraph of computer generated text without credit.
I disagree about research. Some AI programs can do things like analyze large data sets or pull information much more easily than humans can.
I kinda disagree with the last part. Solving a research problem and checking whether the solution is correct are two very different tasks in terms of resources, time, etc.
Maybe we aren’t talking about the same kind of research? As a historian, I don’t see how you can “check” history, for instance, unless you have studied it enough to know if something is false (you need to already have a base and to know how to judge credibility and sources). And doing the research is an important part of the thinking and creative process; I would not trust it to an AI AND it would weaken my own ability to interpret and produce new understandings.
What I mean by research is, for example, historical research for a fantasy novel in a historical setting, and emphatically not thesis work in history.
Say, you need to come up with a poisonous plant for a story in a medieval setting. It takes orders of magnitude less time to ask a GenAI to give you a dozen viable options and then quickly google them to refine and fact-check than to do full-scale research by yourself.
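To make that "quickly google them" step concrete, here is roughly what a first-pass check could look like in code. The plant names below are placeholders standing in for a GenAI's suggestions, and the sketch assumes Wikipedia's public REST summary endpoint:

```python
# Hypothetical fact-checking step: the candidate plant names below stand in
# for a GenAI's suggestions; we pull each one's Wikipedia summary for a
# quick first-pass check before reading the sources properly.
import requests

candidates = ["Atropa belladonna", "Conium maculatum", "Aconitum napellus"]

for plant in candidates:
    url = ("https://en.wikipedia.org/api/rest_v1/page/summary/"
           + plant.replace(" ", "_"))
    resp = requests.get(url, timeout=10)
    if resp.ok:
        summary = resp.json().get("extract", "")
        print(f"{plant}: {summary[:120]}...")
    else:
        # No article found -- a red flag that the name may be hallucinated.
        print(f"{plant}: no Wikipedia article found; verify by hand.")
```

A missing article is not proof of a hallucination, but it is a cheap first filter before the real reading starts.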
Why would this be different from an ordinary non-AI Google search and a quick dip into Wikipedia (except that they are less likely to hallucinate)? I don’t really see why it is necessary or an advantage. (And if it’s fantasy, you can just make up a poisonous plant! Although I guess that depends on how historical-ish you are trying to be.) I am not a medievalist, but if I were looking for armor styles or battle timelines or fashion, I wouldn’t ask AI for those things; I would just search and read.
Without disagreeing with you, I'd still say the chatbot way is okay to do. Like, if my student told me they asked ChatGPT to give them articles to read, I might roll my eyes a bit, but I wouldn't consider it cheating. If they got ChatGPT to write their paragraphs, well...
I have a friend (also a prof) whose PhD student provided a bibliography; the student had used ChatGPT, and the bibliography included hallucinated citations. I have a whole lot of wow about that story, but one part of it is that no one would have noticed if it hadn’t been reviewed by an expert in the field who knows who writes what. So I would have serious doubts that a ChatGPT-generated article list would include the articles a student should read for a project (peer-reviewed, credible, central to the topic) even if it didn’t hallucinate. You find those things by tracking down bibliographies, reading secondary sources, reflecting, spiraling out to find more that are relevant, etc. I would not necessarily call it cheating, but I also wouldn’t call it helpful or a way to learn research skills, which (for me) is usually the main point of assigning a research paper.
TBF, I have used ChatGPT to suggest a reading list. It wasn't wildly useful, and Google Scholar does a better job, but it *did* produce some surprising and wildly unlikely suggestions I probably wouldn't have considered otherwise (frankly because they weren't the best options, but fun in a kind of randomiser way).
You can, and people absolutely do that. My claim is that in this specific case, doing it with a decent AI and then doing a proper fact-check takes orders of magnitude less time and effort. If we treat both AI and Google/Wiki as tools of the trade, AI seems significantly more efficient.
But… OK, I don’t use ChatGPT or AI at all, but I have seen some outputs. Wouldn't you have to go to the source websites and read them in order to check it? Since you have to read to check anyway, and probably read several sources if you are really checking that the info you are getting is accurate and agreed on by multiple sources, what are the efficiency savings? Genuine question (I don’t have efficiency as my top goal, tbh, but I am curious about what advantage this really gives).
It's still pretty easy to spot AI-generated non-abstract art and illustration--certainly for my partner, a multimedia prof. Often, there's no caption or attribution either. It's usually not nearly as interesting as something created by a human with multimedia software. AI can imitate a style (van Gogh) flawlessly but, as yet, it can't create its own style.
Love this exploration of the ethics around AI and storytelling. Crediting AI as you would another human is a great rule. As an instructor, I don’t care if students use AI for proofing so long as the main content is theirs. It’s like having your English major roommate proof your paper. Many writers are entirely anti-AI. My view, AI ain’t going anywhere, so we need to get familiar with it. For those interested, this is my AI experiment: https://open.substack.com/pub/danieltoronto. It perhaps goes overboard on transparency, but I think of it as a learning tool. (I’ve certainly learned a lot.) Plus the stories are just fun. But this sort of story writing will never satisfy me in the way writing traditional fiction does.
“Just say no.”