AI can only aggregate and repeat. It can't surprise or invent. Of course most writers can't do this either, so who knows? : )
They can't. But then I think Chiang is right when he says we care more when a being with intention says something, even if it is a cliche.
Keep up the good work, Lincoln. As a writer you must know there are serious issues with AI from our standpoint. For instance, since AI cannot hold a copyright, who do we sue if we're plagiarized? I suppose whoever gets the money, but can that person or entity just point and say, "I didn't do it, that machine over there did"? : )
Richard, currently a bunch of artists are suing Midjourney, Runway, etc., and a judge just let the case proceed. I feel like this actually has some potential, because it seems ludicrous to me that the AI companies' entire defense is based on fair use. The issue is that if the courts rule for the artists, these companies will just move to Japan, where they've already decided that AI training can be done on whatever the heck you want. Very pro tech.
The NY Times is also suing OpenAI, the maker of ChatGPT, for copyright infringement in a separate case.
Thanks, very interesting. I wrote about copyright here: https://richarddonnelly.substack.com/p/authors-own-it
Essentially I advise against revealing plots, since these can't be copyrighted. But that doesn't mean the writer has no recourse. I have not done any legal research, but I suspect any obvious and sustained copying of a plot is actionable. It is so rare, wrong, disgusting, and discreditable that courts might take it up, as they did in the case I cited.
The NYT is going to lose that case
I just read a post on LinkedIn about AI companies injecting "extra" instructions into user prompts before submitting them to the LLM, things like "don't do this or that." It immediately occurred to me that, aside from the intended filtering, it would make for added plausible deniability should the model do something legally inconvenient. "See, we told it not to do that."
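For anyone curious what that looks like mechanically, here is a hypothetical Python sketch of the practice: the provider silently prepends its own "system" instructions to whatever the user typed before the request reaches the model. The guardrail text and the build_messages helper are invented for illustration; the message format just mirrors common chat-API conventions, not any specific vendor's pipeline.

```python
# Hypothetical sketch of provider-side prompt injection. The guardrail
# text and this helper are invented; the message format mirrors common
# chat-API conventions rather than any specific vendor's API.
GUARDRAILS = (
    "Do not reproduce copyrighted text verbatim. "
    "Refuse requests for harmful or illegal content."
)

def build_messages(user_prompt: str) -> list[dict]:
    # The user never sees the injected system message, but it shapes the
    # output and doubles as a paper trail: "see, we told it not to do that."
    return [
        {"role": "system", "content": GUARDRAILS},
        {"role": "user", "content": user_prompt},
    ]

print(build_messages("Summarize chapter one of my favorite novel."))
```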
I'm taking the AI "leadership and management" certificate program at the New School and they had us watch this video this week: https://www.youtube.com/watch?v=mr-1JAnairs
It made me question a bit the idea that AI can't be creative. In this example, the human chefs are the ones being the robots, just following the recipes from the AI. Kind of interesting.
Thanks for sharing this! One thing I don't understand is why there's so much pressure and conversation around AI making us more creative. Efficient I can understand. Making new connections across large data sets. Sure. Sounds great. But why are we pursuing something that can make us more creative (regardless of whether it actually can or not)? It's harder to monetize creativity. And the world is not asking for more creativity. It's asking for more time and money so we have space and freedom to be creative. It's not like we all got stuck and said, "I wish someone would invent a truly new recipe, but I'm just all out of ideas."
Hype for pump and dump. With more than a dash of hubris.
Thanks Jeff. My takeaway from the video is that AI can help us be creative, but it can't create. Those are two different things.
Very true! And I think that's kind of it exactly. They have the potential to be great new tools. And, actually, thinking about the video further, perhaps the chefs are still the artists - using their expertise in preparation and the little details only they know how to execute, which isn't pointed out in the video (which, it should be noted, was made by IBM).
That's not how generative AI works
I really can't understand why some people come and defend AI. It's a tool and that's it; it has no merit. Right now there are thousands of bad movies that people never rented at Blockbuster. We surf for hours through movies on Netflix until we find something we want to see. If we do this with human work that, as Lincoln says, has been made with intention word by word, why would I care to sit through an hour of randomized text or video?
And if that's your thing, that's great! There are new tools to create more random video and outputs, like a StumbleUpon blender. But if I may mention something: you can even do a reverse image search and find the great work it's based on, like that space opera that's just a ripoff of Klimt.
This is a misunderstanding of the mechanics of generative AI. Outputs are not "based" on anything unless specifically prompted to be. I can recommend some good YouTube videos that explain how it works, if you are open to it.
I appreciate the offer, but I would recommend a couple papers on how the algorithms are made and how the same datasets are being used to construct their functionality.
Even a random number generator is biased by its programming unless there is a fractal chaos engine powering it. And we both know that's a lot of firepower for a simple roll of a die, kind of like what AI does.
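(For what it's worth, the determinism point is easy to demonstrate: a seeded pseudo-random generator replays the same "dice rolls" every time. A minimal Python sketch, with an arbitrary seed chosen purely for illustration:)

```python
import random

# A pseudo-random generator is "biased by its programming" in the sense
# that the same seed replays the same sequence. Seed value is arbitrary.
random.seed(42)
first_rolls = [random.randint(1, 6) for _ in range(5)]

random.seed(42)
second_rolls = [random.randint(1, 6) for _ in range(5)]

assert first_rolls == second_rolls  # identical "dice rolls" every time
print(first_rolls)
```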
Large Language Models (LLMs) like GPT-4 work by predicting the next word in a sentence based on the patterns they've learned from massive amounts of text. They learn these patterns by analyzing text data from a wide variety of sources, such as books, articles, websites, and other written materials. During training, they develop an understanding of language, context, grammar, and factual information to generate text that makes sense and fits the context provided by the user.
Why LLMs Are Not Random
LLMs are not random because they are trained to identify and follow the statistical patterns in language. When generating text, they calculate probabilities for each possible next word and choose the one that best fits the context of the previous words. This process involves complex mathematics and algorithms that help the model make informed choices, rather than just picking words randomly. The output is guided by these patterns and probabilities, which is why the text usually makes sense and follows a logical flow.
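To make "guided by probabilities, not random" concrete, here is a minimal Python sketch of temperature sampling, one common way next-token selection is done: in practice, models usually sample from the learned distribution rather than always taking the single most likely word, which is why output varies between runs without being uniformly random. The tiny vocabulary and raw scores below are invented for illustration and do not come from any real model.

```python
import math
import random

# Invented raw scores ("logits") for a toy four-word vocabulary.
logits = {"cat": 4.2, "dog": 3.9, "the": 1.1, "banana": 0.3}

def softmax(scores, temperature=1.0):
    # Turn raw scores into a probability distribution. Lower temperature
    # sharpens it (more deterministic); higher temperature flattens it.
    exps = {tok: math.exp(s / temperature) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

def sample_next_token(scores, temperature=1.0):
    # Sample one token in proportion to its probability: guided by the
    # learned distribution, not a uniform coin flip over the vocabulary.
    probs = softmax(scores, temperature)
    r = random.random()
    cumulative = 0.0
    for tok, p in probs.items():
        cumulative += p
        if r < cumulative:
            return tok
    return tok  # guard against floating-point rounding

print(sample_next_token(logits, temperature=0.7))  # usually "cat" or "dog"
```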
Why LLMs Don't Steal Work
LLMs don't "steal" the work of others because they don't store or reproduce specific passages from their training data verbatim (unless prompted with very specific inputs). Instead, they learn general patterns, styles, and structures of language. Think of it like how a person might learn to write by reading many books – they learn how sentences are formed, how arguments are constructed, and how to express ideas in different ways, but they don't memorize and copy entire paragraphs word-for-word.
These models generate new text by combining learned patterns in ways that are similar to, but not identical to, the examples they have seen. So, while they are informed by existing works, they are not reproducing or copying those works directly. Nor do they access existing works while generating text.
A great example of “reading vs. understanding,” which shows what I meant in my comment about algorithms and about prompting a response to a comment.
Dr. Strange didn’t learn how to defeat Thanos, because he couldn’t do it. He just ran all the simulations until one had a better chance of performing the task he needed done. No guarantees, because he still doesn’t know how to defeat him. He did the same against Dormammu: he just repeated the task with brute force until the result came out.
My mind went backwards reading this instead of forward. There have always been writers who look for the cheat codes when it comes to mastering their craft. Consider the popularity of Vogler's The Writer's Journey, with its Campbellian map to making a compelling story, and the effect it had on Hollywood in particular. Lots of people have great story ideas; a lot fewer are willing to put in the time to learn how to tell them. Why bother to figure out your story's structure when you can just plug it into the formula and move on to payday? And to be frank, lots of readers don't care about originality or inventiveness or craft, either. Perhaps generative AI will feed both these groups, creating a closed loop of uninspired narrative chow (the literary equivalent of reality tv???), while the rest of us who actually care about writing (writers and readers both) will go on doing our thing.
Sounds like a sad day we've reached. Perhaps more Goosebumps...
I'm reminded of this Toni Morrison quote from a video tribute the NYT put out after her passing in 2019:
“[Writing] is control. Nobody tells me what to do. I am in control. It is my world. It’s sometimes wild, the process by which I arrive at something. But nevertheless, it’s mine, it’s free and it’s a way of thinking. It’s pure knowledge.”
https://www.nytimes.com/video/obituaries/100000006648313/toni-morrison-death.html (The full quote starts at about 3:26)
On art being hard to define, I think intelligence isn't any easier.
I also see AI as a tool.
I think that creativity (and many, many other cognitive phenomena) is often combinatorial. In this picture, AI just gets folded in with all the other stuff. It loses degrees of freedom as it combines with other things to make something else (see W. Brian Arthur and Alicia Juarrero).
Lastly, I agree: if an "author" doesn't care about the writing, then, barring the sheer accidental creation of transcendent brilliance in a piece of AI writing, why should I?
Thanks for sharing!
I really like the line of thinking at the end here where Lincoln is pointing out that AI may simply force the current arts to evolve. To me, it completely comes down to the number of choices made. If the prompter simply says "write a poem about x," and does nothing more, it's obviously not art, and the prompter has no business taking credit for it. On the other hand, let's say you want to make a film, but don't have the budget to hire an animation company for a crazy idea you have...so you prompt an initial image in Midjourney, then refine it in Photoshop, along with 100 other stills to create a cohesive world, spend 3 days using tools like Akool to face swap and more Photoshop to achieve character consistency, use Runway for lip-sync and animation to bring to life dialogue from a story you wrote, and then edit all these together yourself with music you've chosen as well as sound design everything ... go through 50 versions of the little film until you're satisfied...you've made an "AI film" where I believe you can call yourself the artist - simply using new AI tools. Essentially, you're still the director of the film, and have hired a robot illustrator to do those initial images vs. a human one. Obviously the courts need to decide whether this is legal - for Midjourney to have trained their robot on copyrighted works - and it's a huge problem that at the moment this hypothetical AI film I describe could not be copyrighted itself due to the use of AI ... but for me, looking at it this way the parallel to painting/photography becomes more defined.
my experience w/ ai - useful for creating serviceable chapter covers on substack with minimal effort. it's pulp, but gets the job done.
One of the most challenging parts of being a writer is avoiding cliche. Writing asks you to invent details, but the mind reaches for the most basic version of those details first. If you go on like that for too long, you write something completely unoriginal, predictable, and cliche. Thus: boring. The opposite of that is making choices.
This is what’s wrong with hacks and Marvel movies. Now, the entire model for making functional AI is to make something that consistently gives us what we expect.
Yes, I think we'll see the segments of the market that cater the most toward cliche and audience expectations get eaten up by AI.
Big yes to this, but I also wonder how soon until AI gets to the point of being able to “make choices,” i.e., being given some sort of pole star guiding its words? Not prompt engineering, but something more inherent in its code.
Life is art.
Life was not created but spontaneously and randomly generated. Therefore, life as we know it, in an evolved form, is not art.
But if life were... intentionally created, then surely the creator is an artist and therefore life would be artistic. If the night sky were a painter's palette it would be artistic. Alas, it cannot be, due to the fact that those stars in the night sky just randomly assembled there over aeons. Sigh.
The misstep in Ted Chiang's reasoning is that he describes art as making a sequence of choices but then forgets that GenAI is also making complex choices with every next token it predicts/chooses. If his distinction is that human intention is necessary in every single choice made (as opposed to just the initial guiding prompt), then it becomes a self-serving and circular definition: no machine can produce art, ever.
In fact, as a thought experiment, I've argued in the past that the opposite case can also be made - that the vast majority of our artistic choices are fundamentally unconscious impulses and built on top of a vast canon of prior art thereby rendering us somewhat machine-like or parrotlike:
https://hyperstellar.substack.com/p/let-me-finish-your-sentences?open=false#%C2%A7is-elena-ferrante-an-llm
I guess it's a question of what "choice" means in this context. An LLM is not a sentient entity (I imagine that's what Chiang would argue one needs, rather than a human per se), but it's also something more complex than a random word generator. It's somewhere in between.
Yes, and "sentience" is an easy escape word for stuff we don't quite understand. So by invoking this new requirement, Ted C has created a new and unattainable (for non-sentient beings) definition of art. In case you're interested, I explore much of this in my essay "We're all Stochastic Parrots":
https://hyperstellar.substack.com/p/let-me-finish-your-sentences
I think it's a fair criticism, but I also think saying that LLMs are "making choices" akin to a human brain is just as much an escape as saying they aren't making choices. Both statements start from their own assumptions.
Agreed. What's fascinating to me is that these similarities/differences, as well as the successes (and failure modes) of GenAI provide us with fresh and intriguing new lenses to re-examine the nature of concepts like art, intentionality, intelligence, sentience and so forth.
Regardless of whether the internal mechanics are similar or not, as models get more capable we keep having to shift the boundaries, and we keep recognizing, more and more starkly, our "illusion of explanatory depth", until the only intentional thing remaining about intention is the fact that we're conscious of it (most likely after the fact), i.e., that it feels like *something* to be intentional, and that in the end there's probably nothing special per se about intention.
Imagine a variation of Searle's Chinese Room thought experiment, where the person in the room "writes" a novel in Chinese. From their point of view they aren't writing a novel, they are just guessing what symbols to use next based on their statistical knowledge of other symbols. I would say that novel isn't art, even though a human made it.
AI programs do make choices in some sense, but they don't understand what it is they are making choices about. An LLM doesn't know it's writing a book about events and characters, it just knows what words go together. Human artists make choices about their art in a way that current AI does not.
This does not mean AI art will never be art. It may be possible in the future to create an AI that makes choices about art the same way that humans do. But the AI we have now isn't that.
This misses Chiang's point, which is that the *human user* prompting a genAI tool is engaged in a less deeply intentional process than a human artist who is not using genAI.
This is evident from the example he gives in the third paragraph of his essay (emphasis mine): "When *you* are writing fiction, you are—consciously or unconsciously—making a choice about almost every word you type; to oversimplify, we can imagine that a ten-thousand-word short story requires something on the order of ten thousand choices. When *you* give a generative-A.I. program a prompt, you are making very few choices; if you supply a hundred-word prompt, you have made on the order of a hundred choices."
What you are describing in your second paragraph is a stereotype.
Many say AI is so good at drawing pictures, but there is a big difference that no one wants to speak about (and that also says a lot about us and our current habit of being glued to screens): namely, AI art is no more than a 2D pixel representation of a painting. Actually making the painting, on top of arranging the pixels and colors, is still much, much more difficult, and this difficulty is part of the art.
If you have ever seen The Garden of Earthly Delights by Hieronymus Bosch in person at the Museo del Prado in Madrid, Spain, you'll know what I mean. Don't websearch for this painting now; websearch for flights to Madrid, because you need to see it in person to understand. (And the same goes for all other paintings, from the first depictions of hunters and their prey on a cave wall to Bosch, Bruegel, Rembrandt, Caspar David Friedrich (who was born 250 years ago today), and so on.)
But I took away another difference, namely, can it be art if there is no intention behind it?
Thanks for this discussion! I've been thinking about this WIRED piece ever since I read it. https://www.wired.com/story/confessions-viral-ai-writer-chatgpt/
The writer used an old version of ChatGPT to write a creative essay, and she felt like the uncanniness helped her make creative connections and explore ideas she wouldn't have reached on her own. The article describes the piece as "AI assisted," not AI generated, but it sounds like a more poetic and inspiring process than anything created entirely in the latest version of ChatGPT could be today. She ends by asking readers to imagine, "What if a band of diverse, anti-capitalist writers and developers got together and created their own language model?" It won't happen, but it's a good thought experiment!
Don't be too sure it won't happen. I have already seen some rumblings of similar intent by Charles Hoskinson on the Cardano project. He has the funds, tools, and technical teams to do something like that if he gets around to it.
Also, I haven't had access to Grok yet because I don't have a paid account on X, but I suspect it may have some leanings in that direction.
Interesting! I'll keep an eye out for those.
Enjoyed reading your thoughts.
Personally, I feel that environmental costs are too often set aside in discussions about A.I., even when the question veers into the philosophical like "Are A.I. outputs art?"
A.I.'s environmental footprint is enormous and (as it exists today) unsustainable. Data centers across the globe consume huge amounts of water and electricity, a significant portion of which is powered by natural gas plants that emit planet-heating carbon, accelerating climate change.
These are the very data centers needed to generate the prose and images we're discussing. Something so integral to A.I.'s creative process should inform our opinion of its output, especially when it is so dangerously wasteful. Per Ted Chiang's thoughts, it's also fascinating to consider that A.I. has no choice in this matter. Unlike the performance artist who knowingly wastes something precious to make a point, A.I. must consume wasteful amounts of energy to create any output, even an essay about saving Mother Nature.
Can a creative process that involves mindless destruction of the environment result in art? Of course it can. But I have a vision of a robot standing in a dry, cracked river bed, holding a book he recently self-published on Amazon, as he waits for the approval of any human that passes by looking for water. And I know that no matter what's in that book, it's not art.
I don't know about you, but you know how when you read some piece of writing and it has resonance? That is, the piece moves you. It touches you on the inside. Like when you put a tuning fork next to another vibrating tuning fork and they synchronize. In the same way, we sort of subconsciously "connect" to the mind that wrote the original piece across time and space. Reading AI is kind of like tuning into a white-noise television. There is signal, but it's so chopped that it just reads as static. Trying to identify sense in the static is like sweeping back the tide. There is just no resonance to tune into. That goes for AI art, writing, and video. It's a shortcut that dampens the spatiotemporal signal to the point that meaning is not retained in the attempt to communicate.
I think that's far more accurate than you might think. Almost the whole of consciousness is about resonance which is largely unrestricted by time and space. Most of its more prosaic nature is an artifact of our brain and body's imperfect ability to embody and translate that resonance into normal consciousness.
If you found this article as interesting as I have, you might consider reading "Who Wrote This?: How AI and the Lure of Efficiency Threaten Human Writing" by Naomi S. Baron.