Processing: How Vauhini Vara Wrote Searches: Selfhood in the Digital Age
The author on writing with and without AI, the beauty of human storytelling, and the biases of big tech.
On a semi-regular basis, I interview authors about their writing processes and the craft behind their books. You can find previous entries here. This week I’m excited to be talking to Vauhini Vara about her new essay collection, Searches: Selfhood in the Digital Age, which will be published next week on April 8th. Vara’s previous books include the excellent (and Pulitzer Prize finalist) novel The Immortal King Rao—you can read our interview about that novel here—and the also-excellent short story collection This Is Salvaged. Vara is also the author of one of the first, and still best, AI-assisted essays: “Ghosts.” In that viral 2021 essay, Vara uses GPT-3—a precursor to ChatGPT—to help her write about her sister’s death. Vara’s new book, Searches, draws on “Ghosts,” her career as a technology journalist, and her personal history to explore the ways modern technology and global tech companies are shaping our lives.
I talked with Vara over Zoom about writing with LLMs, the biases built into new technology, and the dangers of letting AI do our thinking for us. (The interview has been condensed and edited for clarity.)
Searches is structured with what felt to me like a three-pronged approach: you have the core, more traditional essays; those are followed by what I thought of as more experimental essays that complicate and deepen them; and then each pair of essays is fed into ChatGPT. There’s an ongoing discussion between you and ChatGPT throughout the book. So, each section has three parts. My first question is: how did you come up with this structure for the book? And in general, how did the whole book come together?
Yeah, so I sold this book on proposal. The reason I bring up the business side of it is that when I sold it, the book I proposed was just the experimental essays. The even-numbered chapters in the final book. So, it was just this series of eight or ten weird, experimental essays. That was the book that my then-editor, Lisa Lucas at Pantheon, bought. And that was the book that I sent to her, around this time last year. When she read it, she came back to me and said, “You know, I love these, but I think there is a lot that is subtextual in them where you as the author know what it is that you’re trying to get across but readers might not. Have you thought about adding an introduction and conclusion where you’re setting up what these pieces are going to do and then closing them out?” I came away from that conversation being like, wait a minute, this book doesn’t just need an introduction and conclusion. It needs like a whole new arc that spans the entire book!
And so, in about a month—and I’m a very, very slow writer; I wrote my first two books in more than ten years each—I drafted all these additional chapters: the chapters that are now odd-numbered and form this narrative arc that carries you from the beginning to the end of the book and sets up each of those experimental essays. It was just a small comment she made, but I realized how true it was that I was relying too much on subtext. I wanted certain things to be explicit, and the only way I could do that would be to say them outright. And I also wanted to give readers more of a grounding in my own personal context that led to the creation of each of these pieces.
Another thing that Lisa said—just to give all the credit where it’s due—was, “You know, ChatGPT is a thing now, and it’s not really that present in the book as you’ve currently written it. I wonder what would happen if you fed your book to ChatGPT.” My initial gut reaction was, oh, you know, that sounds terrible. [Laughs] No, never. But then I wondered. The way I thought about the book engaging with technology was that when I have this essay made up of my Google searches, for example, or an essay made up of my Amazon reviews, I was thinking about the ways in which it both revealed something about me and revealed something about the technology itself. And so, I was thinking about whether engaging with ChatGPT might do a similar thing. I thought, well, let me see what happens if I give ChatGPT two chapters of this already finished book and let it engage with the book. I was sort of disgustedly delighted by the way in which it revealed some things about me, certainly, but also revealed so much about itself and the way the technology works. That’s what made me realize, okay, this is a viable way to approach a third part of the book.
That’s fascinating, because I would have guessed the opposite. That the odd-numbered, more traditional essays came first. To give anyone reading this an example of these experimental chapters: your first chapter talks about the history of internet search and also your own personal history of early internet use. That is followed by an experimental chapter where you got Google to send you all your search queries, and you made a really interesting collage of them. Were there other experimental essays that you wanted to make work but that didn’t quite fit in the book, and that are now in your computer’s recycling bin or, hopefully, saved in a drafts folder?
Yeah, actually. Most of them are ideas that didn’t quite pan out. I had this idea to write an essay made up of other people’s tweets, back when X was called Twitter. What I wanted to do was create a found-material collage essay made up of what other people were saying, as a commentary on how authorship can be fluid and shared in these kinds of spaces, and also on the way things we publish online can be appropriated by the big technology companies that create the platforms for their own needs. I ultimately didn’t use that one for a more intellectual reason than a craft reason. I felt that if I used other people’s language without their consent in a piece of my own writing, I would be replicating what I’m suggesting the big technology companies are doing. For that reason, I didn’t include it. But I still wanted to get at that idea, which is why I included another chapter in which there’s a more consensual use of other people’s writing.
So, I call this interview series “Processing,” mainly because I feel as writers we’re often not asked much about writing process, craft, and those kinds of topics. Given the focus on technology in the book—and you were just talking about using ChatGPT—I’m very curious about your writing process as it relates to technology. When you’re not working with ChatGPT explicitly, are you the kind of writer who uses a Freedom app or turns off the WiFi? Or the reverse, are you always online while you’re writing?
Yeah, no. My husband, the writer Andrew Altschul, is that kind of writer. He’s always finding he’s locked himself out of his computer by mistake because he was, like, too strict with his settings. I have to go in and let him back onto his computer. [Laughs] I’ve never done that. I think part of the reason is my work is quite engaged with technology. My use of the internet is deeply intertwined with my writing process. I write in Google Docs. I have multiple tabs open. If I get distracted while I’m writing, I never really stop myself from going down a Wikipedia rabbit hole. And in my first book, my novel, part of the reason I wrote a novel in which there’s a character whose brain is attached to the internet is that I just found myself going down these rabbit holes and being like, Oh, I’d love to include that in my book, but there’s no organic reason to do that. That character is what allowed me to do it. But no, I’m always going down Wikipedia rabbit holes. I’m always pausing and checking my email or refreshing the New York Times website.
I don’t ever use AI models to produce prose that I call my own. However, I will use ChatGPT for things that I would use Google for, like trying to think of a word that’s on the tip of my tongue or if I’m just trying to get information about something. I feel less conflicted about that as a writer than about using AI to create prose. There are other things I feel conflicted about, like the climate implications and other sorts of ethical issues with AI. But I do use it for certain things.
I feel like one thing that’s so tricky about talking about AI in general—whether you’re talking about it in art or ethics or academia—is that there are just so many uses for it. Using AI to do a little bit of research, or as a slightly better grammar and spellchecking service, is dramatically different, artistically speaking, from letting ChatGPT write all the description in a scene that you don’t want to write.
Right.
There's not really a question there. [Both laugh.] You brought up your great novel The Immortal King Rao. And you also have a story collection, This Is Salvaged. As someone who moves freely between fiction and essay writing and journalism, is your writing process different moving between nonfiction and fiction? I don't necessarily mean about technology, but, for example, do you outline in one case and not in another?
I tend to outline pretty much anything journalistic I write. I think maybe here’s the difference. In what I would call creative or literary work, whether it’s fiction or nonfiction, I’m writing to figure out what it is I’m trying to say in the first place, which for me resists outlining, right? In more conventional or traditional journalism, I also start out not knowing what it is I’m trying to say, but the process by which I start to understand what I want to say takes place in large part through the reporting process: through talking to people, compiling notes, looking at transcripts of conversations. By the time I’ve done all that note-taking, I often do have somewhat of a sense of what the story wants to be and therefore can outline it and then just plug in the pieces. I find myself to be a much faster writer when I’m writing more conventional or traditional nonfiction than when I’m writing either fiction or unconventional nonfiction. Like I said, my novel and short story collection each took me more than ten years. For this book, I drafted all the somewhat more traditional or conventional chapters in a month. And that’s at least a third of the book. I don’t know that I have a smart answer about why that’s the case. I think it honestly has to do with the fact that I’ve spent 20 years on deadline and have trained myself to be less precious about that kind of work than I am about the other kind. Like, something needs to get written so that there’s not a hole in the newspaper or magazine the next day. You know?
Yes, definitely. It’s the same for me when I write journalistic work. So, one thing that was personally satisfying about Searches was that your experiences, from the early days of the internet to the present, largely mirror mine. We must be about a year apart in age. I remember AOL chat rooms, joining Facebook in college, all of that. It made the book extra poignant for me, but it also made me think about how our generation has seen the entire history of the internet. Sure, we weren’t there for the 1960s ARPANET or whatever, but from when the internet was really broadly adopted in America, we’ve been there from the beginning. It made me curious to ask what, if anything, you miss about the old versions of the internet, before it was all controlled by Google, Amazon, Meta, and the other companies you write about in the book.
I think it was a sense of discovery and play early on, which you still find in pockets of the internet, and even on big platforms like TikTok. At the same time, when I started writing the book, I was very nostalgic about the 90s-era internet and the way it felt to us less bound up in the power and wealth of big technology companies than it does now. But as I reported that out—you know, the reason my family and a lot of families got on AOL is that AOL had a multi-hundred-million-dollar campaign where they sent CDs and floppy disks to people’s houses saying, “Hey, if you install this, we’ll give you X number of hours of the internet for free and then you have to start paying.” So even my first use of the internet was bound up in what could be characterized as the big tech of the time. By 1994, just a couple of years after the internet was quote unquote invented, the first commercial website was created and ideas were percolating about the way the internet could be monetized. It was less visible to us at the time, and certainly more fragmented, with more corporate players involved, but it was always bound up with capitalism. So, that surprised me. It’s not something that I recognized when I started writing those sections of the book.
I wanted to ask you more about AI, since I love your thoughts on the subject, both in this book and when we’ve talked about it privately. One topic that comes up a lot in the discourse about AI is whether our increasing dependence on things like ChatGPT is actually degrading our thinking. If outsourcing our thought process makes us dumber. Some studies have indicated as much, and others disagree, but I was curious if that’s something you fear.
Yes. Well, rather than saying I fear it, I’ll say it’s something that I have been thinking about a lot. Maybe because I have a background as a journalist, I’m always really concerned with evidence. I don’t want to theorize about what ChatGPT is going to do to our brains. I will say, though, that what I find troublesome is that, because there’s so much money behind the technology, it proliferates faster than we’re able to understand its effects, right? We know now that using social media more is correlated with anxiety and depression in teens, for example, but that research wasn’t solid 10 or 15 years ago, when social media started to become such a big part of our culture. It feels like we just start using this stuff because it’s available, because there’s a lot of capital behind getting us to use these products. The products are typically free, to an extent. They’re there. And so, we think, how bad can this be? And we don’t know what it’s going to do to us, right? At least from my own perspective as a writer, given that I write to figure out what it is that I’m thinking, using a language model to produce writing is necessarily going to bypass that step. I was interested in this recent study from Microsoft and Carnegie Mellon, which found that workers using AI—specifically, generative AI like ChatGPT—felt it hurt their ability to think critically and solve problems on their own. It feels like I’m on solid ground, even without academic papers to back me up, to say that getting AI to produce text is always going to be functionally different from producing it myself, because that thought process is just not going to occur if I’m not writing it myself.
I feel similarly. Obviously, I have this craft blog, and I try to think on a craft level and not in a woo-woo mystical way. Yet there is a bit of woo-woo mysticism for me in art. Part of that is that art seems like something we create by diving into ourselves in a way. It often requires a certain amount of isolation, or even productive boredom. That’s something we’re losing, not only to ChatGPT but also to social media and constant cell phone use and so on. I’m also conscious that this might be a very outdated view. Maybe the idea of the artist will shift from expressing oneself to collaborative “content creation” with machines. I don’t know.
I have thoughts on this. Do you want me to tell you my thoughts?
Please!
The thing that I find really interesting is the question of what the function of communication is. What is language for in the first place? Human language exists so that humans can communicate with one another, right? I think we talk a lot about the perspective of the author, the speaker, and that's one part of it. But just as important is the role of the interlocutor, the listener, or the addressee. Have you read that paper “On the Dangers of Stochastic Parrots”? It's that paper by Emily M. Bender and Timnit Gebru and their co-authors.
Yes, I know it.
One thing I find really interesting in that paper is the way the authors talk about the role of the relationship between a speaker and an addressee in creating conversation. So, I not only know the perspective from which I’m speaking, but I’m also creating a mental model of who I’m speaking to, whether it’s you in this conversation or whether it’s this abstract idea of an addressee in my book. I don’t know which literal readers are going to pick it up, but I have this idea of a reader, of this construct of an addressee.
The thing about large language models is that they not only don’t represent an individual perspective, because they’re just these statistical models, but they’re also not conceiving of an addressee the way that we humans conceive of an addressee. So basically, the entire function of communication breaks down, because they’re not doing the same thing. What they’re doing is imitating what we do by producing language. It only looks like what we’re doing, but it’s not actually what we’re doing, you know? What I find really striking is people might say, Oh, I wouldn’t use ChatGPT to write a novel for me, but it’s fine to use it to send an email to my kid’s teacher. Or, I’m a teacher, and I think it’s fine to use it to send an email to the parent of one of my students. That potentially disregards all the nuance of human communication that an AI model can’t understand.
One of the essays in Searches is “Ghosts,” your viral essay published in The Believer in which you use ChatGPT to help you write about your sister’s death. It remains for me one of the most interesting artistic uses of LLMs, or large language models. I recommend it all the time. Even as an AI skeptic, I’m interested in the potential artistic uses of AI. But I will say that when I was rereading “Ghosts,” I was struck by the way ChatGPT works by basically not working. It keeps producing text that is not what you want to say and also not accurate. It made me think of what you were just saying about the concept of an addressee, or who a writer is writing for. In that essay, you will write a paragraph about your sister’s illness, and the program doesn’t really know what you’re looking for, so it’s like, “Well, maybe this person wants a romance novel,” and then it spits out a brief romance with a professor that quickly becomes creepy.
Obviously you’re very conscious of this, and you put it this way: “the machine generated falsehoods compelled me to assert my own consciousness by writing against the falsehoods,” which is an interesting way in which AI can paradoxically drive the writer deeper into the self.
So much to say on this topic. One thing I’ll note is that I wasn’t using ChatGPT. I was using its predecessor, GPT-3, which I point out only because that model tended to produce what I think of as quote unquote more creative or quote unquote more original prose than ChatGPT, which is why I feel sort of confident that it’s possible to create a model that produces more interesting prose. But that’s an aside. What’s interesting to me is that there were sections of text the large language model produced that I found beautifully written. One example, which I write about in the book, is where it has me and my sister sitting at a stoplight and my sister grabs my hand, and it says, “this is the hand she held, the hand I write with, the hand I am writing this with.” Part of what makes me find that so good is that the language is representing an idea about embodiment that we think a machine can’t represent, because it’s not embodied. A lot of the anti-AI arguments say an AI model can never truly write because it doesn’t have a body like humans do, right? But there it’s saying something fairly profound, in my reading, about embodiment and loss and grief. And yet it wasn’t my experience. My sister literally never held my hand at a stoplight. It’s just not the kind of thing we typically did. We didn’t grab one another’s hands at random moments, poignantly. It was intellectually engaging for me, but it was still false. And so, for me as a writer, that was insufficient. I think the jury’s still out on the question of whether that matters to a reader.
But to speak to your other point, what was interesting about my engagement with GPT-3 was that by creating these falsehoods it was compelling me to write against those falsehoods and write something that was truer and truer as the essay went on. I think it can easily be argued that this large language model was helping me to write better. There was this research out of Cornell a couple of years ago—I can send you a link—where they gave people an essay prompt about whether social media is good or bad for us. One group wrote the essay on their own, without any large language model. Another group wrote the essay using a large language model that had been fine-tuned to produce positive opinions about social media. And a third group wrote the essay using a large language model fine-tuned to produce negative opinions about social media. What they found was that the people who used those large language models ended up writing essays that were twice as likely to be aligned with the quote unquote opinion of the model they were using. As somebody with 20 years of experience as a writer, who writes for a living, who knows a fair amount about technology and understood, at least on a basic level, how large language models function, I was able to have an engagement with the large language model in which I responded in a way that drew out my own perspective rather than being aligned with its perspective. But if I had written that essay 20 years ago, when I was just starting out as a writer, I don’t know that I would have been able to do that. I think it took very active critical engagement on my part. I think, maybe especially if you don’t write for a living—which isn’t to say there’s anything special about those of us who do, just that we have a lot of practice—it’s probably easy not to recognize that you’re being influenced by what the large language model is asking you to do.
That’s exactly what I fear for younger artists, not because of their age but because of their lack of experience: that they are being unconsciously manipulated by these AI models. Not just with writing. With visual art too. If you use DALL-E or Midjourney or whatever and they’re always producing the same kind of compositions, say, you might not experiment with composition as much yourself.
And there’s a larger political question, which your book spends a lot of time addressing in interesting ways, about how the biases built into these models might affect, well, everything. The racial biases, gender biases, and other biases that are now becoming the default for Google search and so on.
Totally. I write in the book about my dad’s use of ChatGPT to edit an email. He said something like, you know, in order to understand people’s experiences, you have to spend time with them and talk to them about what they’ve been through. Only then can you understand their perspective. That’s what ChatGPT wrote. He also sent me his original email, where it was pretty much the same sentence, but “then” and “only” were inverted. I raise that because the “then only” construction is a very common usage in Indian English. My dad is Indian, and it’s totally correct in the context of Indian English. ChatGPT was using American English, so it inverted that “then only” and turned it into “only then,” and thereby subtly inserted this American English bias in a way that he might not have been consciously aware of. But it was there. It’s hard for me, too, to look at what these models produce and analyze them for their biases. I think it’s really difficult for any of us to do, because they’re black boxes. We don’t know how they work.
That specific anecdote reminds me of the way a lot of accents in America have been flattened since the introduction of TV. Once everyone’s watching the same broadcasts, people start to speak the same way. Maybe there’s a way in which using AI will make us all start to think in the same way.
There’s also this company, apparently, that makes technology for call centers: for agents who don’t have American English accents, their employers can use it to turn their accents into American-sounding ones.
I did hear about that. So much of AI is this game of hiding its existence, right? Like, you’re not supposed to know the call center is using this accent changer. Or you’re not supposed to know that your boss has sent you a ChatGPT-authored email, and they’re not supposed to recognize that you let AI write the email you sent them.
Yeah.
It also makes me think about how your book investigates—and I think almost anyone can relate to this—how you wrestle with your complicity with these large, somewhat dystopian global tech companies. How you try not to order from Amazon, but do, for example.
Yeah. I was interested in our complicity in the rise of these companies and their products, because I think sometimes when we talk about big tech, there’s this binary way of talking about it. Either we’re being exploited, almost as if we have no agency and are just vulnerable. Or, as the companies will often say in response: you have agency. It’s your choice. You can use these products if you want to. You’re clearly using them because they provide some benefit to you. And the tension between those perspectives, the fact that both of these things are true, is much more interesting than the binary way of presenting it. For me, acknowledging my own agency, acknowledging the ways in which I am making decisions every day when I’m using these products, is potentially a starting point toward making other choices, as individuals and as a society. If it’s the case, as the companies argue, that we do have agency and we’re making choices, which I would argue is accurate, then it would follow that there are other choices we can make as individuals and collectively. And that’s what I find interesting and compelling.
Maybe I should wrap up here. You alluded to this before, but the book ends on a chapter in which you gave a survey to people who identify as women and asked them a bunch of questions. Then you collaged their answers. How did you view the relationship between that accumulation of human-authored text and the way LLMs like ChatGPT work?
I love that chapter so much, and the reason I can say that is that I didn’t write it. I think I became a writer because I love the way human beings use language and tell stories. I probably became a writer partly from just listening to my mom tell me stories about her life as a kid. Every time I’m in a long Uber ride, I’ll get into the most fascinating conversations with the drivers. I love talking to people. I love knowing people’s stories. There’s so much richness in the way all of us tell stories, not just those of us who are employed as writers. In my work as a journalist, I’ve always found that compelling. I did this project several years ago where I collected oral histories from workers who have jobs that didn’t exist a generation ago, and I felt like I was making an implicit argument: anytime we publish oral histories, we’re making an implicit argument for the artistry of language, all our language. It’s not just professional writers who can tell stories, and I love that.
The way LLMs function is sort of the opposite. There’s this flattening effect to the technology. Because it’s built to try to sound like this idea of the average human, it doesn’t sound like any actual real human. The specificity of our experience is what makes it interesting to hear about one another’s experiences. People are nuts. I could not dream up, and an LLM could not dream up, all the interesting ways that people answered those questions.

If you enjoy this newsletter, consider subscribing, checking out my recent science fiction novel The Body Scout—which The New York Times called “Timeless and original…a wild ride, sad and funny, surreal and intelligent”—or preordering my forthcoming weird-satirical-science-autofiction novel Metallic Realms.
It would be interesting to see the result of another try at "Ghosts" with the newer models.... GPT-3 is an abacus compared to a calculator.
I think true artistic writing will continue to shine through any LLM-slop. LLMs may on average raise the quality of contemporary fiction by making technical perfection easier to achieve.... But there is much more to art than mere technical perfection. This is not the case in science (my profession), where methods and technique are paramount.
Fascinating interview, Lincoln. I especially like the last question about human agency: even if a new technology like AI becomes ubiquitous (and many might argue it already is), we must be discerning. The idea of feeding a prompt or outline into a generative AI model and seeing what the machine spits out is one thing. The idea of taking that as a seed and making it into your own work is quite another.
I feel like we creatives, in particular, must be guardians of human originality and the art of expression, and be on alert against that "flattening" you mention--the Walter Cronkitization of language, art, music, and storytelling, to use the TV analogy.
Thanks for going deep!*
--
*Note: for fun, I fed this comment into ChatGPT and asked it to rewrite it as a response to a post about creativity, writing, and AI’s tendency toward homogenizing and flattening language. Here was its response:
Fascinating interview, Lincoln. I especially appreciated the final question about human agency. Even if a technology like AI becomes ubiquitous—and many would argue it already has—we still have to be discerning in how we use it. Feeding a prompt or outline into a generative model and seeing what the machine spits out is one thing. Taking that output as a seed, then shaping and transforming it into something truly your own, is something else entirely.
As creatives, I think we have a particular responsibility to be guardians of human originality and expression. We need to stay alert to the very real risk of “flattening” you mention—the Walter Cronkitization of language, art, music, and storytelling, to borrow the TV analogy.
Thanks for going deep on this—such a valuable conversation.
Thoughts?