The Endgame for A.I. Is Clear: Rip Off Everyone
Chatbots are unreliable and error-prone, but that won't stop corporations from using them to make everything a little bit worse
It may surprise you—as this newsletter pops up on your Google Glass while a self-driving taxi takes you to your job mining ape NFTs in the metaverse—but new tech doesn’t always live up to the hype.
A solid skepticism of Silicon Valley hype crystallized in me when I was a teenager and the entire media was abuzz with a new technology with the mysterious codename of “IT.” TV shows and articles proclaimed IT would soon revolutionize every aspect of life. We’d have to rebuild all of our cities from scratch. The entire way we traveled would be transformed. No less than tech oracle Steve Jobs declared it “as big a deal as the PC.” Then it was unveiled: the Segway. A dorky scooter. Needless to say, instead of inspiring us to build shining new cities in a gyroscopic utopia, the scooter’s biggest contribution to society was a recurring joke in the Paul Blart: Mall Cop franchise.
The hype around the Segway pales in comparison to the more recent hype around crypto, NFTs, and the intentionally ill-defined “web 3.” Segways were supposed to redesign our cities, but crypto was supposed to usher in a completely utopian future that would reinvent every corrupt institution. Crypto would free the people from the tyranny of governments, banks, and corporations! We’d put utopia on the blockchain! Yes, it was quite silly, even at the time. But billions were poured into crypto-related projects by both institutions and individuals. There was no fiat-free utopia, but there were plenty of rug pulls, scams, and robberies. A whole lot of people got ripped off.
Most of the merchants hyping crypto quickly pivoted to the Metaverse, yet that seemingly died before it even got off the ground and had a chance to rip everyone off. Regardless of whether these hyped technologies could, in some theoretical universe, produce utopian changes, the hype almost never accounts for the cold realities of the system we live in. That is, yes, capitalism.
So now everyone has scrambled to “A.I.,” whose hype has reached truly unbelievable levels. We’re told not only that “A.I.” will soon replace the vast majority of white-collar jobs and become our future doctors, professors, and artists, but that it can seemingly solve every problem we have. It will solve climate change, fix our politics, provide better services in every field, and free us from the drudgery of work. None of that has happened yet, of course. But “wait six months” or “just see in a year” because “things are changing too fast to keep up.” (One of the tech world’s favorite tricks is to appeal to a future in which the tech will work perfectly as a way to deflect any criticism of how it actually works in the real world today.)
“A.I.” is, like “Web 3,” an intentionally ill-defined term that can include countless things. Some of them will be useful to us. Some will be harmful. Many of them have already been a part of our lives for years. Since this is a newsletter about writing, I’m going to focus just on “A.I.” writing tools—the LLMs like ChatGPT—and their present or foreseeable capabilities. It’s possible that there is a future where chatbots are as sentient as Star Trek androids, or at least where they produce reliable information instead of constantly—yet confidently—producing errors. That future might never come. Or that future might be many years off, just as self-driving cars have consistently failed to deliver on their predicted takeover. Or that future might arrive through a completely different technology than LLMs.
For now, we know that chatbots are basically bullshit machines. They produce coherent short-form text, but they become incoherent the longer they go, and they produce errors at an alarmingly high rate at any length. Websites that have used chatbots to write articles have found them filled with errors. As a professor, I’ve caught students cheating with chatbots, and their essays were riddled with false information, including entirely invented quotations attributed to nonexistent books. (Others have found even more amusing errors.)
So, chatbots as they currently stand are unreliable. They’re no replacement for actual writers. Yet, at the same time, the plan for these chatbots has become clear: ripping everyone off.
To a large degree, the entire project is a ripoff, as Naomi Klein argues:
Here is one hypothesis: they are the powerful and enticing cover stories for what may turn out to be the largest and most consequential theft in human history. Because what we are witnessing is the wealthiest companies in history (Microsoft, Apple, Google, Meta, Amazon …) unilaterally seizing the sum total of human knowledge that exists in digital, scrapable form and walling it off inside proprietary products, many of which will take direct aim at the humans whose lifetime of labor trained the machines without giving permission or consent.
But let’s talk about the more specific ways companies plan to rip off writers with “A.I.” as the excuse.
A strong hint can be found in the current Writers Guild of America strike. A key sticking point is the use of A.I. writing. The writers aren’t asking Hollywood to ban the use of A.I.; rather, they are asking that A.I. writing not count as “literary material” or “source material.” This is technical Hollywood language tied to the realities of how contracts work and how much money writers get, and to the hard realities of capitalism and how corporations look to rip off writers.
The concern isn’t that ChatGPT can replace writers, but that studios will get chatbots to produce a crappy script and then hire a writer at a lower rate to fix it up into something usable. Fixing up a mess of ChatGPT vomit could take more work than writing a script from scratch, but it would cost the corporation less money.
I think this fear is completely justified and one that writers everywhere should take note of. Will websites and magazines start hiring writers or editors to “fix” chatbot outputs for less pay and no credit? Will book publishers decide they can feed an idea into ChatGPT and then hire a novelist as a ghostwriter to rewrite it?
Again, the chatbots don’t have to produce good or even usable writing for this to be a threat. The threat is A.I. being used as an excuse to rip off writers. If you hire a screenwriter to rewrite a chatbot script, you can pay them less. If you hire an author to rewrite a chatbot draft, you can avoid royalties. Etc.
It’s similar to the way a flood of chatbot short stories managed to shut down submissions at several science fiction magazines even though the stories weren’t any good.
Again, I think it’s worth stressing that A.I. doesn’t have to be any good to be weaponized. It just needs to be good enough to be used as an excuse. This is why I think Ted Chiang’s metaphor is the most apt one I’ve seen: “Will A.I. Become the New McKinsey?”
A former McKinsey employee has described the company as “capital’s willing executioners”: if you want something done but don’t want to get your hands dirty, McKinsey will do it for you. That escape from accountability is one of the most valuable services that management consultancies provide. Bosses have certain goals, but don’t want to be blamed for doing what’s necessary to achieve those goals; by hiring consultants, management can say that they were just following independent, expert advice. Even in its current rudimentary form, A.I. has become a way for a company to evade responsibility by saying that it’s just doing what “the algorithm” says, even though it was the company that commissioned the algorithm in the first place.
It’s easy to notice that the calls to replace people with A.I. are focused on artists and workers rather than management or owners. In a vacuum, this makes little sense. Surely the job of a studio head trying to analyze data and predict which films will deliver returns at the box office is easier to replace with a computer program than the job of a scriptwriter. You’d save a hell of a lot more money that way too. But technology is never neutral or logical. It’s a tool, usually in the hands of the powerful.
Of course, even as Hollywood studios, publishers, and magazines try to figure out how A.I. can rip off writers, there’s a bigger rip-off game in the works. And this one is being driven by the corporations who actually control these A.I. programs.
As Maggie Harrison at Futurism writes, “Google Unveils Plan to Demolish the Journalism Industry Using AI.” Corporations like Google and Microsoft are moving ahead with unreliable chatbots to “power” their search engines by, essentially, plagiarizing the web. The plan is that when you search for information online, Google’s AI will spit back an answer so you don’t ever have to read an article. Google is going to scan the work of writers and then serve up a chatbot remix of their work. It’s easy to imagine social media companies doing the same. Websites still depend on search and social traffic, and the plan is to kill that traffic while ripping off the content.
This is the game, at least at this stage of A.I.: everyone trying to rip off the actual creators without credit or remuneration.
A.I. will surely produce some tools that help people. I can even imagine some that will be useful for creative writing. But the safest bet is that, within our system, A.I. will be used like everything else: to increase inequality and make everything just a little bit worse. It’s pretty easy to see how it will exacerbate many of the biggest issues of our time. Scammers, spammers, and disinformation campaigns will be given a massive boost. Everything online will become increasingly unreliable and recycled.
No matter what happens, don’t fall for the argument that some utopian AI-powered future is just around the corner. Look at how this technology is being deployed right now, to whose benefit, and with whom in control…
If you like this newsletter, consider subscribing or checking out my recent science fiction novel The Body Scout that The New York Times called “Timeless and original…a wild ride, sad and funny, surreal and intelligent.”
Other works I’ve written or co-edited include Upright Beasts (my story collection), Tiny Nightmares (an anthology of horror fiction), and Tiny Crimes (an anthology of crime fiction).