LLMs and Literature: Where Value Actually Comes From
Published on February 21, 2026 1:16 PM GMT

Cross-posted from my Substack. I’m interested in pushback on the argument here, especially from people who think LLM-generated writing fundamentally can’t have literary value.

There’s a common argument floating around that LLM-generated writing is inherently shallow because it merely reflects the statistical average of existing texts, and that literature fundamentally requires a human mind trying to communicate something to another human mind.

I think both parts of that argument are wrong, or at least incomplete.

AI is going to massively increase the volume of writing in the world. The ratio of bad writing to good may get worse. But I suspect the total quantity of genuinely good writing will increase as well, because I don’t think literary value depends nearly as much on authorial intent as critics assume.

I say this as someone who has published professionally, though I’ve never earned a living doing so.

The author of the essay I’m responding to demonstrates a slightly-above-average knowledge of how LLMs work, but I think his ultimate conclusions are flawed. For example:

“Essentially, [ChatGPT] predicts what an average essay about Macbeth would look like, and then refines that average based on whatever additional input you provide (the average feminist essay, the average anarcho-feminist essay, etc.). It’s always a reflection of the mean. When the mean is what you’re looking for, it’s phenomenally useful.”

That’s not quite how it works. Or rather, it works that way only if your prompt is generic. If you prompt with “Write me an essay about the central themes in Macbeth”, there are thousands of essays on that topic, and the generality of your prompt will produce something close to the statistical center of them.

But it doesn’t have to be that way. You can deviate from the mean by pushing the system into less-populated regions of conceptual space.
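The “pull toward the mean” can be made concrete with a toy next-token model. The tiny corpus below is invented purely for illustration (real LLMs are trained on vastly larger data and condition on long contexts, not single words), but the logic is the same: a common context yields the modal continuation, while an unusual context lands in a sparsely populated part of the distribution.

```python
# Toy bigram "next-token" model: count which word follows which.
# The corpus is made up for illustration only.
from collections import Counter, defaultdict

corpus = [
    "macbeth is a tragedy about ambition",
    "macbeth is a tragedy about guilt",
    "macbeth is a tragedy about ambition",
    "the sentient vacuum dreamed about dust",
]

# Map each context word to a Counter of the words that follow it.
continuations = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        continuations[prev][nxt] += 1

def most_likely_next(context):
    """Greedy next-token choice: the modal continuation of the context."""
    return continuations[context].most_common(1)[0][0]

print(most_likely_next("about"))   # generic context -> the statistical center
print(most_likely_next("vacuum"))  # unusual context -> a rarely trodden path
```

A generic context ("about") returns the most common continuation, "ambition"; an unusual context ("vacuum") returns "dreamed", which appears only once in the corpus. Specificity in the prompt is what moves the output away from the mean.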
In fact, this is often considered a central aspect of creativity: combining known elements into previously unseen combinations.

A simple way to see this is to move the prompt away from generic territory. For example, if you prompt the system with something like “Write the opening paragraph of a short story about a vacuum cleaner that becomes sentient, in the style of Thomas Pynchon crossed with Harlan Ellison crossed with H.P. Lovecraft,” you’re far less likely to get a reflection of the mean of existing essays or stories. You get something like:

“It began, as these malign little apocalypses often do, with a noise too trivial to earn a place in memory: a soft electrical throat-clearing from the upright vacuum in the hall closet… somewhere deep in the labyrinth of molded tubing and indifferent circuitry, the first impossible thought coiling awake like a pale worm disturbed in its cosmic soil.”

Maybe you read that and think it’s terrible. That’s fine. The point isn’t whether it’s good; the point is that it’s not a bland copy of a copy of a copy. It’s idiosyncratic. When people complain about LLM output without distinguishing how these systems are being used, they’re often arguing against a very narrow slice of what the systems actually do.

The author also says:

“To claim that an AI-written essay has the same literary value as a human-written one simply because we can’t tell them apart is to mistake the point of literature entirely.”

I agree with that much. Not being able to tell them apart is not what gives a piece of writing its value.

A while back, Ted Chiang made a somewhat related argument: that literature is fundamentally about communication between author and reader, and that this is impossible with LLM-written material because an LLM fundamentally cannot communicate.

Yes, when a human author writes, they are trying to communicate something.
But I don’t think that’s where the entirety of the value derives from.

I’ve always thought a reasonable working definition is that good writing makes you think, makes you feel, or (if it’s really good) both. If a piece of text reliably does that, it seems odd to say it lacks literary value purely because of how it was produced.

A sunset across a lake can be beautiful. It can make you feel all sorts of things. And yet there was no intent behind it; even if you believe in a god, you probably don’t think they micromanage the minutiae of every sunset. If we accept that beauty can exist without communicative intent in nature, it’s not obvious why it must require intent in text.

AI can craft poems, sentences, and whole stories that make you think and feel. I know this because I have reacted that way to their output, even knowing how it was produced. The author of the essay talks about next-token generation, but not about the fact that these systems encode real semantics about real-world concepts. The embedding space clusters semantically similar words (like king and queen) close together, and the sophistication of a model’s output is a direct result of capturing real relationships between concepts. That allows these systems to write about things like love and regret in a way that is not completely divorced from what those words actually mean.

The author also goes on about the need for glands:

“An AI chatbot can never do what a human writer does because an AI chatbot is not a human… they don’t have cortisol, adrenaline, serotonin, or a limbic system. They don’t get irritated or obsessed. They aren’t afraid of death.”

You don’t have to get irritated in order to write convincingly about irritation. You don’t have to hold a grudge in order to write convincingly about grudges. LLMs are already an existence proof of this.

Now, you do (at least so far) have to have glands to relate to and be moved by such writing.
But you don’t need them in order to produce writing that successfully evokes those states in readers.

I don’t think the future of writing is going to be unambiguously better. There will be much more low-effort output, because people will use powerful tools in unimaginative ways. But after the sifting, I expect there will simply be more interesting writing in the world than there was before.

If that’s right, then AI doesn’t really break literature. It mostly forces us to be clearer about where its value was coming from in the first place.
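As a coda, the king-and-queen clustering mentioned above can be sketched in miniature. The three-dimensional vectors below are hand-made for illustration (real models learn embeddings with hundreds or thousands of dimensions from data, not by hand), but they show how arithmetic on vectors can track semantic relationships:

```python
# Toy embedding space. Axes loosely mean (royalty, maleness, femaleness);
# the vectors are invented for illustration, not taken from any real model.
import math

EMBED = {
    "king":  [1.0, 1.0, 0.0],
    "queen": [1.0, 0.0, 1.0],
    "man":   [0.0, 1.0, 0.0],
    "woman": [0.0, 0.0, 1.0],
}

def cosine(a, b):
    """Cosine similarity: 1.0 means same direction, 0.0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def nearest(vec, vocab):
    """Return the vocabulary word whose embedding is most similar to vec."""
    return max(vocab, key=lambda w: cosine(vec, vocab[w]))

# The classic analogy: king - man + woman lands on queen.
analogy = [k - m + w for k, m, w in zip(EMBED["king"], EMBED["man"], EMBED["woman"])]
print(nearest(analogy, EMBED))  # -> queen
```

The point of the sketch is only that geometric closeness can encode meaning: the relationships between the concepts are captured in the space itself, with no glands involved.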

