Don’t Let LLMs Write For You
Content note: nothing in this piece is a prank or jumpscare where I smirkingly reveal you’ve been reading AI prose all along.

It’s easy to forget this in roarin’ 2026, but homo sapiens are the original vibers. Long before we can articulate formal heuristics, human beings can sniff out something sus. And to most human beings, AI prose is something sus.

If you use AI to write something, people will know. Not everyone, but the people paying attention, who aren’t newcomers or distracted or intoxicated. And most of those people will judge you.

The Reasons

People may just be squicked out by AI, or lossily compress AI with crypto and assume you’re a “tech bro,” or think only uncreative idiots use AI at all. These are bad objections, and I don’t endorse them. But when I catch a whiff of LLM smell, I stop reading. I stop reading much faster than if I saw typos, or broken English, or disliked ideology. There are two reasons.

First, human writing is evidence of human thinking. If you try writing something you don’t understand well, it becomes immediately apparent; you end up writing a mess, and it stays a mess until you sort out the underlying idea. So when I read clear prose, I assume I’m reading a refined thought. LLM prose violently breaks this correlation. If some guy tells Claude to “help put this idea he has into words,” then Claude will write clear prose even if the idea is vague and stupid. If the guy asks it to “help find citations” and there are no actual good ones, Claude will find random D-tier writeups and link to them authoritatively. Worst of all, if the guy asks Claude to “poke holes in my argument” when the argument is sufficiently muddy, Claude will just kind of make up random “issues” that the guy will hedge against (or, let’s be real, have Claude hedge against). So you end up with a writeup which cites sources, has plenty of caveats, and… has no actual core of considered thought.

If you read enough of these, you start alt-tabbing away real fast when you see structured lists with bold headers, or weird clipped parenthetical asides, or splashy contrastive disclaimers every 2-3 sentences, or any number of other ineffable signs subtler than an em dash. Is it possible that a 50% AI-generated hunk of text contains a pearl of careful thinking that the poor human author simply didn’t have the time or technical skill to express? I suppose. But it ain’t worth checking.

Second, and closely related, AI prose is a slog. There’s way too much framing, there are too many lists and each list has a few items that serve no purpose, the bold and italics feel desperate, and it’s all just so same-y. In your own conversation with an AI that you can fully steer, you can sometimes break out of this feeling for a little bit. But reading the output of someone else’s AI conversation is rarely any fun.

In short, if someone reads writing “by you” and it seems LLM-y, they will think both that:

1. You probably don’t have an actual good idea under the cruft.
2. Even if you do, the cruft is going to suck to get through.

The readers you most want are not going to stick around. In fact, the more you want a reader, the more likely they are to be turned off by this stuff. Even if they’re the biggest AI fan in the world.

Luddite! Moralizer!

Fine. I admit it. Just this week, I too experienced Temptation.

You may know me as an editor. In this capacity, I was revising an academic paper’s abstract in response to reviewer comments. But I had several papers to work on in the same project, and the owner of that project actively encouraged me to use AI to move fast enough to meet deadlines.[1]

So I gave Claude the paper and the reviewer comments, and asked it to come up with a new abstract that would satisfy the reviewers. The result looked good.

“It’s just an abstract,” I whispered to myself, face lit eerily in my laptop screen’s blue light. “Summary. Synthesis.” I rocked back and forth.

“I could… just…”

But no. Claude’s abstract was a useful reminder of which paper this was, and Claude helpfully catalogued the reviewer requests. Still, I rewrote the abstract myself, from scratch. In so doing, I noticed a lot of things I hadn’t seen when I was just skimming the AI output. Stuff it included that it didn’t really need to. Stuff it emphasized that wasn’t actually that important.

Did I run my abstract by Claude in turn? Yes! It had two nitpicks, one of which I agreed with and fixed in my own words.

Use these tools. You should totally ask Claude to find you sources for a claim, but then you should check those sources like you would check the sources of an eager day-one intern, and expect to throw most (or all) of them away. You should totally ask Claude to fact-check, but expect it to miss some factual errors and unhelpfully nitpick others. You can even ask Claude to “help clarify your thinking.” But if you’re really just clarifying it, then you won’t use its text. Because once your thinking’s clear, you can write the text yourself, and you should.

[1] To be clear, editing I do as part of the LessWrong Feedback Service uses my own human judgment, and I don’t use LLMs to make edits.

