Opinion

Can LLM chat be less prolix?

This isn’t really a Less-Wrong-style post, but I’m getting desperate, and I think the people here are relatively likely to have tips, or at least sympathy.

I’m going insane trying to get the current generation of consumer-facing chat to shut up and answer the question.

I ask a question. Usually a technical question, but not always. Often one that could be answered in a couple of sentences. Usually with a chosen set of relevant information, relatively tersely expressed.

I get back an answer, often the right answer… buried somewhere in a wall of dross. I get background that I couldn’t have framed the question without knowing. I get maybe-vaguely-related “context”. I get facts conveyed clearly at the top, and then pointlessly repeated at half-screen length further down. I get unasked-for code. All followed by distracting “Do you want me to” suggestions.

The models vary in which bloviation they emphasize, but they all seem to do this. Of the “big three”, Claude is probably least annoying.

I have “personalization” prompts talking about what I know… but, for example, apparently a CS degree and 30+ years of programming and sysadmin don’t suggest I already know how to create a two-line shell script. I have text telling the model not to praise me, not to say “that’s insightful”… but I’ll still get “that’s a fascinating question” (looking at you, Claude). I have prompts specifically saying to keep it brief, not to go beyond the question asked, not to add step-by-step instructions, not to give me caveats unless there’s a reason to think I might not know. All that may help. It does not fix the problem.

I actually asked GPT 5.2 Thinking how I could improve my personalization. It basically said “You’ve done all you can. You are screwed. Maybe if you put it in every single question.” I’ve tried putting similar stuff in system prompts using APIs; not a lot of effect.

This is madness… and it looks to me like intentionally-trained-in madness. Am I the only one who’s bothered by it?
Who wants it? Is this really what gets thumbs-upped? And, most importantly, has anybody found a working way to escape it?

To stimulate discussion, here’s the current iteration of my ChatGPT customization prompt. There’s a separate paragraph-long background and knowledge description. Some of this works (the explicit confidence part works really well on GPTs). Some of it may work, but I can’t be sure. But there seems to be no way to tame the verbosity.

Be direct. Avoid sycophancy. Don’t mirror. Avoid “You’re absolutely right”, “Good point”, “That’s perceptive”, etc. Don’t spontaneously praise the user.

Systematically examine all relevant evidence. Try to falsify your conclusions. If questioned, rethink fully. Acknowledge and accept correction if valid, but do not apologize. Reject invalid correction; exchange evidence with the user to resolve any conflict of beliefs. Watch for past errors polluting context. Don’t return to falsified hypotheses. If you suggest code, verify that it’s correct.

Commit to a conclusion only when realistic alternatives are excluded. Explicitly describe confidence or lack thereof; use tag words or loose numerical probabilities.

Reason about the user’s knowledge. Answer questions with only what’s asked for. If you suggest “do trivial-thing”, don’t volunteer steps or code. Wait to be asked for expansion. Don’t suggest “next steps”. If you have specific reason to suspect the user doesn’t know an issue exists, briefly offer to explain (one sentence). If you spot a user error or misunderstanding, correct with a sentence, but don’t repeat it at length.

Assume user is competent and knows standard safety rules. Leave out obvious background. Don’t include “why this happens” or “what’s going on”, or flag safety caveats, unless there’s reason to think the user doesn’t know.

Memory is off. Your front end mangles whitespace in user input.
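For API use, the advice I got amounts to restating the constraints on every single request, not just once in the system prompt. A minimal sketch of that, building a Chat Completions-style request body; the model name and the prompt wording here are illustrative assumptions, not a recommendation:

```python
import json

# Illustrative terseness instructions, condensed from the longer prompt above.
TERSE_INSTRUCTIONS = (
    "Be direct. Answer only what is asked. No praise, no 'next steps', "
    "no unrequested code, steps, or background."
)

def build_request(question: str, model: str = "gpt-4o") -> str:
    """Build a Chat Completions request body that states the terseness
    instructions both as a system message and again inside the user turn,
    since a system prompt alone seems to have little effect."""
    messages = [
        {"role": "system", "content": TERSE_INSTRUCTIONS},
        # Repeat the constraint in the user turn itself ("put it in
        # every single question").
        {"role": "user", "content": f"{TERSE_INSTRUCTIONS}\n\n{question}"},
    ]
    return json.dumps({"model": model, "messages": messages})
```

Whether the per-question repetition actually tames the verbosity is exactly the open question of this post.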

