What's the concrete plan to become an incredibly agentic person?

Published on February 5, 2026 4:27 PM GMT

LessWrong as a community idolizes agency. However, the content I have seen is full of generic exhortations to agency and short on concrete, implementable strategies for building it. The CFAR Handbook, and the Hammertime sequence based on an older version of it, are among the best resources on the subject I have found, but even they seem too focused on feelings and self-assessment and not enough on carrying out a concrete plan to massively improve your agency on a short timeframe. So I thought it would be good to put out a call for resources on how to build agency.

I expect this post to serve two purposes. First, I'll use it as a compilation of the best resources I have on building agency. Second, I'll use it to explain which ones have worked best for me, where they are lacking, and what the avenues for improving them might be.

Resources I have seen

Nicholas Kross posted a question titled "Ways to be more agenty?" which had some good answers, but I can't find the link, and besides, not much new was proposed in it. However, it did point me to some resources that I include below.

The anonymous post "Seven ways to be unstoppably agentic" is generally close to what I want here, but it offers very broad advice that isn't immediately actionable.
In addition, the author has since cautioned against trusting the post too much.

Neel Nanda wrote a post on becoming a person who actually does things. It focuses on forming a favorable self-identity, which I see as important for bringing your System 1 in line with your System 2, but the challenge remains: the advice is very broad and not especially actionable.

I also liked Dwarkesh Patel's post "Examples of barbell strategies" (original link broken, archive here), but that is more a collection of heterodox advice than a comprehensive formation program for agency.

Parts of the book Inadequate Equilibria also fall into this group, but they too fall into the "abstract praise of agency" trap. The Craft and the Community sequence is similar, and similarly disappointing in its lack of a clear roadmap.

An ideal resource

Ideally, the resource I'm imagining would be some sort of high-intensity program with a clear connection back to real-world performance on a range of tasks that require high agency. It seems to me that CFAR has come close, but historically had trouble with the real-world feedback issue. They've been moving more in that direction, but I'm unclear on exactly what their plans are.

For comparison, a program for forecasting would likely use calibration training or exercises similar to those on Clearer Thinking. I'm not sure what the analogous tests for agency would be, and they seem hard to build, because a key part of high agency is recognizing and taking advantage of opportunities that are non-obvious to other people.

As for examples of successful leadership training, something close to how military officers are trained would probably do well here.
However, I'm not sure that "agency" in a broad sense is trainable the way military leadership is, and again I'm unsure how to measure it.

Alternative plan: building systems that do not require agency

It is distinctly possible that agency cannot be trained and should be treated as a predetermined, rare trait. But if that is true, it is not a case for business as usual either. Instead, the focus should be on building structures within AI safety so that virtually no one has to be an agenty person to contribute.

An obvious way, though with correspondingly obvious advantages and disadvantages, would be expanding normal academic pipelines: set up undergraduate courses and majors in AI alignment, PhD programs and fellowships in AI alignment, faculty positions in AI alignment, and similar things, expanding the field while piggybacking on the administrative infrastructure that already exists in academia. The key objection I see to this model is that timelines may be too short for it to succeed in time. But there are similar options in that case. Here are a few:

- Found relatively normal start-ups doing genome editing, BCIs, or the like, to speed up human intelligence augmentation once an AI pause is achieved
- Set up activist groups with clear roles, expectations, and assignments, as well as layers of intermediation and management, to prevent any one person from needing too much agency or skill
- Set up intensive training programs for AI safety, similar to AGI Safety Fundamentals but more advanced, or MATS but non-selective and without requiring as much direct mentoring
- Promote earning to give, particularly aggressive hits-based earning to give in entrepreneurship, startup work, or finance, as a normal, even prestigious path within AI safety

My question

All right, I admit that so far I've fallen into the same trap I've accused existing work of. But now I'd like to go a bit past that and ask: what should I personally do?
I feel that I have to move fast on this, because there are sudden ebbs and flows in how capable I am of solving it, how much time I have, how worried I am about not solving it, and so on. And with world problems being what they are, this isn't something that can wait idly for someone else to solve. In particular, if AI timelines are potentially as short as some people say, then I would want an extremely high level of commitment to high-impact AI safety work: either massively increasing my agency and doing this on my own, or maintaining my current level of agency and volunteering for someone else to direct me in supporting their projects.