What should I try to do this year?
Published on February 7, 2026 10:06 PM GMT

I find myself, for the first time in a while, with enough energy and stability to attempt nontrivial projects outside my dayjob. Regarding the next ~10 months, I've narrowed my options to two general approaches; as expected beneficiaries of both, I'd like the LessWrong hivemind's help choosing between them.

The first option is making more D&D.Sci scenarios: running them on a more consistent schedule, crossposting them to more platforms, and getting more adventurous about their form and content. The second is creating Epistemic Roguelikes, a new[1] genre of rationalist videogame about deducing and applying a newly-randomized ruleset each run.

Prima facie, prioritizing D&D.Sci this year (and leaving more speculative aspirations for next year, if ever) seems like the obvious move, since:

- D&D.Sci projects are shorter and more self-contained than game projects, and I have a better track record with them.
- At time of writing, D&D.Scis can still flummox conventionally-applied conventional AIs[2]. Open opportunities for robots, humans and centaurs to test their mettle would be a helpful (if infuriatingly low-N) sanity check on other metrics.
- This time next year, a data-centric challenge hard enough to mess with AIs but toyish enough to be fun for humans could be an oxymoron; if I want to use my backlog of scenario ideas, it might be now-or-never[3].
- Conversely, if AI capabilities do stay at about this level for a while, publicly and repeatedly demonstrating that I can make good AI-proof test tasks may end up being really good for my career.

However:

- Content creation is, in general, a long-tailed domain. I've been making D&D.Scis for half a decade now, and while it's been fun, it hasn't led to runaway success. Trying other things – on the off-chance they do lead to runaway success – seems warranted.
- It turns out I'm actually a pretty good writer. D&D.Sci leans on that skill only lightly; the game(s) I'm interested in would make much more intensive use of it.
- Three of the four points in favor center on AI; plans that hinge on short-term frontier AI progress are inherently much less stable and much more nerve-wracking.
- I really enjoyed inventing a genre, and I'd like to do that again.

Any thoughts would be appreciated.

[1] As far as I know; please prove me wrong!

[2] I tried a handful of them on chatgpt-thinking; tough-but-straightforward ones like the original were handled better than the average human player managed at the time, but easy-but-tricky ones like these two were fumbled.

[3] I'm pretty bearish on AI by LW standards, so I don't actually think this is likely, but the possibility perturbs me.

