Opinion

LessWrong’s UX may not be living up to its ideals

The issue: I love LessWrong, and largely credit the information I’ve found on it with shaping the way I think and view the world. At the same time, I think it’s quite unfriendly to newcomers – I, for one, found it difficult to develop a coherent picture of the AI safety problem in all of its dimensions; it took several months of reading through old threads to understand why certain people held certain beliefs, for example.

I think this is the case because much of the discussion here is happening “on the cutting edge,” so to speak: most users are actively exploring different lines of thought, with new ideas built on complex world models, and are not necessarily revisiting or referencing the years of prior knowledge and belief that underpin their thinking.

I’m interested in seeing how this process could be made more efficient; a more streamlined “onboarding” process for newcomers could save a lot of valuable time and help people become impactful more quickly.

A potential solution is to develop a curated “101 curriculum” for AI safety (I’m sure many exist), but I think interfacing with the forum directly is better because:

A) It feels like you are part of an active discussion between experts, rather than a student in a class. This personally enhanced my sense of agency around AI safety and big problems more generally.

B) Reading especially prescient discussions from years before the AI boom greatly boosted my respect for particular thinkers and the broader community.

What features could improve the UX?
I have a few rough ideas:

“Current beliefs” pages: encourage and facilitate the creation of a “current beliefs” page for each user, where they can detail their present-day positions on certain subjects (AI timelines are a relatively simple example), recount their intellectual journey and past updates, reference particularly impactful posts and threads, etc. This would have been, and still would be, super useful for me – there are many thinkers I respect transitively, without necessarily understanding how they arrived at their current views. Perhaps this could be expanded into a community-wide analysis of viewpoints that helps inform strategic decision-making or signals clear consensus, with additional context and detail.

Better search: the current search function is very hard to use for finding specific posts – I often struggle to find articles I vaguely remember, even when I feel I have enough detail. I’m not sure whether this is because the information here is unusually conceptually dense, but I’m guessing there are improvements that can be made to the architecture.

Other organizational tools: content tags are nice, but they are a little too broad to be very helpful, in my opinion. Maybe a more sophisticated tagging scheme, or some way for users to individually tag and curate articles (and maybe comments) in a publicly visible way? It would be nice to accurately situate individual posts within the larger discussion they are part of, and I don’t think the current “mentioned in” section does a good enough job of that.

I think improving the interfaces we use for navigating information can be hugely impactful if implemented at scale, by improving coordination and group agency.

I am interested in hearing feedback!

Do any of these feature additions seem potentially good? Have any of them been attempted in the past? Do they exist elsewhere?

Have I missed any similar, previous discussions about this?

Are there people actively working on these issues?
I know Forethought has published some very useful adjacent content, but I’m not sure what it has led to.

Are there any prototypes, projects, or ideas you would be interested in seeing explored or built?

Thanks.
