
Best Intro AI X-Risk Resource?

I’d like the best short article and video intro explainers, shooting for the 15-minute range.
At least one of the articles shouldn’t be on LessWrong, because some readers will be turned off by this forum.
It should be simple and not require prerequisite knowledge. My parents, and ideally my grandparents, should be able to understand it. Failing that, a normal college student at an average university should be able to, or at least a STEM major.

It should have links to more details, in case someone’s interested. There are smart 13-year-olds who will gladly read a million words and then have their lives changed, if there are enticing links – I was one of them, and many of you probably were too. The Sequences and HPMOR are good on the rationality front, but I’d like an AI X-risk intro with more focused links.
Lastly: I don’t mean to be presumptuous, but if I were running LessWrong I would pin the best couple of intros to the sidebar, or something similar. It needs to be really easy for someone who randomly followed a link to this “LessWrong” thing, and has no clue what the hell all this is, to click the “Why AI Will Kill Everyone” button and then read or watch what’s linked.

Sometimes there’s a fitting moment to link an outsider to a short, simple explanation of the basic arguments for AI x-risk. I don’t know what to link them to! AGI Ruin: A List of Lethalities is not a good intro. IABIED would be great, except it’s a whole book. Maybe AGI Safety From First Principles? I haven’t yet read through it, so I don’t know if it’d fit.

Video-wise, Rob Miles has an old Intro to AI Safety video. My memories of it suggest it’s not great as an intro, even though I found his other videos excellent for introducing specific topics (e.g. he’s how I originally heard about quantilizers and the inner-outer misalignment distinction). I’ll plan to review it later, along with the first-principles sequence.

AI2027 is mostly about forecasting. It’s also too detailed for the kind of intro I’m looking for, though it would be a great secondary resource for the subpoint of capabilities forecasting. I also happen to disagree with its conclusion, and worry that it’ll make even the parts I agree with lose credibility as time passes.

I could just link them to the Sequences, except they’re really long, many don’t like the writing style, and they’re not just about AI risk. Sure, obviously lots of it is relevant, but it’s still not a good short intro summary.

The people I want to give explainers to usually only have two hands, unlike this guy. Resources tailored for non-baselines are not the main focus of this post.

