When the World Ends, You Will Regret Not Filling Out That Contact Us Form
“I consider myself a middle of the road person on AI” – My local congresswoman, at the start of our 30-minute meeting

One weird trick to dramatically cut AI x-risk in 30 minutes.

So, you’re concerned about AI going well and benefiting humanity? Or, maybe more aptly, you’re concerned that by default AI won’t go well and will lead to extinction (or to a panoply of other bad outcomes). And being a rational, discerning LessWrong reader, you likely also want to have a real impact on this. The optimal choice may be working on AI alignment, but for those of us in the 99.99% who don’t have the technical skills or theoretical understanding to make an impact there – this is for you.

For a long time I have had enormous anxiety about AI x-risk. People in my life would tell me, “there’s nothing you can do about it, so why worry?” I would argue that they were half right: anxiety is pointless unless it leads to action. I am by default a spiteful person, I loathe being told I can’t have an impact, and, to be quite honest, I enjoy proving others wrong, so I set out to try to do something. After basically no consideration, I determined that the most likely (and most direct) path to impact would be discussing AI with policymakers, and so far I have had (what I would consider to be) enormous impact for relatively little time investment. I’ve had the opportunity to:

- Brief staff on the Senate Commerce Committee, who are currently writing an AI bill, on the risks posed by advanced AI
- Meet directly with my congressional representative (who started by saying she was a middle-of-the-road rep on AI, but ended by saying she’s rethinking that)
- Get introduced to a senior U.S. senator, and two other members of Congress, to brief them on AI policy

Here’s a list of tips I’ve found work extraordinarily well. (I will admit this is U.S.-centric, but I think things are similar in other democratic countries.)

Just do it

First and most importantly: do not overthink it.
Just go to your local representative’s website (state, local, or federal). Aim high (but don’t discount staffers; they have a huge influence on how an individual representative thinks about things). The number one reason people fail to have impact is that they fail to act. Don’t be the person who spends two months planning and never ends up acting.

Bonus tip: If you know people with political connections, this can also work. Wealthy people often donate to campaigns.

Bonus tip 2: If you struggle at the federal level, state representatives are often very easy to contact and also have huge influence. A state law, once passed, can also impose obligations on AI providers.

Be Credible

If you want to maximize impact, showing up to a meeting with a policymaker requires a certain degree of seriousness. Introduce yourself and any relevant credentials, dress professionally, and be the kind of person they are going to take seriously. Policymakers (like everyone else) use heuristics to judge who they are talking to. Wearing a fedora, sporting goofy facial hair, or showing up in a graphic t-shirt all damage your credibility. (I’m not saying these things are bad, just that they will harm your ability to achieve an outcome.) If you have any credentials related to AI, use them. If not, reference your direct experience (for example, if you work in tech and AI is, in your experience, reducing the need to hire, use that!). You need to credibly establish two things:

- You are a serious person.
- You have expertise or insight that the representative doesn’t.

Don’t Assume They Are AI Experts

Most people have little understanding of AI. Their experience with it may have begun and ended two years ago, when they tried out GPT-4 and it lied to them. If you jump straight into the orthogonality thesis, you’ve already lost.
I’ve found the simplest narrative that resonates (or at least gets me the most head nods) is:

- AI is progressing extraordinarily fast (with a slide showing the METR graph and an explanation).
- AI models also pose substantial immediate risks by uplifting bad actors in cyber, bio, and chemical domains.
- Labs are intentionally trying to build superintelligence (with quotes from lab CEOs, Bengio, Hinton, and Neil deGrasse Tyson).
- We do not know when this will happen, but if it did, it could end disastrously, as we would have built a second, more intelligent species.
- We should not wait to regulate. With airplanes and nuclear power, we waited until after there were disasters; we need to regulate before there is a disaster with AI.
- Specific policy proposals you recommend.

Use analogies, and find ways to explain complex topics very simply while also making sure they understand you are an expert (if you read LessWrong much, you are more of an expert than almost anyone they would talk to in a given month, so don’t sell yourself short). Don’t undersell your fears. Worried about 50% unemployment? Tell them that. Worried about extinction risks from AI? Tell them that (but maybe don’t jump straight there; build some credibility in the conversation first).

Be Concrete

Politicians are busy and have many meetings. Make your policy proposals concrete and actionable. Saying “we should regulate AI” is not very useful or helpful. The specific policy proposals you recommend should be written down and sent to the congressperson’s staff after the meeting. The more specific (and copy-pasteable into a law being drafted in a Word doc), the better.

Make Asks

You would be amazed how often just asking for something works. At the end of the call I had with my congressperson, I asked, “Hey, is there anyone else you would recommend I meet with?” She listed a senator and two other influential members of Congress.
I then asked, “Would it be possible to get an introduction for the meeting?”, and she happily instructed her legislative affairs staffer to make one. Just asking for things is a superpower few people realize they have.

Follow Up

By this point you should have the emails of congressional staffers. Follow up with a written document summarizing your meeting. Follow up again later with more information. Ask for more introductions; if you impressed them, they will likely help you. Ironically, I find that people who don’t work in tech often find AI risk far easier to comprehend than people who use it constantly and are deep in X. Simple, logical arguments from mutual interest work wonders.

“We don’t want to be to AI what dogs are to humans” – My congresswoman, at the end of the meeting


