
What do we know about AI company employee giving?

Many Anthropic employees, especially, are sympathetic to AI safety and have (or will have) a lot of money. This is being talked about a lot (semi-)privately, but I haven't seen any public discussion of it. I find that striking. The topic seems worthy of extensive public discussion, and it seems to me that this community may be inheriting unhelpful cultural norms against publicly discussing how individuals make use of their money.

It also seems likely that many or most AI company employees who are passionate about reducing AI risk should rapidly give much or most of their money to effective projects that would otherwise not be adequately funded. There's a lot of potential for this to do tremendous good. There are of course things like political giving, but I think most of this potential would come from employees having different theories of change than institutional funders, moving faster, and having higher risk appetite. This is especially true given short timelines.

A few specific thoughts:

- I hear a lot of AI company employees give primarily to cause areas other than AI risk reduction. It seems like donations to AI risk reduction would be much more valuable.
- I'm concerned that many AI company employees may default to deferring to existing institutional funders, who make decisions slowly and have biases. It seems like giving faster, and giving to projects that institutional funders are unwilling to support, would make such donations much more valuable.
- AI company employees typically wait for their equity to become liquid, but they could instead take out loans against that equity to accelerate their giving. This could be very valuable given short timelines.
- I know that AI companies have policies including matching donations for approved organizations. It seems like influencing which organizations are eligible for matching could be very valuable, and employees should not restrict their giving to already-approved organizations.

To the extent things like the above are issues, coordination failures among company employees might be a large contributing factor. Groups of AI company employees could address this by delegating relevant work to individual members who volunteer or are selected randomly.

I'm fundraising for my nonprofit, Evitable, and might benefit from such things. But my purpose in writing this is to promote public discussion that I think can benefit others in situations similar to mine/Evitable's. I haven't put much effort into fundraising for Evitable yet, and expect I will learn a lot more about the situation as I do.

Much of the discussion here could apply equally well to individual HNWI giving more broadly.

