Question: Why is the goal of AI safety not ‘moral machines’?
There is a basic question that has been confusing me for a while: Why are the goals of AI safety, like achieving safety from extinction risks or protecting human wellbeing, not more often framed as the goal of making moral machines? In other words, building AI that has a strong and reliable sense of morality and ethics.

There is definitely a lot of discussion around the edges of this question. For example, one recent post by @Richard_Ngo asked whether AI should be aligned to virtues, and a post from last year by @johnswentworth described thinking about what the alignment problem is. However, there's also a huge swath of writing where the concept of machine morality is never invoked or mentioned.

Part of the reason for my curiosity is that this framing seems like it could resolve a lot of confusion, and in many ways it is the most intuitive. For example, it is arguably the most important framing we apply, broadly, when trying to raise and educate safe and good humans. It would also provide a nice way of synthesizing many different core AI safety results, like 'emergent misalignment': we could simply say that an AI exhibiting emergent misalignment did not possess a strong moral compass, or a strong sense of morality, prior to its fine-tuning.

Is there a history with this framing where it was at some point made to seem outmoded or obsolete? I can imagine various obvious-ish objections, like the fact that morality is hard to define. (But again, the fact that this is the framing we run with for humans makes it seem pretty powerful and flexible.) Still, it's not clear to me why this framing has any more or fewer issues than any other. I'd greatly appreciate any input, or suggestions of where to look further.