
Analyzing the claim “The most fundamental right is the right to exist”

The idea that “The most fundamental right is the right to exist” seems to come from following the idea of expanding the moral circle: first, we step by step included more and more humans in the moral circle. At some point we got to including animals, and now we are thinking about the moral status of AIs. The next natural extension is an extension in time, so that we consider future people (and other sentient beings) as well.

I think John Rawls’ “original position” is a good way to ground many such considerations. Here’s a description by Scott Alexander:

“So again, the question is – what is the right prescriptive theory that doesn’t just explain moral behavior, but would let us feel dignified and non-idiotic if we followed it? My favorite heuristic for thinking about this is John Rawls’ ‘original position’ – if we were all pre-incarnation angelic intelligences, knowing we would go to Earth and become humans but ignorant of which human we would become, what deals would we strike with each other to make our time on Earth as pleasant as possible?”

This kind of thinking is somewhat harder when considering animals or AIs. On the other hand, when thinking about the future we are also thinking about future humans, so the heuristic should work quite well there. This view tells us, for example, that we should not rush to use up almost all of the available resources in the next billion years and condemn everyone else to scarcity for the trillions of years to come. Or at least that we shouldn’t condemn later people to suffering for only a marginal gain to ourselves.

However, this reasoning doesn’t work as well for the right to exist. The original position takes for granted that we will be one of the beings in the universe. The right to exist, on the other hand, seems to point to a probability of getting to exist. This leads to some difficulties.

So, say that one is designing the universe. They might or might not appear in that universe, depending on the specifications. One could think that by creating a universe with more sentient beings (or more humans), one would increase one’s chances of getting to appear in it. But what would something like a “99% probability” of existing mean here? To get an exact copy of yourself, the universe would need to contain a very large number of humans, and so creating more people (or having humans live that much longer)[1] in option B than in option A might only increase the chance from some astronomically small ε to roughly 2ε, where ε is on the order of the reciprocal of the number of possible brain configurations.[2]

In particular, if the most fundamental right is the right to exist, it seems to me that we are completely failing if we cannot create every possible sentient being. (If there’s a finite upper bound M on the number of sentient beings we ever get to create, then M divided by the countably infinite number of possible sentient beings is zero. Note that this is not a fundamental mathematical problem: if we solve the entropy problems, we could just create all sentient beings in order of simplicity. Continuing this indefinitely, every possible sentient being would get to exist at some point, so I’d say this would count as solving the problem. On the other hand, this solution highlights how (infinitely) far off we are if we only create more beings over a finite time span.)
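To make “in order of simplicity” concrete, here is a minimal sketch. It assumes, purely hypothetically, that every possible sentient being can be indexed by a finite bit string; enumerating the strings shortest-first then reaches any fixed being after finitely many steps.

```python
from itertools import count, product

def descriptions_by_simplicity():
    """Yield every finite bit string, shortest first (a stand-in for
    'order of simplicity'). Any fixed string appears after finitely
    many steps, so no description is left out forever."""
    for length in count(1):
        for bits in product("01", repeat=length):
            yield "".join(bits)

# Whatever finite description a being has -- "1011" is a made-up
# placeholder -- it is reached at some finite step.
for step, description in enumerate(descriptions_by_simplicity()):
    if description == "1011":
        print(f"reached at step {step}")  # prints: reached at step 25
        break
```

The point is only that the enumeration is exhaustive in the limit; under any finite cutoff, the fraction of descriptions ever reached is still zero.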
———————

You could try to salvage the situation by saying that you only want a person sufficiently similar to you to get to exist. This might at first look like a useless goal to maximize: increasing the count of subjective person-years by a factor of 10 would probably decrease the distance between your brain and the most similar brain state to come extremely little. On the other hand, much of a person’s personality can be expressed using a two-digit number of discrete parameters. Thus, creating ten times more people might get one more parameter right in the most similar person, which doesn’t sound that useless. So one can argue that the right to exist implies that we should have a very large number of people!

Before starting to optimize only the number of people, it is good to note that the original-position-type argument tells us more than this. Just as in the original application, the pre-incarnation angelic intelligences also want to consider the quality of life of the people in the universe to come. This matters, as there is likely a trade-off between the quality of life and the number of people in the (spatially and temporally) finite universe we are considering. A completely selfish person would probably choose 2x quality of life over increasing the number of correctly matched parameters from 49 to 50. If the latter option means increasing the number of people tenfold, then the sum of utilities across persons is five times lower in the option we are choosing: n people at quality 2q give a total of 2nq, while 10n people at quality q give 10nq (see the sketch after the footnotes). This is not only a reason not to create maximally many humans but also an argument against total utilitarianism.[3]

It is so annoying when your nice argument for rebutting a claim leads to a possible building block of a utilitarian ethical theory, complete with a hard-to-think-about trade-off parameter.

[1] Human brains change over time, and it might be that a person just wants their current brain state to appear at some point. Hence, having people live 10 times longer should be at least almost as useful as having 10 times more people.

[2] Here’s a way to get a very crude lower estimate for the number of possible human brain configurations capable of sentience: take an adult human’s brain, which has about 86 billion neurons. Choosing for each neuron x one neuron y out of the 1000 neurons closest to x, and adding or deleting the connection from x to y, results in 1000^(86·10^9) ≈ 10^(2.6·10^11) different configurations. If we assume that the initial brain is sentient, then probably quite many of these new configurations are sentient as well (one neuron has about 7000 connections to other neurons, so adding or removing one shouldn’t change much).

[3] Maybe one could even reject Parfit’s Repugnant Conclusion with this argument, but this seems quite dubious, as we would be reasoning “I don’t like the Repugnant Conclusion, so I don’t choose a universe where it would come true”. Of course, in the finite case it can easily happen that one cannot create enough agents to justify the quality of life dropping too low (when we measure our utility of the universe using the similarity-plus-average-utility method described above). Also, even if the option were to have so many people that one of them would get the same neural structure as the angelic designer down to the last memory (through some quantum effects, say), that person would pretty quickly notice the change in their surroundings and reason that they have no way to trust their previous memories, which probably wouldn’t lead to a very enjoyable life. So one cannot tempt the angel with this offer.
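Here is the trade-off above in numbers, as a minimal sketch. The quality values and head counts are illustrative assumptions, not figures from the argument itself:

```python
# Toy comparison of the two options. q is a baseline quality of life,
# n a baseline population; both values are arbitrary assumptions.
q, n = 1.0, 1_000_000

# Option A: the same number of people, each with twice the quality of life.
total_a = n * (2 * q)              # total utility 2nq

# Option B: ten times as many people at baseline quality, which (per the
# similarity argument) gets one more personality parameter right (49 -> 50).
total_b = (10 * n) * q             # total utility 10nq

print(total_b / total_a)           # 5.0
# Total utility favors B five-fold, yet each individual life in A is
# twice as good: the selfish (and average-utility) choice is A, while
# the total-utilitarian choice is B.
```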

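And a quick sanity check on the order of magnitude in footnote [2], using only the figures stated there (treating the per-neuron choices as independent is part of that construction, not an extra claim):

```python
import math

# Footnote [2]: ~86 billion neurons; for each, pick one of its 1000
# nearest neighbours and toggle that single connection. This gives
# 1000**(86e9) configurations; we compute only the base-10 logarithm.
neurons = 86e9
choices_per_neuron = 1000
log10_configs = neurons * math.log10(choices_per_neuron)  # 86e9 * 3
print(f"roughly 10^({log10_configs:.3g}) configurations")
# prints: roughly 10^(2.58e+11) configurations
```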
