Opinion

Morality without Consciousness

A recent post by @Linch, “The Fourth World,” gets at an important implication of consciousness—namely, that we ought to suspect there are further aspects of reality beyond what we can observe today—but I’m not sure it arrives there the right way, or that it derives the correct conclusions.[1]

The author makes two assumptions in his post which ought not to come for free. Firstly, the author assumes that we could not explain consciousness through physicalism. This is already a strong claim, one with which about half of surveyed philosophers disagree.[2] Secondly, the author assumes that consciousness is the basis for morality, or that our moral obligations are owed primarily to conscious beings.

I think this second assumption is particularly common. At a recent meetup, I heard a participant complain that it is really hard to talk about consciousness, because we are always tying consciousness up with morality. I suspect this happens because we sneak non-physicalist assumptions about consciousness into discussions about morality.

So, I would like to defend the author’s claim that consciousness ought to cause us to suppose further unobserved aspects of reality (which could be very different from the reality we know), while also defending a purely physicalist explanation for consciousness, and decoupling consciousness from morality. I will respond to specific arguments in “The Fourth World” throughout, though my goal is to articulate my own, positive argument rather than to critique the post.

Throughout, whenever I refer to “consciousness” without a qualifier, I mean phenomenal consciousness, the “hard problem” of consciousness, or qualia. I’ll use words like “cognition” or “neurophysiology” to refer to non-phenomenal processes, which we typically understand as explainable by modern neuroscience, even if imperfectly. The distinction is important: without it, we backdoor capabilities like judgment and agency into consciousness without sufficient justification.
I believe this is the crux of why so many of us intuit consciousness as a prerequisite for morality, when I think that is very possibly not the case.

Dualism as Semantics

Check out the basic case for physicalism if you haven’t already. I may repeat some arguments for physicalism throughout this post, but I won’t make a full, bottom-up argument.

The author introduces us to his fourth world by supposing three distinct domains of reality: the physical world, the mathematical world, and consciousness. The author later claims that a non-conscious, robot civilization would never suspect consciousness exists.

I surmise the author is pointing at some kind of dualism, with mathematics and logic as a third, metaphysical entity. The difficulty with dualist claims is that they either rely on a “mystical,” non-physical substance, or they rely on semantic qualifications of the physical world.

For example, let us suppose that we learn consciousness is actually the emergence of a soul, dipping its head in from some heavenly plane of existence. We learn that all our physical laws are arbitrary and completely determined by the whim of God. In this extreme scenario, do we then accept dualism?

Let’s reframe the question. Suppose instead that we learn consciousness is the emergence of a higher-dimensional reality within our own, and that this higher-dimensional reality determines all the physical laws of our known universe. In this scenario, do we accept dualism, or merely expand our definition of physics to account for our new knowledge of reality?

My point is that nearly all explanations for consciousness ought to collapse towards physicalism, because any fundamental discoveries required to explain consciousness also compel us to redefine physics accordingly. I will retain the term “mysticism” for any explanations that depart so far from a conventional understanding of reality as to render an expansion of the term “physicalism” meaningless, e.g. if consciousness is actually the soul.
So, we could concede dualism in the first example, but we ought to stick to physicalism in the second.

If we accept a physicalist explanation of phenomena as at least possible, then we should not assume a robot civilization could never discover consciousness. We have no way to observe or infer consciousness today, but if consciousness exists in physical reality, then discovering it at least seems possible. Without a scientific explanation for the hard problem of consciousness, it is premature to rule this out.

Phenomena are Intrinsic

Aside from a physicalist or mystical explanation for consciousness, we could also suppose that phenomena do not exist at all, or that consciousness is an illusion.[3] Illusionism is a highly counterintuitive stance, given we all have the vivid impression of our own consciousness (how do you convince someone that they do not actually perceive the “redness of red”?). In some sense, our subjective experience is the only thing we know (cf. solipsism), so it seems really weird to claim it does not actually exist.

Our experience of existence is phenomenal, so I don’t see a way to refute the hard problem without asserting we are all already zombies (I experience, yet sadly I do not exist). This is contrary to my own axiomatic belief that I experience existence, so I don’t see any way to proceed further with a strong illusionist argument. If you choose a different axiom, well, ok.

But illusionism is right to question what and when something is actually phenomenal. In other words, I find it weird to refute qualia, but legitimate to debate the scope of qualia. When referring to consciousness, we are often actually referring to the reactive and evaluative machinery behind consciousness, e.g. cognition, and not to phenomena themselves.

Consider our phenomenal experience of pain. We generally dislike pain, and indeed many ethical frameworks take for granted that we ought to minimize suffering. But why?
When we say pain is bad, are we evaluating a distinct “quale of pain” as bad, or are we reacting negatively to the neurophysiological process of pain?

I’ll refer to this as the difference between an extrinsic and an intrinsic interpretation of consciousness. If we take a physicalist position on the hard problem of consciousness, then we can suppose phenomena are either intrinsic to neurophysiological processes in a way that we do not yet understand, or extrinsic to those processes. In other words, is there a property of neurophysiology that we cannot yet quantify (like weight), or are there entities extrinsic to our neurophysiological makeup that we have not yet discovered (like a particle)?

My reading of the dominant theories of consciousness (e.g., IIT, Global Workspace Theory) is that they lean toward an intrinsic view. Essentially, there is something about our neurophysiology that just does induce phenomena. The two are inseparable; phenomena just are a property of some neurophysiological processes. We don’t yet know what that something is, though it might involve weird fundamental physics. In this view, there is no sense in talking about a phenomenon of pain separate from the underlying neurophysiological activity, in the same way it does not make sense to discuss weight separate from gravity or mass.

For an extrinsic example, imagine that there was an undetected particle of consciousness, the c-particle, which somehow interacted with neurophysiological processes, or was produced by them. Phenomena are actually composed of c-particles. We might further suppose that c-particles come in many flavors: there is a c-particle of pain, a c-particle of happiness, etc. Or perhaps they’re just different arrangements of c-particles, who knows! Mental states, as we understand them, are actually determined by c-particles. Unlike with an intrinsic view of phenomena, it actually is coherent to discuss pain separate from the underlying neurophysiology.
We just need the right c-particles.

I have admittedly chosen a silly example for an extrinsic view of consciousness; I imagine few would argue for an actual “c-particle.” But any extrinsic view will require a mysterious “something else” to explain consciousness, which must then causally interact with our neurophysiology, unless we are willing to accept consciousness as a mere epiphenomenal side effect.

It would be wrong to claim either of these theories is “correct.” As physicalists, we have to accept this as a question for science, and in reality, consciousness may not fit so neatly into my intrinsic/extrinsic divide. But I still find it much more likely that consciousness is somehow intrinsic to neurophysiology. An intrinsic theory requires less deviation from contemporary neuroscience, as we don’t have to posit that neuroscience is somehow insufficient to explain mental processes (no c-particles intervene), and we can neatly sidestep some of the problems of epiphenomenalism (certain mental states are intrinsically experiential).

So, if we suppose an intrinsic, physicalist theory of consciousness, then our experience of pain is by definition inseparable from the pain reaction itself. It is, in fact, nonsensical to discuss an experience of pain distinct from the neurophysiological activity associated with pain. That is pain; the thing itself is the neurophysiology!

I suspect illusionism is right that there is nothing like a c-particle. There is no independent experience or quale of a good cup of coffee—there is only the experience of our own neurophysiology tasting the coffee and evaluating it as good. Consciousness itself is not responsible for judgments and evaluations—that work is all done by the mechanics of neurophysiology. Consciousness does not “intervene” to make its own decisions.[4]

Generalizing Ethics with Preferences

I’d now like to consider the ethics downstream of a physicalist, intrinsic understanding of consciousness.
Namely, if consciousness is intrinsic to neurophysiological activity, then why should ethics fix itself on the conscious aspects of that activity? Is this not an arbitrary distinction? Should we not rather respect the preferences that activity represents?

I suspect that phenomena, as intrinsic properties of neurophysiology, are undifferentiated. A chair and an apple have different weights, but we don’t distinguish the weight of furniture from the weight of fruit. Similarly, our experiences of pain and happiness may feel different, and may correspond to different biological states, but both are ultimately “just” phenomena, distinguished thanks to our mental capacities. I would guess that the same is true even of sensory qualia, like the “redness of red.”

If all this seems highly speculative, then let me simply state that no phenomenon seems to me inherently good or bad. Experiencing happiness is distinct from experiencing happiness as good. Now, there might be something about, e.g., dopamine that causes me to want more happiness. But that is a consequence of my neurophysiology; it is one of the reasons I experience happiness as good. It is not a quality of my experience of happiness (the phenomenon of happiness) itself.

Perhaps one could object that there is no experience of happiness without wanting happiness to continue, or no experience of suffering without wanting suffering to stop; that the mental state cannot be differentiated from its reaction. But is this not just another reason to focus on the reaction, or the expressed preference, rather than the experience itself?

The distinction I’m trying to make may be irrelevant when discussing humans, who have well-understood preferences for different mental states. But we have to be very precise if we would like to generalize moral principles to non-human intelligence.

Let’s suppose we are visited by a highly developed species of alien goo. We suppose the aliens are somehow biological, but it’s unclear!
The aliens are very different from us. They have the strange habit of climbing to the tallest point in any room they enter. If for any reason they cannot reach the tallest point, as when they are restrained, the alien goo begins to vibrate. When released, the alien quickly proceeds to climb to the tallest point in the room.

Now, imagine the aliens are observing your living room. The aliens are very bad at detecting electromagnetic radiation and mostly observe the world through touch and vibration. They are very interested in your potted plant, which slowly adjusts its leaves over the course of the day. The aliens can’t tell that the plant is adjusting to the sunlight; they just observe that the plant always closes up its leaves for half the day.

Question: Without any further information, do we have a moral obligation to allow the alien to climb to the highest point in our room? Does the alien have a moral obligation to allow the plant to close its leaves?

Ultimately, the answer will depend on your moral framework. But if you agree that we have a moral obligation not to cause an alien pain, then I think you should say yes: we are both morally obliged to allow the observed entity to act according to its inferred preference.

In both scenarios, the observed entity exhibits a clear preference. The alien always wants to climb as high as possible. The plant always wants to open and close its leaves. We don’t have any clear idea of whether the entities in question are conscious. But why would this matter? Consciousness is intrinsic to some unknown subset of entities in the universe. Neither we nor the alien can detect whether an entity is conscious. But we do know that consciousness has always corresponded with the expression of preferences.

You might be thinking that the real difference is that the plant’s response to light is entirely automatic, while we are deeply cognitive agents. You can’t compare our preferences to those of a plant!
But I think this overlooks how dumbly reactive a lot of preferences are, even if they are unclearly expressed. If you poke me with a needle, I will want you to stop, even if I keep a straight face. I don’t have very much control over my pain response. It just happens!

We might qualify that a preference should represent an entity’s interests counter to the second law of thermodynamics. A rolling stone does not represent a preference. However, if it began to roll uphill, it would!

I anticipate several counterarguments to a naïve definition of preference, with potentially absurd conclusions. For example, suppose the alien also noticed your rotating fan. Wouldn’t the alien have to suppose a moral obligation to the fan as well? And how do we avoid moral equivalences between turning off a fan, trimming a plant, and killing a person?

These are legitimate lines of critique that a moral framework using preference as the qualifier for moral obligation would have to answer. But it’s easy to imagine different mechanisms to do this, such as defining some heuristic for the “strength” of a preference, similar to the way utilitarianism thinks about utility, or looking at the reversibility of decisions. I’m also not trying to argue for some sort of cosmic libertarianism. The practice of ethics is inevitably messy, with lots of confusing gray areas. A good follow-up would be to evaluate a full moral system based on preference, perhaps stress-testing a few repurposed moral imperatives in a system that makes no assumptions about consciousness. However, I am optimistic that in every instance where one might be tempted to evaluate ethics on the basis of consciousness, one could instead insert preference.

An Unobserved Reality

I accept that you might find my argument for preferences insufficient, either because an ethical system for preference is not yet defined, or because you believe that consciousness itself is very special.
Though a physicalist should wonder why consciousness would be restricted to the mind, that is certainly the popular consensus.

What I question is how one can be confident that consciousness is unique. We already know there is at least one aspect of reality that is utterly unobservable to an outsider: we would have no concept of phenomena were it not for our own minds. Is this not the best possible evidence that there might be other aspects of reality we cannot observe?

This is what “The Fourth World” gets right. Though why limit ourselves to a “fourth” world? For all we know, there might be infinitely many aspects of reality that we cannot observe today, or which may be fundamentally unobservable. Like the author, I find this incredibly exciting.

But these tremendous unknowns about the nature of reality should give us pause before rating consciousness as unique and morally important. Let’s consider a speculative example, meant to demonstrate the logical possibility of alternatives to consciousness in a physicalist universe.

Returning to our earlier alien scenario, let’s pretend we have uncovered the physical substrate of consciousness. In fact, it’s now relatively easy to purchase qualia counters, which, functioning similarly to a Geiger counter, allow us to estimate the “amount” of phenomena in an entity. Holding our qualia counter up to the alien goo, we fail to detect any phenomenal consciousness!

However, the aliens have their own device. The alien physiology somehow fails to produce consciousness, but it does involve some other, mysterious aspect of reality… say, “monads.” And lowering a “monad” counter down to us, the aliens fail to detect a single one.

If we accept there could be further aspects of reality that we cannot observe, not unlike consciousness, then we should not take for granted that consciousness is privileged among intelligent beings.
I understand this is very weird to consider, but I do not think it is any weirder than the hard problem of consciousness already is. Our example is similar to Nagel’s famous “What it’s like to be a …” thought experiment, with the additional caveat that we swap out consciousness itself for some new, unknown aspect of reality. Rather than ask what it is like to experience the qualia of a bat with a bat’s cognition, we ask what it is like to have the “monads” of an alien goo with an alien goo’s cognition (where “monads” are strictly different from qualia).

All the standard caveats that conceivability does not mean reality apply. But given the existing evidence for alternative paths to cognition beyond terrestrial neurophysiology (e.g. machine intelligence), we ought to consider seriously whether there might be alternatives to consciousness: aspects of reality like phenomena, but substantively different.

Recap

To summarize my argument:

1. Physicalism is a popular and reasonable explanation of consciousness.
2. A physicalist’s best guess should be that consciousness is somehow intrinsic to neurophysiology; otherwise we have to draw strange ontological and scientific conclusions (like c-particles).
3. Once we assume consciousness is physically intrinsic to the neurophysiological process itself, it is no longer necessary to assume moral obligations to one aspect of that process.
4. We should instead assess moral obligations to the preferences exhibited by neurophysiology. Presumably, we also owe some obligations to any agent which exhibits preferences, but this is the responsibility of a moral framework to judge.
5. One can try to defend an obligation to consciousness by asserting it is special. However, if we cannot observe consciousness, we ought to suppose there could be further aspects of reality we cannot observe, similar or dissimilar to consciousness in ways we do not yet understand.
6. It is conceptually possible to imagine beings very different from ourselves whose cognition involves these unknown aspects of reality. So, it would be premature to determine that consciousness demands unique obligations.

Obviously, these final conclusions are several steps removed from ground truth. However, I have tried to surface the implications of what I assess to be the most likely explanations for phenomenal consciousness. For better or worse, a lot of beliefs are downstream of our explanations for consciousness. While the hard problem remains unsolved, it would be good to continue exploring the ethical implications of different theories.

[1] In the author’s defense, in a footnote they clarify that they are “not that interested in the difference between whether these worlds are truly different or just conceptually interesting ways to talk about things.” But the post’s arguments rely heavily on the reality of this difference, so I will address it anyways.

[2] In the 2020 PhilPapers Survey, 51.93% of respondents accepted or leaned toward physicalism about the mind, while 32.08% favored non-physicalism. See Bourget, D. & Chalmers, D. J., “Philosophers on Philosophy: The 2020 PhilPapers Survey,” https://survey2020.philpeople.org.

[3] A few days before I finalized this post, Tenobrus wrote an article on X that might serve as a popular example of this view. Daniel Dennett’s “Quining Qualia” is a foundational text here.

[4] I suspect one reason some people cling to an extrinsic view of consciousness is that they want to retain a black box for “free will” to drive decisions.

[5] My personal opinion is that illusionists, like Daniel Dennett, do sometimes rely on tricks of semantic distinction to make claims against qualia, the same way Chalmers does to argue for non-physicalist explanations for consciousness.

​A recent post by @Linch, “The Fourth World,” gets at an important implication of consciousness— namely, that we ought to suspect further aspects of reality than we can today observe—but I’m not sure it arrives there the right way, or that it derives the correct conclusions.[1]The author makes two assumptions in his post which ought not to come for free. Firstly, the author assumes that we could not explain consciousness through physicalism. This is already a strong claim, one with which about half of surveyed philosophers disagree.[2] Secondly, the author assumes that consciousness is the basis for morality, or that our moral obligations are primarily due toward conscious beings.I think this second assumption is particularly common. At a recent meetup, I heard a participant complain that it is really hard to talk about consciousness, because we are always tying up consciousness with morality. I suspect this happens because we sneak in non-physicalist assumptions about consciousness into discussions about morality.So, I would like to defend the author’s claim that consciousness ought to cause us to suppose further unobserved aspects of reality (which could be very different from the reality we know), while also defending a purely physicalist explanation for consciousness, and decoupling consciousness from morality. I will continue to respond to specific arguments in “The Fourth World,” though my goal is to articulate my own, positive argument, rather than critique the post.Throughout, whenever I refer to “consciousness” without a qualifier, I am referring to phenomenal consciousness, or the “hard problem” of consciousness, or qualia. I’ll use words like “cognition” or “neurophysiology” to refer to non-phenomenal processes, which we typically understand as explainable by modern neuroscience, even if imperfectly. The distinction is important, as we otherwise backdoor capabilities like judgment and agency into consciousness without sufficient justification. 
I believe this is the crux of why so many of us intuit consciousness as a prerequisite for morality, when I think it is very possibly not the case.Dualism as SemanticsCheck out the basic case for physicalism if you haven’t already. I might repeat some arguments for physicalism through this post, but I won’t make a full, bottom-up argument.The author introduces us to his fourth world by supposing three distinct domains to reality: the physical world, the mathematical world, and consciousness. The author later claims that a non-conscious, robot civilization would never suspect consciousness exists.I surmise the author is pointing at some kind of dualism, with mathematics and logic being a third, metaphysical entity. The difficulty with dualist claims is that they either rely on a “mystical,” non-physical substance, or they rely on semantic qualifications for the physical world.For example, let us suppose that we learn consciousness is actually the emergence of a soul, dipping its head in from some heavenly plane of existence. We learn that all our physical laws are arbitrary and completely determined by the whim of God. In this extreme scenario, do we then accept dualism?Let’s reframe the question. Let us suppose instead that we learn consciousness is the emergence of higher dimensional reality within our own. That higher dimensional reality determines all the physical laws of our known universe. In this scenario, do we accept dualism, or merely expand our definition of physics to account for our new knowledge of reality?My point is that nearly all explanations for consciousness ought to collapse towards physicalism, because any fundamental discoveries required to explain consciousness also compel us to redefine physics accordingly. I will retain the term “mysticism” for any explanations that depart so far from a conventional understanding of reality as to render an expansion of the term “physicalism” meaningless, i.e. if consciousness is actually the soul. 
So, we could concede dualism for the first example, but we ought to stick to physicalism for the second.If we accept a physicalist explanation of phenomena as at least possible, then we should not assume a robot civilization could never discover consciousness. We have no way to observe or infer consciousness today, but if consciousness exists in physical reality, then it at least seems it might be possible. Without a scientific explanation for the hard problem of consciousness, it is premature to rule this out.Phenomena are IntrinsicAside from a physicalist or mystic explanation for consciousness, we could also suppose that phenomena do not exist at all, or that consciousness is an illusion.[3] Illusionism is a highly counterintuitive stance, given we all have the vivid impression of our own consciousness (how do you convince someone that they do not actually perceive the “redness of red”?). In some sense, our subjective experience is the only thing we know (cf. solipsism), so it seems really weird to claim it does not actually exist.Our experience of existence is phenomenal, so I don’t see a way to refute the hard problem without asserting we are all already zombies (I experience, yet sadly I do not exist). This is contrary to my own axiomatic belief that I experience existence, so I don’t see any way to proceed further with a strong illusionist argument. If you choose a different axiom, well, ok.But illusionism is right to question what and when something is actually phenomenal. In other words, I find it weird to refute qualia, but legitimate to debate the scope of qualia. Because when referring to consciousness, we are often actually referring to the reactive and evaluative machinery behind consciousness, e.g. cognition, and not to phenomena themselves.Consider our phenomenal experience of pain. We generally dislike pain, and indeed many ethical frameworks take for granted that we ought to minimize suffering. But why? 
When we say pain is bad, are we evaluating a distinct “quale of pain” as bad, or are we reacting negatively to the neurophysiological process of pain?I’ll refer to this as the difference between an extrinsic and intrinsic interpretation of consciousness. If we take a physicalist position of the hard problem of consciousness, then we can suppose phenomena are either intrinsic in neurophysiological processes in a way that we do not yet understand, or that phenomena are extrinsic to neurophysiological processes. In other words, is there a property of neurophysiology that we cannot yet quantify (like weight) or are there entities extrinsic to our neurophysiological makeup that we have not yet discovered (like a particle).My reading of the dominant theories of consciousness (e.g., IIT, Global Workspace Theory) is that they lean toward an intrinsic view. Essentially, there is something about our neurophysiology that just does induce phenomena. The two are inseparable, phenomena just are a property of some neurophysiological processes. We don’t know yet what something is, though it might involve weird fundamental physics. In this view, there is no sense in talking about a phenomenon of pain separate from the underlying neurophysiological activity, in the same way it does not make sense to discuss weight separate from gravity or mass.For an extrinsic example, imagine that there was an undetected particle of consciousness, the c-particle, which somehow interacted with neurophysiological processes, or were produced by them. Phenomena are actually composed of c-particles. We might further suppose that c-particles come in many flavors. There is a c-particle of pain, a c-particle of happiness, etc. Or perhaps they’re just different arrangements of c-particles, who knows! Mental states, as we understand them, are actually determined by c-particles. Unlike with an intrinsic view of phenomena, it actually is coherent to discuss pain separate from the underlying neurophysiology. 
We just need the right c-particles.I have admittedly chosen a silly example for an extrinsic view of consciousness; I imagine few would argue for an actual “c-particle.” But any extrinsic view will require a mysterious “something else” to explain consciousness, which must then causally interact with our neurophysiology, unless we are willing to accept consciousness as a mere epiphenomenal side-effect.It would be wrong to claim either of these theories are “correct.” As physicalists, we have to accept this as a question for science, and in reality, consciousness may not fit so neatly into my intrinsic/extrinsic divide. But I still find it much more likely that consciousness is somehow intrinsic to neurophysiology. An intrinsic theory requires less deviation from contemporary neuroscience, as we don’t have to posit that neuroscience is somehow insufficient to explain mental processes (no c-particles intervene), and we can neatly sidestep some of the problems of epiphenomenalism (certain mental states are intrinsically experiential).So, if we suppose an intrinsic, physicalist theory of consciousness, then our experience of pain is by definition inseparable from the pain reaction itself. It is, in fact, nonsensical to discuss an experience of pain distinct from the neurophysiological activity associated with pain. That is pain; the thing itself is the neurophysiology!I suspect illusionism is right that there is nothing like a c-particle. There is no independent experience or qualia of a good cup of coffee—there is only the experience of our own neurophysiology tasting the coffee and evaluating it as good. Consciousness itself is not responsible for judgment and evaluations—that work is all done by the mechanics of neurophysiology. Consciousness does not “intervene” to make its own decisions.[4]Generalizing Ethics with PreferencesI’d now like to consider the ethics downstream of a physicalist, intrinsic understanding of consciousness. 
Namely, if consciousness is intrinsic to neurophysiological activity, then why should ethics fix itself on the conscious aspects of that activity? Is this not an arbitrary distinction? Should we not rather respect the preferences those activities represent?I suspect that phenomena, as intrinsic properties of neurophysiology, are undifferentiated. A chair and an apple have different weights, but we don’t distinguish the weight of furniture from the weight of fruit. Similarly, our experience of pain and happiness may feel different, and may correspond to different biological states, but both experiences are ultimately “just” phenomena, distinguished thanks to our mental capacities. I would guess that the same is true even of sensory qualia, like the “redness of red.”If all this seems highly speculative, then let me simply state that no phenomenon seems to me inherently good or bad. Experiencing happiness is distinct from experiencing happiness as good. Now, there might be something about e.g. dopamine that causes me to want more happiness. But that is a consequence of my neurophysiology, it is one of the reasons I experience happiness as good. It’s not a quality of my experience of happiness (the phenomenon of happiness) itself.Perhaps one could object that there is no experience of happiness without wanting happiness to continue, or no experience of suffering without wanting suffering to stop; that the mental state cannot be differentiated from its reaction. But is this not just another reason to focus on the reaction, or the expressed preference, rather than the experience itself?The distinction I’m trying to make may be irrelevant when discussing humans, who have well understood preferences for different mental states. But we have to be very precise if we would like to generalize moral principles to non-human intelligence.Let’s suppose we are visited by a highly developed species of alien goo. We suppose the aliens are somehow biological, but it’s unclear! 
The aliens are very different from us. They have the strange habit of climbing to the tallest point in any room they enter. If for any reason the aliens cannot reach the tallest point, as when they are restrained, the alien goo begins to vibrate. When released, the alien quickly proceeds to climb to the tallest point in the room.

Now, imagine the aliens are observing your living room. The aliens are very bad at detecting electromagnetic radiation and mostly observe the world through touch and vibration. They are very interested in your potted plant, which slowly adjusts its leaves over the course of the day. The aliens can’t tell that the plant is adjusting to the sunlight; they just observe that the plant always closes up its leaves for half the day.

Question: Without any further information, do we have a moral obligation to allow the alien to climb to the highest point in our room? Does the alien have a moral obligation to allow the plant to close its leaves?

Ultimately the answer will depend on your moral framework. But if you would agree that we have a moral obligation not to cause an alien pain, then I think you should say yes: we are both morally obliged to allow the observed entity to act according to its inferred preference.

In both scenarios, the observed entity exhibits a clear preference. The alien always wants to climb as high as possible. The plant always wants to open and close its leaves. We don’t have any clear idea of whether the entities in question are conscious. But why would this matter? Consciousness is intrinsic to some unknown subset of entities in the universe. Neither we nor the alien can detect whether an entity is conscious. But we do know that consciousness has always corresponded with the expression of preferences.

You might be thinking that the real difference is that the plant’s response to light is entirely automatic, while we are deeply cognitive agents. You can’t compare our preferences to those of a plant!
But I think this overlooks how dumbly reactive a lot of preferences are, even if they are unclearly expressed. If you poke me with a needle, I will want you to stop, even if I keep a straight face. I don’t have very much control over my pain response. It just happens!

We might qualify that a preference should represent an entity’s interests counter to the second law of thermodynamics. A rolling stone does not represent a preference. However, if it began to roll uphill, it would!

I anticipate several counterarguments to a naïve definition of preference, with potentially absurd conclusions. For example, suppose the alien also noticed your rotating fan. Wouldn’t the alien have to suppose a moral obligation to the fan as well? And how do we avoid moral equivalences between turning off a fan, trimming a plant, and killing a person?

These are legitimate lines of critique that any moral framework using preference as the qualifier for moral obligation would have to answer. But it’s easy to imagine different mechanisms to do this, such as defining some heuristic for the “strength” of a preference, similar to the way utilitarianism thinks about utility, or looking at the reversibility of decisions. I’m also not trying to argue for some sort of cosmic libertarianism. The practice of ethics is inevitably messy, with lots of confusing gray areas. A good follow-up would be to evaluate a full moral system based on preference, maybe by stress testing a few repurposed moral imperatives in a system that made no assumptions about consciousness. However, I am optimistic that in every instance where one might be tempted to evaluate ethics on the basis of consciousness, one could instead insert preference.

An Unobserved Reality

I accept that you might find my argument for preferences insufficient, either because an ethical system for preference is not yet defined, or because you believe that consciousness itself is very special.
Though a physicalist should wonder why consciousness is restricted to the mind, it’s certainly the popular consensus. But how can one be confident that consciousness is unique? We already know there is at least one aspect of reality that is utterly unobservable to an outsider. We would have no concept of phenomena if it were not for our own minds. Is this not the best possible evidence that there might be other aspects of reality we cannot observe?

This is what “The Fourth World” gets right. Though why limit ourselves to a “fourth” world? For all we know, there might be infinitely many aspects of reality that we cannot observe today, or which may be fundamentally unobservable. Like the author, I find this incredibly exciting.

But these tremendous unknowns as to the nature of reality should give us pause before rating consciousness as unique and morally important.

Let’s consider a speculative example, meant to demonstrate the logical possibility of alternatives to consciousness in a physicalist universe.

Returning to our earlier alien scenario, let’s pretend we have uncovered the physical substrate for consciousness. In fact, it’s now relatively easy to purchase qualia counters, which, functioning similarly to a Geiger counter, allow us to estimate the “amount” of phenomena in an entity. Holding our qualia counter up to the alien goo, we fail to detect any phenomenal consciousness!

However, the aliens have their own device. The alien physiology somehow fails to produce consciousness, but it does involve some other, mysterious aspect of reality… say, “monads.” And lowering a “monad” counter down to us, the aliens fail to detect a single one.

If we accept there could be further aspects of reality that we cannot observe, not unlike consciousness, then we should not take for granted that consciousness is privileged among intelligent beings.
I understand this is very weird to consider, but I do not think it is any weirder than the hard problem of consciousness already is. Our example is similar to Nagel’s famous “What it’s like to be a …” thought experiment, with the additional caveat that we swap out consciousness itself for some new, unknown aspect of reality. Rather than ask what it is like to experience the qualia of a bat with a bat’s cognition, we ask what it is like to have the “monads” of an alien goo with an alien goo’s cognition (where “monads” are strictly different from qualia).

All the standard caveats that conceivability does not imply reality apply. But given the existing evidence for alternative paths to cognition beyond terrestrial neurophysiology (e.g. machine intelligence), we ought to consider seriously whether there might be aspects of reality that play a role like consciousness but are substantively different.

Recap

To summarize my argument:

- Physicalism is a popular and reasonable explanation of consciousness.
- A physicalist’s best guess should be that consciousness is somehow intrinsic to neurophysiology; otherwise we have to draw strange ontological and scientific conclusions (like c-particles).
- Once we assume consciousness is physically intrinsic to the neurophysiological process itself, it is no longer necessary to assume moral obligations toward one aspect of that process.
- We should instead assess moral obligations toward the preferences exhibited by neurophysiology.
- Presumably, we also owe some obligations to any agent which exhibits preferences, but this is for a moral framework to judge.
- One can try to defend an obligation to consciousness by asserting that it is special.
- However, if we cannot observe consciousness, we ought to suppose there could be further aspects of reality we cannot observe, similar or dissimilar to consciousness in ways we do not yet understand.
- It is conceptually possible to imagine beings very different from ourselves whose cognition involves these unknown aspects of reality. So, it would be premature to determine that consciousness demands unique obligations.

Obviously, these final conclusions are several steps removed from ground truth. However, I have tried to surface the implications of what I assess to be the most likely explanations for phenomenal consciousness. For better or worse, a lot of beliefs are downstream of our explanations for consciousness. While the hard problem remains unsolved, it would be good to continue exploring the ethical implications of different theories.

[1] In the author’s defense, they clarify in a footnote that they are “not that interested in the difference between whether these worlds are truly different or just conceptually interesting ways to talk about things.” But the post’s arguments rely heavily on the reality of this difference, so I will address it anyway.

[2] In the 2020 PhilPapers Survey, 51.93% of respondents accepted or leaned toward physicalism about the mind, while 32.08% favored non-physicalism. See Bourget, D. & Chalmers, D. J., “Philosophers on Philosophy: The 2020 PhilPapers Survey,” https://survey2020.philpeople.org.

[3] A few days before I finalized this post, Tenobrus wrote an article on X that might serve as a popular example of this view.
Daniel Dennett’s “Quining Qualia” is a foundational text here.

[4] I suspect one reason some people cling to an extrinsic view of consciousness is that they want to retain a black box for “free will” to drive decisions.

[5] My personal opinion is that illusionists, like Daniel Dennett, do sometimes rely on tricks of semantic distinction to make claims against qualia, the same way Chalmers does to argue for non-physicalist explanations of consciousness.
