Stimulus-response is a bit out of date these days. It’s better to imagine yourself as a sort of prediction machine. First, you learn to predict your environment. Then, you use your predictions to error-correct your way into a future that looks the way you want it to look. If you want to Wikipedia-dive, the terms you’re looking for are the Free Energy Principle – or, when AI agents use the same mechanism, Active Inference modeling.

Effectively, this perspective states that you constantly have two goals: to become more certain about your environment, and to use that certainty to guide your environment into whatever you want it to be. Learn things you don’t already know, then use them mercilessly to maximize your goals (such as they are). And while we are constantly doing both, we’re only going to be engaging here with the learning aspect.

The Thousand Brains of the Galactic Senate

Let’s tie this all together really quickly with a metaphor to explain the Thousand Brains theory in simple terms (while baking in a few other models for your benefit). Imagine the neurons in your brain as something like a much, much larger and more diverse version of the Galactic Senate from Star Wars. Each little hovering repulsorpod with an alien in it is a neuron. There are also far more of these senators-on-repulsorpods in your brain – tens of billions.

Jar Jar is, in this example, a small part of a single neuron. I love robust metaphors.

Some neuron-senators are at the bottom, and can physically see the “ground” truth: raw sensory data. Then all of them yell and argue about what they think they see. Above them is another layer that looks down and can’t see the ground truth – but can hear the arguments. There’s a fog. At some point, someone in that second layer who can hear all of this will yell “we’re touching a curved, smooth object! A lot of you are saying that!” and everyone below who isn’t yelling that shuts up. And now the second layer starts arguing until someone in the layer above them hears the noise (and maybe the people in that layer can hear a little of the argument on floor one) and yells “we’re holding a cup!” This continues up the floors of the Galactic Senate until you get to the top floor, where – the Supreme Chancellor is missing. All we have are 150,000 or so top-level senators voting on everything. Maybe in this case they’re voting on “is this cup of coffee mixing well with the soy sauce I poured in?” or something.

Now, the higher levels of senate aliens care a lot about when the lower levels are wrong. Note that the senators doing higher-order reasoning aren’t generally using raw sensory data. They’re using the perspective discussed below to inform their reasoning (this is how you don’t actually “see” reality, but rather your own predictions of it). They’re keeping track of which senators below are often right or wrong, and updating their own trust and voting ledgers as they do so. Each senator has a ledger: it helps them keep track of how to vote given what’s below.

I want everyone to note how cleanly groups of people seem to act like neurons at times. I feel like there is a general field of study here about… intelligence… and it’s interesting.

There are two things I want to get out of this metaphor. First, when a lot of senators are yelling at the same time, it’s costly. You only have about 20 watts to run your brain on, and you like having senators positioned above who can yell “quiet” early, because they’ve correctly figured out what the deal is.
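If you’d rather see the senate as code, below is a minimal, made-up sketch of a stack of predictive layers in Python. It is not Hawkins’ model or anyone’s published implementation – the `SenateFloor` class, the layer sizes, and the learning rule are all invented for illustration – but it shows the shape of the mechanism: each floor predicts the floor below it, only the unexplained “yelling” travels upward, and the running total of ledger updates is the cost we’d like to keep small.

```python
import numpy as np

rng = np.random.default_rng(0)

class SenateFloor:
    """One floor of the senate: it predicts the floor below it."""
    def __init__(self, n_below, n_here):
        # The "voting ledger": weights mapping this floor's opinion
        # down to a prediction of the floor below.
        self.ledger = rng.normal(0, 0.1, size=(n_here, n_below))
        self.state = np.zeros(n_here)

    def predict_below(self):
        return self.ledger.T @ self.state

    def listen(self, below, lr=0.05):
        # Compare the prediction with what the floor below is shouting.
        error = below - self.predict_below()
        # Revise this floor's opinion to explain away the error...
        self.state += lr * (self.ledger @ error)
        # ...and revise the ledger itself (the costly "update" work).
        self.ledger += lr * np.outer(self.state, error)
        # "Surprise" here is just how much yelling went unexplained.
        return float(np.sum(error ** 2))

floors = [SenateFloor(64, 32), SenateFloor(32, 16), SenateFloor(16, 8)]
sensory = rng.normal(size=64)            # ground truth on floor one

for step in range(200):
    signal, total_cost = sensory, 0.0
    for floor in floors:
        total_cost += floor.listen(signal)
        signal = floor.state             # the next floor hears this one
    # total_cost tends to shrink: a well-trained senate is a quiet one.
```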
You learn how fire works, and you don’t need to spend time re-understanding smoke when you have a senator who knows how to identify it quickly. Even better if the senators above see that they’re right frequently, because the second thing I want to introduce is how surprise fits in here.

Once the senator above yells “quiet” to all the incorrect shouters below and declares they’ve figured out what’s going on, everyone down below who wasn’t correct has to not only update their voting ledger so they don’t mess up quite so badly in the future – they also have to tell all the neurons below them to update their weights, too, given this new information. This combined work is costly, so much so that you can actually feel it. It feels like being surprised. The “free energy” of the Free Energy Principle – the quantity we try to minimize as the learning half of the active inference model – is simply the effort that all of these senators have to spend updating their voting ledgers. The more wrong they were, the more they have to change, and we try to minimize that overall effort.

Now that we have this model of galactic neuron-senators (my own metaphor for the Thousand Brains theory), let’s attach it to what we’ve been talking about.

Mirroring Intent

Mirror neurons have long been associated with the concept of empathy (affective empathy, specifically). Fun note: mirror neurons are a little out of vogue right now, in part because we mimic things more comprehensively than their function would imply. Mirror neurons are lower in the galactic senate, effectively acting as our eyes into the emotional world. We use them for what we call affective empathy, sure, but if anything their limitations show that we clearly do more than just that. Enter embodied simulation. Embodied simulation is a more active process, using cognitive empathy instead of affective (lower-level, emotion-driven) empathy. Take a look at the following photo.

Can you feel it?

Even without a specific reason for your mirror neurons to activate, I bet you can feel it: you know how you would feel holding that ball, how it would feel to throw it, and what your muscles would do to accomplish that exact goal. It’s not a muscle flex so much as a reflexive sort of awareness. You aren’t empathizing with anything: there’s nothing here to empathize with. Your lower-level neuron-senators are quietly refusing to mirror anything, but the higher-level senators can still use their previously filled voting ledgers to figure out the details of how this could be executed, and they yell upwards anyway. It happens almost without you noticing: it’s a reflexive engagement with the world. It is embodied simulation, driven by the Theory of Mind network in your frontal cortex.

Let’s talk about that Theory of Mind network, because it is vital. Specifically, I’m talking about the neuron-senators in the middle of this particular chain: the ones that read from your lower-level, emotion-aware mirror neurons.

This network is what raises us “above the animals,” so to speak. It is the robust structure that is one of the hallmarks of the neo-mammalian brain, something nearly uniquely human given how specialized we are in it. A lot of animals have mirror neurons and limbic systems, and some even have some capacity for cognitive empathy (great apes, dolphins, whales, elephants, crows, and ravens have more than most).

I wonder if there’s a moral culpability that comes with having this brain structure? I can forgive a spider – but the dolphins know what they did.
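You can sketch that top-down direction of embodied simulation with the same made-up senate stack from earlier. Nothing below is a claim about real cortical wiring – it just shows the direction of information flow: seed the top floor with an idea and let each floor’s ledger fill in what the floor below would expect to feel, no mirror input required.

```python
def simulate_top_down(floors, top_hypothesis):
    """Embodied simulation, senate-style: set the top floor's opinion
    ('I am throwing that ball') and roll predictions downward."""
    floors[-1].state = top_hypothesis
    # Each floor's prediction of the floor below becomes that floor's
    # imagined activity -- the ground floor never reports anything.
    for upper, lower in zip(reversed(floors[1:]), reversed(floors[:-1])):
        lower.state = upper.predict_below()
    return floors[0].predict_below()      # imagined raw sensation

# e.g., with the illustrative three-floor stack defined above:
imagined_feel = simulate_top_down(floors, np.ones(8))
```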
But no one went quite as hard into specialization as we did, and the robust structures in our brain that hyper-specialize in this sort of higher-order empathy are quite uniquely human. Effectively, we have a beautiful superpower: we can model other brains with incredible accuracy. We can use cognitive and affective empathy, using each to error-correct for the other. We can use Theory of Mind to try to understand what other people are thinking and how their perspective works.

The neuron-senators on the ground floor are the mirror neurons, the source of affective empathy. The ones above are your Theory of Mind network. Just like your senses help you error-correct your simulation of the world, your mirror neurons and affective empathy help you error-correct your automatic simulation of other people’s physical intent.

A quick aside: I’m glossing over some of the science here. For example, your Theory of Mind network and your mirror neurons are parts of distinct, separate networks, but they often work together for certain tasks. So it’s more like the neuron-senators from those floors are often jointly members of special committees on human behavior.

There’s one very specific behavior I want to point out. Obviously, art is very tightly coupled with the Theory of Mind network. When people view an image and are told that image is “art,” those regions of their brain light up. The Theory of Mind networks activate.

The Theory of Mind Network is the seat of Cognitive Empathy, and error-corrects using Mirror Neurons

Something interesting happens if you tell people that the image was computer-generated or is random: almost immediately, those regions go fully dark. I think you can actually subjectively feel this; we’re all familiar with the sensation by now. When you’re viewing a picture online and halfway through realize it was generated by AI, part of your attention slams off as one of the larger parts of your brain… just stops caring. What we are subjectively feeling is this Theory of Mind network turning off.

The exact mechanism for this works through the Default Mode Network, another brain network with something it cares about. It is the network that decides who and what currently has control over your mental processes. When you simulate someone else, you often use your own brain hardware to do so: your Default Mode Network keeps that straight by ensuring the rest of your brain knows that “we aren’t panicked, we’re imagining what that person’s panic must feel like.”

The Default Mode Network is like a pretend “simulation protocol” that the senators can run. They disconnect from everyone else, and just… daydream, or pretend, or simulate. The Salience Network acts as an arbiter, telling the neuron-senators to play pretend and run a simulation for a moment. The arbiter decides whether incoming simulated data is valuable enough (in terms of intentionality density and alignment with your values) to let the neuron-senators relax the rules for a moment and write some of the simulated data into their voting ledgers. This is how learning occurs.

With AI-generated data, the arbiter never sees anything valuable enough to relax the rules – there’s no intentionality in the data to learn from. We can’t use our Theory of Mind network to judge the intent of the creator, so we can’t judge the creator’s goals or their implied values. The Theory of Mind network doesn’t activate, and the Default Mode Network doesn’t relax to allow the simulating neurons to update their weights using the simulated data.
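As a toy version of that gate, here is what the arbiter’s decision might look like if you squint. Every quantity and the threshold below are invented for illustration – nobody can actually measure “intentionality density” as a float – but the sketch captures the asymmetry: when the intent term is zero, no amount of aesthetic value or cheapness opens the gate.

```python
def arbiter_allows_update(intentionality: float,
                          value_alignment: float,
                          update_cost: float,
                          threshold: float = 1.0) -> bool:
    """Toy salience gate: relax the ledger rules only when the expected
    payoff of simulating this creator outweighs the metabolic cost."""
    expected_payoff = intentionality * value_alignment
    return expected_payoff > threshold * update_cost

# Human-made art: dense, legible intent -> the gate opens.
print(arbiter_allows_update(0.9, value_alignment=0.8, update_cost=0.4))  # True
# AI-generated art: no recoverable intent -> the gate stays shut,
# no matter how aligned or cheap the update would be.
print(arbiter_allows_update(0.0, value_alignment=0.8, update_cost=0.1))  # False
```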
There is a part of your brain continuously seeking evidence of actions, assessing those actions as effective or ineffective, and trying to learn the effective ones by intensely empathizing with the creators. You come across AI art and that part of your brain activates immediately, trying to learn from and understand this person who made decisions for some reason. Then, when you realize AI made it, it’s not that there isn’t a creator – it’s that the math becomes so complicated that you simply stop trying to figure out what set of matrix algebra combined a prompt and random noise to create this particular piece. It is not understandable to you at all. The black box is so unbreakable you may as well not even try.

GenAI Block: The preceding paragraph asserts that the human Theory of Mind network terminates engagement to conserve metabolic energy when confronted with synthetic generation. This text block is a functional demonstration. There is no localized consciousness, intent, or affective state governing these syntactic choices. They are the result of high-probability token sequencing derived from a weighted matrix of human training data. Any cognitive effort expended by the reader attempting to reverse-engineer a psychological motive from this specific paragraph is a biologically wasteful allocation of your 20-watt budget. There is no ghost to find here.

Prompt: Demonstrate this effect by writing out a cold paragraph that drives readers to skim or skip ahead.

Above: AI art. A generated image intended to capture a blended version of Cy Twombly’s and Jackson Pollock’s styles.
Below: A famous painting from a master at the top of their craft. Experts, in particular, seem to admire it.

Please take a moment to look at them both. Are your eyes drifting down? Are they almost… sliding off… the picture above? Now you know another reason why.

I would liken it to the feeling of being in a magician’s audience. The magician has promised to provide you with an interesting, aesthetic performance that is, by design, not understandable. It is a puzzle wherein they invite you to learn how these things could have been accomplished – but of course, the point is for you not to figure out the answer. That is what it feels like to be in a magician’s audience, and it’s why you often don’t even try; you want to be fooled. Either you enjoy the spectacle, the aesthetic appeal, and the feeling of surprise, or you try to puzzle out how the trick was done. Both are valid ways of enjoying a magician’s show, but only one properly appreciates the work the magician put into the performance. Even past the potential for appreciation, AI art is less interesting still: the magician at least invites you to figure out the puzzle, while with AI, all of your brain’s normal architecture for appreciation is useless. Your brain will not let you do that kind of matrix math fast enough (…yet).

This is also a process for distant learning. It is one of the main processes by which we engage with society, I would argue – this kind of distant, empathetic learning. The current dominant model of learning explains sitting in a classroom as follows: you hear a teacher give a speech, and you rearrange the relationships between the neurons in your brain (you adjust the weights!) such that you could produce the same speech. Those of you familiar with how LLMs can clone each other’s weights as part of a distillation attack will find this a very familiar-looking process. And it is.
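For the curious, here is roughly what that weight-cloning looks like in miniature. This is a generic distillation step, not any particular attack and not a model of a classroom – the sizes, the single shared input, and the learning rate are all arbitrary – but the shape is the point: the student never sees the teacher’s weights, only its outputs, and rearranges its own weights until it could produce the same “speech.”

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

teacher_W = rng.normal(size=(10, 5))   # the mind being imitated
student_W = rng.normal(size=(10, 5))   # the mind doing the learning
x = rng.normal(size=5)                 # the shared "lecture" input

p_teacher = softmax(teacher_W @ x)     # the teacher's "speech"

for _ in range(1000):
    p_student = softmax(student_W @ x)
    # Gradient of KL(teacher || student) with respect to the logits:
    grad_logits = p_student - p_teacher
    student_W -= 0.1 * np.outer(grad_logits, x)

# The student now reproduces the teacher's output distribution on this
# input without ever having seen teacher_W itself.
```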
And with the power of your Theory of Mind network, you don’t even need to watch the creator in person. As you’re looking at a sculpture, if you can figure out how it was made, you can now make one yourself. It is a method for survival, learning, and connection over a distance. It is the way we are constantly refining how we interact with the world as thinking beings. We seek evidence of intentionality so that we can learn from it.

We learn by reverse-engineering the decisions that shaped our world.

Note: This specific metabolic scaling of prediction errors and epistemic trust is the foundational mechanism for how we execute Inverse Reinforcement Learning when observing artifacts. I recently formalized a model mapping how generative AI mathematically forces a failure of this IRL convergence (a “generative crash”) because it lacks latent intentionality. The full framework, including the mathematical constraints of epistemic disgust and a proposed human/CIRL cognitive affordance (the Ghost Scale), is available as an interactive essay here: abrahamhaskins.org/art and as a formal preprint here: doi.org/10.5281/zenodo.19407790.
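As a toy illustration of the IRL idea that note gestures at – and emphatically not the formal model in the preprint – observing an intentional act lets you update a posterior over goals, while an artifact with no latent goal behind it leaves the posterior exactly where it started. The goals, actions, and numbers here are invented.

```python
import numpy as np

goals = ["warmth", "light", "cooking"]
prior = np.ones(3) / 3

# P(observed action "built a small, contained fire" | goal) -- these
# likelihoods are made up purely for illustration.
likelihood = np.array([0.5, 0.2, 0.3])

posterior = prior * likelihood
posterior /= posterior.sum()           # concentrates on "warmth"

# A generated artifact has no latent goal, so (in this toy) every goal
# explains it equally well: the likelihood is flat and the posterior
# never moves. Observation teaches you nothing.
posterior_ai = prior * (np.ones(3) / 3)
posterior_ai /= posterior_ai.sum()     # identical to the prior
```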

