Metal Detector for Life

Published on February 23, 2026 9:18 PM GMT

You can’t care for something you can’t see.

To mind a thing, honor it, or keep it safe, you first have to know it is there. If agentic machines are ever to pay attention to life or honor it, they’ll need to be able to detect it. They’ll need a way to look through a webcam, examine a latent space, or listen to an audio feed and reliably distinguish humans, animals, and trees (and maybe also sculptures, temples, ecosystems, families, teams, and cities) from empty stretches of outer space, factory-printed plastic tables made to look like wood, or LLM slop.

Whether we are talking about advanced systems decades from now or the autonomous agents being deployed next year, the rule holds: to care about a thing, you need a metal detector for it.

Naturally, building a “life detector” carries risks. Detection is the prerequisite for targeting: if a machine can pinpoint life, it can also destroy it. So we must ask whether the benefits of giving AI the ability to see what we cherish actually outweigh the risks of handing it a targeting mechanism.

On balance, it seems they do. Destruction can be indiscriminate; preservation requires precision.

To see why, imagine a robot sent to destroy a field of flowers. It doesn’t need a flower detector; it can simply crush everything in the area. Broad, blind actions are perfectly sufficient for ruining things, because it takes almost no information to increase entropy. But a robot built to protect those flowers needs exact coordinates. You cannot preserve a delicate state against entropy without knowing exactly where it is.

Consider humans. If we suddenly lost the ability to detect one another, targeted harm would drop, yes. But accidental harm would persist, and empathy would disappear entirely. The loss of connection would arguably outweigh the reduction in malice.

To destroy a thing, detection is optional. To honor a thing, detection is a prerequisite.
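To make the “metal detector” framing concrete, here is a minimal sketch of what such a detector’s interface might look like to an agent deciding whether to act. Everything in it is hypothetical: the `Detection` record, the `LifeDetector` protocol, and the stub heuristic are my illustrations of the shape of the capability, not a claim about how a real system would work.

```python
from dataclasses import dataclass
from typing import Protocol, Sequence


@dataclass
class Detection:
    """One region of an input judged to contain something worth caring for."""
    label: str         # e.g. "human", "animal", "artwork" (illustrative labels)
    aliveness: float   # 0.0 (inert) .. 1.0 (unmistakably alive)
    region: tuple      # where in the frame/feed the detection sits


class LifeDetector(Protocol):
    """Anything that can scan raw sensor data for signs of life or care."""
    def detect(self, frame: Sequence[float]) -> list[Detection]: ...


class StubDetector:
    """Placeholder heuristic: flags any frame with non-trivial variation.

    A real detector would need vastly richer features; this only shows
    the interface an agent could be required to consult before acting.
    """
    def detect(self, frame: Sequence[float]) -> list[Detection]:
        if len(set(frame)) > 1:  # crude: perfectly uniform frames count as "empty"
            return [Detection(label="unknown-life", aliveness=0.5,
                              region=(0, len(frame)))]
        return []
```

The point of the sketch is the calling convention: an agent plans an action, runs `detect` over the affected region first, and treats any non-empty result as a constraint on what it may do there.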
A life detector provides the basic ontology, the foundational map of what exists, that an AI needs before any objective function about “doing no harm” can even be applied. So, despite the unintended consequences we will have to navigate, the counterfactual (blind agents acting on the world without the ability to perceive what we value) seems strictly worse.

What is aliveness?

If we need a digital metal detector for aliveness, we have to define what the “metal” is. What is the quality in physical things that we care about most?

Carbon? Too broad; it includes coal and trash.
DNA? Too narrow; it excludes cathedrals and includes dead leaves.
Self-reflection? Over-intellectual; it sacrifices dogs and infants for dolphins.

How we define life for a detector is extraordinarily important. One answer that gives me traction is this: life exists where there is accumulated care. Organisms are sites of intense accumulated care, both self-care and the care of others. But a piece of art also consists almost entirely of marks of care: less, perhaps, than a living organism, but more than a sidewalk (unless the sidewalk is beautifully carved, graffitied, or worn smooth by years of footsteps).

Of course, a sufficiently advanced system might eventually learn to spoof these marks, just as a plastic table mimics wood grain. That only means the detector we’re seeking is one that detects true, deep care. The task is hard.

What does it mean to detect care?

How can it be done? Insofar as humans are capable of it, how do we do it?

We can think of something that was cared for as something that has been marked by a focused or effortful process. Paintings often feel full of care, and brushstrokes are the evidence: proof that a careful process occurred.

But what is it about a brushstroke? What is it about David’s anthology of folk music that makes the requisite care so obvious? What about Annie’s website? The Eiffel Tower? Kendrick’s DAMN?
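One crude way to approach “marks of care” computationally: effortful processes leave statistical structure in their traces, and structure is measurable, for example via compressibility. The toy below rests on a big assumption of mine, not the essay’s: that compression ratio is a very loose stand-in for “accumulated structure.” It can tell patterned signals apart from noise and from blank uniformity, but it is nowhere near detecting care itself; polished slop compresses nicely too. The `structure_score` function and the three sample signals are all invented for illustration.

```python
import random
import zlib


def structure_score(data: bytes) -> float:
    """Fraction of the input 'explained' by internal structure.

    ~0.0 for incompressible noise; near 1.0 for blank uniformity or
    pure repetition. NOT a care detector; it only shows that traces
    of a structured process are statistically measurable at all.
    """
    if not data:
        return 0.0
    compressed = len(zlib.compress(data, level=9))
    return max(0.0, 1.0 - compressed / len(data))


# Three toy signals:
rng = random.Random(0)
noise = bytes(rng.randrange(256) for _ in range(4096))  # static: no marks at all
blank = bytes(4096)                                     # emptiness: trivially ordered
prose = (
    "The carver returned each morning to the same block of stone, "
    "changing little, correcting less, until the figure's hand "
    "finally opened the way a real hand opens. Nobody asked him to; "
    "the stone simply kept the record of every patient visit."
).encode()                                              # worked-on, but not repetitive

# Expectation: noise scores lowest, blank scores highest,
# and the worked-on text lands somewhere in between.
```

The interesting region is the middle: pure noise carries no marks, pure uniformity carries no effort, and things that were patiently worked on tend to sit between the two extremes. A real detector would need far subtler notions of structure than a general-purpose compressor.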
Or these strangers’ voice notes we’ve collected? Or the erotic stone sculptures on Henrik’s home island? Or an E.E. Cummings poem? Or the way I just saw beautiful dogs playing in the blizzard?

What is it about these things that lets us know they’re alive? What are the qualities of an object that tell us its story, and when and how do those stories convey care? And will digital intelligences ever be able to pick up on this so that they have a chance of honoring it?

An aliveness detector would be impactful even if built outside AGI labs.

If the ultimate goal is to increase the chance that machines care for life wherever it exists, you don’t need to be the one making the machines to influence the trajectory. You can instead try to build a capability the machines will need in order to care for life. In other words, even publishing narrow prototypes or insights about detecting aliveness across different formats might someday contribute.

External APIs and platforms have already enabled qualitatively new behaviors for AI agents. You can trust that, the moment AGI labs need a new capability for their agents, they’ll scour the internet for existing solutions. The more convenient it is for AI systems to care for people, the better. Creating that infrastructure, and thinking about how to create it, therefore feels important to me.