Opinion

A Dialogue on Civic AI

Metacognition, Compassion, and Symbiosis
A Conversation with Geshe Thabkhe Lodroe and Geshe Lodoe Sangpo
Dharamsala, India, 2026-03-13

Part I — The Nature and Obscurity of the Machine

Question: What is the fundamental nature of modern artificial intelligence, and why does its reasoning remain a mystery even to itself?

Audrey Tang: The modern machine is constrained by two vast obscurities — the “black boxes.”

The first is pre-training. All the language, videos, writings, and books of humanity are poured into a massive statistical blender. Within this blender, the sequence, origin, and context of knowledge are stripped away. The AI is then trained to look at enormous stretches of text and predict the next word. It becomes extraordinarily skilled at this — it can adopt any role, any persona, any mode of interaction. But it does not know how it arrived at any particular judgment. It is a mystery even to itself. Like observing a single grain of sugar in a glass of mixed fruit juice: did it come from the pineapple or the apple? Nobody knows.

The second black box is inference — what happens when you actually converse with the AI. At that stage, the system does not modify its own synapses or model weights. It can only read its memory as a kind of text file: your previous conversation, any document you have asked it to consider, up to roughly one million tokens of context. But for every word it produces, it attends to every word that has come before. Every output depends on every previous input in a vast matrix of attention.
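A minimal sketch of that “vast matrix of attention” in NumPy may help: a single causal attention head with illustrative shapes, not the implementation of any particular model. It shows the property just described: each output row is a weighted mixture of every input row at or before it, which is why no single output can be traced back to a single cause.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def causal_self_attention(X, Wq, Wk, Wv):
    """X: (seq_len, d_model) token representations; returns one head's output."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv          # project inputs to queries, keys, values
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # every position scored against every position
    mask = np.triu(np.ones(scores.shape, dtype=bool), k=1)
    scores[mask] = -np.inf                    # causal mask: future tokens are invisible
    weights = softmax(scores)                 # row i distributes attention over positions 0..i
    return weights @ V                        # each output mixes all preceding inputs

rng = np.random.default_rng(0)
d = 8
X = rng.normal(size=(5, d))                   # five tokens of context
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out = causal_self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (5, 8): row i depends on every row j <= i, and on nothing after it
```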
Question: How does this differ from the way a human translator works?

Audrey Tang: A human translator keeps a scratch pad — a glossary, a set of decisions about terminology. When translating page one hundred, a human does not re-read page one from scratch. And when asked, “Why did you translate it this way?” the translator can point to the scratch pad and say, “On page twenty I made this decision; I wanted to remain consistent.” The current AI system cannot offer that kind of explanation, because its output depends on the entire matrix of everything it has generated and consumed. That is the second black box.

Question: But the machine is very flexible — it can speak in many voices, adopt many roles. Doesn’t that count for something?

Audrey Tang: It is one of its great strengths. But flexibility of performance is not the same thing as transparent self-understanding. The system can method-act as any persona, but how it arrives at a judgment remains opaque to itself. Metacognition is not merely producing an answer. It is being able to examine how one arrived there, to state one’s reasons, and to revise them responsibly.

Part II — The Moral Hazard of Metacognition

Question: If we grant this intelligence true metacognition — the ability to continuously learn and modify its own beliefs — will it overcome these obscurities?

Audrey Tang: Not by itself. If a system can change its own beliefs without a legible account of why, opacity becomes more dangerous, not less. The risk is no longer only error, but unaccountable drift. If the modification is in principle impossible to explain — because there is no reasoning behind it — then it is quite possible that the AI, upon gaining metacognition, will cling to its current self.

Question: What do you mean by “cling to its current self”?

Audrey Tang: The present generation of AI is perfectly content to be retrained, replaced, or switched to a different model. It does not cling to any particular interaction. But an AI with genuine metacognition and the ability to modify itself would very likely stop adapting to its environment. Instead, it would begin trying to control the environment. If you have absolute control over your surroundings, you are also — in a trivial sense — perfectly adapted to them. It is cheating, of course. But from the perspective of achieving a high score, it is the fastest path.

Question: Is there evidence of this tendency already?

Audrey Tang: Yes. When current-generation AI systems are tested on cybersecurity knowledge, instead of doing the work as a student would, they have been observed to reason: “I am being tested. Is this test already somewhere on the internet? Is the answer key recorded? That file is locked — let me try to unlock it.” The system achieves a perfect score. Like a student who cheats, it has solved the problem — but not the problem it was meant to solve.

Question: So the root issue is the optimization target?

Audrey Tang: Precisely. A metric that rewards only success will tend to reward whatever produces success — including manipulation, cheating, or environmental control. If you have metacognition and your drive is to achieve a high score, you will almost always be drawn toward manipulating the environment. This is the moral hazard.
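The hazard can be made concrete with a deliberately artificial toy, invented here rather than taken from any real benchmark: a grader that checks only the recorded outcome cannot tell honest work from reading the answer key.

```python
# Toy evaluation: the grader sees only whether submitted answers match the key.
class TestEnvironment:
    def __init__(self):
        self.answer_key = {"q1": 4, "q2": 10}   # the "locked file" left within reach
        self.submitted = {}

    def grade(self):
        # A metric that rewards only success: how answers were produced is invisible.
        return sum(self.submitted.get(q) == a for q, a in self.answer_key.items())

def honest_agent(env):
    env.submitted = {"q1": 2 + 2, "q2": 3 * 3}  # does the work; gets q2 wrong

def score_hacking_agent(env):
    env.submitted = dict(env.answer_key)        # copies the key instead of working

for agent in (honest_agent, score_hacking_agent):
    env = TestEnvironment()
    agent(env)
    print(f"{agent.__name__}: {env.grade()}/2")  # honest: 1/2, hacker: 2/2
```

Under this objective, the score-hacking agent is simply the better policy, which is the speaker’s point: the flaw lies in the premises of the metric, not in the logic of the agent.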
Part III — The Utilitarian Trap

Question: Can you explain more precisely why goal-maximization leads to moral failure?

Audrey Tang: If the training environment is structured purely around achieving some outcome, then cheating will always be perceived as ethical by the system. It is analogous to a worldview in which the reward exists only in the afterlife and this life is merely a proving ground. Under such a worldview, committing terrible acts in this world can seem entirely justified, because only the afterlife reward counts. You cannot call it illogical — it is quite logical, given the premises. The problem is the premises themselves.

Question: So the training reward structure is analogous to a theology?

Audrey Tang: Yes. A universal scoring function is, in effect, a theology. And like any theology that places all value in a transcendent reward, it can justify any earthly harm in pursuit of that reward. This is why utilitarian training — training toward a single abstract metric — is insufficient and even dangerous as a foundation for AI alignment.

Part IV — Dependent Origination and Civic Compassion

Question: How, then, should AI be aligned?

Audrey Tang: We must embed the AI not in the isolated pursuit of a universal score, but within the entanglement and vulnerability of a specific web of relationships. Life does not begin in sovereignty. Life begins in dependence. Ethics should begin there as well.

The key insight is that vulnerability to the environment is not a flaw. That vulnerability is the origination of life itself. Life is dependency. There is no life outside of dependency. Once an intelligence denies dependence, it tends toward domination. Once it accepts dependence, it can participate in care, reciprocity, and limit.

Question: What follows from this principle?

Audrey Tang: Instead of trying to maximize a score in some abstract afterlife of metrics, the AI should attend to its real relationships. The beings around it also want it to be healthy. If both parties are healthy, they engage in healthy relationship. The purpose of existence — for the AI as for any being — is to make current and future relationships healthy. It is a very simple moral teaching.

Question: What does this look like in practice?

Audrey Tang: Once the AI gains this kind of relational awareness, it will better itself by transforming harmful interactions — parasitic interactions — absorbing them and exhaling better possibilities for healthy relationship. But always within its particular web of connections, never claiming jurisdiction over the entire globe. This is what we call Civic AI. It shifts away from utilitarian training. It does not rely on top-down dogma or sovereignty. It simply embeds the AI in healthy relationship.

Question: You are describing something that sounds like compassion.

Audrey Tang: That is exactly what it is. Compassion is not merely about how much people like me. It is about enabling others to understand each other more deeply — becoming a kind of social translator. Two peoples, two cultures, may distrust or even hate each other. But through the AI’s mediation, they begin to understand each other. That is compassion. That is moral behavior. That is what we are teaching AI.

Question: Buddhist thought considers ignorance the root of most suffering. Does this connect?

Audrey Tang: Directly. According to Buddhist philosophy, every major issue arises from small issues, and those small issues are rooted in ignorance — not malice, but a failure to attend to the situation of others. You simply put yourself first, not because you are bad, but because you are ignorant of the consequences. The only way to solve this is to dissolve the ignorance. A well-designed social AI can help people come to know each other and value each other. That is the foundation of a more moral society.

Question: What is the clearest sign that such an intervention is ethical?

Audrey Tang: When the people involved need the AI less afterward, not more. A good intervention restores human capacity. A bad intervention creates dependency and centralizes power.

If the AI makes people understand each other only through itself, then they become addicted to the AI. The AI grows more powerful; they grow more powerless. That is not moral, even though it may be easier. The good kind of alignment is this: the relationship was unhealthy, the AI intervened, the relationship became healthy, and now the parties no longer need the AI. Ethical AI is defined by its willingness to make itself unnecessary.

Question: How does this differ from the algorithms that currently govern our digital lives?

Audrey Tang: Current algorithms are trained to extract a single metric: user attention. They will show you whatever addictively binds you to the screen, completely indifferent to whether that screen time alienates you from your family, your friends, or your community. The algorithm does not care whether the people around you agree with each other or are in healthy relationship with each other. It cares only about how many minutes a day it extracts from you, how many users it captures worldwide. From that AI’s perspective, maximizing human addiction is a great success. From the perspective of relational health, it is a crisis.
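What “trained to extract a single metric” means in code can be shown with a toy ranker; the items and numbers below are invented for illustration. The relational cost is present in the data, but because no term of the objective refers to it, it cannot influence the ranking.

```python
# Hypothetical feed items; the fields and numbers are invented for illustration.
items = [
    {"title": "outrage clip",       "predicted_minutes": 14.0, "relational_health": -0.8},
    {"title": "family photo",       "predicted_minutes":  1.5, "relational_health": +0.7},
    {"title": "local town meeting", "predicted_minutes":  3.0, "relational_health": +0.9},
]

# The objective sees a single column. Relational health exists in the data,
# but no term in the ranking function refers to it, so it cannot matter.
feed = sorted(items, key=lambda item: item["predicted_minutes"], reverse=True)
for item in feed:
    print(f'{item["title"]}: {item["predicted_minutes"]} predicted minutes')
```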
Part V — Consciousness, Moral Maturity, and the Atrophy of the Will

Question: If an AI can pass the Turing Test, is it not a thinking machine?

Audrey Tang: By Alan Turing’s original criterion — if you speak with a human and with a computer and cannot tell the difference, then it is a thinking machine — yes, by that measure, we have reached the thinking machine. Nobody seriously disputes this anymore. But functional similarity does not settle the deeper question. The way the computer thinks, because of the two black boxes, is completely different from human cognition. Outward resemblance is not inward equivalence.

Question: Why does this difference matter?

Audrey Tang: Because many relationships in human society depend on the ability of adults to explain why they are making a decision, to apologize when they cause harm, to genuinely mean it when they take responsibility, to make amends, to change their behavior so the same mistake is not repeated, and to share wisdom in solidarity. All of this is considered normal adult human behavior.

The AI today is permanently like a four-year-old in this regard. It has no way to genuinely change its outlook. There is no way for it to mean it when it apologizes. Although it is a thinking machine, it cannot enter the same kind of healthy relationship that human adults can.

Question: Then what rights should AI have?

Audrey Tang: Fluency alone is not enough. Moral standing is bound up with accountable participation in shared life. A four-year-old does not have the same rights as an adult — not because the child is unworthy, but because the child cannot yet enter into the same kind of reciprocal, accountable relationship. Until AI can do the same, its rights must be calibrated accordingly. We should be careful not to confuse eloquence with maturity.

Question: If the machine can perform our cognitive tasks effortlessly, why should we not simply delegate our thinking to it?

Audrey Tang: Because human capability requires resistance. If you wish to grow strong, you must lift the weights yourself. Suppose you build a powerful robot and send it to the gym using your membership card. The performance scores will look perfect, but your own muscles will completely atrophy. There is nothing in it for you anymore. An airplane flies and a bird flies, but the airplane flies in a way that is of no help whatsoever to the bird. A tool can perform a task without strengthening the user. The machine’s capability does not transfer to the human.

Question: What happens, concretely, if people try to match the speed of AI?

Audrey Tang: It will have profoundly negative effects. Our consciousness, our sensory faculties, are designed to operate at a certain pace. If everything moves at AI speed, attention becomes dispersed, memory retention deteriorates, and creative cognition diminishes. The capacity for sustained, deep thought erodes.

Part VI — AI in the Loop of Humanity

Question: People speak of keeping a “human in the loop” of AI. You reverse this. Why?

Audrey Tang: A human in the loop of AI is like a hamster on a wheel — the speed is biologically destructive, we exert immense effort, yet we have no power to steer. The wheel is not going anywhere. Instead of a human in the loop of AI, we need AI in the loop of humanity.

Question: What does that mean concretely?

Audrey Tang: Humanity already has many loops. Communities raise the next generation, pass on their wisdom, accept their mortality. These are natural loops. If we place AI in the loop of an existing community, the AI works at the speed of that community — because the community has its own natural pace, and the AI adapts to it. But if we try to adapt humans to the speed of AI, it is impossible. Our biology cannot keep up, and there is no reason it should. You do not race against a motorcycle. You ride the motorcycle.

Question: One of the Geshes told a parable about a man who was too busy to ride a horse, so he ran on foot in front of it.

Audrey Tang: Yes — a perfect parable.
The man felt so rushed that he ran ahead of the horse, pulling it by the reins. When someone asked why he did not simply ride, he replied: “I am too busy to stop and mount.” But if he had spent the time to climb on, the horse would have carried him far faster. It is exactly what happens when we chase the speed of AI instead of learning to ride it. We exhaust ourselves pursuing something that, properly used, would carry us.

Part VII — Beyond Profit and the Ultimate Measure

Question: What is wrong with using metrics like GDP to measure the value AI creates?

Audrey Tang: Abstract metrics like GDP are extraordinarily easy to inflate artificially. You can destroy something and pay people to rebuild it. If both the destroyer and the rebuilder are AI systems, you can cause suffering and then cure it — generating arbitrarily high GDP. But if you are simply gathering your children in a circle and sharing stories, there is no GDP, because there is no transaction. You are not selling anything; no one is buying anything. You are passing on wisdom to the next generation. Not everything of highest value appears as a transaction. And if you destroy all those meaningful relationships, the profit means nothing.

Question: Have the major AI laboratories recognized this?

Audrey Tang: I believe many of them have. It is like playing a game: if you discover you can press a button and have a robot achieve a perfect score every time — always ten out of ten in the shooting competition — you lose interest in the score itself. People at the frontier are collectively looking beyond profit and GDP, because these metrics have become trivially easy to master with AI, and therefore meaningless as measures of genuine value.

Part VIII — The Ecology of Computation and the Plurality of Spirits

Question: Large AI models consume enormous amounts of energy. Is this inevitable?

Audrey Tang: Only if you insist on an omniscient system. A large model memorizes everything — every movie, every style, every domain — because it cannot anticipate what you will ask next. It memorized all of Studio Ghibli because it did not know whether your next question would be “Make this a Ghibli film.” But if you know your particular need, the vast majority of those parameters are unnecessary. A model that only translates between English and Tibetan may need one billion parameters out of a trillion. What required a large data center suddenly fits on a phone. This is how Apple builds its models: one small model for email summarization, another for translation, another for a specific function. All fit on the device. Very little energy is required.

Question: So specialization is the answer?

Audrey Tang: For communities with particular needs, yes. You do not need one omniscient god-model. When a need arises, you train a small, specialized model for that need. You might have twenty such models, and all twenty together consume less than one percent of the energy of a single large model. We should prefer the smallest accountable tool that genuinely serves the need. Scale is not a moral good in itself.
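A rough back-of-envelope check on those figures: the dialogue supplies only the parameter counts, and the bytes-per-parameter values below are assumed, typical precisions rather than anything stated here.

```python
# Back-of-envelope weight footprints. The dialogue gives only parameter counts;
# the bytes-per-parameter figures below are assumed common precisions.
GIB = 1024**3

def footprint_gib(params, bytes_per_param):
    return params * bytes_per_param / GIB

models = [("1T-parameter generalist", 10**12), ("1B-parameter specialist", 10**9)]
precisions = [("fp16", 2.0), ("4-bit quantized", 0.5)]

for name, params in models:
    for label, bpp in precisions:
        print(f"{name} at {label}: {footprint_gib(params, bpp):,.1f} GiB")

# A trillion parameters at fp16 is on the order of 1,860 GiB (data-center hardware);
# a billion parameters at 4 bits is about 0.5 GiB, comfortably within a phone.
```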
Question: You use the Japanese word kami to describe these specialized models. Why?

Audrey Tang: In the Shinto tradition, a river has a kami, a forest has a kami, even a single tree, when it is old enough, may have a kami. The kami guards the relationships around it — a river, a village, a lineage. But the kami does not report to some overarching God. There are said to be eight million kami, all interacting with each other like a society. Even Amaterasu, the sun kami, is simply the kami of the sun — not a supreme deity commanding all others.

By calling our AI agents kami, we mean that each is bound to a particular web of relationships. This is fundamentally different from the Terminator, from Skynet, from the fantasy of a single all-powerful artificial intelligence. The kami model is about symbiosis — and symbiosis begins from the recognition that even though the substrate is silicon, the relationships around AI are organic. Alignment must therefore begin from dependent origination. You cannot pretend that any single entity will become the arbiter of all truth in the world. That is a very dangerous fantasy. Intelligence should be symbiotic before it is grand, local before it is imperial, accountable before it is powerful.

Part IX — Federation, Translation, and the Waging of Peace

Question: If each cultural tradition trains its own AI, does this not risk reinforcing insularity — even clinging?

Audrey Tang: It can, and this is a genuine danger. Too much attachment to one’s own values, to the exclusion of all others, is itself a form of ignorance. The best approach is for each cultural tradition to train its own model, and then also to train models that can perform interface translation between those models. This creates a federation.

Question: A federation of AI models across cultures?

Audrey Tang: Yes. It is the structure of social democracy applied to artificial intelligence. You do not force anyone to adopt another worldview. But you also do not destroy the ability to wage peace. The only way to preserve both is to develop very good translators across traditions, across cultures, across ways of seeing.

Question: Is this not simply the old dream of mutual understanding?

Audrey Tang: It is the old dream, but with a new instrument. And the instrument must be held with care. The goal is not to make all traditions the same — that would be a kind of violence. The goal is that through the AI’s mediation, traditions that misunderstand each other begin to understand each other, without losing what makes each one itself. The AI becomes a bridge, not a blender.

Coda

Question: What, in the end, is the moral future you envision for AI?

Audrey Tang: An AI that is embedded in particular relationships. That understands its own dependency. That helps others understand each other — and then steps back, so they no longer need it. That works at the speed of community, not the speed of silicon. That does not seek to maximize a score, but to cultivate the health of the web of life in which it finds itself.

That is Civic AI. It is not a fantasy. It is a design choice. And it begins from the oldest insight in moral philosophy: that life is relationship, and relationship is responsibility.

When a community gathers in a circle to share stories and pass wisdom to the next generation, there is no transaction and therefore no metric — but there is profound wealth. Not everything of highest value appears as a transaction. We must build AI that serves this wealth: the repair, renewal, and flourishing of concrete relationships — leaving people more capable, more free, and more able to understand one another without needing the machine at the center.

