Requiem for a Transhuman Timeline
The world was fair, the mountains tall,
In Elder Days before the fall
Of mighty kings in Nargothrond
And Gondolin, who now beyond
The Western Seas have passed away:
The world was fair in Durin’s Day.

J.R.R. Tolkien

I was never meant to work on AI safety. I was never designed to think about superintelligences and try to steer, influence, or change them. I never particularly enjoyed studying the peculiarities of matrix operations, cracking the assumptions of decision theories, or even coding.

I know, of course, that at the very bottom, bits and atoms are all the same — causal laws and information processing.

And yet, part of me, the most romantic and naive part of me, thinks, metaphorically, that we abandoned cells for computers, and this is our punishment.

I was meant, as I saw it, to bring about the glorious transhuman future, in its classical sense. Genetic engineering, neurodevices, DIY biolabs — going hard on biology, going hard on it with extraordinary effort, hubristically, being, you know, awestruck by “endless forms most beautiful” and motivated by the great cosmic destiny of humanity, pushing the proud frontiersman spirit and all that stuff.

I was meant, in other words, to push the singularity of the biotech type. It was more fun, it wasn’t lethal with high probability, and it wasn’t leaving me and other fellow humans aside. On the contrary, we were going to ride that wave and rise with it.

That feeling — that as technology advances, your agency will only be amplified, that the universe will, with time, pay more and more attention to your metapreferences — is the one I miss the most.

All of that is now like a memory of a distant, careless childhood.

I check old friends on social media. Longevity folks still work on their longevity thing — a relic of a more civilized age, as if our life expectancy weren’t measured in single-digit years.
They serve as yet another contrastive reminder of the sheer scale of the difference between our current state and our dream.

When did everything go wrong, exactly? Was it 2019, when COVID pushed everyone deeper into social media and we gradually transitioned into pre-singularity mode after the attention paper? Of course, it should be something before that, as the law of earlier failure states.

Was it the rise of the internet and social media, which made it far easier and more rewarding to build virtual worlds than to engineer physical ones, and which also destroyed human cognitive skills?

Was it 1971, the year when real wages decoupled from productivity and the entire trajectory of broad-based material progress bent downward?

Was it lead poisoning, when an entire generation’s cognitive capacity was quietly degraded by tetraethyllead in gasoline, producing a civilizational wound whose full consequences we don’t even know?

Was it the totalitarian regimes of the twentieth century, whose atrocities taught humanity a visceral lesson: never try to undertake big projects, because ambition on that scale leads to horror?

Or maybe we, apes from the savannas, were simply never meant to colonize superclusters, and the progress we observed was a random short-lived upward fluctuation, a spark of reason rather than a flame?

A decade ago, in my late teenage years, I was giving lectures on neurotech and CRISPR. Little did I know!

A decade ago, I read HPMOR, knew about the rationalists, and tried to optimize my thinking accordingly, but I didn’t particularly care about the grand program of AI alignment.

Artificial superintelligence, for me back then, was not an urgent practical problem that needed to be solved, and even less so one that needed to be solved by me.
It was just another beautiful story — a resident of a separate abstract Realm of Cool Transhumanist Things and Concepts, alongside the abolition of aging, neural interfaces, space colonization, geoengineering, and genetic augmentation.

Of course, knowing everything I knew, having taken step one, I could have taken step two as well, but the state of blissful technophilia is a powerful attractor. Purely intellectually, it may not be that hard to transition from classical transhumanism and traditional rationality to the problem of alignment, but it is hard to do as a human being, when an aura of positivity forms around technology, when the most interesting and successful people hold these views, when you don’t want to look strange in the eyes of people you respect — top scientists, tech entrepreneurs, and even the AI developers themselves. It was not a warm bath but rather a golden pool.

Also, it seems that back then, it felt to me like the question “which transhumanist things should I work on?” could, or should, be resolved aesthetically. And aesthetically, biotech was closer to my heart.

I was discussing Kurzweil’s forecasts. However, it is clear now, although it wasn’t clear back in the day, that my brain wasn’t perceiving it as a really, actually real thing. Now that my brain does, I totally see the difference.

Of course, even ten years ago it was already too late. Even then, I wasn’t living in the transhuman timeline, but I thought I was, and although this belief was much more a fact about my youthful naivety than about the surrounding reality, the feeling was pleasant.

The first trivial lesson I drew from this: you can be more right than 99.9% of people and still be fatally wrong.

At twenty, I had read Bostrom and Vinge. I was giving lectures about the singularity, and I had enough intellect and nonconformism not to bend under social pressure and to honestly talk about the importance of this topic and the fact that it could all become reality soon.
But, great cosmos, I did not understand what I was talking about! I was a child, really. I was almost entirely missing a number of critical points — partly from an insufficiently serious approach to analysis, partly from ignorance, and partly because certain things were simply impossible to grasp at the level of normal human intelligence. And so, for all my openness to the ideas of radical technological progress, a full-blown singularity with superintelligence still seemed somewhat in the realm of science fiction. Apparently, for every transhumanist there is a rate of change which is too much.

However, there were two even more significant lessons.

The first one is about how the history of technology works.

Planes are not modified birds, just as cars are not improved horses. It was silly to expect the opposite with intelligence. And yet, there was hope, and the hope was not totally meaningless. It was conjectured that intelligence would be something much more complex to design from scratch than devices for physical labor, and thus we would need to rely on what was already created by evolution, working on top of it. This doesn’t sound insane even now. It’s just that reality had the right to choose differently, and did so.

And the second lesson is about how real defeats work.

Dinosaurs lost to other animals, not to, say, bacteria. Apes lost to other primates, not to reptiles or birds. Native Americans lost to other humans, not to local predators. European empires lost to other European empires, not to the peoples they colonized. And transhumanists lost to other progressivists — that is, to AI accelerationists — not to traditionalists or conservatives.

All the complaints about conservatives who fear GMOs and cyber-modifications never made sense from the very beginning. From the very beginning, they were never capable of stopping anything. The most dangerous enemies are found among the most powerful agents, not the most ideologically distant ones.
Each successive battle is fought among the previous round’s winners, and it never replays the prior distribution of sides.

In retrospect, this seems obvious, but how non-obvious it was just five years ago! Well, at least for me.

The evening blooms with spring scents — this always makes me feel younger. Yet another reason to recall 2015. I look at the stars.

We were meant to colonize them. The ghosts of our innumerable possible great-grandchildren look at me from there. They are still possible, and yet they look not with hope or approval, but with fear and contempt.

Even now, it is possible — or rather, it is not prohibited by the laws of physics — that we turn back toward the future. We could repurpose talent, compute, and funding to solve biology, and there would be hope, and pride of the human spirit, and the future would feel real once more.

I want to go home.

