Opinion

The Anti-Singularity

[Image: “Heuristic solution to Doom”, generated by GPT-5.4]

In his blog post, Jiayi Weng proposes “the next paradigm” for Machine Learning: rather than trying to find beautiful abstractions for general-purpose learning, we simply take advantage of LLMs’ ability to tirelessly iterate on complex designs, building heuristics that solve whatever task is at hand.

I do not know whether this next paradigm is indeed the future (indeed, I hope not), but I think it is at least worth considering the ramifications if it is.

The Singularity

If you are reading this post, you are undoubtedly already familiar with the concept of the Singularity. As computers become better at learning, they eventually reach a level known as General Purpose AI (GAI), where they are able to perform all of the intellectual tasks that humans can do, but faster and cheaper. This leads to Recursive Self-Improvement (RSI), where the AI improves itself, since one of the things humans are able to do is build GAI. Eventually RSI produces a Super-Intelligent AI (SAI): a single AI with godlike powers, able to solve any conceivable problem, finally and truly defeat Moloch, and usher in an age of unprecedented wealth and prosperity, a sort of golden age for Humankind in which, having completed our last invention, we can at last relax and enjoy the fruits of our labors.

While believers in the Singularity frequently warn of its perils (if we don’t seed the SAI correctly, it may turn us all into paperclips), belief in the Singularity is fundamentally a Utopian vision. Even a world turned into paperclips is perfectly turned into paperclips. Intelligence is a single, measurable concept, and once it reaches its final form all will be arrayed beneath its command.
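Contrast this with the mechanical core of the heuristic paradigm from the introduction: a propose-score-keep loop. Here is a minimal sketch (the task, the scoring function, and the proposal step are all my own stand-ins; in Weng’s version an LLM would propose the candidates):

```python
import random

# A minimal sketch of the "iterate heuristics" paradigm: propose a candidate,
# score it against the task at hand, keep whatever scores best. Random
# mutation stands in for the LLM proposer here.

def score(weights, data):
    """Hypothetical task: how well a linear heuristic fits the data
    (negated squared error, so higher is better)."""
    return -sum((sum(w * x for w, x in zip(weights, xs)) - y) ** 2
                for xs, y in data)

def iterate_heuristics(data, dims=2, steps=2000, seed=0):
    rng = random.Random(seed)
    best = [0.0] * dims
    best_score = score(best, data)
    for _ in range(steps):
        # Propose a small random tweak to the current best heuristic.
        candidate = [w + rng.gauss(0, 0.1) for w in best]
        s = score(candidate, data)
        if s > best_score:  # keep only improvements
            best, best_score = candidate, s
    return best

# Toy data generated by y = 2*a + 3*b; the loop should rediscover it.
data = [((a, b), 2 * a + 3 * b) for a in range(5) for b in range(5)]
w = iterate_heuristics(data)
```

Nothing in the loop understands the task; it only keeps what scores better. That blindness, multiplied by machine speed, is the engine of everything that follows.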
The Anti-Singularity

The anti-singularity is a future in which the utopian visions of singularity theorists do not merely end badly (as with a paperclip maximizer) but cannot come to pass at all, because they rest on shaky philosophical foundations.

In the anti-singularity, there is no such thing as a General Purpose Intelligence. Or, more precisely, the only GAI that is possible is Darwin’s blind watchmaker. In the world of the anti-singularity there is no deeper underlying theory of intelligence. Things just happen because they happen. The form best suited to surviving on Earth circa one million years ago happens, by some amount of luck, to also be able to make transistors and rocket ships, but it never reaches beyond that to the true depths of cosmic knowledge.

We don’t need to wonder what a world of anti-singularity would look like; we already have two examples at hand: biology and discrete mathematics.

Biology

Biology is a notoriously difficult field to work in. Despite the fact that we have a readily available corpus to learn from (all of nature) and the gobsmackingly huge amounts of wealth available to whoever claims the prize (self-replicating solar farms, immortality, computation too cheap to meter), progress in the field has been slow and uneven. The field continues to be dominated by high-stakes trial and error. In a world where AI startups are now worth trillions of dollars, comparable biology startups are barely a blip on the radar. Drug trials can easily end in billion-dollar failures. We have yet to replicate even the simplest organisms digitally.

The reason is that computers were designed by humans to be easy to understand and control. Biology, by contrast, is the result of billions of years of purposeless evolution.
This slow accumulation of “what works” via tinkering results in systems that are incredibly resistant to the methods of modern science. This is not to say there has been no progress (we have made great strides), but progress is hard-won, piece by piece, and rarely generalizes.

Discrete Mathematics

[Image: Rule 30, an example of a simple system with complex behavior]

Nearly half a century ago, Stephen Wolfram described an astonishing phenomenon. Many of the patterns that we see in nature are the product of neither intentional design nor mere optimization. Rather, these patterns arise at a fundamental level from simple rules in the world of discrete mathematics. Wolfram predicted that this discovery would usher in a New Kind of Science. By understanding the basic patterns underlying all of nature, Wolfram claimed, we would be able to reduce all of science to computation based on a simple set of rules.

[Image: Simple rules give rise to computational irreducibility, from A New Kind of Science]

Unfortunately, Wolfram’s views have yet to have the impact on science that he anticipated. Physics has not been reduced to a simple set of rules. More concerningly, for most questions of the type Wolfram is interested in, there is no efficient general-purpose solution. This is due to a phenomenon called computational irreducibility. As with biology, there are simply too many details with no overarching structure, and so the best we can do is simple trial and error to see what happens.

So what?

Suppose we live not in the world of the Singularity (where a single SAI brings order to the universe) but in the world of the Anti-Singularity (where almost all systems are described by a complex set of interactions that can only be understood via trial and error). What does this mean?

It does not mean that AI will not be powerful.
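Rule 30 makes the irreducibility point concrete: the entire rule fits in one expression, yet the pattern it produces from a single live cell admits no known shortcut. A minimal Python sketch (my own illustration, using Wolfram’s convention that the bits of the number 30 encode the rule table):

```python
# Rule 30: each new cell depends only on the three cells above it.
# The rule number 30 = 0b00011110 encodes the output bit for each of
# the 8 possible three-cell neighborhoods.

def rule30_step(row):
    """Apply one step of Rule 30 to a list of 0/1 cells (zero boundary)."""
    padded = [0] + row + [0]
    return [
        (30 >> (padded[i - 1] * 4 + padded[i] * 2 + padded[i + 1])) & 1
        for i in range(1, len(padded) - 1)
    ]

# Start from a single live cell and watch complexity emerge.
width = 31
row = [0] * width
row[width // 2] = 1
for _ in range(8):
    print("".join("#" if c else "." for c in row))
    row = rule30_step(row)
```

As far as anyone knows, the only way to learn what row N looks like is to compute all N rows before it; that is computational irreducibility in miniature.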
Indeed, if the best we can do is to try many possibilities, then the fact that AI can try millions of possibilities in the time it takes a human to try one will make it exceedingly powerful.

It does mean that the shape of the AI-Alignment problem is quite different. Rather than building a single SAI and trusting our future (good or bad) to it, we will instead find ourselves tending a diverse garden of different AIs, each optimized to a different environment. In this future, Humanity does not simply build a Last Invention and then enjoy a golden retirement. Rather, the future is filled with an endless series of new and unique challenges that we must adapt (and dare I say evolve) in response to. The problem becomes less “we chose the wrong optimization function and now the SAI has turned the whole universe into paperclips” and more “Agent58adc9862bd08b56284eadb6bede52c1a033b03306b4333105b07435e55b7339 is producing an anomalously high number of paperclips; somebody needs to go down and figure out what went wrong.” Humans become gardeners atop a new, wild ecosystem of heuristic optimizers. These heuristic optimizers are less dangerous (because they are adapted to a particular local set of circumstances) but less predictable (because computational irreducibility says there is no simple explanation of what goes wrong).

How likely is this to happen?

I don’t know. Personally, I am optimistic. I think p(good singularity) > p(anti-singularity) > p(bad singularity). What do you think?

I’m worried, what should I do about this?

The good news is: there’s nothing you can do. Whether we live in the world of the Singularity or the Anti-Singularity is a fact about the base reality in which we live, not something that can be influenced by human actions. There may, however, be things you can do to prepare yourself in case the anti-singularity comes to pass.
For example, strategies like “I should spend all of my money before the singularity, because (good or bad) money won’t matter afterward” might not apply if the anti-singularity comes to pass. By contrast, the unique set of heuristics that humans have accumulated over 3.5 billion years of evolution may prove more valuable in the world of the anti-singularity. In the world of the anti-singularity, diversity, robustness, and adaptability matter more than getting it right the first time.

Questions? Comments?

Questions I would particularly like answered:

- What metrics can we use to tell ahead of time whether we are in the world of the Singularity or the Anti-Singularity?
- What actions are beneficial in both futures? Only in one of them?
- Are there any AI-alignment techniques that are obviously applicable in one future but not the other?
- What does all of this have to do with the current mess in mathematics?

