A Bayesian nightmare: Instagram and Sampling bias
Published on October 20, 2025 5:00 AM GMT

We often assume that if every individual part of a system is defensible, the system as a whole must be sound. This logic fails spectacularly when confronted with emergence, where complex interactions create outcomes far removed from the intent of any single part. We don’t realize the danger until it’s already here.

While the mechanism isn’t a simple linear “slippery slope,” the dynamic is just as treacherous. It’s a tyranny of small decisions: each interaction seems locally rational and harmless, yet the cumulative result is catastrophic. We discount the accumulating risk. At what point does the accumulation of “okay” inputs generate a “screwed up” reality? How do we evaluate where to place a Schelling Fence (a bright line we agree not to cross) when the landscape itself is shifting? This brings us to a modern, widespread example of this trap: the emergent effects of social media algorithms on our perception of reality.

The difficulty in answering these questions lies in the fact that many of the most influential forces today are not centralized manipulators but non-malevolent emergent systems. My canonical example is Instagram (specifically, the Reels/short-form video format).

This is NOT a social media hate post, but rather an attempt to better understand these negative emergent characteristics and to spark discussion about solutions. Let’s break down the mechanism.

PS1: This analysis presumes good faith from all actors, in order to demonstrate that the issue persists even without malicious intent. Introducing malevolent actors would only exacerbate the situation.

PS2: This analysis applies not only to short-form content but to most social media; short-form content simply makes the issue much worse.

Part 1: The Fuel (Shock, Assertion, and Selection Bias)

At its core, Instagram is a platform for free expression. It allows millions of creators to share their work, opinions, and daily lives. This is, for the most part, a net positive. It enables connection, creativity, and the free exchange of ideas. Within this framework, even a “hot take” on a political figure or an uneducated opinion is just a form of speech. The platform itself doesn’t judge the content; it merely provides the stage.

In an attention economy, the most shocking, emotionally resonant, and polarizing content wins. Nuance takes time to explain; shock is instantaneous. The algorithm, blind to the truth value of the content, sees the spike in engagement and promotes the shock and the assertion. This creates a massive selection bias: the most extreme or shocking anecdotes are preferentially surfaced because they garner the most engagement.

Consider the content that often surfaces: videos of random road-rage incidents, news anchors delivering hateful commentary (often without evidence), or an influencer giving a confident, uneducated “hot take” on complex geopolitical issues.

Most such posts are what I call “observatory/awareness posts.” The idea is to inform people about something bad happening in the world, in order to muster social muscle for online activism and/or to spread awareness about the issue. Both of these are genuinely good causes and real advantages of social media!

Part 2: The Optimization Landscape (The Algorithm and the Brain)

The fundamental rule governing what you see on social media is simple: the algorithm optimizes for attention. It observes what you watch, what you linger on, and what you engage with, and then it gives you more of the same.

This isn’t inherently malicious; it’s a logical business metric. If you spend more time on the app, you see more ads. The algorithm equates your attention with your preference, which is a reasonable, if flawed, assumption.
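To make that objective concrete, here is a minimal sketch of an engagement-driven ranker. It is purely illustrative; the fields, weights, and the `watch_fraction` signal are my assumptions, not Instagram’s actual system. The point is what the score contains (predicted attention) and what it omits (truth value, representativeness).

```python
from dataclasses import dataclass, field

@dataclass
class Item:
    topic: str
    shock_value: float  # 0..1: how emotionally charged the clip is

@dataclass
class UserHistory:
    # Fraction of each clip the user actually watched, keyed by topic.
    watch_fraction: dict = field(default_factory=dict)

def engagement_score(item: Item, history: UserHistory) -> float:
    """Hypothetical score: prior engagement with the topic times shock value.
    Note what is absent: no term for accuracy, base rates, or representativeness."""
    topic_affinity = history.watch_fraction.get(item.topic, 0.1)  # small default for novel topics
    return topic_affinity * (0.5 + item.shock_value)

def rank_feed(candidates: list[Item], history: UserHistory, k: int = 5) -> list[Item]:
    # Show the k items predicted to hold attention the longest.
    return sorted(candidates, key=lambda it: engagement_score(it, history), reverse=True)[:k]

# One lingering watch is enough to tilt the whole feed toward that topic.
history = UserHistory(watch_fraction={"regional-conflict": 0.9, "cooking": 0.3})
candidates = [Item("regional-conflict", 0.95), Item("cooking", 0.2), Item("regional-conflict", 0.8)]
print([it.topic for it in rank_feed(candidates, history)])
# ['regional-conflict', 'regional-conflict', 'cooking']
```

Nothing in this objective penalizes a feed that is wildly unrepresentative of reality; that omission is the whole story of the sections that follow.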
We are familiar with the arguments about how short-form video hijacks dopamine circuits and reduces attention spans. But a deeper, more insidious mechanism concerns me: the systemic way these platforms distort our Bayesian priors.

Part 3: The Cognitive Update Mechanism (Predictive Coding on Autopilot)

How do we form beliefs? A useful model is predictive coding (related to the Free Energy Principle). In simple terms, our brain is constantly making predictions about the world and then updating its internal model based on new sensory information. When we encounter new information, we can either:

1. Modify the world (or our perception of it) to fit our prior beliefs, or
2. Modify our prior beliefs to fit the new information.

When scrolling through a feed, we are bombarded with a stream of novel “observatory posts”; each one is a data point. Since we can’t act on this information directly, we default to the second option: we update our priors. Each “innocent observation” subtly shifts our model of the world.

In the context of Instagram Reels, the velocity and volume of information make critical analysis nearly impossible. The brain is wired to minimize surprise (prediction error) in the most efficient way possible. Engaging critical thought (System 2) is metabolically expensive and slow. In a high-velocity, low-friction environment like Reels, the least expensive way to resolve prediction error is to passively accept the observation and update the prior (System 1).

If you see a viral video of “a Kannada auto driver slapping a girl,” it is presented as an observation and perhaps a call for activism. Because critical evaluation is too slow, the brain defaults to the efficient path: slightly updating its model of the world to accommodate this new “fact.”
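To see how little “evidence” this takes, here is a toy Bayesian update. The numbers (prior strength, clip count) are assumptions chosen for illustration, not a claim about anyone’s actual psychology: the belief “what fraction of such interactions turn hostile” is modelled as a Beta distribution, and each passively accepted Reel is treated as one observation of a hostile interaction.

```python
# Toy model: belief about P(interaction is hostile) as a Beta(alpha, beta) distribution.

def beta_mean(alpha: float, beta: float) -> float:
    return alpha / (alpha + beta)

# A weakly held, roughly calibrated prior: ~1% of interactions hostile,
# backed by the equivalent of 100 remembered everyday interactions.
alpha, beta = 1.0, 99.0
print(f"prior estimate of hostility:  {beta_mean(alpha, beta):.1%}")  # 1.0%

# Six viral conflict clips, each passively accepted as an independent observation
# of a hostile interaction -- and zero clips of the boring, peaceful interactions
# that never get filmed or recommended.
alpha += 6
beta += 0
print(f"posterior after one evening:  {beta_mean(alpha, beta):.1%}")  # ~6.6%
```

The arithmetic of the update is perfectly sound Bayesian reasoning; the error is upstream, in treating algorithmically selected clips as if they were random samples of reality.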
Part 4: The Emergent Cascade (A Machine for Base Rate Neglect)

This is where the system becomes lethal. The interaction between the attention-optimizing algorithm and the cognitive update mechanism creates a devastating feedback loop. Let’s trace the cascade using the previous example:

1. The Initial Exposure: You see the video of the auto driver and the girl. You linger on it because it is shocking, or perhaps because you are a north Indian about to move to Bangalore (a city in south India) for a job (like me)!
2. The Algorithm Observes: The algorithm notes your engagement. Operationally, you “liked” this content.
3. The Feedback Loop: The algorithm, optimizing for attention, shows you more content related to supposed regional conflict in Bangalore.
4. The Cognitive Update: You see five more videos showing similar conflicts. Your brain, minimizing surprise, stops seeing these as isolated anecdotes and starts seeing a pattern. Your prior shifts: “There is significant hostility between locals and North Indians in Bangalore.”
5. Polarization: You are now primed to engage more with this topic, reinforcing the algorithm’s decision and pulling you further into an echo chamber.

Not only have you manufactured a polarization that perhaps didn’t exist (you might now be hostile towards locals in Bangalore!), you will also exhibit the Baader-Meinhof phenomenon, or frequency illusion (you start to notice something more often once you have become aware of it), which in turn feeds your confirmation bias. Boy, that’s a vicious cycle with an innocent, well-meaning start!

You are falling prey to a specific form of availability bias, sometimes related to the Chinese Robber Fallacy. The Chinese Robber Fallacy (or, more generally, base rate neglect) occurs when one focuses on the absolute number of occurrences rather than the percentage or base rate. You have seen six videos of conflict (the absolute number), which feels significant. But you have no visibility into the millions of peaceful interactions that occur daily (the base rate). The denominator is simply hidden from you, making the numerator seem overwhelmingly important.
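A back-of-the-envelope calculation makes the gap concrete. Every number below is invented for illustration: a true hostility rate of 1 in 10,000 interactions, and a feed that surfaces shocking clips roughly 50,000 times more readily than peaceful ones.

```python
# Purely illustrative, assumed numbers.
true_hostility_rate = 1e-4        # 1 in 10,000 interactions turns hostile
daily_interactions  = 1_000_000   # interactions happening in the city each day

p_clip_if_hostile  = 0.10         # shocking events get filmed, shared, and boosted
p_clip_if_peaceful = 0.000_002    # peaceful ones almost never surface

hostile_clips  = daily_interactions * true_hostility_rate * p_clip_if_hostile
peaceful_clips = daily_interactions * (1 - true_hostility_rate) * p_clip_if_peaceful

feed_hostility = hostile_clips / (hostile_clips + peaceful_clips)
print(f"true base rate of hostility: {true_hostility_rate:.2%}")  # 0.01%
print(f"hostility rate in the feed:  {feed_hostility:.0%}")       # ~83%
```

A scroller who updates on the feed as if it were a random sample ends up with a belief three to four orders of magnitude away from the base rate, which is exactly the numerator-without-denominator failure described above.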
Conclusion: The Innocence of the Parts, the Danger of the Whole

Individually, every part of this system seems defensible:

- Users are just seeking entertainment and information.
- Creators are exercising free speech and trying to build an audience within the rules of the platform.
- The Platform is pursuing a rational business model by optimizing for user attention.

Yet the emergent outcome is profoundly dangerous. The resulting crisis demands intervention, but placing a Schelling Fence requires navigating fundamental constraints where local incentives conflict with global epistemic health. We face several intractable trade-offs:

- The Velocity Trap (Friction vs. Engagement): The high velocity and volume of short-form content bypass critical evaluation, forcing rapid, passive updating. The necessary countermeasure is cognitive friction. However, friction directly contradicts the platform’s core optimization target: maximizing frictionless engagement.
- The Salience Gap (Numerator vs. Denominator): The algorithm maximizes the visibility of shocking anecdotes (the numerator) while hiding the base rate (the denominator). How do we make statistical reality more salient than emotional resonance, when the denominator is inherently less engaging than the numerator?
- The Agency Paradox (Stated vs. Revealed Preference): Restoring user agency seems intuitive, but the optimization exploits the gap between stated preferences (what we say we want) and revealed preferences (what we actually watch). How do we empower user agency when the optimization engine leverages the very cognitive biases users might wish to avoid?

These constraints ensure that individual actors are caught in a classic multipolar trap, or Moloch. This is the Molochian dynamic: if any single platform unilaterally shifts its optimization target away from raw attention, it risks being rapidly outcompeted by those that do not. Escaping this race to the bottom requires solving the underlying coordination problem.

The “innocence” of the parts becomes irrelevant because the system itself incentivizes and demands this behavior. If an emergent system, despite the innocence of its components, predictably leads to the degradation of our ability to perceive reality accurately, where do we draw the line? How do we design systems that introduce the necessary friction and epistemic transparency while navigating these constraints? What are your ideas?

(I realise the irony that this post is perhaps yet another humble but hopefully slightly educated “in my opinion” post -_-)

See you in the comments!