How Big Tech Becomes Ungovernable

Abstract: This post is an introduction to a concept I call “tech extensity”—when a company, product, or tool becomes so deeply integrated across multiple system layers that removal becomes practically impossible. Tech extensity doesn’t require a monopoly, or even superior performance. Unlike classical monopolies (which dominate single markets), extensive systems achieve lock-in through spread rather than mastery. I argue this creates a coordination problem: individual actors (governments, users) face high switching costs and regulatory burdens while the companies themselves face low expansion costs, leading to a ratchet effect where tech power accumulates irreversibly.[1]

Examples include Google (82% of the market in search, 66% in web browsers, and 45% in email), SpaceX (85% of US space launches), and X/TikTok (identity lock-in despite clear quality degradation). Anthropic’s Claude is getting there (I discuss this here), and Amazon / Flock are trying (here). This represents a “too big to govern” failure mode distinct from “too big to fail.”

Related:
Robert Greene’s 48 Laws of Power (abebooks | bol.com)
Scott Alexander’s Meditations on Moloch (https://slatestarcodex.com/2014/07/30/meditations-on-moloch/)
Andrew Critch’s Tech Company Singularities (https://www.lesswrong.com/posts/ezGYBHTxiRgmMRpWK/tech-company-singularities-and-steering-them-to-reduce-x)

H.R. Giger, Bio-mechanical Landscape (1976), acrylic on paper, 200 x 100 cm, © Estate of H.R. Giger

Intensity vs. Extensity

First, we need to start with some definitions, which are crucial to getting to the heart of my thesis: Intensity vs. Extensity.[2] Intensity occurs when a company or product becomes indispensable based on its quality or uniqueness, or, in the case of a person, their deep mastery or skill in a subject or field.
Michelangelo was intensive—his mastery across different artistic domains made him a sought-after artisan during the Renaissance. Famous three-Michelin-star restaurants, like The French Laundry in the US or Noma in Denmark, also have intensity. Their uniqueness explains why it’s nearly impossible to get a reservation unless you plan months in advance. Or take Shohei Ohtani, who has the rare quality of being both a phenomenal pitcher and batter. That gives Ohtani a ton of leverage within the realm of baseball, just as Michelangelo had in the world of art.

The thing with intensive systems is that they’re usually impermanent. Athletes retire. Chefs hang up their whites. Technology improves, and of course, products and services enshittify. So, while intensity may allow a company or person to become temporarily dominant and powerful within a market, that power is often short-lived. By contrast, I argue that extensity is where the real power is.[3]

Extensity describes something broad in size or scope that becomes deeply entrenched in a system. Unlike intensity, extensity is about spread, not mastery. You become extensive not necessarily by being the best, but by spreading out and becoming indispensable to the system itself. In 48 Laws, Henry Kissinger was cited as an extensive force in geopolitics, diplomacy, and international relations. He was a fixture across administrations, and he remained a power broker long after he left politics. Here’s Greene’s take in Law 11:

Henry Kissinger managed to survive the many bloodlettings that went on in the Nixon White House not because he was the best diplomat Nixon could find—there were other fine negotiators, and not because the two men got along so well: They did not. Nor did they share their beliefs and politics.
Kissinger survived because he entrenched himself in so many areas of the political structure that to do away with him would lead to chaos.

Some of you might be thinking to yourself: ‘Hey idiot, none of this is new, we’ve already got a term for this when it comes to businesses: monopoly.’ After all, a monopoly represents complete control or dominance within a market. Horizontal monopolies give organizations the power to set the price of goods or services, dictate what is made available to customers, and create barriers to entry for potential competitors.

But horizontal monopolies, like extensive humans, aren’t guaranteed to last. Case in point: throughout most of the 20th Century, AT&T held a near-monopoly in telecommunications, cable television, and related professional services, before it was broken up in 1982. Microsoft was extensive in the browser, office productivity, and operating system markets (they still are, to a lesser degree, with Windows and Office 365), so much so that the US government attempted (and failed) to pull an AT&T Part 2 in the 90s.

Still, when I think about technological extensity, it feels bigger than even a traditional monopoly. For one, I don’t think it necessarily requires that a company reach technical “monopoly” status at all. All that extensity needs is deeply rooted integration within the system in such a way that removal becomes effectively impossible without leaving major gaps behind. When I say “the system,” I’m referring not just to software, networks, and infrastructure, or financial institutions and governments, but to everything we come to depend on that helps keep society functioning.

This idea first materialized in the financial sector with the bailouts during the 2008 financial crisis. If a bank is “too big to fail,” that’s just a catchier way of saying that bank has become entrenched in the financial system.

We humans rarely learn from our mistakes, and so we’re starting to see this more and more with Big Tech.
Take Google, for example: Google commands 82% of the market in search, 66% of the market in web browsers, and 45% of the market in email, despite loads of competition in each product area.[4] Yet, they’ve successfully dodged the monopoly moniker because legitimate competitors still exist.

And yet, people have been lamenting the continual decline of Google Search for years, and regularly complain that Chrome is a bloated, ad-laden data vampire.[5] Most everyone I know has a Gmail account, even if they loudly proclaim that they hate Google. To me, this indicates that we’ve come to rely on these products through a combination of network effects, habituation, and inertia, to the point that they’re part of the internet itself.

Source: Business of Apps: Google Statistics 2026

I’m also noticing this trend start to develop at a literal planetary scale when it comes to SpaceX’s reach. SpaceX’s evolution from a cool space company to potential “everything company” for Elon Musk should freak people out way more than it does, and yet, it doesn’t. SpaceX was responsible for 85% of all space launches in the United States. This one company launched almost twice as many orbital missions as China did in 2025. Starlink (which is part of SpaceX) alone made up 123 of SpaceX’s 165 launches in 2025, and lofted more than 3,000 Starlink satellites into orbit as part of the company’s massive 11,000-satellite mega-constellation. That’s 11,000 satellites out of a total of 15,644 man-made objects in space right now.[6]

Meanwhile, over the span of what seemed like a long weekend, Musk managed to merge SpaceX with his AI firm xAI with nary a raised eyebrow from regulators. Musk’s other company, Tesla, invested $2bn in xAI in January.
This is all part of his larger efforts to put data centers in space and colonies on Mars, and to usher in an era of “amazing abundance”.

Now, I can’t predict whether Musk will ultimately be successful, but what his X-empire (xAI, SpaceX, Tesla) may very well succeed at is finding newer, bigger, and bolder ways to make Musk and his companies vital and necessary parts of everything.

This means that one company, nay, one man, with an estimated net worth somewhere in the neighborhood of $690-852bn, has amassed, and continues to amass, enough power, connections, resources, and wealth that he can not only ignore consequences, regulatory or otherwise, but also affect geopolitical outcomes by taking his toys away, or by cajoling governments to cut off funds to programs he doesn’t like or find value in. Don’t take my word for it—ask the Ukrainians whose Starlink access Musk has repeatedly restricted during the war, or the 550,000 children Musk and DOGE may have indirectly killed by defunding USAID.

Too Big to Fail?

Here’s a question: What happens when extensive tools or companies fail? What happens to society if we lose access to Gmail or Starlink, if AWS or Azure die, or if the AI bubble bursts abruptly? How easy will it be for us to collectively recover now? What if we keep building these tools into more of our lives?

To answer this question, we need to talk about lock-ins. And no, I’m not talking about the fun kind at pubs in Dublin. I’m talking about vendor & collective lock-ins.

Vendor lock-in is easy to see: So much of our lives are built around using technical tools supplied by a handful of companies to communicate. For many reasons (familiarity, habit, self-interest, and in my case, marital harmony) I’m primarily a Google user—I use an Android phone, Gmail, Google Calendar, and Google Drive. Many of my clients use Google Workspace. I even use Gemini and NotebookLM (though not exclusively).
These tools have crept into my life and I’ve grown incredibly reliant upon them all working together. I’m reliant not because there aren’t options, but because the very act of switching creates friction and, like a diet, can be extremely hard to maintain over time.

Last year, for example, I tried moving all of my documents over to Proton Drive, because Google Drive isn’t end-to-end encrypted. Plus, I wanted to see if I could. The migration was painful and incomplete. Many files were only accessible in Google. I also had to give up after a few months because I was limited in what I could do in Proton Drive. Want to access a document shared on Drive by someone? Good luck with that—you’ll need a Google account. Trying to save that document on Proton? Fat chance—Proton can’t read (or even store!) .gdoc files. And you can forget about cross-platform collaboration. Some of this was due to Proton Drive being painful to use, but most of it was due to the fact that everybody else uses Google.

And that leads to the second type of lock-in: collective, or identity, lock-in. The cost of leaving Google (or Apple, or Meta, etc.) isn’t just inconvenience; it’s also about shattering the identity, friendships, and connections that have evolved around ‘being online’. This is most often cited in relation to social media, but it’s starting to creep in with AI. Resistance is increasingly becoming, to quote the Borg, futile.

And there are social costs. For example, during the pandemic I tried to actively stop using WhatsApp, but found it was essentially impossible in Ireland (where I was living at the time), because WhatsApp and Facebook had at some point become the de-facto messaging platforms and communications channels in the whole of the country. Partly this is because the state of SMS and MMS in Ireland is abysmal, but the root cause is irrelevant.
It’s hard to fight Big Tech when you’re isolated in your house during the pandemic and can’t talk to most of your friends because of network effects.

Our tech tools, and the algorithms that drive them, have helped to define who we are. Platform-mediated reality is creating incompatible epistemic communities and belief systems, which is to say, people are increasingly likely to interpret the same event wildly differently based on where they interact online. We all know that more of what we read and who we follow is being decided for us by recommendation engines and opaque algorithms.

But it’s not just that: research reveals striking differences in opinion about major news events based on a user’s platform-of-choice (X, cable TV, Facebook, podcasts, etc.), while charitable giving studies show how fundamentally different priorities across political ideologies have intensified. Americans, in particular, increasingly inhabit entirely different informational spheres, which, in turn, shape individual identities.

AI, of course, isn’t helping any of this. For example, a recent Syracuse University study found that 27% of users formed deep emotional bonds with OpenAI’s GPT-4o, with some people literally in mourning when OpenAI retired the chatbot earlier this year. This kind of psychological entrenchment leads me to worry that the biggest companies are not only too big to fail, but also increasingly too big to govern.

Too Big to Govern?

We’ve already seen a hint of this in the TikTok ownership drama. First there was the 14-hour ban in January 2025, which led to such a backlash by users (and politicians who use TikTok) that the Trump administration hit the pause button on a policy the administration had championed in Trump’s first term. And while it’s true that OG TikTok is now effectively dead, users can’t seem to quit the reanimated, Oracle-controlled zombie that replaced it.
Here’s CNBC’s take:

Survey data from market intelligence firm Sensor Tower show that, despite a surge in deletions following the announcement of TikTok’s U.S. joint venture on Jan. 23, the average number of TikTok’s daily active users in the U.S. remains around 95% of its usership compared to the week of Jan. 19-25.

SimilarWeb data indicates even fewer defections. According to their January 2026 data, TikTok shed only 0.76% of its US user-base between November 2025 and the end of January 2026.

Now, while I’ll concede that losing anywhere between 1-5% of active users is still losing, it’s still indicative of a larger trend: most people are happy to stick around no matter who’s calling the shots. They’ve built at least some part of their identity and habits around TikTok, no matter which billionaires actually run the show. So, the government might be able to change who “owns” TikTok (though ByteDance still maintains a 20% stake), but it can’t change what TikTok is or break its hold on users. That’s the difference between regulating a monopoly and trying to govern an extensive system.

Oh, and apropos of nothing in particular.

To me, this is extensity in action.

Moloch, Agency, and the Race to the Bottom

I recently read Scott Alexander’s Meditations on Moloch. Alexander attributes our broken, deeply dysfunctional system to Moloch—the Carthaginian demon god who doubles as the personification of industrialization in Allen Ginsberg’s famous work Howl and Other Poems. Why is the system so bad? they ask. Moloch!

The implicit question is – if everyone hates the current system, who perpetuates it? And Ginsberg answers: “Moloch”. It’s powerful not because it’s correct – nobody literally thinks an ancient Carthaginian demon causes everything – but because thinking of the system as an agent throws into relief the degree to which the system isn’t an agent.

Alexander later reminds us that Moloch is essentially us. The agency isn’t in the system itself; it’s what we build into the systems we create.
And even though he wrote this in the pre-GPT ancient times (2014), the system-as-agent metaphor is even more relevant when applied to the literal AI agents of today. But this agency, and the modern-day Moloch we’re up against, is also embodied in the Big Tech race-to-the-bottom mentality, and the willingness to sacrifice values, morals, and accountability, just as the Punics sacrificed so many children. It’s in the mindset of taking any risk just to be first, damn the consequences, and in the willingness of governments, regulators, and people with power to sit by and just let it happen.

Once one agent learns how to become more competitive by sacrificing a common value, all its competitors must also sacrifice that value or be outcompeted and replaced by the less scrupulous.

Now, Scott was referring to agents in the classical sense here: entities or individuals who act, exert power, or produce independent effects, usually (but not exclusively) on behalf of another. But there’s nothing that restricts this to human or even corporate agents. To me, it seems entirely plausible that some of the technical systems we develop today are themselves becoming agentic, by producing effects and exerting some degree of power over us on behalf of someone else. I’m not quite at the level of asserting (as my learned friend Mahdi Assan has) that “algorithms” generally have this property, but I don’t think he’s wrong if one considers “algorithms” collectively, i.e., as part of a larger system or set of systems and tools working to accomplish goals on behalf of their creators.[7]

In a normal, healthy capitalist system, customers, shareholders, and regulators decide with their wallets and their rules who lives and who dies. Fit, beneficial, lawful, and productive companies survive; unfit, unlawful, or unproductive companies go bankrupt or otherwise cease to operate.[8] And historically, this has mostly been true. Millions of bad companies have gone bust.
A smaller number of firms were broken up, forced to restructure, or otherwise regulated into changing their behavior.

But we’ve never faced capitalism in a world where a handful of companies have managed to amass the level of power and wealth that exists today, with the ability to engineer systems so intertwined and spread across so much of our lives. The technology on the market today is becoming too big to control.

Right now, there are no real barriers—no meaningful bulwarks or disincentives to stop what appears to be a handful of men from essentially owning all of us. Musk’s dream of “amazing abundance” fails to answer an important question: amazing abundance for whom?

There’s no accountability either, because everyone with the power to actually do something is too busy using the tools they’ve sworn they’ll regulate. Yes, we’ll get a few token fines, or threatened actions here and there, but that’s part of the theatre. Yes, the companies might pretend to be chastened for a time, but that will only teach them to be less obvious about their intentions.

There will always be talk about content moderation, or banning Facebook, or X, or TikTok, or regulating Google, Apple, Amazon, or maybe even SpaceX, but nothing meaningful is likely to come of it, because why would it? How could it? In truth, regulatory responses seem to fall into four camps:

1. YOLO, let the planet burn (the US);
2. pearl-clutching and regulating by press release through a handful of token fines that sound impressive but aren’t, because the regulators fear the consequences (the EU, Brazil);
3. developing government-run corporate counterparts (China); or
4. quietly ignoring the problem and hoping a bigger power will fix it (most of the rest of the world).

Some of you may respond, “But there is enforcement against big tech — just look at Europe and the GDPR.”

Fun Fact: Ireland has levied over €4.04 billion in fines against Big Tech companies over the last six years, primarily against Meta.
Of that total, just €20 million has been collected, according to a January 2026 FOI disclosure filed by Ken Foxe. Most of the holdup related to a court case brought by Meta and its subsidiary WhatsApp, who sought to annul the fines.

Fun Fact #2: The EU Court of Justice sided with Meta, who challenged a €225 million penalty levied by the European Data Protection Board and the Irish DPC.[9] Fines only work if they’re enforced and collected, but if the companies have captured the enforcement mechanisms (or can tie things up in litigation for long enough), they’re little more than theatre and bluster.

Now ask yourself: what will this situation look like if someone like Musk or Bezos actually succeeds and takes this whole affair interplanetary?

We’re already seeing how Big Tech influences governments and shapes narratives. But just imagine this in five or ten years. Imagine a multi-trillion-dollar SpaceX, Google, Amazon, Meta, Oracle, or Microsoft (or a consortium of them), bolstered by super-intelligent AI systems, effectively acting like nation-states. It’s all well and good to have laws, but if a handful of corporations become effective states unto themselves—suppliers of the information, infrastructure, energy, technology, supply chains, and even the money—what even are laws at that point?

And while the US is arguably a lost cause (and will continue to be so for some time), over here in the EU, regulators are still framing things in the context of classical monopolies and anti-competitive behavior. We’re still trying to impose old rules on entities that are becoming so integrated into the system that they are effectively ungovernable.
We’re all still using Microsoft, Google, Apple, Facebook, Instagram, X, and OpenAI because Europe has few options to replace them.

See, unlike the AT&Ts and Standard Oils of the past, a handful of companies are controlling the informational substrate—the algorithms and engines that shape what we see, who we talk to, and how we understand reality. SpaceX, Amazon, Microsoft, Nvidia, Oracle, and Google control the infrastructure that props up the internet. OpenAI, Anthropic, Google, and Meta control the AI. Most of these companies, plus Oracle/TikTok, control the media. Together, they’re integrated into our identities in ways that make them fundamentally harder to disentangle from. We’re all worried about some super-sentient AI coming around the corner and putting us out of work, and that’s probably a valid concern. Meanwhile, we’re (un)happily trusting a handful of companies with everything and giving them plenty of opportunity to extend their reach even further. The US, and to a large extent Big Tech, is leading a race to the bottom, and the leaders of the world are basically shrugging and going along with it, hoping someone else will fix the problem.

Right now, we still have a choice. But 10 years from now? I’m not so sure.

Open Questions

Reversibility: Are there examples of successfully removing extensive tech systems? China’s Great Firewall suggests national-scale alternatives are possible, but at what cost to interoperability and to fundamental rights and freedoms?

Threshold effects: At what point does extensity become irreversible? Is there a measurable tipping point (market share + integration depth + time)?

AI acceleration: How does AI change extensity dynamics? Will it accelerate lock-in (personalization, learned behaviors, recommendation engines, cognitive atrophy) or enable competition (lower switching costs via automation, user-created custom software)?

Governance mechanisms: What interventions could work *before* extensity reaches “ungovernable” status?
Interoperability mandates? Data portability? Public infrastructure alternatives?

Measurement: How do we quantify extensity vs. classical monopoly power? I continue to think that market share misses integration depth and doesn’t account for race-to-the-bottom conditions between competitors. These factors make removal costly.

^AI usage statement: I used Claude primarily as a sparring/truth-seeking partner. Claude forced me to address certain ‘obvious-to-me-but-not-to-others’ assumptions (e.g., is this actually a bad thing if it helps people? How is this different from a classical monopoly? Am I being paranoid?). Claude also helped me trim this down and encouraged me to include direct quantifiable evidence. The piece is written and edited by me, warts and all.

^This concept was initially discussed in Robert Greene’s 48 Laws of Power, specifically Law 11 (Learn to Keep People Dependent on You) and Law 23 (Concentrate Your Forces). Greene’s book was written in the late 90s, and he was primarily discussing extensity in the context of individuals, not corporations.

^Greene actually argues the opposite point in Law 23: “You gain more by finding a rich mine and mining it deeper, than by flitting from one shallow mine to another. Intensity defeats extensity every time.”

^See: https://en.wikipedia.org/wiki/Usage_share_of_web_browsers

^Full disclosure: My husband works for Google. I also consult for a rival search and browser company. I have very mixed and complicated feelings about Google’s search quality & other legitimate concerns raised about Google’s power, which is why I usually avoid including them in things I write. My point isn’t to get into the merits of Google per se, so much as to point out what I see as a larger trend across Google-like firms.

^Stats: orbit.ing-now.com.
Of the 11,000 Starlink satellites, around 1,100 are in re-entry or orbital decay, or are otherwise inactive.

^To put a finer point on this: It’s the distinction between the ‘show us the algorithm’ concept that a lot of lawyers/policymakers have, versus asking questions about systems, networks, and how the individual pieces of the puzzle work together. In short, there is no singular algorithm that makes up Google, or Meta, or TikTok: it’s a complicated web of algorithms, learning models, databases, individual functions, and systems. This is why engineers tend to roll their eyes when politicians keep asking for ‘the algorithm’ during the various showboat hearings.

^I avoided including ‘harmful’ in that list because, well, harm is at best a weak moderating force in the face of capitalism. cf: smoking, guns, alcohol, gambling, prediction markets, crypto…

^Needless to say, the next time someone says ‘BUT FINES’ to me, I’m going to just send this link without commentary.
Abstract: This post is an introduction to a concept I call “tech extensity”—when a company, product, or tool becomes so deeply integrated across multiple system layers that removal becomes practically impossible. Tech extensity doesn’t require a monopoly, or even superior performance. Unlike classical monopolies (which dominate single markets), extensive systems achieve lock-in through spread rather than mastery. I argue this creates a coordination problem: individual actors (governments, users) face high switching costs and regulatory burdens while the companies themselves face low expansion costs, leading to a ratchet effect where tech power accumulates irreversibly.[1]Examples of this include Google (82% of the market in search, 66% of the market in web browsers, and 45% of the market in email), SpaceX (85% of US space launches), and X/TikTok (identity lock-in despite clear quality degradation). Anthropic Claude is getting there (I discuss this here), and Amazon / Flock are trying (here).This represents a “too big to govern” failure mode distinct from “too big to fail.”Related: Robert Greene’s 48 Laws of Power (abebooks | bol.com)Scott Alexander’s Meditations on Moloch (https://slatestarcodex.com/2014/07/30/meditations-on-moloch/Andrew Critch’s Tech Company Singularities (https://www.lesswrong.com/posts/ezGYBHTxiRgmMRpWK/tech-company-singularities-and-steering-them-to-reduce-x)H.R. Giger, Bio-mechanical Landscape (1976), acrylic on paper, 200 x 100 cm, © Estate of H.R. GigerIntensity vs. ExtensityFirst, we need to start with some definitions, which are crucial to getting to the heart of my thesis, namely Intensity vs. Extensity.[2] Intensity occurs when a company or product becomes indispensable or necessary based on its quality or uniqueness, or, in the case of a person, their deep mastery or skill in a subject or field. 
Michelangelo was considered intensive—his mastery across different artistic domains made him a sought-after artisan during the Renaissance.Famous three Michelin-star restaurants like The French Laundry in the US or Noma in Denmark, also have intensity. Their uniqueness explains why it’s nearly impossible to get a reservation unless you plan months in advance. Or take Shohei Ohtani, who has the rare quality of being both a phenomenal pitcher and batter. That gives Ohtani a ton of leverage within the realm of baseball, just as Michelangelo had in the world of art.The thing with intensive systems is that usually they’re impermanent. Athletes retire. Chefs hang up their whites. Technology improves, and of course, products and services enshittify. So, while intensity may allow a company or person to become temporarily dominant and powerful within a market, that power is often short-lived. By contrast, I argue that extensity is where the real power is at.[3]Extensity describes something broad in size or scope, that becomes deeply entrenched in a system. Unlike intensity, extensity is about spread, not mastery. You become extensive not necessarily by being the best, but by spreading out and becoming indispensable to the system itself. In 48 Laws, Henry Kissinger was cited as being an extensive force in geopolitics, diplomacy, and international relations. He was a fixture across administrations, and remained a power broker long after he left politics. Here’s Greene’s take in Law 11:Henry Kissinger managed to survive the many bloodlettings that went on in the Nixon White House not because he was the best diplomat Nixon could find—there were other fine negotiators, and not because the two men got along so well: They did not. Nor did they share their beliefs and politics. 
Kissinger survived because he entrenched himself in so many areas of the political structure that to do away with him would lead to chaos.Some of you might be thinking to yourself: ‘Hey idiot, none of this is new, we’ve already got a term for this when it comes to businesses: monopoly.’ After all, a monopoly represents complete control or dominance within a market. Horizontal monopolies give organizations the power to set the price of goods or services, dictate what is made available to customers, and create barriers to entry for potential competitors.But horizontal monopolies, like extensive humans, aren’t necessarily guaranteed. Case in point: throughout most of the 20th Century, AT&T held a near-monopoly in telecommunications, cable television, and related professional services, before it was broken up in 1982. Microsoft was extensive in the browser, office productivity, and operating system market (they still are, to a lesser degree with Windows and Office365), so much so that the US government attempted (and failed) to pull an AT&T Part 2 in the 90s.Still, when I think about technological extensity, it feels bigger than even a traditional monopoly. For one, I don’t think it necessarily requires that a company reach technical “monopoly” status at all. All that extensity needs is deeply rooted integration within the system in such a way that removal becomes effectively impossible without leaving major gaps behind. When I say “the system” I’m referring not just to software, networks, and infrastructure, or financial institutions and governments, but everything we come to depend on that helps keep society functioning.This idea first materialized in the financial sector with the bailouts during the 2008 financial crisis. If a bank is “too big to fail,” that’s just a catchier way of saying that bank has become entrenched in the financial system.We humans rarely learn from our mistakes, and so, we’re starting to see this more and more with Big Tech. 
Take Google, for example: Google commands 82% of the market in search, 66% of the market in web browsers, and 45% of the market in email, despite loads of competition in each product area.[4] Yet, they’ve successfully dodged the monopoly moniker because legitimate competitors still exist.And yet, people have been lamenting the continual decline of Google Search for years, and regularly complain that Chrome is a bloated, ad-laden, data vampire.[5] Most everyone I know has a Gmail account, even if they loudly proclaim that they hate Google. To me, this indicates that we’ve come to rely on these products through a combination of network effects, habituation, and inertia, to the point that they’re part of the internet itself.Source: Business of Apps: Google Statistics 2026I’m also noticing this trend start to develop at a literal planetary scale when it comes to SpaceX’s reach. SpaceX’s evolution from a cool space company to potential “everything company” for Elon Musk, should freak people out way more than it does, and yet, it doesn’t. SpaceX was responsible 85% of all space launches in the United States. This one company launched almost twice as many orbital missions as China did in 2025. Starlink (which is part of SpaceX) alone made up 123 of SpaceX’s 165 launches in 2025, and lofted more than 3,000 Starlink satellites into orbit as part of the company’s massive 11,000 satellite mega-constellation. That’s 11,000 satellites out of a total 15,644 man-made objects in space right now.[6]Meanwhile, over the span of what seemed like a long weekend, Musk managed to merge SpaceX with his AI firm xAI with nary a raised eyebrow by regulators. Musk’s other company, Tesla, invested $2bn in xAI in January. 
This is all part of his larger effort to put data centers in space and colonies on Mars, and to usher in an era of “amazing abundance”.

Now, I can’t predict whether Musk will ultimately be successful, but what his X-empire (xAI, SpaceX, Tesla) may very well succeed at is finding newer, bigger, and bolder ways to make Musk and his companies vital and necessary parts of everything.

This means that one company, nay, one man, with an estimated net worth somewhere in the neighborhood of $690-852bn, has amassed, and continues to amass, enough power, connections, resources, and wealth that he can not only ignore consequences, regulatory or otherwise, but also affect geopolitical outcomes by taking his toys away, or by cajoling governments to cut off funds to programs he doesn’t like or find value in. Don’t take my word for it—ask the Ukrainians whose Starlink access Musk has repeatedly restricted during the war, or the 550,000 children Musk and DOGE may have indirectly killed by defunding USAID.

Too Big to Fail?

Here’s a question: What happens when extensive tools or companies fail? What happens to society if we lose access to Gmail or Starlink, if AWS or Azure dies, or if the AI bubble bursts abruptly? How easy will it be for us to collectively recover now? What if we keep building these tools into more of our lives?

To answer this question, we need to talk about lock-ins. And no, I’m not talking about the fun kind at pubs in Dublin. I’m talking about vendor & collective lock-ins.

Vendor lock-in is easy to see: so much of our lives is built around using technical tools supplied by a handful of companies to communicate. For many reasons (familiarity, habit, self-interest, and in my case, marital harmony) I’m primarily a Google user—I use an Android phone, Gmail, Google Calendar, and Google Drive. Many of my clients use Google Workspace. I even use Gemini and NotebookLM (though not exclusively).
These tools have crept into my life, and I’ve grown incredibly reliant upon them all working together. I’m reliant not because there aren’t options, but because the very act of switching creates friction and, like a diet, can be extremely hard to maintain over time.

Last year, for example, I tried moving all of my documents over to Proton Drive, because Google Drive isn’t end-to-end encrypted. Plus, I wanted to see if I could. The migration was painful and incomplete. Many files were only accessible in Google, and I had to give up after a few months because I was limited in what I could do in Proton Drive. Want to access a document someone shared on Drive? Good luck with that—you’ll need a Google account. Trying to save that document on Proton? Fat chance—Proton can’t read (or even store!) .gdoc files. And you can forget about cross-platform collaboration. Some of this was due to Proton Drive being painful to use, but most of it was due to the fact that everybody else uses Google.

And that leads to the second type of lock-in: collective, or identity, lock-in. The cost of leaving Google (or Apple, or Meta, etc.) isn’t just inconvenience; it’s also about shattering the identity, friendships, and connections that have evolved around ‘being online’. This is most often cited in relation to social media, but it’s starting to creep into AI as well. Resistance is increasingly becoming, to quote the Borg, futile.

And there are social costs. For example, during the pandemic I tried to actively stop using WhatsApp, but found it was essentially impossible in Ireland (where I was living at the time), because WhatsApp and Facebook had at some point become the de facto messaging platforms and communications channels in the whole of the country. Partly this is because the state of SMS and MMS in Ireland is abysmal, but the root cause is irrelevant.
It’s hard to fight Big Tech when you’re isolated in your house during the pandemic and can’t talk to most of your friends because of network effects.

Our tech tools, and the algorithms that drive them, have helped to define who we are. Platform-mediated reality is creating incompatible epistemic communities and belief systems, which is to say, people are increasingly likely to interpret the same event wildly differently based on where they interact online. We all know that more of what we read and who we follow is being decided for us by recommendation engines and opaque algorithms.

But it’s not just that: research reveals striking differences in opinion about major news events based on a user’s platform of choice (X, cable TV, Facebook, podcasts, etc.), while charitable giving studies show how fundamentally different priorities across political ideologies have intensified. Americans in particular increasingly inhabit entirely different informational spheres, which in turn shape individual identities.

AI, of course, isn’t helping any of this. For example, a recent Syracuse University study found that 27% of users formed deep emotional bonds with OpenAI’s GPT-4o, with some people literally in mourning when OpenAI retired the chatbot earlier this year. This kind of psychological entrenchment leads me to worry that the biggest companies are not only too big to fail, but also increasingly becoming too big to govern.

Too Big to Govern?

We’ve already seen a hint of this in the TikTok ownership drama. First there was the 14-hour ban in January 2025, which led to such a backlash by users (and politicians who use TikTok) that the Trump administration hit the pause button on a policy it had championed during Trump’s first term. And while it’s true that OG TikTok is now effectively dead, users can’t seem to quit the reanimated, Oracle-controlled zombie that replaced it.
Here’s CNBC’s take:

Survey data from market intelligence firm Sensor Tower show that, despite a surge in deletions following the announcement of TikTok’s U.S. joint venture on Jan. 23, the average number of TikTok’s daily active users in the U.S. remains around 95% of its usership compared to the week of Jan. 19-25.

SimilarWeb data indicates even fewer defections. According to their January 2026 data, TikTok shed only 0.76% of its US user base between November 2025 and the end of January 2026.

Now, while I’ll concede that losing anywhere between 1 and 5% of active users is still losing, it’s also indicative of a larger trend: most people are happy to stick around no matter who’s calling the shots. They’ve built at least some part of their identity and habits around TikTok, no matter which billionaires actually run the show. So, the government might be able to change who “owns” TikTok (though ByteDance still maintains a 20% stake), but it can’t change what TikTok is or break its hold on users. That’s the difference between regulating a monopoly and trying to govern an extensive system.

Oh, and apropos of nothing in particular.

To me, this is extensity in action.

Moloch, Agency, and the Race to the Bottom

I recently read Scott Alexander’s Meditations on Moloch. Alexander attributes our broken, deeply dysfunctional system to Moloch—the Carthaginian demon god who doubles as the personification of industrialization in Allen Ginsberg’s famous work Howl and Other Poems. Why is the system so bad? they ask. Moloch!

The implicit question is – if everyone hates the current system, who perpetuates it? And Ginsberg answers: “Moloch”. It’s powerful not because it’s correct – nobody literally thinks an ancient Carthaginian demon causes everything – but because thinking of the system as an agent throws into relief the degree to which the system isn’t an agent.

Alexander later reminds us that Moloch is essentially us. The agency isn’t in the system itself; it’s in what we build into the systems we create.
And even though he wrote this in the pre-GPT ancient times (2014), the system-as-agent metaphor is even more relevant when applied to the literal AI agents of today. But the agency, and the modern-day Moloch we’re up against, is also embodied in Big Tech’s race-to-the-bottom mentality, and in the willingness to sacrifice values, morals, and accountability, like the Punics sacrificed so many children. It’s in the mindset of taking any risk just to be first, damn the consequences, and in the willingness of governments, regulators, and people with power to sit by and just let it happen.

Once one agent learns how to become more competitive by sacrificing a common value, all its competitors must also sacrifice that value or be outcompeted and replaced by the less scrupulous.

Now, Scott was referring to agents in the classical sense here: entities or individuals who act, exert power, or produce independent effects, usually (but not exclusively) on behalf of another. But there’s nothing that restricts this to human or even corporate agents. To me, it seems entirely plausible that some of the technical systems we develop today are themselves becoming agentic, by producing effects and exerting some degree of power over us on behalf of someone else. I’m not quite at the level of asserting (as my learned friend Mahdi Assan has) that “algorithms” generally have this property, but I don’t think he’s wrong if one considers “algorithms” collectively, i.e., as part of a larger system or set of systems and tools working to accomplish goals on behalf of their creators.[7]

In a normal, healthy capitalist system, customers, shareholders, and regulators decide with their wallets and their rules who lives and who dies. Fit, beneficial, lawful, and productive companies survive; unfit, unlawful, or unproductive companies go bankrupt or otherwise cease to operate.[8] And historically, this has mostly been true. Millions of bad companies have gone bust.
A smaller number of firms were broken up, forced to restructure, or otherwise regulated into changing their behavior.

But we’ve never faced capitalism in a world where a handful of companies have managed to amass the level of power and wealth that exists today, with the ability to engineer systems so intertwined and spread across so much of our lives. The technology on the market today is becoming too big to control.

Right now, there are no real barriers—no meaningful bulwarks or disincentives to stop what appears to be a handful of men from essentially owning all of us. Musk’s dream of “amazing abundance” fails to answer an important question: amazing abundance for whom?

There’s no accountability either, because everyone with the power to actually do something is too busy using the tools they’ve sworn they’ll regulate. Yes, we’ll get a few token fines or threatened actions here and there, but that’s part of the theatre. Yes, the companies might pretend to be chastened for a time, but that will only teach them to be less obvious about their intentions.

There will always be talk about content moderation, or banning Facebook, or X, or TikTok, or regulating Google, Apple, Amazon, or maybe even SpaceX, but nothing meaningful is likely to come of it, because why would it? How could it? In truth, regulatory responses seem to fall into four camps:

1. YOLO, let the planet burn (the US);
2. pearl-clutching and regulating by press release through a handful of token fines that sound impressive but aren’t, because the regulators fear the consequences (the EU, Brazil);
3. developing government-run corporate counterparts (China); or
4. quietly ignoring the problem and hoping a bigger power will fix it (most of the rest of the world).

Some of you may respond, “But there is enforcement against big tech — just look at Europe and the GDPR.”

Fun Fact: Ireland has levied over €4.04 billion in fines against Big Tech companies over the last six years, primarily against Meta.
Of that total, just €20 million has been collected, according to a January 2026 FOI disclosure filed by Ken Foxe. Most of the holdup related to a court case brought by Meta and its subsidiary, WhatsApp, which sought to annul the fines.

Fun Fact #2: The EU Court of Justice sided with Meta, which challenged a €225 million penalty levied by the European Data Protection Board and the Irish DPC.[9]

Fines only work if they’re enforced and collected, but if the companies have captured the enforcement mechanisms (or can tie things up in litigation for long enough), they’re little more than theatre and bluster.

Now ask yourself: what will this situation look like if someone like Musk or Bezos actually succeeds and takes this whole affair interplanetary?

We’re already seeing how Big Tech influences governments and shapes narratives. But just imagine this in five or ten years. Imagine a multi-trillion-dollar SpaceX, Google, Amazon, Meta, Oracle, or Microsoft (or a consortium of them), bolstered by super-intelligent AI systems, effectively acting like nation-states. It’s all well and good to have laws, but if a handful of corporations become effective states unto themselves—suppliers of the information, infrastructure, energy, technology, supply chains, and even the money—what even are laws at that point?

And while the US is arguably a lost cause (and will continue to be so for some time), over here in the EU, regulators are still framing things in the context of classical monopolies and anti-competitive behavior. We’re still trying to impose old rules on entities that are becoming so integrated into the system that they are effectively ungovernable.
We’re all still using Microsoft, Google, Apple, Facebook, Instagram, X, and OpenAI because Europe has few options to replace them.

See, unlike the AT&Ts and Standard Oils of the past, a handful of companies now control the informational substrate—the algorithms and engines that shape what we see, who we talk to, and how we understand reality. SpaceX, Amazon, Microsoft, Nvidia, Oracle, and Google control the infrastructure that props up the internet. OpenAI, Anthropic, Google, and Meta control the AI. Most of these companies, plus Oracle/TikTok, control the media. Together, they’re integrated into our identities in ways that make them fundamentally harder to disentangle from.

We’re all worried about some super-sentient AI coming around the corner and putting us out of work, and that’s probably a valid concern. Meanwhile, we’re (un)happily trusting a handful of companies with everything and giving them lots of opportunity to extend their reach even further. The US, and to a large extent Big Tech, is leading a race to the bottom, and the leaders of the world are basically shrugging and going along with it, hoping someone else will fix the problem.

Right now, we still have a choice. But 10 years from now? I’m not so sure.

Open Questions

Reversibility: Are there examples of successfully removing extensive tech systems? China’s Great Firewall suggests national-scale alternatives are possible, but at what cost to interoperability, fundamental rights, and freedoms?

Threshold effects: At what point does extensity become irreversible? Is there a measurable tipping point (market share + integration depth + time)?

AI acceleration: How does AI change extensity dynamics? Will it accelerate lock-in (personalization, learned behaviors, recommendation engines, cognitive atrophy) or enable competition (lower switching costs via automation, user-created custom software)?

Governance mechanisms: What interventions could work *before* extensity reaches “ungovernable” status?
Interoperability mandates? Data portability? Public infrastructure alternatives?

Measurement: How do we quantify extensity vs. classical monopoly power? I continue to think that market share misses integration depth and doesn’t account for race-to-the-bottom conditions between competitors, the very factors that make removal costly.

^AI usage statement: I used Claude primarily as a sparring/truth-seeking partner. Claude forced me to address certain ‘obvious-to-me-but-not-to-others’ assumptions (e.g., is this actually a bad thing if it helps people? How is this different from a classical monopoly? Am I being paranoid?). Claude also helped me trim this down and encouraged me to include direct quantifiable evidence. The piece is written and edited by me, warts and all.

^This concept was initially discussed in Robert Greene’s 48 Laws of Power, specifically Law 11 (Learn to Keep People Dependent on You) and Law 23 (Concentrate Your Forces). Greene’s book was written in the late 90s, and he was primarily discussing extensity in the context of individuals, not corporations.

^Greene actually argues the opposite point in Law 23: “You gain more by finding a rich mine and mining it deeper, than by flitting from one shallow mine to another. Intensity defeats extensity every time.”

^See: https://en.wikipedia.org/wiki/Usage_share_of_web_browsers

^Full disclosure: My husband works for Google. I also consult for a rival search and browser company. I have very mixed and complicated feelings about Google’s search quality & other legitimate concerns raised about Google’s power, which is why I usually avoid including them in things I write. My point isn’t to get into the merits of Google per se, so much as to point out what I see as a larger trend across Google-like firms.

^Stats: orbit.ing-now.com.
Of the 11,000 Starlink satellites, around 1,100 are in re-entry, in orbital decay, or otherwise inactive.

^To put a finer point on this: it’s the distinction between the ‘show us the algorithm’ concept that a lot of lawyers/policymakers have, versus asking questions about systems, networks, and how the individual pieces of the puzzle work together. In short, there is no singular algorithm that makes up Google, or Meta, or TikTok: it’s a complicated web of algorithms, learning models, databases, individual functions, and systems. This is why engineers tend to roll their eyes when politicians keep asking for ‘the algorithm’ during the various showboat hearings.

^I avoided including ‘harmful’ in that list because, well, harm is, at best, a weak moderating force in the face of capitalism. cf: smoking, guns, alcohol, gambling, prediction markets, crypto…

^Needless to say, the next time someone says ‘BUT FINES’ to me, I’m going to just send this link without commentary.

