My sense is that people think of AI existential risk and AI unemployment as distinct issues.
Some people are extremely concerned about extinction and perhaps even indifferent to total unemployment. Some people think of moderate AI unemployment as a realistic and concerning issue, and AI extinction as science fiction.
I think of AI unemployment and AI extinction risk as basically the same issue, and in likely scenarios, happening together.
At a very high level, I’d say the argument for human extinction from advanced AI is something like this:
1. We’re going to make AI that can do everything better than humans
2. We’re going to make that AI into agents that navigate the world independently and do what they want
3. We are not going to make those AI agents want the right things
The basic issue is that in the presence of more capable agents with different goals, humans are less able to get resources and influence, and direct them toward the humans’ goals.
One way ‘losing power to more competent agents’ could look is a surpassingly smart AI agent intentionally eradicating humanity. But killing everyone and controlling the world is a pretty wild corner case of ‘using your competence to steer the situation toward your own preferences’. In particular, it has never been seen before, though the new AI situation might make it possible.
The traditional ways that humans make use of competence to influence the world include earning salaries then spending the money on things they want, earning investment income, making and using alliances, persuasion, taking political actions, etc.
If no ultrapowerful AI appears and exterminates us, I think we have every reason to expect ruin from AI sapping our power and resources by these more traditional methods: outcompeting us as labor, outcompeting us as informed capital holders, outclassing us at political strategy and persuasion, and controlling the conversation.
It’s true that if humans were only excluded from the employment path to resources or influence, this would merely be an excruciating upheaval on a massive scale, and probably not herald extinction.
But unemployment here is just the most legible tip of a sprawling shitberg. It’s just not plausible that humans are unemployable yet still doing well at political strategy and persuasive communication. Unemployment goes with losing power across the board, except insofar as power is granted to us by whoever holds it by virtue of might. That is, except insofar as AI cares about empowering us.
So unemployment could happen without extinction (if we successfully built AI that cared about us in the right way) and extinction could happen without unemployment (e.g. if an extremely competent AI system decides to exterminate us). But in a lot of cases they not only coincide, but are the same issue.
Asking if someone is more concerned about unemployment or extinction is like asking if someone primarily wears a seatbelt when driving to avoid having their body flung through the windscreen, or to avoid dying.
If powerful AI agents have their own agendas, those agendas will win out. They might win out in one crushing step, or win out in a trillion small familiar ways.