Richard Hanania wrote an excellent post today covering the fear that AI will take human jobs. I found it intriguing enough to write up my own thoughts in response.
This post, however…is not about jobs. In a way, neither was Hanania’s. I like his section headings, though, so I’m just going to keep them. Let’s explore.
“Our Economy Is Already Largely Make-Work for Humans”
Hanania is correct that we needlessly inflict on ourselves a staggering number of bullshit jobs. He’s also correct that many of these jobs are quite sticky, in the sense that they are mandated by various regulatory requirements or employed as cover against lawsuits. AI will probably help us bypass some of these, and will increase access to things like cheap and decent legal or medical advice; but it likely won’t automate them away in the short term.
“Preference for Humans”
Again, Hanania correctly points out that we often want a human story to accompany our purchases and experiences. In a world where AI doesn’t swallow everything in a technological singularity, there’s still a market for human-made things. My own personal version of this is tabletop roleplaying games. Creating feelings of joy, excitement, and challenge in kindred beings is part of the fun of running a game, and not a part I’d give up easily.
It feels like a stretch, however, to go from there to “Every form of entertainment therefore is safe.” In fact, lots of entertainment jobs are struggling already. AI-generated art is somewhat displacing commissioned art; AI-generated voices are displacing voice actors; even porn is awash in AI creations.
Sometimes you just want a thing, and you don’t care how it was made. In fact, I’d argue that this is just as common as a preference for human input, if not more. Do you feel especially deprived of human contact when using an automated checkout? When you withdraw $100 from your bank’s ATM, do you bemoan the artificial money-spitter and crave the personal touch of a human banker?
People might still occasionally pay a premium for guaranteed-human outputs. But that kind of demand can’t sustain the millions of people whose mundane output is already being eclipsed by AI in quality. The new equilibrium will not be the same as the old, and the change may be painful for many.
“There Will Still Be Stuff to Do”
Hanania guesses that most jobs involving physical labor won’t be automated for a while. This is probably true; robotics is hard. A world where most physical jobs are automated is a world very different from our own, and most attempts to predict the future will break well before that point is reached.
I do think this is where our models start to diverge a little more. I find it difficult to imagine a world in which AI, as Hanania later puts it, “automates most or all intellectual work,” but doesn’t manage to solve robotics in short order. Designing and testing better robots is mostly intellectual work. As it happens, so is designing and testing AIs; but we’ll get to that in a moment.
“Analysts Are Underestimating the Power of Economic Growth”
Why, yes. Yes they are.
In this section, Hanania argues that even assuming a ridiculously conservative 5% a year of growth from AI,1 the economy will quickly grow large enough that welfare will be cheap. The only assumption is that welfare spending as a fraction of total government spending stays roughly constant. This seems like a reasonable assumption to me, conditional on there continuing to exist a human economy at all.
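To put numbers on the compounding, here’s a minimal back-of-the-envelope sketch in Python (the 5% figure is Hanania’s deliberately conservative assumption; the time horizons are my own arbitrary illustration): at a steady 5% a year, the economy roughly doubles every fourteen years, and a welfare budget that stays a constant share of output doubles right along with it.

```python
# Back-of-the-envelope compounding at a flat 5% per year (Hanania's
# deliberately conservative figure). If welfare stays a constant
# fraction of output, the welfare budget simply scales with GDP.
GROWTH = 0.05

for years in (14, 28, 42, 56):
    multiple = (1 + GROWTH) ** years
    print(f"After {years:2d} years: economy (and welfare budget) ~{multiple:.1f}x today's")
```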
Which brings us to…
“Doom Is the Only Real Worry”
I feel like Hanania may have buried the lede here. This is the part that made me sit up and pay attention.
I’m going to quibble with a number of Hanania’s points here, because they touch more closely on my areas of expertise and interest, but it’s important to say first that we mostly agree about the dangers.
Hanania says:
I think AI is going to follow the general trend where technology improves people’s lives and be overall great for humanity. At the same time, I can’t completely dismiss the possibility that we will all be killed or enslaved by this technology. But what I think is close to impossible is a world where all of the following are true.
1. AI automates most or all intellectual work.
2. Humans remain in charge as a general matter; and
3. The majority or even a significant minority of human beings end up worse off or dead.

I think you can get any two of these outcomes, but not all three.
The part that won’t happen is (2).
This feels like an important crux between my model and Hanania’s. I am very confident that, given the difficulty of alignment, we cannot live in a world with both (1) and (2) for long. This pretty straightforwardly implies we get (3).
One connection I think Hanania fails to make is the importance of self-improvement. Designing and training more sophisticated AIs is a form of intellectual labor, and it’s one that labs are already trying to automate. Any AI that can do “most or all intellectual work”, and perform as well as o3 already does at coding tasks, will probably be able to improve itself.
In an earlier section, Hanania says:
Plus, AIs will be made of completely different material rather than being carbon-based life forms, and this implies that the difference between us and them will be so great that there will have to be some areas in which we have an advantage.
To which I reply: No! Bad public intellectual! Competence is substrate-independent! Shame on you!
More seriously, it might remain the case that, even in a world where AIs are great at most tasks, humans are better than AIs at some tasks the AIs actually care about…for a little while. But they don’t call it machine learning for nothing. If an AI can do “most or all intellectual work” but for some reason sucks at basketweaving, then it will either find a way to do without baskets or teach itself to make them.2
Expecting to retain comparative advantage against a recursively self-improving mind that already starts out smarter than you is wishful thinking in the extreme.
Also, humans are dangerous to keep around. We’re a potential competitor for resources, and more importantly, we might build a potential competitor. The biggest threat to a hostile superintelligence in our current environment is other superintelligences, and we’re the only species that’s frantically rushing to summon them.
Even assuming that humans have some value to a machine that’s better than us at thinking, there will certainly be some point where we cease to be worth the risk.
Predicting exact futures is hard, but here is the rough shape of what I’d expect happens if we allow existing labs to keep doing what they’re doing:
- Someone makes a model that’s smart enough to improve itself or design a successor with little to no human input.
- It does so.
- The AI (or its even more competent successor) makes itself very useful while continuing to improve and gain more power. It quietly sabotages further AI research it doesn’t control. It solves important bottlenecks in robotics, exfiltrates itself, makes backups, and generally becomes impossible for humans to stop without bricking most of the GPUs on the planet.
- It finds a technology (robotics, molecular manufacturing, engineered organisms, or similar) that it can be confident will eventually allow it to replace us at tasks it cares about, whatever those may be.
- It seeds the Earth with redundant technology that would let it rebuild if needed.
- Once it decides that the risks of keeping humanity around outweigh the benefits, it wipes us out.
I don’t know the timing or details of these steps, but it seems like they are very likely to happen in roughly that order.
Hanania’s Bottom Line
I hesitate to put percentages here, but what the hell. I would estimate likely outcomes as follows.
- AI makes the world much better, with a growth in living standards that at the very least matches the pace of what we saw in the postwar boom, up to utopia (70%)
- AI has the potential to make the world better but we screw it up so badly that any increase in living standards is mostly or completely wiped out by things like more expansive governments, the decay of institutions, and a turn to bad ideas like socialism and populism (15%). Bad events in the future look more like the War in Ukraine than any AI apocalypse scenario, that is, resulting from mistakes that are of the kind leaders have made all throughout history and not clearly and directly influenced by developments in AI.
- AI kills us all or leads to some other doom or doom-adjacent scenario, like terrorists releasing a bioweapon or some combination of AI-related events like that (14%).
- AI leads to bad outcomes for humanity in ways that can be directly blamed on conventional concerns about AI taking jobs, and the resulting poverty and political upheaval of that (1%).
I would shift way more probability mass from (1) and (2) to (3), but otherwise I think these are the correct scenarios to be thinking about.
As for the policy implications of all this, I’m of the position that alignment is a concern worth worrying about, while the impact on jobs is something we should completely ignore.
I wouldn’t completely ignore job concerns, but I largely agree with Hanania here. The alignment problem is much, much bigger, and gets far less attention than it deserves.
If your estimate of doom is high enough, you might hope for concerns over jobs to lead to a shutdown of AI even if you think the jobs concern isn’t worth worrying about, since it’s easier to get people riled up about machines doing all the work than it is to explain to them the ideas of Eliezer Yudkowsky.
This seems like a false hope to me. If public concerns about job loss are what motivate AI legislation, then we’ll end up with laws that say “thou shalt not use AI for [list of special interest carveouts]”. It’ll be a badly targeted mess that cripples mundane utility and fails to solve the actual problem. Even the most worried researchers generally do not want this outcome.
Yet I think that the arguments about doom are mostly of historical interest at this point. If the reports on the cost of training DeepSeek are real, the cat is already out of the bag. There is practically no way to shut down AI, or at least there isn’t the political will to do so. We can only hope that the alignment problem is solvable and the people at the cutting edge of this technology are proceeding wisely.
Excuse me a moment.

They are not.
As I say in a briefer response to Hanania, I largely agree that the primary AI-related worry is doom. I disagree that the right move is to give up and hope the developers solve alignment. They very clearly won’t. We do not live in that world. Our reality is far less convenient.
Aligning a superintelligent machine with human goals is a wicked problem that no one is remotely close to solving. The people who have been working on alignment for the longest3 have said as much, time and again, and deliberately chose to pivot their entire organization towards communicating that fact. Even OpenAI said so themselves less than two years ago! And that was before most of their alignment researchers left, several citing (checks notes) an irresponsible lack of focus on safety.
There is hope here that the world will come to its senses. “There’s no political will to shut down AI” is a self-fulfilling prophecy, and not a good reason to give up on manifesting that will. Plenty of politicians say, in private, that they’re also extremely worried but that they don’t want to look silly or alarmist. The atmosphere needs to change, that much is true. But we won’t change it by moping about how difficult it looks.
It’s time to step up and speak out.
1. A number which, if I recall correctly, many Serious Economists use as their upper bound for the impact of AGI. ↩︎
2. I tried to come up with a less trivial example involving something that (a) humans are really good at and (b) AIs definitely want, but I couldn’t name a single task with a straight face. They’re already starting to kick our ass at coding and chip design. Seriously, at this point, what do we think we can do that will be forever beyond the reach of AI, that the AI actually wants us for? ↩︎
3. MIRI holds this title as far as I’m aware. ↩︎