Against Moloch
March 09, 2026

Monday AI Brief #16

The conflict between the Department of War and Anthropic has quieted somewhat, but nothing has been resolved and a catastrophic outcome is still entirely possible. Regardless of what happens next, two things are very clear.

This is the least political that AI will ever be. Politicians are finally waking up to the fact that AI is a big deal. Even though most of them don’t understand why it’s a big deal, you can safely assume they will have an increasing appetite for government intervention. The DoW incident is a preview, not an aberration.

This is the least stressful that AI will ever be. The last two weeks have been brutal: several of the writers and thinkers I most respect have been publicly struggling, and in some cases decompensating. I’m afraid the pace is only going to get faster, and the stakes are only going to get higher. Pace yourselves.

In the spirit of pacing ourselves, we’ll cover what we need to cover about DoW, then put it down and move on to happier topics.

The Future We Feared Is Already Here

For years now, questions about A.I. have taken the form of “what happens if?” […]

This year, the A.I. questions have taken a new form, “what happens now?”

Ezra Klein’s opinion piece in the NY Times ($) is nominally about the conflict between the Department of War and Anthropic, and his analysis of that situation is spot-on: this is possibly the best short piece on that topic. But that conflict is a symptom of a much deeper problem: we’ve gone from being unprepared for AI capabilities that are coming soon to being unprepared for AI capabilities that have now arrived.

AI profoundly changes the nature of government surveillance—it’s now possible to intensively surveil every single American in a way that was previously (sort of) legal but completely impractical. In a sane world, the US Congress would carefully consider the implications of that change and pass appropriate legislation that codifies a reasonable balance between security and privacy.

Lamentably, we don’t seem to live in that world. Plan accordingly.

Can you nationalize a frontier AI lab?

The DoW / Anthropic dispute has rekindled serious discussion about the US government nationalizing frontier AI development. Much of that discussion has focused on legal, political, and philosophical questions, but there hasn’t been much serious discussion of the practicalities.

John Allard dives into the nuts and bolts of nationalization, considering what strategies the government might use and whether those strategies would actually work. He isn’t optimistic about the outcome (which doesn’t mean it wouldn’t happen anyway):

until someone can answer the harder question — whether the US is better off accepting less control in exchange for maintaining its lead — the risk is that every attempt to capture the frontier is what finally kills it.

How AI Could Benefit the Workers It Displaces

AI Frontiers explores how AI might affect workers, arguing that if AI is much better than humans at many but not all jobs, human wages might actually rise.

That counterintuitive result follows from basic economics, which the article does a good job of explaining. It’s a solid piece, and a good introduction to some of the relevant economics if you’re not already familiar with them. But note that this whole analysis only applies if AI is powerful but not superhuman; the toy model sketched after the quote below shows why. Without careful intervention, everything falls apart in a world with superhuman AI:

If machines do everything, then those who own the machines will capture all this value. Products and services would become very cheap, but workers, outcompeted by machines in all tasks, would end up with a vanishingly small share of the economy’s income.

We can flourish alongside superintelligent AI, but only if we make smart choices.
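
To make the intuition concrete, here’s a minimal sketch in Python. It uses a toy Cobb-Douglas economy of my own choosing (the AI Frontiers article doesn’t commit to a specific model), with illustrative parameters ALPHA (the share of output produced by AI tasks) and HUMAN_LABOR: as long as humans keep an essential slice of production, the competitive wage rises with AI productivity.

```python
# Toy Cobb-Douglas economy (illustrative, not the article's model):
#   Y = A^alpha * H^(1 - alpha)
# where A is AI task productivity and H is human task-hours. In a
# competitive market the wage equals the marginal product of human labor.

ALPHA = 0.5        # share of output produced by AI tasks (assumed)
HUMAN_LABOR = 1.0  # fixed supply of human task-hours (assumed)

def wage(ai_productivity: float) -> float:
    """Marginal product of human labor: dY/dH = (1 - alpha) * A^alpha * H^(-alpha)."""
    return (1 - ALPHA) * ai_productivity**ALPHA * HUMAN_LABOR**(-ALPHA)

for a in (1, 10, 100, 1000):
    print(f"AI productivity {a:>4} -> human wage {wage(a):.2f}")

# Wages rise from 0.50 to 15.81 as AI productivity grows 1000x, because
# humans still hold an essential (1 - alpha) share of tasks. Push
# alpha -> 1 (AI does everything) and the (1 - alpha) factor drives the
# wage toward zero: the superhuman case the quote above warns about.
```

Nothing about the specific functional form matters here; the point is that the wage is multiplied by the human share of essential work, which is why the conclusion flips once AI does everything.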

A very short story

Sam Altman:

i always wanted to write a six-word story. here it is:

near the singularity; unclear which side.