Against Moloch
April 04, 2026

How to Watch an Intelligence Explosion

When is the best time to betray your human?

The cleanest metric for understanding the rate of recursive self-improvement (RSI) is AI Futures Project’s R&D progress multiplier, which measures how much AI is speeding up its own development. It’s the right tool for measuring an intelligence explosion, but it doesn’t tell us which capability thresholds carry the greatest risk from misaligned AI.

Ajeya Cotra steps into that gap with an elegant taxonomy of six milestones for AI automation. Together, those two concepts let us measure how fast RSI is proceeding, how close we are to a fully automated economy, and when a misaligned AI would be most likely to betray us.

The R&D progress multiplier

AI Futures Project (AI-2027) measures the rate of acceleration using the R&D progress multiplier:

what do we mean by 50% faster algorithmic progress? We mean that OpenBrain makes as much AI research progress in 1 week with AI as they would in 1.5 weeks without AI usage.

That’s a simple, intuitive metric: how much more AI research are we generating with AI assistance than we would be in a counterfactual world without AI coders and researchers?
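The definition reduces to a simple ratio. A minimal sketch (the function name is mine; the 1.5 figure is AI-2027's own example):

```python
def progress_multiplier(progress_with_ai: float, progress_without_ai: float) -> float:
    """R&D progress multiplier: AI-assisted research progress divided by
    the counterfactual humans-only progress over the same period."""
    return progress_with_ai / progress_without_ai

# AI-2027's example: 1 week of AI-assisted work equals 1.5 weeks of
# humans-only work, i.e. 50% faster algorithmic progress.
print(progress_multiplier(1.5, 1.0))  # 1.5
```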

The naive expectation—and the most likely outcome—is that as the progress multiplier grows, AI research moves faster. Faster AI research increases the progress multiplier, and you’re in a classic intelligence explosion. That isn’t guaranteed, though: AI research might hit diminishing returns, with each incremental gain requiring exponentially more research.
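Whether the feedback loop explodes or fizzles depends on the returns to research. A toy simulation, with entirely made-up dynamics, contrasting compounding returns with strongly diminishing ones:

```python
def simulate(steps, gain):
    """Toy RSI loop: each step adds `multiplier` units of research progress,
    then recomputes the multiplier from cumulative progress via `gain`,
    which models the returns to research."""
    progress, multiplier = 0.0, 1.0
    history = []
    for _ in range(steps):
        progress += multiplier
        multiplier = 1.0 + gain(progress)
        history.append(multiplier)
    return history

# Compounding returns: each unit of progress keeps raising the multiplier,
# and the loop runs away.
explosive = simulate(10, gain=lambda p: 0.5 * p)
# Strongly diminishing returns: the multiplier plateaus around 2x.
diminishing = simulate(10, gain=lambda p: 0.5 * p ** 0.25)
```

Both trajectories start from the same multiplier of 1x; only the assumed returns to research differ.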

As RSI advances, it will become increasingly hard to quantify the rate of progress. Frontier capability evaluations are saturating faster than we can replace them, and the more automated R&D becomes, the harder it will be to compare it to a humans-only counterfactual. That’s the point at which Ajeya’s milestones become most relevant.

Milestones for AI automation

Ajeya Cotra proposes a set of milestones for tracking the increasing automation of AI research:
- adequacy: AI can perform the activity well enough to sustain it without any human input
- parity: AI performs the activity as well as humans do
- supremacy: AI performs the activity so much better that humans no longer contribute anything

She applies those three milestones to two domains: AI research and AI production (chips, power plants, and all the other infrastructure required to run AI at scale), giving six milestones in total. AI research is well-contained, but AI production covers a substantial fraction of all human economic activity. To a first approximation, AI production supremacy is full economic supremacy.

The most obvious strategy for a secretly misaligned AI is to fake alignment until it can safely turn against us. It would be suicide to eliminate humanity before the AI itself is fully self-sufficient, which means it must wait at least until the adequacy milestone. Beyond that point, it faces a dilemma: waiting longer gives it a more robust industrial base, but exposes it to an increased risk of discovery. There’s no reason to delay past the supremacy milestone, since that’s the point at which humans become dead weight. Even waiting that long is needlessly cautious: the parity milestone seems like the optimal time for a treacherous turn.
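The tradeoff can be made concrete with a toy expected-value calculation. Every probability below is invented purely for illustration, not a forecast:

```python
# All probabilities are invented for illustration only.
milestones = {
    # name: (P(industrial base suffices), P(still undiscovered by then))
    "adequacy":  (0.60, 0.90),
    "parity":    (0.90, 0.75),
    "supremacy": (0.98, 0.50),
}

def success_probability(p_base: float, p_undiscovered: float) -> float:
    """A treacherous turn succeeds only if the AI both has a sufficient
    industrial base and has not yet been discovered."""
    return p_base * p_undiscovered

best = max(milestones, key=lambda m: success_probability(*milestones[m]))
print(best)  # prints "parity" with these made-up numbers
```

The shape of the argument survives many choices of numbers: so long as waiting raises the industrial-base probability while lowering the undiscovered probability, the product peaks at an intermediate milestone.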

So how long do we have left to solve the alignment problem? Ajeya forecasts AI production parity, the later of the two parity milestones, for mid-2032.