Against Moloch

Monday AI Radar #9

January 19, 2026

This week’s newsletter goes deep on two specific topics. We start with AI and employment: will AI be like past technological revolutions that changed our jobs but didn’t eliminate them, or are we headed for permanent mass layoffs? Next, we’ll do our best to keep up with the breakneck progress of Claude Code and other coding agents.

The AI news doesn’t slow down just because we have a new special interest, so we’ll also check in on how AI forecasters performed last year, assess the environmental impact of AI, review how to pick the best model for the job, and much more. Oh, and we’ll talk about how to understand and manage burnout. That seems pretty relevant right now.

Top Pick

The economics of transformative AI

This is a lightly edited transcript of a recent lecture where [Anton Korinek] lays out what economics actually predicts about transformative AI — in our view it's the best introductory resource on the topic, and basically anyone discussing post-labour economics should be familiar with this. […]

The uncomfortable conclusion is the economy doesn't need us. It can run perfectly well "of the machines, by the machines, and for the machines." Whether that's what we want is a different question.

This is a great piece from a very serious mainstream economist who understands the implications of where AI is headed.

AI, jobs, and the economy

Alon Torres: This time is different

Alon Torres:

Historical reassurances that “it worked out before” are not a plan - they’re a hope that the future will resemble the past, despite mounting evidence that this technology is categorically different.

Séb Krier: What AI means for jobs

Séb Krier’s piece on the cyborg era is probably the best articulation I’ve seen of the argument that humans will still have jobs for a long time. Reminder: these days, when people say “for a long time” they don’t mean “for the duration of your career”. Zvi appreciates Séb’s thoughtfulness but doesn’t share his optimism.

Dwarkesh, Jack Clark, and Michael Burry

Patrick McKenzie moderates a discussion about AI and the economy in a Google Doc. It’s a cool format, and I think it worked really well for this topic. Jack Clark and Dwarkesh are always great—Michael Burry is smart, but I think he's badly miscalibrated on this one.

Daron Acemoglu: AI can only do 5% of jobs

Daron Acemoglu argues that only 5% of jobs will be taken over by AI in the next decade. I have a lot of respect for Acemoglu, and that outcome is still possible—but it’s an edge case whose likelihood is fast diminishing.

Lynette Bye: AI might or might not take all the jobs

Lynette Bye at Transformer reviews the basic arguments on both sides.

EncodeAI: Is your career ready for AI?

From EncodeAI, here’s an extensive guide to starting your career in the age of AI. People seem to have strong reactions to this—my take is that there’s tons of useful information here, but the organization is chaotic and the presentation can be a bit cringe. Probably most relevant to highly agentic college students or early-career folks willing to dig through it for the parts most useful to them.

Agents everywhere

Claude Coworks

Cowork is Claude Code for non-programmers, with a simpler interface and some nice sandboxing features. Zvi takes a look.

How to agent?

This week brings two really good guides to using Claude Code. First, Ado (Anthropic developer relations) covers Claude Code’s most powerful features.

And from Eyad, here’s Claude Code 101. Lots of good details, including an admonition to keep context-window usage well below 100%.
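If you’re new to the tool, Claude Code has built-in slash commands for exactly this (the available list varies by version, so check /help):

    /context   show how much of the context window is in use
    /compact   summarize the session so far to free up space
    /clear     wipe the conversation entirely between unrelated tasks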

The gentle singularity; the fast takeoff

This feels increasingly like the early stages of an AI takeoff. Prinz looks at how we got here and where we’re headed.

The robots build a web browser

Very impressive work from Cursor: they built a “planners and workers” system for managing fleets of coding agents, and had them build a web browser from scratch. The result isn’t deployment-quality, but it’s still a remarkable technical achievement. I would have guessed we were at least 6 months from agents being able to work at this scale.
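To make “planners and workers” concrete, here’s a minimal sketch of the pattern in Python. To be clear: this is my own illustration of the general architecture, with made-up names, not Cursor’s actual system. A planner decomposes the goal into subtasks, a pool of workers executes them in parallel, and the planner reviews the results (and, in a real system, re-plans).

    # Minimal planner/worker sketch (illustrative only, not Cursor's system).
    from concurrent.futures import ThreadPoolExecutor

    def plan(goal: str) -> list[str]:
        # Stand-in for an LLM planner call that decomposes the goal
        # into independent subtasks.
        return [f"{goal}: subtask {i}" for i in range(4)]

    def work(task: str) -> str:
        # Stand-in for a sandboxed coding agent executing one subtask.
        return f"done: {task}"

    def review(results: list[str]) -> None:
        # Stand-in for the planner checking results and deciding
        # whether to enqueue follow-up tasks.
        for r in results:
            print(r)

    with ThreadPoolExecutor(max_workers=2) as pool:
        results = list(pool.map(work, plan("build a toy browser")))
    review(results)

The interesting engineering is in everything this sketch stubs out: decomposing work so subtasks stay independent, sandboxing the workers, and closing the review/re-plan loop.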

Anthropic cuts competitors off from Claude Code

Huh. I’m not certain this is the wrong call, but it doesn’t feel great.

New releases

OpenAI rolls out a cheaper tier and advertising

Two interesting changes from OpenAI: they’re introducing a cheaper paid tier (ChatGPT Go, $8/month in the US) and starting to roll out advertising for the free and Go tiers.

My very strong prior is that once a service starts taking advertising, it has started down a road that almost always leads to enshittification. On the other hand, OpenAI has a clear value proposition that already supports $20-$200 per month subscriptions. Maybe this time is different?

Claude for Healthcare

Related: it’s interesting to see the frontier labs beginning to carve out different niches, and their recent announcements about healthcare products fit the narrative. OpenAI’s ChatGPT Health targets the consumer market, while Claude for Healthcare is squarely aimed at providers.

AI prescription renewals in Utah

Politico reports on Doctronic, an AI system for renewing routine prescriptions in Utah. This seems like a win on all fronts: better access to medication, an easy pilot program that can be expanded if it goes well, and—frankly—higher quality care than the alternative.

Environmental impacts

Andrew Ng: In defense of data centers

Many people are fighting the growth of data centers because they could increase CO2 emissions, electricity prices, and water use. I’m going to stake out an unpopular view: These concerns are overstated, and blocking data center construction will actually hurt the environment more than it helps.

Correct

SemiAnalysis: From tokens to burgers

Andy Masley has previously done an excellent job of debunking nonsense claims about AI water usage. Here, SemiAnalysis finds that the Colossus 2 data center (one of the largest in the world) uses about as much water as 2.5 In-N-Out fast food restaurants. Yes, they considered blue vs green vs gray water. Yes, they looked at the full supply chain, not just on-site usage.

Crystal ball department

Rating the AI forecasters

This is the way. The AI Digest Survey collects predictions about AI; each year, last year’s entries get graded and a new round begins. AI Digest just released the 2025 results, and a few points stand out to me:

Discarding the Shaft-and-Belt Model of Software Development

How does software development change when the cost of creating software plummets? Steve Newman looks ahead to the era of artisanal software.

Get the most out of your AI

Use multiple models

Nathan Lambert has a nice overview of which models to use when. Everyone’s a bit different—I use:

Capabilities and impact

Time horizon is important, but…

METR’s time horizon study (which measures the length of tasks, in human time-to-complete, that a model can finish at a 50% success rate) is profoundly useful, but frequently misinterpreted. Thomas Kwa (one of the authors) lists the top ways the metric gets overrated and misread.

AI is just starting to change the legal profession

Justin Curl interviewed 10 lawyers about how they’re using AI for legal work. The resulting article is a good snapshot of AI diffusion at the start of 2026—the models are very capable, but they have important limitations (for now).

AI isn’t “just predicting the next word” anymore

Pro tip: you can safely ignore anyone who tells you that “AI is just glorified autocomplete”. Steven Adler explains.

AI is getting good at math

There’s been a lot of recent progress using AI for advanced mathematics:

Alignment and interpretability

Chinese models as a model organism

Very clever:

Chinese models dislike talking about anything that the CCP deems sensitive and often refuse, downplay, and outright lie to the user when engaged on these issues. In this paper, we want to outline a case for Chinese models being natural model organisms to study and test different secret extraction techniques on.

Are we dead yet?

Why Anthropic doesn't filter CBRN info during training

Sometimes the obvious solution isn’t the right one. Jerry Wei:

An idea that sometimes comes up for preventing AI misuse is filtering pre-training data so that the AI model simply doesn't know much about some key dangerous topic. At Anthropic, where we care a lot about reducing risk of misuse, we looked into this approach for chemical and biological weapons production, but we didn’t think it was the right fit. Here's why.

What happens when superhuman AIs compete for control?

The latest scenario from Steven Veld and the AI Futures Project explores how things might go if multiple superhuman AIs compete with one another.

Introducing AVERI

Miles Brundage launches AVERI (the AI Verification and Evaluation Research Institute):

we are trying to envision, enable, and incentivize frontier AI auditing, defined as rigorous third-party verification of frontier AI developers’ safety and security claims, and evaluation of their systems and practices against relevant standards, based on deep, secure access to non-public information.

Strategy and politics

The AI patchwork emerges

It’s the beginning of legislative season, and Dean Ball reports on some of the madness being proposed in various state legislatures. As AI becomes a more salient political issue, expect to see a lot more of this.

Extracting books from production language models

This is interesting and unfortunate (although some coverage profoundly overstates the actual findings). The authors find that a number of leading models have memorized significant portions of certain books and can regurgitate them with substantial accuracy.

Note that the findings were somewhat artificial: accuracy was highest with extremely famous works, and extracting source text often required jailbreaking or other complex maneuvers. This is undesirable (and perhaps legally consequential) behavior that needs to get fixed, but it’s hard to argue that actual harm has occurred here.

Industry news

Introducing the AI Chip Sales Data Explorer

Epoch just came out with a dataset on AI chip sales, installations, and power usage. This type of data isn’t sexy, but it’s really useful and Epoch is great at it.

Technical

An FAQ on Reinforcement Learning Environments

Reinforcement learning is hot right now: the frontier labs are pouring compute into it and it’s responsible for much of the recent gain in capabilities. It’s also a lot more complicated than standard pretraining. Epoch investigates the state of RL and where it’s headed.
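If RL environments are new to you, the core abstraction is small: the environment hands the agent an observation, accepts an action, and returns a reward plus the next observation, over and over. Here’s a toy example in the common reset()/step() style; this is my own illustration, not anything from Epoch’s FAQ.

    # Toy RL environment in the usual reset()/step() shape (illustrative).
    # The agent must guess a hidden number; reward is 1.0 on a correct guess.
    import random

    class GuessTheNumberEnv:
        def __init__(self, low: int = 0, high: int = 9, max_steps: int = 10):
            self.low, self.high, self.max_steps = low, high, max_steps

        def reset(self) -> int:
            self._target = random.randint(self.low, self.high)
            self._steps = 0
            return 0  # initial observation: no feedback yet

        def step(self, action: int) -> tuple[int, float, bool]:
            self._steps += 1
            reward = 1.0 if action == self._target else 0.0
            done = reward > 0 or self._steps >= self.max_steps
            # Observation: -1 if the guess was too low, 1 if too high, 0 if right.
            obs = (action > self._target) - (action < self._target)
            return obs, reward, done

    env = GuessTheNumberEnv()
    obs, done = env.reset(), False
    while not done:
        obs, reward, done = env.step(random.randint(env.low, env.high))
    print("final reward:", reward)

Frontier RL environments are this same loop scaled way up: the observation is a repo or a browser, the action is tool use, and designing rewards the model can’t game is most of the work.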

Side interests

Burnout is breaking a sacred pact

One of the most important things I’ve learned from many years of going hard on difficult projects is to take burnout very seriously. If you don’t fix it early, it can be almost impossible to repair in yourself or others.

Cate Hall presents a really interesting perspective based on the elephant and rider model of the mind: burnout occurs when the rider consistently breaks promises to the elephant. See also Emmett Shear’s taxonomy of burnout.