Against Moloch
March 16, 2026

Monday AI Brief #17

I’m pleased to report that I have no new AI-related crises for you this week. Instead we get to focus on the fun parts, starting with AI consciousness. We'll ask two leading neuroscientists whether AI is likely to become conscious (conclusion: probably yes, or almost certainly not).

AI is doing fascinating things to programmers: for many of us, this moment is simultaneously exhilarating and slightly heartbreaking. We’ll look at one high-level overview of how AI is affecting programming, and one deeply personal reflection on the same topic. Programmers aren’t the only ones being disrupted: prinz joins us to argue that while the legal profession will survive AI, the big law firms will not.

Opposing viewpoints on AI consciousness

Are LLMs likely to become conscious as they approach human-level intelligence? That’s a highly contested topic, with lots of strongly held opinions but not a lot of evidence. Even experts on consciousness can’t seem to agree: this week brings us opposing opinions from two well-regarded experts.

Michael Graziano (originator of Attention Schema Theory) tells PRISM that AI consciousness seems likely, and argues that conscious AI might be safer than “zombie AI”.

In the opposing corner is Anil Seth (previously), with a short video presenting four reasons why he thinks AI consciousness is extremely unlikely.

I’ll publish a longer piece on Wednesday examining Anil’s argument in more detail (sneak preview: I have a lot of respect for him, but in this matter I think he’s overconfident).

The End of Computer Programming as We Know It

I love coding in 2026: I’m several times more productive than I’ve ever been before, and it’s absolutely intoxicating. You can have my agentic coding models when you pry them from my cold, dead fingers. But at the same time, I mourn the loss of parts of my craft that just a year ago were important parts of my identity.

This week brings two very different takes on how programmers are adapting to agentic coding. Clive Thompson has a carefully researched piece for the NY Times ($), and James Randall offers a deeply personal reflection.

Why prinz thinks AI will kill BigLaw

prinz believes BigLaw will not survive the AI era. He argues that with AI, a senior partner plus a small number of specialists and support staff will be able to do everything a BigLaw firm does today.

This is a likely path for many professions: with AI, the best people in a field can do far more than they could before (and get paid accordingly). But the rank and file will find themselves increasingly unemployable.

I underestimated AI capabilities (again)

Ajeya Cotra shares some very interesting thoughts on METR’s time horizon metric. The piece has drawn attention because she’s revising her January prediction that the metric would reach 24 hours by the end of this year. Based on recent progress (it has already reached 12 hours), she now predicts 100 hours by the end of the year.

Even more interesting to me is her discussion of how the metric starts to fall apart beyond a certain point. She suggests that almost no tasks really have a one-year time horizon: software tasks that would take a human a year to complete are really collections of multi-day or perhaps multi-week tasks that are largely independent.

We’re quickly running out of traditional benchmarks that can usefully measure the capability of frontier models. Where we’re going, there is no map and no speedometer.