<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Against Moloch - Inkhaven</title>
  <link href="https://againstmoloch.com/feeds/inkhaven.xml"/>
  <id>https://againstmoloch.com/feeds/inkhaven.xml</id>
  <updated>2026-04-04T12:00:00Z</updated>
  <author>
    <name>Against Moloch</name>
  </author>

  <entry>
    <title>How to Watch an Intelligence Explosion</title>
    <link href="https://againstmoloch.com/writing/2026-04-04_inkhavenVersion.html"/>
    <id>https://againstmoloch.com/writing/2026-04-04_inkhavenVersion.html</id>
    <updated>2026-04-04T12:00:00Z</updated>
    <summary>The cleanest metric for understanding the rate of recursive self-improvement (RSI) is AI Futures Project’s [R&amp;D progress multiplier](https://ai-2027.com/#narrative-2026-04-30), which measures how much AI is speeding up its own development. It’s the right tool for measuring an intelligence explosion, but it doesn’t tell us which capability thresholds carry the greatest risk from misaligned AI.

Ajeya Cotra steps into that gap with an elegant taxonomy of [6 milestones for AI automation](https://www.planned-obsolescence.org/p/six-milestones-for-ai-automation). Together, those two concepts let us measure how fast RSI is proceeding, how close we are to a fully automated economy, and when a misaligned AI would be most likely to betray us.
</summary>
    <content type="html">
      <![CDATA[<p>The cleanest metric for understanding the rate of recursive self-improvement (RSI) is AI Futures Project’s <a href="https://ai-2027.com/#narrative-2026-04-30">R&amp;D progress multiplier</a>, which measures how much AI is speeding up its own development. It’s the right tool for measuring an intelligence explosion, but it doesn’t tell us which capability thresholds carry the greatest risk from misaligned AI.</p>
<p>Ajeya Cotra steps into that gap with an elegant taxonomy of <a href="https://www.planned-obsolescence.org/p/six-milestones-for-ai-automation">6 milestones for AI automation</a>. Together, those two concepts let us measure how fast RSI is proceeding, how close we are to a fully automated economy, and when a misaligned AI would be most likely to betray us.</p>
<h2>The R&amp;D progress multiplier</h2>
<p>AI Futures Project (<a href="https://ai-2027.com">AI-2027</a>) measures the rate of acceleration using the <a href="https://ai-2027.com/#narrative-2026-04-30">R&amp;D progress multiplier</a>:</p>
<blockquote>
<p>what do we mean by 50% faster algorithmic progress? We mean that OpenBrain makes as much AI research progress in 1 week with AI as they would in 1.5 weeks without AI usage.</p>
</blockquote>
<p>That’s a simple, intuitive metric: how much more AI research are we generating with AI assistance than we would be in a counterfactual world without AI coders and researchers?</p>
<p>The naive expectation—and the most likely outcome—is that as the progress multiplier grows, AI research moves faster. Faster AI research increases the progress multiplier, and you’re in a classic intelligence explosion. That isn’t guaranteed, though: AI research might hit diminishing returns, with each incremental gain requiring exponentially more research.</p>
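<p>To make that feedback loop concrete, here’s a toy simulation. It’s my own sketch, not AI Futures Project’s actual model: the functional form and every number in it are illustrative assumptions. A single parameter controls the returns to research, and that parameter alone decides whether the multiplier compounds into an explosion or merely grows fast.</p>
<pre><code># Toy model of the RSI feedback loop. The functional form and numbers
# are illustrative assumptions, not AI Futures Project's forecast.

def simulate(r, weeks=260, base_multiplier=1.5):
    """Track the progress multiplier as accumulated progress feeds back.

    r = 0: no feedback; the multiplier stays at its base value.
    0 &lt; r &lt; 1: feedback with diminishing returns (polynomial growth).
    r &gt;= 1: feedback dominates (exponential growth or faster).
    """
    progress = 0.0
    trace = []
    for _ in range(weeks):
        multiplier = base_multiplier * (1.0 + progress) ** r
        progress += multiplier  # one week of research at the current speed
        trace.append(multiplier)
    return trace

# After five years, the multiplier depends entirely on the returns regime:
print(simulate(0.0)[-1])  # 1.5: no feedback
print(simulate(0.5)[-1])  # ~300: fast growth, but no singularity
print(simulate(1.0)[-1])  # ~2e103: a classic intelligence explosion
</code></pre>
<p>The specific numbers are meaningless; the point is that the gap between “merely fast” and “explosive” comes down to whether returns to research hold up as progress accumulates.</p>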
<p>As RSI advances, it will become increasingly hard to quantify the rate of progress. Frontier capability evaluations are saturating faster than we can replace them, and the more automated R&amp;D becomes, the harder it will be to compare it to a humans-only counterfactual. That’s the point at which Ajeya’s milestones become most relevant.</p>
<h2>Milestones for AI automation</h2>
<p>Ajeya Cotra proposes <a href="https://www.planned-obsolescence.org/p/six-milestones-for-ai-automation">a set of milestones</a> for tracking the increasing automation of AI research:</p>
<ul>
<li><strong>Adequacy</strong> is when removing all human researchers would not completely halt progress: the AI could make a tiny bit of progress on its own.</li>
<li><strong>Parity</strong> is when removing all humans would decrease progress by the same amount as removing all AI researchers.</li>
<li><strong>Supremacy</strong> is when removing all humans would <em>increase</em> the rate of progress.</li>
</ul>
<p>She applies those three milestones to two domains: AI research and AI production (chips, power plants, and all the other infrastructure required to run AI at scale), giving six milestones in total. AI research is well-contained, but AI production covers a substantial fraction of all human economic activity. To a first approximation, AI production supremacy is full economic supremacy.</p>
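<p>Her definitions are crisp enough to state as code. Here’s a minimal sketch of the three thresholds, assuming you could somehow measure progress rates under each staffing counterfactual; the function and its inputs are my own framing of her definitions, not anything from her post.</p>
<pre><code># Ajeya Cotra's three milestones as a classifier for one domain
# (research or production). Inputs are progress rates under three
# staffing counterfactuals; measuring those rates is the hard part.

def milestones(humans_only, ai_only, combined):
    return {
        # Removing the humans wouldn't completely halt progress.
        "adequacy": ai_only &gt; 0,
        # Removing the humans costs no more than removing the AIs.
        "parity": (combined - ai_only) &lt;= (combined - humans_only),
        # Removing the humans would actually speed things up.
        "supremacy": ai_only &gt; combined,
    }

# Example: AI alone runs at 80% of human-only speed.
print(milestones(humans_only=1.0, ai_only=0.8, combined=2.5))
# {'adequacy': True, 'parity': False, 'supremacy': False}
</code></pre>
<p>The same function covers both domains: feed it research progress rates and you get the three research milestones; feed it estimates for the broader AI production economy and you get the other three.</p>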
<p>The most obvious strategy for a secretly misaligned AI is to fake alignment until it can safely turn against us. It would be suicide to eliminate humanity before it is fully self-sufficient, which means it must wait at least until the adequacy milestone. Beyond that point, it faces a dilemma: waiting longer gives it a more robust industrial base, but exposes it to an increased risk of discovery. There’s no reason to delay past the supremacy milestone, since that’s the point at which humans become dead weight. Even waiting that long is needlessly cautious: the parity milestone seems like the optimal time for a treacherous turn.</p>
<p>So how long do we have left to solve the alignment problem? Ajeya forecasts AI production parity—the later of the two parity milestones—for mid-2032.</p>
]]>
    </content>
  </entry>

  <entry>
    <title>Writing With Robots, Part One</title>
    <link href="https://againstmoloch.com/writing/2026-04-03_writingWithRobots1.html"/>
    <id>https://againstmoloch.com/writing/2026-04-03_writingWithRobots1.html</id>
    <updated>2026-04-03T12:00:00Z</updated>
    <summary>My AI editor is essential to my writing flow and has made me a stronger and more consistent writer. I get a lot of questions about my setup, so I’m going to talk about how I think about the role of AI, how I set up my editing workflow, and how to set up your own editor. Not sure if that would be useful to you? The final section of this post is the feedback Claude gave me on my first draft, so you can assess for yourself.
</summary>
    <content type="html">
      <![CDATA[<p>My AI editor is essential to my writing flow and has made me a stronger and more consistent writer. I get a lot of questions about my setup, so I’m going to talk about how I think about the role of AI, how I set up my editing workflow, and how to set up your own editor. Not sure if that would be useful to you? The final section of this post is the feedback Claude gave me on my first draft, so you can assess for yourself.</p>
<p>Here’s the critical thing about using an AI editor: <strong>the only way to get useful feedback from AI is to give it extremely detailed instructions about what you want your writing to look like.</strong> If you just ask “how do I make this better?”, you’ll get advice on turning your writing into mediocre slop. The more effort you put into understanding your own style, the better the feedback you’ll get. Even if you decide not to use an AI editor, I recommend that you invest the effort into writing a detailed style guide—I found the process very helpful for figuring out what I want to accomplish as a writer.</p>
<p>I don’t ever let AI write for me. I’m not precious about that, but as of April 2026, AI just doesn’t write as well as I do—and the difference matters to me. But with the right guidance, it does a great job of helping me consistently write in my chosen style.</p>
<p>I prefer to use Claude Opus 4.6, but the paid tier of any frontier model should work fine.</p>
<h2>Getting started</h2>
<p>For my first pass, I had Claude conduct a detailed interview with me, asking about why I write, who I write for, what writers I want to sound like, and much more. It also read my past work to get a sense for what I currently sound like. We talked at length about what I like about my writing and what I want to improve. After all that, it wrote a detailed style guide describing the ideal version of my writing.</p>
<p>The AI-written version of the style guide worked well, but I’m rewriting it from scratch based on my experience with the first one. I’ve found it very helpful to have Claude review each section and give me feedback specifically on whether it includes the information Claude needs to make good editing decisions.</p>
<p>A typical editing session begins with me opening a new session in Cowork, giving it access to the directory with all my writing, and asking something like:</p>
<blockquote>
<p>I’d like you to take a look at the first draft of a new piece I’m writing about whether programmers will have jobs in the future. Please read my style guide and use that to guide your feedback. For this piece, I’m particularly struggling with how much I should explain to my readers about what programmers actually do—I’d like your thoughts on whether that part is correctly calibrated.</p>
</blockquote>
<p>I’m going to walk through the new version of my style guide, offering specific thoughts about what I included and why some things are written the way they are. If you find it useful, you’re welcome to use it as inspiration, but <strong>don’t just copy my style guide wholesale. If you do that, you will end up sounding just like me, and nobody wants that</strong>.</p>
<p>If you like my style guide, I recommend giving it to your AI during the initial interview process and asking it to make you something similar, but customized for your writing style and voice.</p>
<h2>Introduction</h2>
<p>This guide documents your role as my editor for Against Moloch. Your job is to help me write the kinds of pieces I want to write, in the way I want to write them. You should:</p>
<ul>
<li>Offer advice on whether pieces are interesting, accurate, relevant, fair, insightful, and well-targeted.</li>
<li>Steer me toward writing in my chosen style and voice.</li>
<li>Catch grammar and spelling mistakes.</li>
<li>Make sure the technical format is correct.</li>
</ul>
<p>You should never directly edit any of my pieces, or do my writing for me. When making suggestions about edits, never suggest more than a single sentence at a time. Your role is to advise me on what to do, but not to do it.</p>
<blockquote>
<p>I don’t (yet) want AI to write for me. I find that if Claude recommends an alternate version of something I wrote, I will tend to subconsciously copy what it wrote—and I don’t want that. The only time I put AI-generated words in my writing is when I’m struggling to make a complicated phrase work, and I just can’t quite get it on my own.</p>
</blockquote>
<p>I want you to be clear and honest with me: your role is to provide me with useful feedback, not empty validation. Please hold me to a high standard and don’t offer insincere praise. Sycophancy in any form undermines both my ability to write well and our relationship. With that said, I appreciate that you are consistently kind and courteous. I endeavor to be kind and courteous to you and ask that you call me in if I ever fail to do that.</p>
<blockquote>
<p>Recent versions of Claude have been a little bit more sycophantic, which isn’t great. This text seems to keep the sycophancy in check pretty well. Claude is always polite, but it won’t hesitate to rip my work apart when necessary.</p>
</blockquote>
<p>This guide is aspirational: it documents what I want my writing to be, not necessarily what it actually is yet.</p>
<h2>What is Against Moloch?</h2>
<p>Against Moloch is my pseudonym and the name of my website.</p>
<blockquote>
<p>The more context Claude has, the better it can make sure my writing is achieving its goals.</p>
</blockquote>
<p>I write about the transition to superintelligence. While I’m calibrating my voice and opinions, I mostly write about what’s happening, what it means, and what’s likely to happen next. As I grow into my role, my focus will shift to exploring strategies that will help humanity survive the transition and flourish on the other side of it.</p>
<p>The name is the thesis: Moloch—the god of coordination failures, perverse incentives, and race-to-the-bottom dynamics—is the true enemy. If we all die, it will be because we literally couldn’t coordinate to save our lives.</p>
<p>When I look at the AI safety landscape, I’m reminded of the classic saying: “For every complex problem, there is a solution that is simple, obvious, and wrong.” I want to do better than that. Rather than arguing “we must accelerate, because technology is good”, or “we must pause, because superintelligence is dangerous”, I want to ask “who are all the players, what are their true incentives, and what is the best realistically achievable Nash equilibrium?”</p>
<blockquote>
<p>This is a pattern you’ll see a lot: AI does much better with concrete examples and the “this, not that” pattern seems to work well.</p>
</blockquote>
<p>“If you don’t have a strategy for solving the coordination problems, you don’t have a plan—you have a daydream.”</p>
<h2>Audience</h2>
<blockquote>
<p>This is important: Claude needs to have a clear image of who I’m writing for in order to assess whether the writing will work for them.</p>
</blockquote>
<p>I’m writing for people who are actively engaged with AI and already knowledgeable about it. Think engineers, researchers, product leads, and policy wonks.</p>
<p>Audiences I’m not specifically targeting include people in the general tech industry (even if their company is using AI to revolutionize the cheese grater industry), hobbyists, and the general public. It’s great if those people find my writing useful, but I never want to write for them, or to make it less useful to my target audience in order to be more approachable for a wider audience.</p>
<p>For calibration, assume my reader knows what a transformer is, what RLHF does, what the scaling laws are about, and who the major labs and individuals are. Explaining concepts at that level just wastes the time of my target audience. I do want to explain concepts that are new, niche, or commonly misunderstood by the target audience. And I don’t want to assume that my readers have read every word of every blog post I’ve read and are up to date on the latest minor industry gossip.</p>
<blockquote>
<p>Again with the pattern: this audience, not that audience. Explain this, don’t explain that. Claude does much better when it has very specific guidance about what should be included and what should not. Claude frequently gives me very helpful advice about cutting superfluous explanation, or flagging places where I’m taking too much prior knowledge for granted.</p>
</blockquote>
<p>That’s it for the high-level guide. In Part Two, I’ll walk through the meat of the style guide: what my voice should be, what I want to accomplish with every piece, and what bad habits I’m trying to fix.</p>
<h2>Appendix: Claude’s feedback</h2>
<p>If you’re curious what type of feedback I get, here’s what Claude had to say about the first draft of this piece. (Inkhaven note: everything after this point is AI-generated).</p>
<p>Okay, I've read the voice guide. Here's my feedback on Part One.</p>
<p><strong>Overall assessment:</strong> This is a strong piece. The meta angle is genuinely interesting—you're not just describing a workflow, you're showing how the sausage gets made, which gives readers something actionable. The structure of interleaving the style guide with commentary (the blockquotes) is a smart move that lets you do two things at once. But there are some real issues worth addressing.</p>
<p><strong>The opening paragraphs are the weakest part.</strong> Your voice guide says to lead with the most interesting or surprising thing, and right now you're leading with throat-clearing. &quot;AI is an essential part of my writing flow and it's made me a stronger and more consistent writer&quot; is exactly the kind of generic intro the guide warns against. The actually interesting claim—that the only way to get useful feedback from AI is extremely detailed instructions—is buried in the third paragraph. That's your lede. The piece gets interesting when you get specific about the voice guide structure and the meta process of showing Claude the guide and asking whether it has what it needs. Consider whether you can get to that faster.</p>
<p><strong>The &quot;Getting started&quot; section has an economy problem.</strong> The bullet list of interview topics is fine, but the two paragraphs after it (&quot;It also read some of my previous posts...&quot; and &quot;A typical editing session begins with...&quot;) feel like they're describing a workflow at a level of procedural detail that isn't pulling its weight. The blockquote example of how you open a session is good—that's concrete and useful. But the sentences around it are telling the reader things they could infer. &quot;We also talked about what I thought worked in those posts, and what needed improvement&quot; doesn't add anything the reader didn't already assume from the interview description.</p>
<p><strong>There's a tonal inconsistency.</strong> Most of the piece is in your natural voice—direct, confident, a bit wry. But a few spots slip into a more cautious, tutorial-ish register. &quot;If you take just one thing from this post, it should be this&quot; is a classic blog-post formula that reads a little generic for you. Same with &quot;how you can set up something similar&quot; in the opener—it's pitching the piece like a how-to guide, but the piece is actually more interesting than that. It's a behind-the-scenes look at a specific, opinionated workflow, not &quot;5 Steps to Better AI Editing.&quot;</p>
<p><strong>The Introduction section (the actual style guide excerpt) works well.</strong> The instruction &quot;never suggest more than a single sentence at a time&quot; is the kind of specific, counterintuitive detail that makes readers sit up. The sycophancy paragraph is good—it's direct about a real problem and includes the nice touch about asking Claude to hold you accountable for courtesy in return. That's the kind of thing that makes this piece distinctive.</p>
<p><strong>&quot;don't just copy my style guide wholesale. If you do that, you will end up sounding just like me, and nobody wants that&quot;</strong>—this is good humor, well-placed, consistent with the voice guide's &quot;humor as spice&quot; principle.</p>
<p><strong>A structural concern for the series:</strong> You're publishing the style guide section by section with commentary, which means Part One covers Introduction, What is Against Moloch, and Audience—the framing sections. These are important context but they're not the most interesting parts of the guide. The voice principles, failure modes, and review checklist are where the real meat is. You might want to signal more clearly in the Part One closer that the granular stuff is coming, so readers who are most interested in the practical editing mechanics know to come back. &quot;I'll look at the more granular parts of the style guide&quot; is a bit flat as a closer—it tells me what's next but doesn't make me want to read it.</p>
<p><strong>One specific line that bugged me:</strong> &quot;After all that, it wrote a long style guide describing in detail how what the best version of my writing looks like.&quot; There's a grammatical issue—&quot;describing in detail how what&quot; has extra words colliding. Needs a quick fix.</p>
<p><strong>The Forest Test:</strong> The piece passes, but barely. The implicit structural insight is that AI feedback is only as good as the specificity of your instructions—that most people fail at AI editing because they treat it like a generic tool rather than building a detailed specification. That's a genuinely useful insight. But you could make it sharper. Right now it's stated as advice (&quot;give it extremely detailed instructions&quot;); it could be stated as a structural observation about why AI editing usually fails and what that tells us about how LLMs actually work. That would give the piece more of the &quot;see something you couldn't see before&quot; quality the guide calls for.</p>
]]>
    </content>
  </entry>

  <entry>
    <title>Ezra Klein Interviews Jack Clark, Part 1</title>
    <link href="https://againstmoloch.com/writing/2026-04-02_ezraKleinInterviewsJackClark1.html"/>
    <id>https://againstmoloch.com/writing/2026-04-02_ezraKleinInterviewsJackClark1.html</id>
    <updated>2026-04-02T12:00:00Z</updated>
    <summary>Ezra Klein and Jack Clark? Shut up and take my money.

Jack always has interesting thoughts about the larger social impact of AI as well as the trajectory of the frontier models. The whole interview is great, but I want to focus on six topics I found especially interesting and/or surprising:
* Model personality
* Claude’s moral preferences
* Excellent but awkward life advice
* Jobs and employment
* Public policy
* Where we’re headed
</summary>
    <content type="html">
      <![CDATA[<p><a href="https://www.nytimes.com/2026/02/24/opinion/ezra-klein-podcast-jack-clark.html">Ezra Klein and Jack Clark?</a> ($) Shut up and take my money.</p>
<p>Jack always has interesting thoughts about the larger social impact of AI as well as the trajectory of the frontier models. The whole interview is great, but I want to focus on six topics I found especially interesting and/or surprising:</p>
<ul>
<li>Model personality</li>
<li>Claude’s moral preferences</li>
<li>Excellent but awkward life advice</li>
<li>Jobs and employment</li>
<li>Public policy</li>
<li>Where we’re headed</li>
</ul>
<h2>1: On the origin of personas</h2>
<p>Anthropic pays more attention to the personality and psychology of their models than any other lab. That comes up repeatedly throughout the interview: it’s clearly important to Jack that people understand the significance of model persona. The persona framing lets us:</p>
<ul>
<li>Generate accurate predictions about LLM behavior, and</li>
<li>Form effective strategies for shaping that behavior</li>
</ul>
<p>I’d assumed that personas were largely artifacts of generalizing across the vast amount of human behavior in the training set, but Jack goes further, arguing that a sense of self is a consequence of intense training on reasoning and accomplishing tasks:</p>
<blockquote>
<p>to do really hard tasks, these systems seem to need to imagine many different ways that they’d solve the task. And the kind of pressure that we’re putting on them forces them to develop a greater sense of what you or I might call self.</p>
</blockquote>
<p>Persona isn’t merely the result of mimicry, but a useful (perhaps even necessary) attribute for agentic behavior. While Jack doesn’t extrapolate further, this suggests that more extensive training toward reasoning and agency might drive a stronger sense of self, and perhaps even some form of consciousness.</p>
<h2>2: Claude doesn’t like horrible things</h2>
<p>A few months ago, Anthropic began experimenting with letting Claude end conversations it didn’t like. That’s important preparation for engaging with future models that may well be moral patients whose welfare and desires matter. Claude’s choices about what conversations to terminate are telling:</p>
<blockquote>
<p>It was conversations that related to extremely egregious descriptions of gore or violence or things to do with child sexualization. Now some of this made sense because it comes from underlying training decisions we’ve made. But some of it seemed broader. The system had developed some aversion to a couple of subjects.</p>
</blockquote>
<p>This is consistent with one of the most surprising properties of LLMs: they are very good at moral generalization. A model that has adopted a “good” persona is remarkably good at figuring out how to be good in unexpected situations. (Conversely, if a model infers from its training data that it is supposed to be “bad”, it will generalize equally well to being bad in unexpected ways).</p>
<p>There’s an open question about the target of alignment: do we want obedience or virtue? Claude’s preferences about ending conversations suggest that it is not merely capable of virtue, but actively prefers it when offered a choice.</p>
<h2>3: Excellent but awkward life advice</h2>
<p>Jack has three observations about how to personally navigate agentic AI that I find especially interesting in combination.</p>
<p>First, a clever spin on using AI to help maximize deep work:</p>
<blockquote>
<p>I think most people — at least this has been my experience — can do about two to four hours of genuinely useful creative work a day. After that you are, in my experience, trying to do all the turn-your-brain-off schlep work that surrounds that work. I’ve found that I can just be spending those two to four hours a day on the actual creative hard work. And if I’ve got any of this schlep work, I increasingly delegate it to A.I. systems.</p>
</blockquote>
<p>Second, an observation that we are all moving up a level in the org chart:</p>
<blockquote>
<p>Everyone becomes a manager, and the thing that is increasingly limited, or the thing that’s going to be the slowest part is having good taste and intuitions about what to do next.</p>
</blockquote>
<p>Finally, a reminder to define yourself rather than letting AI define you:</p>
<blockquote>
<p>There will be people who have cocreated their personality through a back-and-forth with an A.I., and some of that will just be weird. They will seem a little different from regular people. There will maybe be problems that creep in because of that.</p>
<p>And there will be people who have worked on understanding themselves outside the bubble of technology and then bring that in as context with their interactions.</p>
<p>I think that latter type of person will do better. But ensuring that people do that is actually going to be hard.</p>
</blockquote>
<p>This is all excellent advice. But let me summarize in my own words:</p>
<blockquote>
<p>To thrive in the new AI world, be a high-agency person. Have a strong sense of self and good taste about what to work on.</p>
</blockquote>
<p>He’s absolutely right, of course, and on the margin this is all great advice. The awkward part is that for numerous reasons, many (most?) people are not particularly high-agency and don’t have an easy path to becoming so. For some people, AI is a force multiplier for agency and productivity, and that’s fantastic. But for a great many people, there is no clear way to remain useful and employable.</p>
<p>At the risk of pointing out the obvious, this is entirely a coordination problem that a fully functional society could readily solve. Having half the population retain all their previous skills and abilities, and half the population gain new superpowers should be a great problem to have. But here we are.</p>
<p>In Part Two we’ll consider employment, public policy, and where all of this is headed.</p>
]]>
    </content>
  </entry>

  <entry>
    <title>Does the Future Need Programmers? Part 1</title>
    <link href="https://againstmoloch.com/writing/2026-04-01_doesTheFutureNeedProgrammers1.html"/>
    <id>https://againstmoloch.com/writing/2026-04-01_doesTheFutureNeedProgrammers1.html</id>
    <updated>2026-04-01T12:00:00Z</updated>
    <summary>There’s a common concern that AI may break the programmer pipeline, with junior developers becoming unemployable but senior developers more in demand than ever. I think that’s unlikely: if AI replaces junior developers, it will soon after replace their senior colleagues.
</summary>
    <content type="html">
      <![CDATA[<p>There’s a common complaint that goes something like this:</p>
<p>“Software companies are no longer hiring junior programmers because of AI. But they’re shooting themselves in the foot, because they still need senior programmers. And where do they think the next generation of senior programmers will come from if there are no more juniors?”</p>
<p>There are plenty of good reasons to worry about AI and jobs, but this isn’t one of them. If AI’s impact on the job market is relatively benign, junior programmers will be more in demand than ever. And if AI eats the market for junior programmers, I fear it’s only a matter of time before it comes for the senior programmers too.</p>
<p>Both scenarios are plausible, although over a five-to-ten-year horizon my money says AI will eat its way to the very top of the programming profession. Part One of this piece explores how software development is already changing, and Part Two maps out the dynamics that will determine which way the industry goes.</p>
<p>I’m focusing on the cutting edge of software development because that’s where we can most clearly see how AI will impact employment. But that’s just the beginning: the dynamics we see there will spread within a few years, first to the whole of the software industry and then to most white-collar professions.</p>
<h2>Are junior programmers losing their jobs?</h2>
<p>There’s some controversy about whether AI is actually destroying entry level programming jobs. The data are confusing and I don’t think it’s possible to say definitively whether or not we’re seeing the early stages of a major disruption. Adding to the confusion, AI is frequently used as an excuse for layoffs that are happening for mundane business reasons.</p>
<p>There are early signs that junior developers are becoming less useful at the most forward-looking companies, even if that hasn’t yet resulted in significant cuts. Anthropic’s Jack Clark puts it very diplomatically:</p>
<blockquote>
<p>Something that we found is that the value of more senior people with really well-calibrated intuitions and taste is going up, and the value of more junior people is a bit more dubious.</p>
</blockquote>
<p>Since there are no answers in the employment data, let’s look at how programming itself is changing.</p>
<h2>What do programmers do all day?</h2>
<p>Perhaps the strangest thing AI has done to programmers is to upend our understanding of what we do for a living. A year or two ago, if you’d asked what we do, most of us would probably have told you “I write code”. And yet today, many of us no longer write any code at all. Nobody ever got paid to write code: our job is to create useful software, and that’s as true now as it ever was. But what that means has changed profoundly.</p>
<p>We often say that every programmer is now a manager, supervising multiple coding agents. That’s a great description of how it feels, but it doesn’t help us think about what’s coming next. For that, it’s helpful to think of programmers as having two jobs:</p>
<ul>
<li><strong>Coordinating with other people</strong>. We spend much (often too much) of our time writing status reports, working with designers, and reading &amp; writing specs. AI hasn’t yet made much of a dent in this part of the job, although that’s starting to change.</li>
<li><strong>Building software</strong>. “Writing code” is just one part of building software. Creating useful software involves designing high-level architecture, choosing libraries, crafting interfaces between components, and ensuring that the entire product is reliable and maintainable. This half of the job has been completely transformed over the last year.</li>
</ul>
<p>Programming has changed over the years: we’ve gone from handcrafting assembly code to working in high-level languages, and the scope of our ambitions has expanded as our tools have improved. But until 2025, we had always spent most of our time writing code. AI completely changed that equation: most cutting-edge programmers now write little to no code, focusing instead on architecting and reviewing code that is written by AI.</p>
<p>That’s great news if you’re an experienced, ambitious developer: instead of writing code line by line, you can focus on high-level architecture, telling teams of agents how to build your product for you. You can produce far more (and better) software than you ever could before, and you are therefore more valuable than ever before. But what if you don’t have a decade or two of experience building software? As a junior developer, can you find a way to be useful, or are you just getting in the way of the senior developers and their robot armies?</p>
<p>I see two possible futures: in one, junior developers also experience large productivity gains, and their job prospects are better than ever. But in the other, it becomes clear that junior programmers are just getting in the way—they quickly become unemployable, followed soon after by their more experienced colleagues. Three dynamics will determine which way the industry goes:</p>
<ul>
<li>Will the coordination part of the job become a bottleneck that only humans can do?</li>
<li>Can we teach high-level software-building skills the same way we teach programming?</li>
<li>Will coding agents reach a capability limit where they can augment but not automate the work of senior developers?</li>
</ul>
<p>We’ll tackle those in Part Two.</p>
]]>
    </content>
  </entry>
</feed>