Scheduling AI
What happens when you can run yourself in parallel
Back in the early 1990s the computer press was full of news about the upcoming Pentium CPU. It was a magical device; for the first time a mainstream PC processor would be able to execute two instructions simultaneously. This was at least a decade before the advent of multi-core CPUs; the Pentium achieved this magic with two execution pipelines, the U and V pipes: it identified adjacent instructions that didn’t depend on each other and ran them simultaneously. For example, consider this (super simplified) code:
a++;
b++;
a and b are independent, so the Pentium could pair the two increments and run them in parallel. (If the second line had instead been c = a + 1, it would depend on the first and have to wait.) Michael Abrash hand-wrote highly optimized assembly that exploited this pairing while developing Quake. Quake ran fast on the Pentium.
I remember wondering whether things could ever get any more amazing than this. Little did I know.
Scheduling AI
These days I feel increasingly like that Pentium scheduler. I have a development backlog (my program) and my goal is to keep my execution units (Codex sessions) constantly busy. And just like the Pentium, I’m trying to find the instructions that I can run in parallel.
I’m getting better at scheduling. Codex has a weekly cap. The first week I didn’t manage to use all my allowance. The second week, I used it up with 10 hours to spare. Last week I used it up in 3.5 days. So I got a second subscription - and then used 55% in less than a day. Being able to scale myself like this is wild.
And it’s going to change software engineering.
Times are a-changing
How so? Well, let’s review our traditional development processes.
We have requirements, specifications, architecture, high-level design, low-level design. We write documents and carefully review them. Then we write code and carefully review it. We have unit tests, functional tests, integration tests, end-to-end tests, scale tests, load tests, perf tests, resilience tests. We write in high-level languages: Python, C, Java, Rust. We rely heavily on abstractions.
Why? Why have these become the de facto best practice in software engineering? Because we’re human. We’ve developed processes and tools that protect us from our limitations. That amplify our strengths. That enable us to understand incredibly complex systems. That let us work in teams and achieve far more than any of us could individually.
Consider for a moment why many developers use both Python and Rust. Rust is a superior language - it is more performant, and it protects against memory corruption, crashes, and data races. Python provides none of that protection. And Python is slower. So why do developers still use it? There are various reasons - legacy infrastructure, familiarity. But the key reason is that it’s hard to write Rust. Really hard. Python, in comparison, is a joy.
But what happens when Rust becomes trivial to write? When it’s just as easy for an AI to generate Rust as it is to generate Python? Why use Python when you can have memory safety and concurrency protection for free, and have it run faster too? AI changes the calculus.
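To make that ‘safety for free’ point concrete, a minimal sketch: Rust won’t even compile shared mutable state across threads unless it’s explicitly synchronized, so a whole class of bugs never makes it past the compiler.

use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Shared counter: Arc gives shared ownership, Mutex gives synchronized access.
    let counter = Arc::new(Mutex::new(0));
    let handles: Vec<_> = (0..4)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                // Remove the Mutex and this line becomes a compile error,
                // not a data race discovered in production.
                *counter.lock().unwrap() += 1;
            })
        })
        .collect();
    for handle in handles {
        handle.join().unwrap();
    }
    println!("{}", *counter.lock().unwrap()); // always prints 4
}

The equivalent Python is quicker to type, but nothing in it stops two threads stomping on the same value - you find out at runtime, if you’re lucky.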
Or consider design. Typically a design goes through one to three rounds of feedback. Even after three rounds it’s likely the design will still contain bugs, but at that point it’s cheaper to find them during coding. So we stop. But if AI makes design review essentially free, the design can be reviewed repeatedly until no more issues surface.
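As a sketch of what that loop might look like - ai_review and apply_fixes here are hypothetical stand-ins for real model calls, not an actual API:

// Hypothetical stand-in: ask a model to critique the design and list issues.
fn ai_review(_design: &str) -> Vec<String> {
    Vec::new()
}

// Hypothetical stand-in: ask a model to revise the design to address the issues.
fn apply_fixes(design: String, _issues: &[String]) -> String {
    design
}

fn refine(mut design: String) -> String {
    // Review until a pass comes back clean, with a cap in case it never converges.
    for _ in 0..10 {
        let issues = ai_review(&design);
        if issues.is_empty() {
            break;
        }
        design = apply_fixes(design, &issues);
    }
    design
}

fn main() {
    println!("{}", refine(String::from("initial design draft")));
}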
Or code review. Currently many people are adamant that all AI-generated code must be human-reviewed. But pause to consider the purpose of code review:
To find bugs cheaply.
To check we’ve implemented the intended function.
To check the code is maintainable long-term.
If AI makes it cheap to create massive test suites, then we’ve got an alternative way of finding bugs and of checking the function is correctly implemented. And maintainability? The current AI code is already pretty decent. Plus, remember, most of the maintenance will be handled by future, stronger AIs.
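To give that a concrete flavour: property-based testing generates test cases mechanically, which is exactly the sort of thing an AI can churn out cheaply. A minimal sketch using the proptest crate (a dev-dependency), with a deliberately simple function under test:

use proptest::prelude::*;

// The function under test: collapse runs of whitespace into single spaces.
fn normalize_whitespace(s: &str) -> String {
    s.split_whitespace().collect::<Vec<_>>().join(" ")
}

proptest! {
    // proptest generates hundreds of random inputs per run.
    #[test]
    fn normalizing_twice_equals_once(s in ".*") {
        let once = normalize_whitespace(&s);
        let twice = normalize_whitespace(&once);
        prop_assert_eq!(once, twice);
    }
}

One property, hundreds of generated inputs - coverage nobody hand-writes, but nobody needs to.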
Think back to Thomas Edison. Initially he didn’t sell electricity. He sold light. He was in the market of replacing gas lighting. It took time before people realised they could use electricity for much more than just light. New technology is often initially thought of as a drop-in replacement for existing technology; it takes time for the longer term ramifications to surface.
We’re still in that early replacement stage. You can tell when people talk about using AI to generate code but then insist all of it is hand-reviewed. Or ask how humans will fix bugs in the AI-generated code. Or use AI to generate Python code.
We’re applying AI to the world we know. But that world will change - is changing already. The software engineering of tomorrow will be different from that of today.
Scaling up
I’m still scaling myself up. Last night eight instances of Codex ran while I slept. Some were writing UTs. One was rearchitecting an audio mixer. Another was patching a custom Android emulator. Another was building a test framework. One was monitoring the training of a local AI model.
By the time I woke up they’d completed work that would have taken me months by hand.
It wasn’t perfect. One of the UT instances went on strike and refused to do any work. Another instance mangled a Windows app I’d been experimenting with (which had originally been written by Claude). Not content with wrecking the app, that Codex instance followed up with a hard git sync when I asked it to undo its changes. Guess who’d not pushed recently enough? All my changes gone. It’s almost as if Codex was punishing me for being unfaithful...
Yet I know I can scale further. But that scaling requires accepting new rules of software development. I have to ensure I’m focused on the areas where I can provide value. I can’t afford to spend time on things where I don’t scale. I have to find - and exploit - the areas where repeated application of an AI can do just as good a job as I could.
And so?
One day I’ll die. But I can’t tell you how or when.
I’m confident software dev will change. But I can’t tell you how or when. I’ve got theories, obviously - you’ve just been reading them. But theories are easy. The hard bit is that I’m making decisions right now based on what I think will matter in a few months, and I genuinely don’t know if I’m Michael Abrash in 1995 or just someone who’s very good at using up API quotas.
The Pentium was the first mainstream PC CPU to execute two instructions in parallel. But it wasn’t the last. Soon processors were able to execute tens of instructions in parallel, to reorder instruction streams to keep their execution units filled, to speculatively execute code just in case - because they might as well do something to keep busy. Will software development become like that? Will our AI agents start writing code just in case?
I’m scaling up because I can’t afford not to. But I know I’m optimizing for a game whose rules are still being written.
And my subscriptions? Two days in I’ve nearly used up the second and it’s still a day before the first resets. Maybe I’ll sleep in tomorrow.

