It was never about typing
When code generation is no longer the bottleneck, everything changes
I occasionally post cut-down versions of these articles on LinkedIn. For one of them - The staggering economics of agentic coders - I got some interesting feedback, which boiled down to the commenter disagreeing that code generation was a bottleneck, noting that “typing is not the bottleneck”.
And they were right. But also wrong. Let me explain.
The core of their argument is that software engineering involves much, much more than just writing code. There’s working out what to build, architecting and designing it, building it (which involves writing code), testing, fixing, deploying, supporting. Writing code is a small part of the equation - indeed typing is rarely the bottleneck.
Except.
The current software development process evolved around the reality that writing code is a rare skill. Few people can write code well. Fewer still can architect large-scale systems.
Developing software is so hard we’ve had to invent systems, processes, tools to make the most of the limited skills we humans have. Our limitations affect component sizes, how we decompose code, component interfaces. We need compilers, high level languages, layers of abstraction. Our limitations affect team sizes and structures. And then those team structures leak into the way we build code - Conway’s Law.
Because writing code is so expensive, we carefully architect and design systems to maximise the chance we write the correct code the first time. For years the mantra has been “write once, write right”.
Over the years we have developed new processes and tools to compensate for our limitations - to help us make the most of what we can do. Object-oriented programming, agile, Rust: all designed around human limitations.
The software development process is intrinsically shaped by the capabilities of the humans that build the code.
But.
We are heading into a new world. Where models can reason over more code more rapidly than we humans ever could. Where AI becomes a team member. Where writing code is dirt cheap.
It will change the way we approach development. It has to. It is naive to assume that everything else will stay the same.
It’s not just writing code
The first big factor is that AI isn’t limited to just writing code. It can help generate and define requirements. It can help with the design. It can help with code review. It can help with testing. It can help with support. It can consume larger codebases than we could ever hope to hold in our heads. At some stage it seems likely we’ll start building systems larger than we humans can understand. The latest models are nearly there. Humans top out somewhere between 50 and 100kloc (yes, I agree, kloc is a poor measure - but it’s arguably the best we’ve got).
Take OpenSSL. It’s ~70kloc. Here’s an architecture diagram that Gemini 2.5 produced from the source code after 70 seconds of thinking.
I don’t know OpenSSL well. The diagram may have some errors. But it looks plausible to me. It’s certainly good enough to get me started finding my way around the codebase.
It used about half of Gemini’s 1 million token context window. Plenty of space left to discuss the code and reason about it.
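The arithmetic is worth making explicit. A rough back-of-envelope check shows why codebases of this size now fit - the ~7 tokens-per-line figure is an assumption implied by the numbers above, not a measured value:

```python
# Back-of-envelope: what fraction of a context window does a codebase use?
# Assumes ~7 tokens per line of source, roughly what "70kloc consumed about
# half of a 1M-token window" implies. Varies with language and code style.
TOKENS_PER_LINE = 7  # assumption

def context_fraction(kloc: float, window_tokens: int = 1_000_000) -> float:
    """Fraction of the context window used by a codebase of `kloc` thousand lines."""
    return (kloc * 1000 * TOKENS_PER_LINE) / window_tokens

print(f"{context_fraction(70):.0%}")  # OpenSSL-sized codebase: roughly half
print(f"{context_fraction(17):.0%}")  # the 17kloc codebase: a small fraction
```

The same sum says a model with that window tops out around 140kloc before any room is left for discussion - comfortably past where humans top out.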
Last November I couldn’t get any of the then SOTA AI models to draw a diagram for a 17kloc codebase. How quickly things change.
Pressing the keys faster
The second factor is the cost of code writing. Sure, pressing the keys faster means we can do the old things faster. But we can also do new things. I’ve seen a significant uptick in people producing prototypes. That’s something current AI tools are ideally suited to. Vibe code an idea and if the code isn’t great, well, it doesn’t matter since it’s a prototype. You’d never ship prototype code, would you?
It changes the modify versus create-from-scratch calculus. Currently code is difficult to create, so when we need changes we modify existing code. But what happens if it becomes cheaper to build anew than to modify? We don’t repair faulty toasters any more - we buy new ones which have (mostly) the same interface. If we can do the same with code (and this is an ‘if’, not a given) then does code become like a toaster? Continuous deployment has already trained users that UIs continuously change. So does it matter if an interface changes slightly because code has been regenerated?
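The calculus can be pictured as a simple cost comparison. All the numbers below are illustrative assumptions in arbitrary “effort units”, chosen only to show how the decision flips as generation gets cheaper:

```python
# Illustrative sketch of the modify-vs-regenerate decision.
# Costs are in arbitrary effort units; the specific values are assumptions.

def should_regenerate(understand_cost: float, patch_cost: float,
                      generation_cost_per_kloc: float, kloc: float) -> bool:
    """True if regenerating from the spec is cheaper than modifying in place."""
    modify = understand_cost + patch_cost
    regenerate = generation_cost_per_kloc * kloc
    return regenerate < modify

# Old world: generation is expensive, so we patch existing code.
print(should_regenerate(5, 3, 10.0, 20))  # regenerating 20kloc costs far more
# New world: generation is dirt cheap, so rebuilding wins.
print(should_regenerate(5, 3, 0.1, 20))
```

Note what stays constant in this toy model: the cost of understanding what to build. That is the term generation doesn’t erase.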
I had a real-world example of this recently where I planned to modify an existing tool. That’s what I’d always have done in the old world. But I quickly discovered it was faster to start from scratch. Reuse the idea, the architecture, but regenerate the actual code. Is that where we are heading?
So what?
We're entering a world where software development processes will fundamentally transform. Yes, keys will be pressed faster, but that's just the beginning.
This transformation will likely happen in phases:
First, the productivity phase: teams using AI to accelerate existing workflows while maintaining traditional architectures and processes. We're here now, with AI helping us write individual functions and classes faster.
Next, the reimagination phase: questioning long-held assumptions about software design. Why maintain complex abstraction layers designed for human comprehension when AI can reason across millions of lines without them? Why build modular systems when generating a custom solution might be faster than integrating existing components?
Finally, the reinvention phase: entirely new development paradigms emerge. Perhaps we'll shift from "write once, write right" to "generate, test, regenerate" where code is treated as disposable and continuously regenerated based on evolving requirements. Programs become living documents that grow and adapt over time instead of static artifacts we painstakingly modify.
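The “generate, test, regenerate” loop can be sketched in a few lines. The `generate` callable below is a stand-in for an AI code generator - everything here is a hypothetical shape, not a real API - and the durable artifacts are the spec and the tests, not the code:

```python
# Sketch of a generate-test-regenerate loop: code is disposable, while the
# spec and the test suite are the artifacts we actually maintain.
from typing import Callable, Optional

def generate_until_green(generate: Callable[[str], str],
                         run_tests: Callable[[str], bool],
                         spec: str, max_attempts: int = 5) -> Optional[str]:
    """Regenerate candidate code from the spec until the test suite passes."""
    for _ in range(max_attempts):
        candidate = generate(spec)
        if run_tests(candidate):
            return candidate
    return None  # the spec or the tests need revising, not the code

# Toy usage: a fake generator whose second attempt is correct.
candidates = iter(["def add(a, b): return a - b",   # buggy attempt
                   "def add(a, b): return a + b"])  # correct attempt
fake_generate = lambda spec: next(candidates)

def run_tests(src: str) -> bool:
    ns: dict = {}
    exec(src, ns)
    return ns["add"](2, 3) == 5

print(generate_until_green(fake_generate, run_tests, "add two numbers"))
```

Under this paradigm, engineering effort shifts from the body of `generate_until_green`’s candidates to the quality of `spec` and `run_tests` - which is exactly the shift in developer focus described below.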
For developers, this means shifting focus from implementation details to system-level thinking, from writing code to defining problems clearly, from debugging to effective AI collaboration. The most valuable skills won't be language proficiency or algorithm knowledge, but the ability to communicate intent and domain understanding to AI systems.
For organisations, it means rethinking team structures, development processes, and even business models. Companies that recognise code generation is becoming a commodity will focus instead on proprietary data, novel user experiences, and problem domains that create lasting value.
The bottleneck isn't typing speed - it never was. But when we remove the constraint of human coding capacity, we unlock entirely new possibilities for what software can become and how we build it.
The future belongs to those who can envision software beyond the limitations that have defined it for decades. Those who see this shift not merely as "faster typing" but as a fundamental restructuring of what's possible.