Rewriting the rules
AI will change how we think about rewriting software
Twenty-four years ago, Joel Spolsky wrote what became one of the most influential essays in software development. "Things You Should Never Do, Part I" argued that rewriting code from scratch is "the single worst strategic mistake that any software company can make." His reasoning? Netscape's fatal decision to rewrite their browser engine, which he claimed opened the door for Internet Explorer to dominate the market.
The essay became gospel. Don't rewrite, refactor. Don't start over, evolve. Every time a developer suggests a rewrite, someone inevitably links to Joel's article.
But AI is poised to change this calculus.
To understand why, let's look at some historical rewrites.
WordStar's story is a cautionary tale. In 1982, WordStar was the dominant word processor, with at least 75% market share, when the company decided to rewrite it for the IBM PC. But the rewrite took years. Meanwhile, WordPerfect captured their market. By the time the new WordStar shipped, it was too late. A once-dominant product became irrelevant because it lost years to a rewrite.
But contrast this with Microsoft's Windows NT project. In 1988, Microsoft hired Dave Cutler to create a new, modern Windows. It wasn't an incremental improvement – it was a complete rewrite. It took five years. But that investment paid off spectacularly. The NT kernel still forms the basis of Windows today. Without the rewrite, Microsoft could well have joined the long list of failed software companies.
So why did one rewrite succeed while the other failed? The conventional wisdom is resources and market position. Microsoft could afford the time and money; WordStar couldn't. Microsoft's monopoly position gave them breathing room; WordStar faced intense competition.
But there's another factor: complexity. WordStar was rewriting a word processor – a relatively self-contained application. Windows NT was replacing an operating system that had grown organically from DOS. The technical debt in Windows was a bigger threat than the risk of a rewrite. Microsoft couldn’t afford not to rewrite.
The irony is that history provided a fascinating counterpoint to Joel's advice – through his own company's product, FogBugz. As Joel was arguing against rewrites, his company was maintaining and incrementally improving FogBugz, their bug-tracking software. They followed his prescription perfectly: careful evolution, no rewrites, steady improvements.
Yet by 2017, FogBugz had become legacy software. The codebase, dating back to 2000, was written in a proprietary language called Wasabi (itself an interesting story about avoiding rewrites). While competitors rebuilt their products for the cloud era, FogBugz remained anchored to its origins. The technical debt accumulated. Eventually, Fog Creek (which went on to become Glitch) rebranded FogBugz as Manuscript and sold it to DevFactory.
The product that inspired the "never rewrite" rule died a slow death precisely because it wasn't rewritten. Meanwhile, companies that did brave rewrites – like Basecamp, which rebuilt their product multiple times – thrived. It turns out that sometimes not rewriting is riskier than rewriting.
This brings us to Mozilla. Joel's article portrayed their rewrite as a catastrophic mistake. But was it? Sure, it took longer than expected. Yes, it hurt them in the short term. But that rewrite gave us Gecko, the engine that powered Firefox and broke IE's monopoly. More importantly, it created a foundation that Mozilla still builds on today and which has given us Rust. Twenty-four years later, was Joel wrong?
In reality, Joel wasn't wrong for his time. In 2000, rewrites were incredibly risky. You had to manually recreate every feature, fix every bug, handle every edge case. The cost was enormous, and the risk of failure was high.
We're not there yet, but it's becoming clear that AI will change this calculus. Rewrites will take hours, not months. And the resulting code will be right-sized: software which does what we require and nothing else, which is lean and fast, which has a minimal attack surface, and which is cheap to maintain and easy to fix.
Most importantly, AI will change the risk equation. The biggest risk in rewrites isn't technical – it's time. Every month spent rewriting is a month you're not improving your product. AI collapses this timeline from years to weeks or even days.
What would Joel say about Mozilla's rewrite if they had today's AI? Imagine if they could have rewritten their browser engine in weeks instead of years. If they could have automatically verified that every feature worked exactly as before. If they could have generated comprehensive tests to catch any regressions.
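That kind of verification is less exotic than it sounds; it is essentially characterization (or "golden master") testing applied at scale. A minimal sketch of the idea, in which legacy_render and rewritten_render are hypothetical stand-ins for the old and new implementations, might look like this:

```python
# Characterization ("golden master") testing: replay recorded inputs through
# both the legacy and the rewritten implementation and flag any divergence.
# legacy_render / rewritten_render are placeholders, not real browser code.

def legacy_render(html: str) -> str:
    return html.strip().lower()          # stand-in for the old behaviour

def rewritten_render(html: str) -> str:
    return html.strip().lower()          # stand-in for the new behaviour

def find_regressions(corpus: list[str]) -> list[tuple[str, str, str]]:
    """Return (input, old_output, new_output) for every case that diverges."""
    regressions = []
    for case in corpus:
        old, new = legacy_render(case), rewritten_render(case)
        if old != new:
            regressions.append((case, old, new))
    return regressions

if __name__ == "__main__":
    # In practice the corpus would be thousands of recorded real-world inputs,
    # and AI would help generate edge cases and triage the diffs.
    corpus = ["<p>Hello</p>", "  <DIV>edge case</DIV>  "]
    diffs = find_regressions(corpus)
    print(f"{len(diffs)} regressions out of {len(corpus)} cases")
```

The technique itself is old; what AI changes is the cost of building the corpus, generating the edge cases, and explaining the differences that turn up.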
This doesn't mean every system should be rewritten. Some legacy systems are too complex for current AI to handle. Others have regulatory requirements that make automated rewrites risky. And sometimes, the existing system works just fine.
But, soon, the question will shift from "Should we rewrite?" to "Why haven't we rewritten yet?"
This has profound implications:
Technical debt becomes less permanent.
Legacy systems become less of a burden.
Modernization becomes more accessible.
Platform shifts become less risky.
Companies can more easily:
Move to modern architectures.
Switch programming languages.
Adopt new frameworks.
Improve security and performance.
Reduce maintenance costs.
We're entering an era where Joel's advice might need updating. Instead of "never rewrite," the new rule might be "rewrite when AI makes it practical."
The real question isn't whether to rewrite anymore. It's understanding when AI makes a rewrite the right choice. Sometimes, as WordStar's fate shows, maintaining the existing system is still best. Other times, as with Windows NT, a rewrite is essential for the future. AI changes the calculus of risk and reward.
And looking further ahead, the very concept of rewrites will become obsolete. As AI evolves, we'll likely move to the workflow world, where requirements are translated directly into right-sized solutions without human programmers involved. But in the short term, understanding when and how to leverage AI for rewrites will be a critical skill.
Joel’s advice was probably right for 2000. But it's 2024, and the rules are changing. The things you should never do? That list continues to change every day.


It crossed my mind that the right approach for a rewrite today might be to do nothing for the first few months and bet on AI improving enough to let you get it all done in the final few months.
A risky strategy, but spending the first few months on the beach would be nice :).
A key lesson I've learnt about rewrites is that they take 5-10x as long as you think they will. A similar argument applies to writing things from scratch (although the multiplier there may be closer to 2-5x). It's not just poor estimating and engineer naivety that causes these problems; it's also the inability of leadership to accept realistic estimates. That isn't necessarily a bad thing: I sometimes wonder how many great products would never have been built if they'd had realistic estimates from day one. I expect that effect is now being amplified by leadership using the existence of AI to squeeze down otherwise plausible estimates.