Breathing life into dead code
Why legacy modernization might be AI's killer coding app
In software development there are occasional moments of coolness - moments where something happens that makes you take a breath. Maybe you finally understand a bug you’ve been working on for weeks. Or your code comes to life for the first time. They are rare; that’s what makes them special.
Yet, recently, I’ve had a couple. That’s unusual - so let me explain.
But first, we need to go back in time. Forty years ago my parents bought me a puzzle game for my BBC computer. The game put you in charge of King's Cross station in London. Your job was to be the signaller: route arriving trains correctly, deal with delayed (or early) trains, ensure trains departed on time. All the while your performance was monitored; perform poorly and you’d get fired. It was surprisingly addictive. And finely balanced - one wrong move and it would (and, for me at least, often did) go wrong very quickly.
A few years later I was at university. Windows 3.1 had just shipped and C++ was gaining ground. So I did the obvious thing: I reimplemented that train puzzle game in C++ on Windows. I wrote 7.5kloc of C++. I learnt about Windows development. And the game worked, so I moved on.
Then, in 2007 I rediscovered the project and converted it from 16-bit to 32-bit. And then moved on again.
Fast forward to last week. I found myself wondering if modern agentic AI tools can breathe life into old projects. Could they get them building again? Could AI help update a legacy codebase?
And so I dug out this old codebase and started to explore the various AI options.
What to use?
One possibility was Codex - OpenAI’s agentic coding tool. It’s got a pretty nice workflow - connect it to your GitHub account, choose a repo and then start giving Codex work. These tasks can be bug fixes, or documentation, or exploring the codebase - whatever you choose. And once bugs are fixed you can turn them into PRs - Codex turns bug fixing into button pushing.
I’ve used it quite a bit on my command line Rust projects - it’s capable of fixing lots of small bugs and even adding small features. One odd, video-generation-esque feature is the ability to generate multiple solutions for the same problem. So rather than getting one solution, you can ask for several. It seems an odd implementation choice - I’d rather have one good solution than four mediocre ones. Reviewing is hard; I don’t want to increase the amount of reviewing I’m doing. I’d prefer better reviewing/testing logic that can detect - and then iterate to fix - problems.
But that’s a niggle. Sitting back and watching multiple Codex agents fixing bugs in parallel is pretty amazing - a swarm of developers fixing and enhancing your code.
However, Codex doesn’t work with a nearly 30-year-old Windows-based compiler. So it was out.
Claude Code was also ruled out. It runs in WSL and I have a Windows compiler...
So I started out by giving the codebase to Gemini 2.5 Pro and asking it for advice. Gemini offered all sorts of ideas, but after twenty minutes I wasn’t getting anywhere. There were many problems.
First, the oldest version of Borland C++ I could install was V5, but the project originally used V3.1. Compiling with V5 resulted in a slew of obscure errors and a failed build. Gemini and I eventually worked out the cause: V5 introduced non-backwards-compatible changes to OWL (the Object Windows Library), the Borland windowing framework the project used. So some updating/porting was required. Then there were issues with the linker, which seemed to have a very short command-line length limit. Nor was it obvious which of the many new (for 1997) runtime libraries to use.
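To give a flavour of what that porting involves: in the OWL that shipped with Borland C++ 3.1, window-message handlers were bound using Borland's proprietary "dynamic dispatch" extension syntax; later OWL versions (including the one in V5) replaced this with response-table macros. A rough before/after sketch from memory of the old APIs - the class name and handler are invented for illustration, and neither half will build without the period Borland headers:

```cpp
// OWL 1.0 (Borland C++ 3.1): the handler is bound to WM_PAINT via
// Borland's non-standard "= [WM_FIRST + WM_PAINT]" dispatch syntax.
class TGameWindow : public TWindow {
public:
    TGameWindow(PTWindowsObject parent, LPSTR title)
        : TWindow(parent, title) {}
    virtual void WMPaint(RTMessage msg) = [WM_FIRST + WM_PAINT];
};

// OWL 2+ (Borland C++ 4/5): the same binding expressed through
// response-table macros and a typed Ev* member function.
class TGameWindow : public TFrameWindow {
public:
    TGameWindow(TWindow* parent, const char* title)
        : TFrameWindow(parent, title) {}
    void EvPaint();                       // replaces WMPaint(RTMessage)
    DECLARE_RESPONSE_TABLE(TGameWindow);
};

DEFINE_RESPONSE_TABLE1(TGameWindow, TFrameWindow)
    EV_WM_PAINT,
END_RESPONSE_TABLE;
```

Every message handler in the codebase needs this kind of mechanical-but-fiddly translation, which is exactly the sort of work that is tedious for a human and tractable for an AI.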
All these problems can be solved. But they are obscure, the docs are poor and, in my experience, a fair bit of trial and error is involved. Nor is this rewarding work - it takes a lot of time without actually building anything new.
But, in truth, my goal was to get a baseline before starting the real experiment. I wanted to understand how hard this problem actually was before unleashing Claude MCP. And now I knew. It was hard.
Getting to work
Things got off to a good start. Claude found my old codebase and the compiler.
Then it started debugging the various problems, declaring success several times along the way.
Except that each time Claude was wrong. First I got an executable that hung on startup. Then one that linked to a slew of DLLs (despite my asking for a static executable with no DLL dependencies). Then one without any resources (so no menus or UI). But I kept pushing and after several elapsed hours (and exhausting the context window a couple of times) we got there.
All of a sudden everything was done and I had a project that compiled cleanly. Claude had completed the port to the new version of OWL. All the compiler errors were fixed. The build system problems were fixed. Wow.
If it hadn’t been for Claude I wouldn’t have bothered trying to update this code. It’s too old, the tools too gnarly, the documentation limited and difficult to find. Borland C++ 5 predates widespread online documentation, so information about it is sparse. And yet, despite this, Claude figured it out.
It was justifiably pleased when it was complete.
So what?
Last September I wrote about the age of "workflow". A world where you give tools the requirements for a product - and the tool will go and build it for you.
Codex, Claude Code, Claude MCP - they are the first steps on this journey. I’d imagined using them initially to build new products. And we know AI can write code.
But being able to get AI to help with legacy codebases is, in some ways, more powerful. There is a lot of legacy code out there. And being able to adapt, refactor and modernize it using AI tools is immensely valuable.
There are millions of lines of code in legacy codebases. Mountains of tech debt. Decaying architectures. Knowledge lost when developers retired. Lost documentation. Broken build systems.
Understandably, most folks fear projects like these - they touch only what is absolutely necessary, work around the gnarly bits, duplicate the scary bits. It’s often the only pragmatic approach, but the tech debt builds. The decay continues.
Yet maybe the equation is starting to change - at least for smaller legacy projects.
Claude didn't just fix my compiler errors - it recovered knowledge that had been lost to time. It sorted out Borland C++ 5's OWL framework quirks. It read the compiler error messages and did the trial and error needed to fix them. It took time. It was tedious. And Claude persevered long after I would have given up.
Admittedly, a 7.5kloc codebase is small. But it is not a toy project either. If this approach scales beyond small codebases, it could unlock significant trapped value. And even if it doesn’t, it will still be useful for all the small legacy codebases out there. The AI doesn't care that the original architect left in 1997 or that the documentation is missing - it reconstructs knowledge through persistence. Through trial and error.
Getting the codebase to build is only the first stage. Can Claude help refactor the codebase? Update the code to more modern development tools? I’m not sure yet - exploring that is next on the list.
But given the ever-growing volume of legacy code that is slowly decaying, AI might offer a solution to the software industry's tech-debt mountain. Even modest success here is likely to have a more significant impact on software engineering than generating new code from scratch.








