Tinkering
Could this be a critical skill in AI innovation?
Yesterday I was talking with a colleague. They’re the second person in our group to get access to Codex. As we talked their eyes looked away and I could see them briefly typing. In the past I’d have thought they were replying to an IM, but not now. Now I knew they were prodding Codex; it had finished a task and they were quickly getting the next prompt in.
I’ve spent a lot of this week thinking about - and talking about - my experiences with Codex. A few questions keep coming up:
How can we roll it out across the org? Should it replace Claude Desktop (our current preferred tool)?
How do we teach people to use it?
Is it really that productive? Many have read stories of AI making developers go slower. Others haven’t seen significant changes in productivity in their orgs.
Rolling it out
The first question is easy. Codex is a specialized coding tool. It’s slow at answering simple questions. It’s not a general AI chat tool - for that Claude Desktop is still the sweet spot. Even for simple coding tasks Claude Desktop plus something like (shameless plug) mdmcp make a good combination; Codex only really starts to shine for the large pieces of development.
The other problem with giving Codex to too many developers on the same project is - how do you stop them tripping over each other? My colleague and I are working on prototyping a large product. In the past you could have had fifteen developers working on this. Now? The two of us are tripping over each other. It’s the classic problem where adding extra engineers makes you go slower. Except that now it happens with far fewer developers. The world has shrunk. It’s a weird feeling.
More than ever, finding clean fracture lines matters. It’s essential to be able to break projects up cleanly into separate components to work efficiently. Conway’s law is alive and well - except that the heuristics are different.
Required skills
I’ve been pushing to get two colleagues access to Codex. My gut tells me they’d be a good fit. But I’ve struggled to articulate why. Initially I trotted out these reasons:
A desire to experiment and learn.
The ability to be comfortable being uncomfortable.
Experience with the full software dev life cycle, including architecture.
Experience as a team lead.
All of these are true. But, as the week has passed, it’s dawned on me that perhaps there’s something else. Something my gut already seemed to know. An additional important character trait. The trait of being a tinkerer.
Tinkering
There are some people who seem to just naturally spot things that are broken and work out how to fix them. Whether it’s a broken tap, light switches wired up the wrong way, or a car that won’t start, they spot problems, roll up their sleeves, investigate and find a way to fix them. Unfamiliar things don’t faze them. They like understanding how things work, aren’t afraid to take things apart, and want to experiment and learn.
Is that a key skill for success with AI? I’m beginning to think it might be.
AI has a lot of rough corners. There is no guidebook. You’ve got to figure it out yourself. And just as soon as you’ve figured it out, it changes. The techniques that worked yesterday may not work tomorrow. The traditional approach of building paved paths for others to follow doesn’t work. But it’s ideal tinkering country.
My colleague is a tinkerer. They’re the kind of person who relishes new tasks. They’ll dig in and quickly figure out what to do. Take our project. I gave them collaborator permissions on my repo, and before I knew it they’d got the code built and deployed. All long before I’d got the build clean or written any install instructions. Cool. When we talk we’re now peers discussing how to use Codex, not me telling them what to do.
Maybe, in time, things slow down to allow everyone to catch up. But, for now, Rogers’ diffusion curve seems likely to get stretched out - the gap between innovators and laggards will grow. Does that matter? I don’t know. Will the laggards eventually catch up? I assume so, but I don’t know.
Productivity
Productivity is the other angle. AI is all well and good, but it needs to be more than a fun toy. It needs to have a positive commercial impact. And the current picture is very mixed. Right now it’s best summed up as: the mean is low but the variance is high. On the right task, AI enables incredible speed. Earlier this year I built an email migration tool ~10x faster than I could have done by hand. mdmcp was much faster still - six months’ work in a few (admittedly long) days. That’s incredible. So why doesn’t this play out across organizations? Why isn’t GitHub overflowing with new repos?
There’s no clear single reason. Instead it’s a mix:
Agentic coding tools (Codex and chums) are still relatively new; diffusion takes time.
How many people have the desire (or aptitude) to learn how to use the new tools?
How many people have the mix of architectural, team-lead and tinkering skills?
How many problems are actually amenable to the current AI tools?
Maybe the question needs to become: how do we find and nurture the skills that enable individuals to flourish? And, perhaps, is there a way to convert our problems into ones that are more tractable by current AI?
And so?
What does this mean for organizations?
First, it’s critical you find and nurture your innovators. Things are moving fast. It’s important to have first-hand knowledge of what’s going on; falling behind may carry a heavy penalty.
You need to find and nurture your tinkerer-architect-leads. They are the ones who are most likely able to figure out how to make best use of AI in your org.
And for individuals? My advice is simple.
Learn. If you’ve not got architecture skills then acquire them. Discuss architecture with Claude, ChatGPT. Become familiar with design patterns, threading, the big open-source packages (OpenSSL, K8s, Docker). Read about the architecture of Windows, Linux.
Learn. If you’ve not got team lead skills then find opportunities to lead. Discuss leadership with Claude, ChatGPT. Leading a team of agents requires a subset of the human leadership skills - agents are always positive, don’t need to be convinced to do something and rarely push back. But the ability to brief, review and support is critical. Codex is imperfect; I’ve seen plans with critical gaps, and bug diagnoses that don’t fit all the facts. Being able to spot those and correct the model is critical.
And if you’re a tinkerer then tinker like never before.


I feel like this is the same problem you have trying to turn a single-player game into a multiplayer one. The mechanics and the way things work are designed for solo play, but suddenly with tools like Codex you’re throwing extra players into the mix and the game systems aren’t designed to handle it.