Steel bars and AI leverage
Why the biggest danger isn't moving too quickly with AI, but too slowly
It’s often said that AI has a jagged frontier. Sometimes it is amazing; other times it struggles - or completely fails - often in surprising ways. And that jagged frontier is moving very fast right now. This unpredictability - roughness, if you like - can make using AI frustrating. But when it works, it can be amazing. It can complete tedious day-long tasks in a matter of minutes. It can spot errors you’d otherwise have missed. Once you’ve experienced this, there’s no looking back.
Yet many have still not experienced the magic. Their experience has been tainted by run-ins with Apple Intelligence or Microsoft Copilot, or with some of the other awful early AI integrations, like eBay’s. They might believe the urban myths about AI companies stealing all their data. Or that AI consumes obscene amounts of power and water (one year of regular chatbot usage uses less energy than driving 10 miles).
Then there’s another set of folk - those who have tried AI but used it for something inappropriate, like attempting to rewrite a 10,000-line codebase, or formulating a plan for world peace. Arguably they are in a worse category than those who haven’t tried at all: having tried - and failed - they are less likely to try again.
And then there’s the small set of folk who’ve used the recent frontier models such as Claude 3.7. They get it. They understand what AI can do. It’s a surprisingly small set of people. Change takes time.
Part of the trouble is that the hype cycle has set folk up to expect too much from AI. Today’s AI is limited. It will not replace all humans. But it can act as a massive force multiplier.
One of my previous managers used to talk fondly about a steel bar he kept in his workshop - he loved it because he could use it to apply leverage - to act as a force multiplier. And AI is exactly the same. It enables you to do things you couldn’t previously. It enables you to do things you already could faster. It enables you to do everything to a higher quality.
Data privacy and security are realistic concerns. Organisations do need thoughtful policies when adopting AI tools. But these are solvable problems, not the insurmountable barriers they sometimes appear to be.
And that force multiplier currently works best with mundane utility. Helping you do basic tasks better. And quicker.
Examples
Rubber ducking: Over the years when I’ve been wrangling a problem, I’ve often taken a colleague into an office and explained the problem to them. Often they don’t need to say anything. Just their presence - and the discipline of thoroughly explaining the problem - helps me work out what to do next. More recently the practice has acquired a name - rubber ducking (the idea being to replace my colleague with a rubber duck). It turns out that AI is fantastic at interactive rubber ducking. It’s massively knowledgeable, doesn’t hold prejudices and is infinitely patient. This is a killer use of AI.
Sage advice: I’ve lost count of the times I’ve asked a trusted colleague (or my wife) to review some semi-controversial writing before sending it out. But now I get Claude to provide that review. And even better, Claude is sufficiently fast I can use it in semi-real time with messaging. For example, the other day a senior manager messaged me to propose an approach diametrically opposed to mine. I fired up Claude, pasted in the chat so far, added some notes about what I was intending to reply with and got a fine-tuned reply which took the emotional heat out of what I would have replied with. Awesome. And even better that we reached a friendly compromise.
Getting started: Oftentimes there’s a thing I’d like to do, but I’m not quite sure how to get started. So I ask Claude the meta-question - how do I get started on X? And then the conversation goes from there.
Real life: I’m building a new workshop. I’m designing the foundations. o3-mini-high has helped me figure out the build-up - what materials go where. And Deep Research has investigated concrete suppliers for me, whether to barrow or pump, costs, delivery timescales. When I get to the superstructure, o3-mini-high will be helping design the framework and providing engineering advice to ensure it doesn’t collapse.
It’s these basic tasks where AI can provide the most gain. These tasks are not about building complex search tools. Or chains of agents. Or trying to remove humans from the loop. They are about supercharging humans - making us all better versions of our existing selves.
And so?
Right now I’m heavily involved in AI evangelism. A key part of my job is convincing those who’ve not yet seen the light of the difference AI can make. I’ve never thought of myself as a missionary, but that’s the role I find myself in.
And one thing I’ve recently realised is that anyone who asks you to run a proof of concept, or asks for return-on-investment calculations, hasn’t used current leading-edge tech. They haven’t used Claude. Or o3. Haven’t asked Deep Research to produce a report.
Because if they had, they wouldn’t be asking for proofs of concept. Wouldn’t be asking for ROI calculations. They’d know. They’d understand.
Yes, leaders are responsible for careful stewardship of resources and need to justify investments. But they’d realise the world has shifted. AI has a positive ROI. You don’t need a proof-of-concept to demonstrate that.
Consider a simple calculation: If you pay staff $50k annually (about $25/hour), Claude's $20 monthly cost is recouped if it saves just one hour per month. One hour.
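The arithmetic behind that claim is easy to verify. A quick sketch (the salary, working-hours and subscription figures are the article's illustrative numbers, not real pricing data):

```python
# Back-of-envelope break-even for an AI assistant subscription.
# All figures are illustrative assumptions from the article, not real data.

ANNUAL_SALARY = 50_000            # $/year
WORKING_HOURS_PER_YEAR = 2_000    # ~40 h/week x 50 weeks
MONTHLY_SUBSCRIPTION = 20         # $/month

hourly_rate = ANNUAL_SALARY / WORKING_HOURS_PER_YEAR      # $25/hour
break_even_hours = MONTHLY_SUBSCRIPTION / hourly_rate     # hours saved per month to pay for itself

print(f"Hourly rate: ${hourly_rate:.2f}")
print(f"Break-even: {break_even_hours:.1f} hours saved per month")
```

At these assumed figures the subscription pays for itself if it saves a little under one hour a month; anything beyond that is pure gain.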
Multiple studies show Claude saves multiples of that. And a raw ROI figure undersells the benefit: it’s not just about doing existing work faster; it’s also about enabling entirely new capabilities.
The organisations thriving with AI aren't those avoiding all risks - they're the ones taking calculated risks with appropriate guardrails. They're implementing sensible data policies, providing training, and focusing first on low-risk, high-return applications. Focusing on mundane utility.
If you haven't experienced frontier models like Claude 3.7 or GPT-4o yourself, I encourage you to start there. Personal experience is more convincing than any article. Then consider how to bring that power to your organisation thoughtfully and responsibly. The question isn't whether to adopt AI, but how to do so in a way that balances innovation with necessary caution.
The biggest risk today isn't moving too quickly with appropriate safeguards - it's moving too slowly while your competitors gain the compound advantages of early adoption and learning. Finding that balance is the real leadership challenge of our moment.

