Would you like ice with that?
You need ICE if your organization is going to survive
If I were ever faced with a tsunami, I know what to do. Run. Fast. Uphill.
Here in the world of software engineering, it’s becoming clear the tsunami is on its way. Knowing that it’s coming is all well and good. But what should we do? What should organizations do? What’s their equivalent of rapidly running uphill?
It’s hard to know. One approach is to wait it out. Maybe the tsunami won’t be that big. Maybe we’ll get a little bit wet, but we’ll survive. Or maybe AI is all just hype, so we should do nothing? That’s tempting, not least because it’s low effort.
Another approach is to occasionally review AI and try out some tools. Maybe you were an early adopter who tried GitHub Copilot. Or tried using GPT-3.5 twelve months ago to review some code. You'll have been underwhelmed. Worse, you might be stuck in the disbelief chasm, unwilling to invest time in experimenting again. And I get it - evaluating AI is expensive. You don't have resources to spend on things that don't pay off.
The ICE strategy
But, right now, we know enough to be certain that the do-nothing and wait-it-out strategies are going to fail. If you want to thrive in the coming years you can’t afford to adopt either approach.
Instead, right now, you have to be proactive. You need to invest in AI. Specifically in three key areas.
Incubation. For those who take advantage of it, AI offers faster innovation. The ability to rapidly prototype ideas. To experiment. To fail fast, learn quickly, and identify the most promising applications. This could be building new products. It could be Operator driving testing. It could be using Deep Research to generate company research.
This team is tasked with incubating those ideas - quickly prototyping products, ideas, and workflows, and learning how best to use the latest AI tools. This team creates the paved paths the rest of your organization will follow.
Centre of Excellence. The pace of change makes it essential to actively keep up. This team proactively tracks the AI landscape - the latest news, models, and techniques. They evaluate tools and vendors, identify the most promising ones, and provide the pipeline of tools to the incubation team.
Evangelism. Success depends on bringing your whole organization with you. And evangelism is where the rubber hits the road. This team scales successful AI initiatives across the organization through knowledge sharing and championing AI-driven processes. They focus on practical adoption, helping teams integrate AI tools into their daily workflows. They foster and build local AI champions - people embedded within teams who spread and share AI knowledge. They lead your teams to the paved paths - and help them navigate them.
That’s the ICE strategy - incubation, centre of excellence, and evangelism. And if you don’t have one yet, you need one.
Warning signs
Right now every software engineering organization should be:
Using Claude to signpost code and help engineers navigate codebases.
Using Claude to help employees communicate, get unstuck, write documents.
Using Cursor for software dev.
Exploring code composition with o3-mini-high.
Exploring agentic coding with Claude Code.
If you’re not doing any of those, take it as a warning sign. I suggest you act. Soon.
Implementation
Implementing an ICE strategy isn’t without obstacles. Many organizations will face resistance to change - developers will have concerns about AI tools devaluing their hard-earned skills or potentially replacing them. The fear of the unknown is real. There’s no easy answer. The best we can do is acknowledge the concerns and help folk realize that remaining relevant requires developing new skills. And then provide the opportunities to develop those new skills.
Technical debt and legacy systems present another challenge. Your codebase wasn't built with AI in mind, and integration can be messy. Start with isolated projects where AI can demonstrate clear value without disrupting critical systems. Create sandboxes where developers can experiment with AI tools on non-production code first.
Then there's the question of data privacy and security. Tools like Claude, Cursor, and o3 interact with your proprietary code. Establish clear, lightweight guidelines about what can and cannot be shared with these tools. AI models from the large American labs are almost certain to be fine; tread cautiously with foreign products.
Evangelism
Effective evangelism is essential. And very hard.
Start by identifying AI champions in key teams - these are influencers who can demonstrate practical value to their peers.
Develop a tiered approach to learning, beginning with simple, high-success-rate use cases that build confidence before introducing more complex applications.
Track and publicize concrete wins - when a developer uses o3-mini-high to complete in an hour what would have taken days, make that story known.
Create lightweight knowledge-sharing mechanisms like brief tech talks, prompt libraries, and peer mentoring.
You need your engineers to want to use AI, not be forced to use it. They need to believe in the benefits. To believe AI can help them. And when they see their colleagues shipping better code faster - and with less tedium - they will make the change themselves.
One cube or three?
Your ICE strategy will look different depending on your organization's size and resources. For start-ups and small teams, you may not be able to afford a dedicated ICE team. So, designate AI champions within your existing team structure - perhaps one developer focused on keeping up with new tools, another on testing practical applications, and a third on helping colleagues integrate useful techniques.
Mid-sized companies might start with a single cross-functional AI team handling all three ICE functions before expanding. Begin with just 3-5 dedicated people split across these roles.
Enterprise organizations can and should invest in dedicated teams for each ICE function. Your Centre of Excellence might include specialists in different AI domains - one for code generation, another for testing automation, and others for documentation and knowledge management AI applications.
Regardless of size, the principle remains the same: deliberate, structured investment in AI capabilities is non-negotiable. The scale can be adjusted, but the commitment cannot.
When to start?
When a tsunami approaches, there's often a deceptive moment when the water recedes from the shore. Some see this and, not understanding what's coming, wander out to collect stranded fish or explore the newly exposed seabed. Minutes later, they're overwhelmed when the true wave arrives.
We're in that deceptive moment with AI now. Some see the ChatGPT hype receding and conclude the danger has passed. “AI was overhyped,” they say. “It's just another tool.” These are the organizations collecting stranded fish on the exposed seabed.
The real wave - composed of Claude 3.7, o3, and their imminent successors - is building offshore. Those who started running uphill months ago will weather the wave. Those implementing their ICE strategy now are at least moving in the right direction. But organizations still debating whether AI is worth their attention? They're admiring the curious sight of fish flopping on suddenly dry land.
The tsunami is coming. Now is the time to run, and the direction is clear. Incubate. Build your Centre of Excellence. Evangelize. Not next quarter. Not when the budget refreshes. Today.
The shoreline of software development is about to be permanently redrawn. Where will your organization be when the wave hits?


I’ve been pretty underwhelmed with Claude 3.7, and I switch back to 3.5 most of the time. I find that Claude 3.7 is less likely to predict what I want correctly from the (often terse) briefings I give it; it has substantially worse memory (forgetting during conversations what I’ve previously told it, with me having to remind it); and it seems more arrogant. To me it feels rushed out, and I suspect Anthropic are doing something “clever” to reduce context wherever possible. This context/memory issue is the biggest one for me. It often feels like I’ve started a new conversation, when I’m just the 2nd or 3rd response into the existing one.
I’m also pretty underwhelmed with AI in general with my current use-case - helping me build embedded rust applications with embassy. To be fair, embassy is relatively new and evolving, and maybe if I fed the full, current embassy docs into Claude it’d be better. But I’m not convinced - at least with 3.7 - as when I do provide it with some docs, it “forgets” and starts making up its own stuff again.
And this leads me to my current thinking about AI - it can be crap off the golden path. If you’re doing something esoteric or unusual it’s not great. And that’s where human value remains. I believe it behooves us (always, not just because of AI coming to eat our lunch) to add the unique value we can - this is just one more case.
I also wonder how the big leap up the value chain you’ve talked about in some of your posts is going to happen with AI. I agree with you that AI today is like an eager, relatively junior SWE - just one with encyclopaedic knowledge in some areas, and the ability to produce code really fast. That’s presumably because there are literally tons of code out there it’s been trained on. But a lot of it isn’t good code - and therefore it generates code that is not stuff I’d want to maintain long-term. I often find myself completely rewriting (with its help) code it’s generated from scratch, because while it might work, or at least point me to something that’ll work, it’s just not something I want to live with. That’s OK for a one-off tool, but not for, say, a multi-million subscriber telephone system, or hospital back-end, or … anyway, I digress. What I was getting to was: where is the great code for the better AIs to be trained on going to come from? Where are the design docs and architecture docs? I expect there are some out there, but I’ve always been disappointed by what’s available with any open source project I look at.
Or maybe I just want to remain better than AI at coding :-).