The comfortable local maximum
The neuroscience of AI resistance
I was in a meeting the other week where someone said, "I still don’t use AI - I’ve not found anything it can help me with."
It was an interesting comment. The person making it was a highly skilled engineer with decades of experience. I’m pretty sure AI could help them. Maybe it could review some code. Or write unit tests. Or build a tool to obfuscate customer PII. Or maybe even fix some bugs?
But they clearly hadn’t found that thing yet. And it got me wondering - why did they think AI couldn’t help?
Brain epochs
Perhaps we can get a clue from recent research from Cambridge. Researchers recently published the results of a study into the stages of brain development. It split brain development into five epochs, with turning points at ages 9, 32, 66 and 83. The period from 9→32 is when fluid intelligence - raw problem solving, the ability to adapt to new situations - is at its peak. But, over time, fluid intelligence declines and crystallized intelligence - accumulated knowledge and expertise - grows. We start to become set in our ways.
From ages 32→66 the brain plateaus. It is still rewiring, but less dramatically and more slowly. Then from 66 onwards modularity sets in, with neural networks dividing into separate units. And after 83 this trend seems to accelerate…
Learning is hard
Then there’s the reality that learning is hard. It takes effort. Sustained effort. I’d like to be able to play the guitar well. And the drums. And speak Spanish. I’ve tried all these and more, and know how hard it is to get good at them. Much harder than when I was at school - back then, all I had to worry about was, well, learning. And it turns out my brain was more amenable to it too.
But when you don’t know how to solve a problem, you have to learn. There’s no way around that. Things are different, though, if you’ve already found a solution: learning a new way often requires unlearning the original one. You’ve got to go backwards to go forwards.
Our engineer
Does any of this explain our engineer’s comment?
They are an expert in their area - they’ve worked on the same product for decades. They can answer many questions from memory. And know where to look for the questions they can’t.
And therein lies the problem. Over the decades they have found a local maximum. In those earlier years they had to learn - and they had more fluid intelligence. But now that has turned into hard-won crystallized knowledge. They have optimised - and have less to gain from AI. Plus AI requires learning. It will take effort and time. They’ll go slower.
So, while their comment might initially seem perplexing, it makes perfect sense when you put together the phase of their career with the reality of the gains from AI.
The risk
Our engineer’s moat is their crystallized knowledge. That’s what makes them invaluable to their team.
Trouble is, an LLM can be thought of as a fount of crystallized knowledge: years of best practice across many areas of human life, embodied in the weights of the model. Could the combination of a young engineer (lots of fluid intelligence) and an LLM (crystallized knowledge) beat the experienced engineer (who can’t escape the biological decline in fluid intelligence)?
Maybe.
It’s interesting that, for now at least, it’s junior workers who are most at risk of being replaced by AI. Yet a combination of biological fluid intelligence - a desire to learn and experiment - and LLM-provided crystallized intelligence should arguably give them an advantage. So why don’t we see that playing out?
Some crystallized knowledge is proprietary; LLMs haven’t been trained on this and don’t know it. This is what protects our engineer for now.
Accessing the crystallized knowledge embedded in LLMs requires good communication skills. And it requires the ability to spot LLM hallucinations. Ironically, doing this well also requires crystallized knowledge - precisely the thing juniors lack.
And then there are basic realities of life:
It’s much easier not to hire new junior staff than to fire senior ones.
Getting things done requires navigating the organisation which needs skills in politics and relationships - yet more crystallized knowledge.
Perhaps the "soft skills" that engineers have so often overlooked will become the new key skill in getting the most out of AI?
And so?
So are we heading to a world shorn of fluid intelligence?
It doesn’t seem likely. The rise of thinking models means LLMs are getting increasingly good at fluid intelligence too. For now there are limitations: it works only within a session; there is no persistent learning, no accumulated intuition, no transfer across domains. But there are lots of smart people working on those problems. It will inevitably get better.
This could give senior folk a leg up. They are more likely to have the crystallized knowledge to enable them to use AI effectively. Except they are less likely to be willing to experiment and invest in learning.
It’s an interesting double-edged sword. The junior folk have the desire but lack the skills; the more senior folk have the ability but lack the desire. Of course, reality is more nuanced than that, but it’s an interesting way of viewing the challenges across the seniority range.
I’m not immune to this. I spent years in management before moving back to a technical role - a move that left me without an obvious moat. Maybe my interest in AI is just me rebuilding one? Maybe it’s easier to see the trap when you’re starting from ground level?
Which brings me back to our engineer...
They have decades of crystallized knowledge, honed communication skills, and hard-won judgment about what works. That’s exactly what is needed to get the most out of AI. But they don’t have the motivation; their local maximum is comfortable, thank you very much. The tragedy isn’t that AI can’t help them. It’s that they’ve decided not to find out.

