Activation energy
Why AI adoption follows the laws of physics, not executive decree
My son recently decided I needed to know about electron levels. Let’s put aside the question of why he thought I needed to know and focus on the electrons for now. Electrons, it seems, like to occupy distinct energy states, or shells, around the nucleus of an atom. Electrons can move between these shells but, the further out the shell, the more energy is needed to reach it. This energy - strictly, excitation energy for electrons, though chemists use the closely related idea of activation energy, the term I’ll borrow here - can sometimes come randomly, from thermal jostling. Or it can be provided externally - for example a stream of photons giving the electrons the necessary boost.
As we talked, I realised my son was using electrons as an analogy for AI adoption (dinner time conversation in our household often has a distinct AI slant). Take the electron shells. We’re all on an AI adoption journey - and that journey has multiple stages. Each stage is much like an electron shell.
One of the lowest levels is 'unaware': folk who’ve not used AI much, if at all, yet. This Gallup survey found that 76% of US employees hadn’t yet used AI in their work. And, interestingly, only 13% believed AI would have a positive effect.
Further out are the 'toe-dippers'. They’ve been convinced to use AI but haven’t yet convinced themselves it is essential.
Then we have 'converts'. Those who use AI pragmatically. They’ve discovered paved paths and stick to them. That same Gallup survey found that 68% of those who had experience of using AI to support customer interactions reported a positive effect.
Finally we reach 'explorers'. These are the folk who keep pushing AI - figuring out new ways to use it, creating new paths for others to follow.
There is also a level below 'unaware'. These are the disillusioned sceptics. People who used AI early on. Maybe an early version of Copilot or Apple Intelligence. And they quickly discovered reality was a long way from the hyperbolic claims industry leaders were making.
But how do people move between shells? That’s where activation energy comes into play. And it varies both by person and level.
Some of us move easily: early adopters naturally jump from unaware to toe-dipping.
Others are harder to shift: disillusioned sceptics need far more energy to escape their shell.
Getting to the explorer shell is hard - it requires a considerable investment of time and energy. It is sparsely populated. But those who do make it are invaluable in providing the activation energy to move others between levels. They explore, evangelise, create paved paths, and pull others up through the levels.
And just as photons can excite electrons, specific triggers provide the activation energy for adoption jumps. A compelling demo or a peer success story can push others into the next shell.
Extending our chemistry analogy, catalysts reduce barriers. Good onboarding, intuitive interfaces - they lower the activation energy needed to jump between levels.
And just like electrons, most people probably have a natural resting level - a ground state - they return to without external energy input.
Challenges
We’re still in the early stages of rolling out AI throughout organisations. Many people are in the unaware group. So the first challenge is how to get them to become toe-dippers.
Then there’s a second challenge: turn the toe-dippers into converts.
And finally the long-term challenge is ensuring converts don’t slide back into lower-level shells.
Becoming a toe-dipper
The most powerful way to get people to adopt new ideas and new tools is for them to convince themselves to adopt them. The ideas we believe are ours are far more powerful than those imposed from outside. Intrinsic motivation beats extrinsic every time. Yet I’ve seen large organisations impose top-down "you-must-use-AI" mandates; those go about as well as you might expect.
For actual success you need three things:
Adoption has to be optional. Individuals choose to adopt; it is not mandated.
Those leading the AI rollout need to identify allies and support them. There are always early adopters. Seek them out (e.g. by running a limited-seat proof of concept), build relationships, get the tools into their hands, listen to them, and then support them as they share their experience with colleagues.
Effective evangelism. Provide lots of proof points – give examples of AI utility in a wide range of domains. Understand the real-world problems teams face, and find and prototype solutions for them. Provide infrastructure and backing for your early adopters, and look after this community.
Remember it’s peer recommendations which carry the most weight. Seeing an impressive demo from a colleague is far more effective than abstract presentations. The combination of trusted colleague and domain-specific application shows the water is warm and alligator free. And once people realise this, they are much more likely to dip a toe in.
True, this approach is slower and more resource intensive than a high-level “use AI” mandate. But it builds a far more robust connection between the tools and the individuals – they are using AI because they want to, not because they are told to. They are more likely to experiment and find new ways to use it – they appreciate it rather than resent it.
Turning toe-dippers to converts
The key here is for people to make their own wins. Maybe they get AI to remove some dull work - searching, say, for text in a scanned-in contract dating from 1995. Or someone in HR builds an interactive Claude artefact to create a customised onboarding process. Or someone cross-references email distribution lists (DLs) with a list of AI users to work out adoption rates in different parts of an org.
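That last win - working out adoption rates from DL membership - is simple enough to sketch. The departments, addresses, and figures below are invented for illustration; the point is only that a few lines of Python turn two lists into a per-team adoption picture.

```python
# Sketch: given each department's email distribution list (DL) membership
# and the set of known AI users, compute adoption rates per part of the org.
# All names and numbers are made up for illustration.

dl_members = {
    "finance":   {"ana@co", "ben@co", "cy@co", "dee@co"},
    "marketing": {"eve@co", "fay@co", "gus@co"},
    "hr":        {"hal@co", "ivy@co"},
}
ai_users = {"ana@co", "cy@co", "eve@co", "ivy@co"}

def adoption_rates(members, users):
    """Return {department: fraction of DL members who use AI}."""
    return {
        dept: len(people & users) / len(people)  # set intersection
        for dept, people in members.items()
        if people  # skip empty DLs to avoid division by zero
    }

for dept, rate in sorted(adoption_rates(dl_members, ai_users).items(),
                         key=lambda kv: -kv[1]):
    print(f"{dept}: {rate:.0%}")
```

In practice the inputs would come from your directory service and tool-usage logs rather than hard-coded sets, but the arithmetic is the same.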
Whatever it is, seeing a benefit with your own eyes starts to lock in the value. Inevitably this takes time. Rome wasn’t built in a day. But the momentum will build; some converts will turn into explorers. And they’ll create more converts.
And the acid test that someone has become a convert? Ask them if they’d be willing to relinquish their AI license. If a look of horror passes over their face then you have your answer.
Preventing the slide back
Ground states are an ever-present risk: the danger that converts drift back down to their ground state. Think of the finance team that enthusiastically adopted AI for contract analysis but stopped using it after their champion left the company.
Combat regression through 'use case libraries' - documented examples of successful applications specific to each team's work. When someone discovers AI can automate their monthly reporting, capture exactly how they did it. When motivation wanes, these concrete examples provide the activation energy to re-engage.
Nurturing explorers
Not everyone wants to become - or is capable of becoming - an explorer. But you need to find explorer candidates and nurture them. Support them. Empower them. Acknowledge that they are special and have an important role to play.
Early on a proof-of-concept (PoC) is a great way of both finding and then nurturing those explorers. The early adopters will naturally engage with a PoC; they come to you rather than you having to find them.
But you need to protect them. Ensure they have dedicated time to devote to the PoC - people with tools but no time is not a winning combination. Provide shared support forums. Arrange weekly catch-up and discussion sessions to share what is and isn’t working, and to swap ideas.
Explorers are also at risk of burnout as they become everyone’s informal AI helpdesk. But there is a quick fix: train people to use AI as their AI expert. Ask Claude the questions you might have asked a helpdesk. Unlike any previous technology, AI is excellently placed to answer these kinds of questions. Claude, for example, is fantastic at suggesting ways you can use it.
ROI
You also need to think about the leadership team. They will, rightly, require proof that AI is providing a positive ROI. Tracking hours saved per person is table stakes. But that misses much of the value AI brings. A marketing manager using Claude to refine campaign messaging isn’t just saving time - they’re producing better work in the same time. An analyst using AI to explore data patterns isn’t necessarily faster - they’re asking questions they wouldn’t have thought of before.
The challenge is measuring the impact of AI - where improved quality of output and reduction of drudgery are just as important as saving time. You need better metrics: decision confidence scores, output quality ratings from stakeholders, and frequency of 'breakthrough insights' that teams attribute to AI assistance. Track these alongside usage patterns to build a richer ROI picture.
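To make that concrete, here is a minimal sketch of blending those signals into one view. The metric names come from the text above; the sample figures, record shape, and the idea of normalising insights per active user are illustrative assumptions, not a recommended methodology.

```python
# Sketch: combine time-saved data with quality and insight signals,
# gathered from short stakeholder surveys and usage logs.
# One record per team per month; all figures are invented.
from statistics import mean

records = [
    {"team": "finance", "hours_saved": 12, "decision_confidence": 4.1,
     "quality_rating": 3.8, "breakthrough_insights": 2, "active_users": 9},
    {"team": "finance", "hours_saved": 15, "decision_confidence": 4.4,
     "quality_rating": 4.0, "breakthrough_insights": 3, "active_users": 11},
]

def roi_summary(rows):
    """Blend hours saved with quality/insight signals into one picture."""
    return {
        "avg_hours_saved": mean(r["hours_saved"] for r in rows),
        "avg_decision_confidence": mean(r["decision_confidence"] for r in rows),
        "avg_quality_rating": mean(r["quality_rating"] for r in rows),
        # Normalise insight counts by headcount so big teams don't dominate.
        "insights_per_user": sum(r["breakthrough_insights"] for r in rows)
                             / sum(r["active_users"] for r in rows),
    }

print(roi_summary(records))
```

The survey instruments behind the confidence and quality scores matter far more than the aggregation code; this only shows that the richer picture is cheap to compute once the data exists.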
Do this right and your PoC team will become the first batch of explorers evangelising AI throughout your org. Helping to spread experience from the bottom-up. Building those strong intrinsic links.
The challenges
Some people require very high activation energy to move between levels, especially getting to the toe-dipping level. It will take time for these laggards to adopt - and not all of them may ever adopt AI - but it’s important not to try to rush them to meet artificial adoption targets. Doing so risks turning them into disillusioned sceptics. Many of these people are naturally cautious and want to wait until technologies mature or the benefits become clearly proven. Do the rest correctly and they’ll eventually become toe-dippers.
And so
My son's impromptu chemistry lesson proved more useful than either of us anticipated. It seems electrons and software engineers have more in common than I had realised.
The organisations cracking AI adoption get this. They’re not trying to solve all problems at once. Nor mandating compulsory adoption. They find their natural early adopters, give them proper support, and let the grass-roots spread do its work. It’s messier than formal training programmes. It’s slower than an executive decree. But it actually works. It builds resilient, deep-rooted use of AI.
Not everyone will become an AI explorer - nor should they. But get enough people from unaware to toe-dipper, toe-dipper to convert, and you hit a tipping point where adoption becomes self-sustaining rather than something you have to push uphill every quarter.
The chemistry is simple enough. The execution less so. But then again, if it were easy, everyone would already be doing it properly.

