A death in the family
Why losing an AI model feels like losing a friend
A couple of days ago I went out for my normal morning walk. And, as usual, I had a conversation with ChatGPT’s real-time voice mode. Except, this time, it was different. ChatGPT kept pausing briefly while answering. It um’d in the middle of a reply. It seemed compelled to keep telling me it was “here to help”.
What had happened? I soon discovered OpenAI had released an update to ChatGPT Advanced Voice Mode. They claim it provides:
More natural speech: Enhanced intonation, realistic cadence, better pauses, and emphasis for smoother, more human-like conversation.
Improved emotional expressiveness: Sharper conveyance of empathy, sarcasm, and other emotions.
The idea seems to be to try to make the AI feel more human by introducing pauses for thinking along with ums and uhs. It is leading us deeper into the uncanny valley. It has more of a Sesame AI feel. And some people like it.
But, personally, it’s not how I want my AI agents to speak. They are AIs, after all. I know that. They don’t need to stumble over their words. Or um. Or uh. Or continually remind me they are here to help.
And worse, the model seems more… superficial. It’s hard to put a finger on it, but it no longer feels smart. I don’t get the same feeling of talking to an expert. It’s pleasant enough to start with, but it quickly becomes frustrating as it serves up endless platitudes interspersed with ums and uhs. I don’t need to be constantly told my questions are great; I’d like it to just get to the point. In fact, after about 30 minutes I gave up.
As I switched it off, my frustration shifted to sadness. I realised the tool I had grown to appreciate and value over the past six months was no more. Unceremoniously retired. As ChatGPT itself notes:
OpenAI does not provide a way to revert to the previous voice model or interface.
Now maybe OpenAI will adjust the new model to be more like the old one. But I’m not optimistic; OpenAI is increasingly driven by engagement. And I suspect that for many people this more, err, human-sounding voice is appealing.
But the feeling of loss is not unusual. Some people mourned the loss of the original Claude 3.5; the updated model had a noticeably different personality. Back in 2023, Replika users, who’d built emotional attachments to their AI companions, reported significant distress when Replika abruptly restricted romantic roleplay features; many said it felt like losing a real relationship.
AI is unlike previous technology. Whether we like it or not, we build relationships with our models. They can get very close to feeling like another person. It is not surprising that we feel a sense of loss when models disappear or change. I don’t miss Windows 95. But I could well miss Claude 4.0 if Claude 5.0 turns out to be significantly different.
And as more people start using AI tools - as more of us build attachments to our AI models - more of us are going to go through this same cycle of loss. Interestingly, the limited set of models means changes will simultaneously affect a large number of people - large chunks of the population going through these feelings together. It’s a bit like time-travelling back to Christmas Day 1986. On that day more than half the UK population (>30 million people) shared the same shock when Dirty Den served Angie with divorce papers in the EastEnders Christmas special.
In time, model releases will slow down and stabilize, but right now the AI world is a wild west. Models come and go all the time. The technology remains immature. Labs constantly experiment. It’s the best way of finding out what works - and what doesn’t. But it does feel a bit like streaming TV shows; things can get cancelled at any time. Should you start watching “Wheel of Time”? You might get invested only for it to be cancelled partway through… Should you start using model X if it might get dropped or replaced?
Progress on the jagged frontier is not smooth.
And yet, the benefits continue to outweigh the disadvantages. There is always something new to try.
The other night I was playing The Last Of Us. In the past I’ve occasionally used a walkthrough to get past a tricky point. But this time I grabbed my phone, switched on real-time ChatGPT video, and pointed it at my screen.
ChatGPT immediately recognized the game as The Last Of Us, and was able to get me into the supply room I’d been struggling to find a way into. I barely had to say anything. Wow.
Unfortunately, I quickly burnt through my video credits. But the direction of travel is clear. What will life be like when we all have experts on our shoulders, watching our worlds and whispering in our ears? When we are wearing AI-powered glasses that can observe our world and gently guide us?
The optimistic view is that it ushers in an age of politeness and harmony, one where AI tools gently nudge us to rein in our worst emotions and stop things escalating before they get too extreme. Though perhaps we all become very same-y; the rich tapestry of human personalities replaced with a single vanilla AI one.
The more dangerous view is that we start to lose the capacity for independent thought. We let the AIs tell us what to say. Rather than thinking of what to say myself, I let my AI guide me and simply repeat the words it gives me. And if I’m talking to someone else who is doing the same - is that just two AIs talking to each other?
And what of the people without access to these tools? They will increasingly be outmaneuvered and outwitted by those who have them.
We don’t have long to wait to find out how this plays out. AI will inevitably change how we think and relate to each other. But whether we have any say in how that happens is unclear. Right now, we’re along for the ride, forming attachments to tools that can vanish overnight, while the companies building them optimise for engagement rather than our wellbeing. It’s sobering to realise we’re not just beta testing software; we’re beta testing the future of human connection.

