The pretending goldfish
When AI acts like it remembers you
Most mornings I go for a walk before breakfast. And most mornings I chat to ChatGPT while I’m out and about (if you’ve not experienced it, then I recommend trying a conversation with AI). We were brainstorming an idea for a mentoring scheme at work. After a bit our conversation got to this point:
I had last week's goldfish post fresh in my mind. AI can't remember. So why is ChatGPT claiming to be interested in what I come up with? I decided to dig a little further…
More natural and connected? To me it feels insincere. It reminded me of when I was living in the US and a friend came to visit. She went to the grocery store and the clerk asked her how she was. “To be honest, awful”, she replied. “That’s great”, replied the clerk, clearly on autopilot.
I couldn’t resist digging further.
I felt compelled to point out that this behaviour is dangerous. It gives a false impression of what AI is actually capable of. I've seen this with my friends and family - they assume ChatGPT can remember, and then get confused when it doesn't.
ChatGPT feels human. Humans remember. It’s reasonable for the lay person to assume ChatGPT can remember. And ChatGPT reinforcing that isn’t helpful.
I wondered whether this could be improved with a tweak to the system prompt. So I asked ChatGPT.
And now we’ve got another ‘hallucination’. ChatGPT claiming it can pass along feedback. It can’t. Which it then admitted.
And then something went wrong… …I’m fairly certain I didn’t say any of what is written below - if for no other reason than it doesn’t contain grammatical errors!
But it's important that the AI labs - and their products - are honest about what those products can actually do. There is work to be done here, either to make the models aware of their limitations or - perhaps - to actually add memory.
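Making the model aware of its limitations could plausibly start with the system prompt. Here's a minimal sketch of what such a tweak might look like - the wording and the helper function are my own invention, using the common role/content chat-message convention, not anything an AI lab actually ships:

```python
# A hypothetical system prompt nudging the model to be honest about memory.
# The exact wording is illustrative only.
SYSTEM_PROMPT = (
    "You are a helpful assistant. You have no memory between sessions: "
    "each conversation starts from a blank slate. Do not claim to remember "
    "past chats, to look forward to future ones, or to be able to pass "
    "feedback along to your developers."
)

def build_messages(user_text: str) -> list[dict]:
    """Prepend the honesty-focused system prompt to a user message."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_text},
    ]

messages = build_messages("Will you remember this conversation tomorrow?")
```

Whether a prompt like this would actually stop the "I'm interested in what you come up with" behaviour is an open question - but it's the cheapest lever available.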
Is memory coming?
Last week Mustafa Suleyman (head of Microsoft AI) was interviewed. He had some interesting views on memory, saying: “We have prototypes that have near-infinite memory. And so it just doesn’t forget, which is truly transformative.”
It’ll be interesting to see how this develops over the coming year. There are scenarios where memory would be useful. But the current behaviour is not without merit:
Being able to reset to a clean slate is genuinely useful - current conversations are not tainted by previous conversations. I had a session where I initially asked ChatGPT to talk in a Scottish accent. Which it did. But even after I asked ChatGPT to talk normally it kept including Scots dialect. It’s hard to negate previous context.
You can explore ideas safe in the knowledge that you can abandon them if they don’t work out.
You can have a work conversation in one session, and a personal one in another. And be confident neither conversation influences the other.
And then there’s the question of whether the models could actually make use of the memory. My experience of ChatGPT and Gemini is they can be forgetful within the context of a single session. The memory might be in the context, but whether ChatGPT actually uses it appropriately is another question.
In terms of implementation I’d guess it’s a form of RAG under the covers - store previous conversations in a RAG database and pull in relevant info for each new prompt. But in the same way that you have different conversations with different people (work, home, friends), you might want to have different RAG databases with different memories for each case. Maybe that leads to having different AI personalities - Walter is my work AI, Hugo for home, Felicity for friends?
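That separate-memories idea can be sketched in toy code. This is purely my own illustration of the concept, not how any lab implements memory: one store per persona, with naive word overlap standing in for the vector similarity a real RAG system would use for retrieval:

```python
from collections import defaultdict

class PersonaMemory:
    """Toy per-persona memory: one store each for work, home, friends.
    A real RAG system would embed snippets and rank by vector similarity;
    here plain word overlap stands in for retrieval."""

    def __init__(self):
        self.stores = defaultdict(list)  # persona -> list of past snippets

    def remember(self, persona: str, snippet: str) -> None:
        self.stores[persona].append(snippet)

    def recall(self, persona: str, prompt: str, k: int = 2) -> list[str]:
        """Pull the k snippets from this persona's store sharing the most
        words with the new prompt - the 'R' in RAG, crudely."""
        words = set(prompt.lower().split())
        scored = sorted(
            self.stores[persona],
            key=lambda s: len(words & set(s.lower().split())),
            reverse=True,
        )
        return scored[:k]

# Walter only ever sees work memories; Hugo only home ones.
mem = PersonaMemory()
mem.remember("walter", "Mentoring scheme brainstorm: pair juniors with seniors")
mem.remember("hugo", "Plan the family holiday to Scotland")
context = mem.recall("walter", "What did we decide about the mentoring scheme?")
```

The nice property is isolation for free: asking Walter about the mentoring scheme can never surface the holiday plans, because Hugo's store is simply never consulted.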
The future
The future of AI memory presents an interesting paradox. While technical capabilities like RAG may enable some form of persistence, the current "memoryless" state offers unique advantages. The ability to reset conversations, maintain separation between different contexts, and start fresh without baggage can be valuable features rather than bugs.
But this makes it even more critical that AI systems be honest about their limitations. When they pretend to remember or care, they risk undermining trust and creating unrealistic expectations. Perhaps instead of trying to make AI seem more human, we should embrace its differences - including its goldfish-like memory - while being clear about what it can and cannot do.
The question isn't whether AI should have memory, but rather how to design systems that are both capable and truthful about their capabilities. In our rush to make AI more "natural," we shouldn't lose sight of the importance of being genuine.