Why I Keep Mixing Up Hortensia, Hibiscus and Rhododendron
And What It Reveals About Human Memory and AI Models
A mix-up of flower names becomes a window into how human memory works and how it contrasts with AI models.
For almost 25 years I’ve been exploring the biological world. Photography sharpened my eyes: when you take pictures, you learn to see more. Later, AI recognition apps added names to what I saw. Writing about plants and flowers deepened it again. Over time, I’ve learned a lot.
And yet three names keep slipping through: Hortensia, Hibiscus, Rhododendron.
They look nothing alike. I’ve looked them up dozens of times. Still, whenever I stand before one, hesitation creeps in. The word is there — but another one arrives instead. It feels absurd: my memory is fine, yet recall fails.
Why Some Names Don’t Stick
This isn’t about memory loss. The plants themselves are secure in my mind: I can recognise them, describe them, even recall where I last saw them. The weakness lies in recall fluency: the step from image to name.
Human memory doesn’t operate like a tidy database. It works by association. When I search for “ornamental shrub with a long Latin name”, a whole neighbourhood of candidates lights up. And in that crowd, Hortensia, Hibiscus and Rhododendron compete with each other.
Building a Web of Associations
Over the years I’ve tried to strengthen the connections. Associations help:
- Hortensia → from hortus (garden): large garden spheres of flowers.
- Hibiscus → the bright red tea I’ve drunk many times.
- Rhododendron → from Greek rhodon (rose) + dendron (tree), literally a “rose tree”.
I add more threads: the colour mauve (from malva), the marshmallow plant, the whole Malvaceae family. I write about them, photograph them, and build stories. Slowly the names gain more grip.
But still they wobble. Hortensias bloom heavily, then fade. Rhododendrons are green most of the year. Hibiscus feels tropical, slightly exotic. They don’t share a common rhythm in my life, so their names remain only lightly woven into my web.
When Models Become a Mirror
Working with large language models has sharpened this insight.
A model’s memory is a geometry: tokens in a high-dimensional space, clustered by co-occurrence. Retrieval is clean, almost mechanical: if a name sits at these coordinates, it will be found.
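That idea of retrieval by coordinates can be sketched in a few lines. Everything below is invented for illustration: real models learn hundreds of dimensions from data, while these three-dimensional "embeddings" and their values are toy numbers chosen by hand.

```python
import math

# Toy 3-D "embedding space". The vectors are made up for illustration;
# a real model learns such coordinates from co-occurrence statistics.
embeddings = {
    "hortensia":    [0.9, 0.1, 0.3],
    "hibiscus":     [0.8, 0.2, 0.4],
    "rhododendron": [0.85, 0.15, 0.35],
    "oak":          [0.1, 0.9, 0.2],
}

def cosine(a, b):
    """Cosine similarity: how closely two vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def nearest(query, space):
    """Retrieval as geometry: return the name at the closest coordinates."""
    return max(space, key=lambda name: cosine(space[name], query))

# A query at hortensia's coordinates finds hortensia, mechanically.
print(nearest([0.9, 0.1, 0.3], embeddings))  # prints "hortensia"
```

Note that the three shrub names sit close together in this toy space too — but the lookup never hesitates: whichever vector is mathematically nearest wins, every time.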
My own memory is layered, improvised, and deeply entangled with life. When I fail to recall Hibiscus, it isn’t because the data is gone; it’s because the pathways are tangled. Too many neighbours compete: Hortensia, Rhododendron, and all the other long Latin names.
The contrast helps me see my own memory more clearly. Models expose what human recall is not. They highlight that we don’t live in coordinates, but in webs of meaning.
Memory as a Web, Not an Address Book
The metaphor of “neighbours” runs through both systems. In a language model, neighbours are defined by probability: which words tend to appear near which others. In human memory, neighbours are defined by colour, rhythm, smell, experience, emotion.
One wrong thread, and I end up at the wrong flower again. But that same web is what allows creativity: by following detours, memory generates new connections.
Models cluster words. Humans weave meaning.
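The claim that a model’s neighbours are defined by co-occurrence can be made concrete. The four sentences below are an invented toy corpus; in a real model the statistics come from billions of tokens and are compressed into learned vectors, but the principle — neighbours are frequent companions — is the same.

```python
from collections import Counter, defaultdict

# An invented toy corpus; a real model sees billions of tokens.
sentences = [
    "hortensia blooms in the garden".split(),
    "hibiscus tea is bright red".split(),
    "rhododendron stays green in winter".split(),
    "the garden shrub blooms in summer".split(),
]

# Count how often each word appears alongside each other word,
# using the whole sentence as the co-occurrence window.
cooc = defaultdict(Counter)
for words in sentences:
    for w in words:
        for other in words:
            if other != w:
                cooc[w][other] += 1

# A word's "neighbours" are simply its most frequent companions.
print(cooc["garden"].most_common(3))
```

For "garden", the top companions are "blooms", "in", and "the" — neighbourhood by frequency, with no colour, smell, or memory of a summer afternoon anywhere in the count.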
A Reflection
So these three stubborn names are not just a quirk of recall. They are part of a larger discovery.
By learning how models cluster and retrieve, I’ve gained a clearer sense of how my own memory works and why it so often slips. The contrast explains both the strength of human memory (richness, creativity, lived meaning) and its weakness (confusion, false neighbours).
An LLM will always find the name at its address. I will sometimes lose it, but in losing it I discover something else: how meaning is woven into life.