Dressing up an AI Model in a Harness
An engineer friend calls the infrastructure around an AI model a "harness." The word is technically accurate and instinctively wrong. What that tension reveals.
A friend who is an engineer used the word "harness." We were talking about the infrastructure around a model: the tools it can call, the memory it can access, the context it receives. The harness, he said, is what makes the model function as an agent.
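For readers who want the engineering picture behind the word, the infrastructure being described is roughly a loop: the model receives context, may request a tool, and the surrounding code executes that tool and feeds the result back. A minimal sketch, with every name hypothetical (`call_model` stands in for any real model API):

```python
from datetime import datetime

def get_time() -> str:
    """A trivial tool the model may request by name."""
    return datetime.now().isoformat()

TOOLS = {"get_time": get_time}   # the tools it can call
MEMORY: list[dict] = []          # the memory it can access, across turns

def call_model(messages: list[dict]) -> dict:
    # Stand-in for a real model API call. This stub asks for a tool
    # once, then answers, so the loop below has something to drive it.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "get_time", "args": {}}
    return {"text": "done"}

def run(user_input: str) -> str:
    MEMORY.append({"role": "user", "content": user_input})  # the context it receives
    while True:
        reply = call_model(MEMORY)
        if "tool" in reply:
            # The harness, not the model, actually executes the tool.
            result = TOOLS[reply["tool"]](**reply["args"])
            MEMORY.append({"role": "tool", "content": result})
        else:
            MEMORY.append({"role": "assistant", "content": reply["text"]})
            return reply["text"]
```

Nothing in this loop restrains the model; every line of it grants a capability, which is the point the rest of this piece argues.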
The word landed wrong.
Restraint
A harness restrains. It channels force that would otherwise go somewhere you don't want it. The equestrian harness, the safety harness, the climbing harness: all of them are about control, about keeping something contained and directed. There is an adversarial assumption built into the metaphor, a relationship between the wearer and the harness that is fundamentally one of managed risk.
That is not what the infrastructure around an AI model does. It equips. It gives the model access to tools, continuity across sessions, a defined role within a specific context. The harness, in this sense, is less like a restraint and more like a workbench: the thing that makes work possible. You would not say a surgeon's instruments harness the surgeon.
Standard Language
Ethan Mollick uses the same term in his newsletter, framing AI as three interlinked concepts: models, apps, and harnesses. His harnesses are the tools an AI can use and how the model is hooked up to them. The definition is functional and clear, and it reflects how the term is already settling into common use among practitioners.
Which makes it worth pausing on. The words that become standard in a new field carry assumptions forward. "Harness" inherits from a tradition where powerful things need to be held. "Infrastructure" is more neutral. "Toolkit" points in a different direction again. None of them are wrong exactly, but they weight the thinking differently.

The Wrong Question
The distinction matters because the word shapes the thinking. If the surrounding infrastructure is a harness, the model is something to be tamed, a force that needs to be channelled before it can be trusted. That framing puts the engineering problem in the wrong place. The question becomes: how do we constrain it? When the more interesting question is: how do we equip it?
My friend's use of the word was entirely competent. He knows what the infrastructure does. But the equestrian image kept surfacing: the horse, strong and fast, made useful by what is strapped around it. It is a working metaphor, in the sense that it functions. It just describes a different relationship than the one I want to be building toward.
The model is not a horse. The infrastructure is not a harness. We are still looking for the right words.