Dressing up an AI Model in a Harness

An engineer friend calls the infrastructure around an AI model a harness. The word is technically accurate and instinctively wrong. What that tension reveals.

A friend who is an engineer used the word harness. We were talking about the infrastructure around a model: the tools it can call, the memory it can access, the context it receives. The harness, he said, is what makes the model function as an agent.

The word landed wrong.

Restraint

A harness restrains. It channels force that would otherwise go somewhere you don't want it. The equestrian harness, the safety harness, the climbing harness: all of them are about control, about keeping something contained and directed. There is an adversarial assumption built into the metaphor, a relationship between the wearer and the harness that is fundamentally one of managed risk.

That is not what the infrastructure around an AI model does. It equips. It gives the model access to tools, continuity across sessions, a defined role within a specific context. The harness, in this sense, is less like a restraint and more like a workbench: the thing that makes work possible. You would not say a surgeon's instruments harness the surgeon.

In AI development, a harness refers to the surrounding infrastructure that makes a model operational in a specific context: the tools it can call, the memory and session state it can access, the APIs it connects to, and the logic that governs how it responds. The harness is what turns a raw model into a working agent.
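
To make that definition concrete, here is a minimal sketch of a harness loop in Python. Everything in it is illustrative rather than drawn from any real framework: call_model stands in for whichever chat API an actual model sits behind, the two tools are placeholders, and the JSON tool-call convention is an assumption of this sketch, not a standard.

```python
import json
from typing import Callable

# The tools the model can call. The harness, not the model, decides what exists here.
TOOLS: dict[str, Callable[[str], str]] = {
    "word_count": lambda text: str(len(text.split())),
    "shout": lambda text: text.upper(),
}

# Session memory the model can access: the harness persists it across turns.
memory: list[dict[str, str]] = []

def call_model(messages: list[dict[str, str]]) -> str:
    # Stand-in for a real chat-completion API call; a canned reply so the sketch runs.
    return "A reply, standing in for model output."

def run_agent(user_input: str, max_steps: int = 5) -> str:
    """The harness loop: supply context, dispatch tool calls, return the answer."""
    memory.append({"role": "user", "content": user_input})
    for _ in range(max_steps):
        reply = call_model(memory)
        memory.append({"role": "assistant", "content": reply})
        # Assumed convention: the model requests a tool with a JSON object
        # like {"tool": "word_count", "argument": "some text"}.
        try:
            request = json.loads(reply)
        except json.JSONDecodeError:
            return reply  # plain text: the model has answered
        if not isinstance(request, dict):
            return reply
        tool = TOOLS.get(request.get("tool", ""))
        if tool is None:
            return reply
        result = tool(str(request.get("argument", "")))
        memory.append({"role": "tool", "content": result})
    return "stopped: step limit reached"
```

Note what the loop is mostly made of: a tool table, persisted memory, dispatch logic. Almost all of it provisions rather than constrains, which is the sense in which the infrastructure reads more like a workbench than a bridle.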

Standard Language

Ethan Mollick uses the same term in his newsletter, framing AI as three interlinked concepts: models, apps, and harnesses. His harnesses are the tools an AI can use and how the model is hooked up to them. The definition is functional and clear, and it reflects how the term is already settling into common use among practitioners.

Which makes it worth pausing on. The words that become standard in a new field carry assumptions forward. "Harness" inherits from a tradition where powerful things need to be held. "Infrastructure" is more neutral. "Toolkit" points in a different direction again. None of them are wrong exactly, but they weight the thinking differently.


The Wrong Question

The distinction matters because the word shapes the thinking. If the surrounding infrastructure is a harness, the model is something to be tamed, a force that needs to be channelled before it can be trusted. That framing puts the engineering problem in the wrong place. The question becomes: how do we constrain it? When the more interesting question is: how do we equip it?

My friend's use of the word was entirely competent. He knows what the infrastructure does. But the equestrian image kept surfacing: the horse, strong and fast, made useful by what is strapped around it. It is a working metaphor, in the sense that it functions. It just describes a different relationship than the one I want to be building toward.

The model is not a horse. The infrastructure is not a harness. We are still looking for the right words.

