Fidelity Over Distance

by Seb Matthews | Mar 29, 2026 | AI, Government, Military

What Survives the Distance

In the previous post, I argued for getting engineers closer to the operators who use the systems they build. The forward-deployed model, imperfect and constrained as it often is, produces better outcomes than a requirements document passed down a procurement chain. But physical proximity has limits, and I have been sitting with a harder version of the question ever since. What if those limits are less fixed than they appear? What if proximity was never really the point?

What Proximity Actually Does

The value of an engineer working alongside an operator is not primarily that they share a building. It is that they share enough of the same reality for meaningful communication to happen. The engineer sees the tool used under pressure. They hear the language the operator uses to describe problems. They understand the workarounds that exist because the system does not quite do what it needs to do. They absorb context that was never written down because it was assumed.

What they are actually receiving, in all of that, is fidelity. How much of the real problem survives the journey from the person who has it to the person who can solve it? Physical proximity is the most reliable mechanism we have found for preserving that fidelity. When the United States Air Force’s (‘USAF’) Kessel Run programme placed active-duty servicemembers alongside industry developers, the result was a delivery cadence of new capabilities every 12 hours at its peak. That is not a technology story. It is a fidelity story. When the information is clean and current, and the feedback loop is short, the work changes character entirely.

The traditional procurement model does not deliberately destroy fidelity. It accepts loss at every step. The operator’s problem gets translated into a requirement. The requirement gets translated into a specification. The specification gets translated into code. Each translation introduces noise. By the time the software arrives, what remains may be faithful to the document, but the document was never a perfect representation of the problem.

The Vocabulary Gap

The specific mechanism by which fidelity is lost is worth understanding because it shapes what any solution needs to address.

Operators and engineers do not share a vocabulary. This is not a failure of intelligence or willingness. It is the natural consequence of working in different disciplines. An operator describing a problem thinks in terms of mission, tempo, risk, and workarounds. An engineer hearing that description needs to translate it into function, input, output, and edge cases. The gap between those two mental models is where most defence software delivery fails: not in the building of the software, but in the translation of the problem.

The requirements document is the traditional mechanism for bridging that gap. It imposes structure on the operator’s problem in the hope that the structure will carry enough meaning to guide the engineer. Requirements documents are good at describing what a system should do in expected conditions. They are poor at capturing why, and almost completely silent on the operational texture that determines whether the software will actually be used.

What LLMs Change

Large language models (‘LLMs’) are unusually well-suited to the specific problem of vocabulary translation. They absorb plain language. They hold context across a conversation. They produce structured output from unstructured input. They can ask clarifying questions that a requirements document cannot. And they do not require the operator to have learned the engineer’s vocabulary in advance.

The research evidence is beginning to accumulate. In safety-critical systems engineering, LLM-assisted processes have reduced formal hazard analysis cycles from months to a single day while maintaining expert verification. The same underlying capability, translating natural-language descriptions of problems into structured specifications that technical teams can act on, is precisely what the vocabulary gap in defence software delivery calls for.

The deployment context is also moving in the right direction. IBM has developed a defence-specific LLM designed to operate in air-gapped and classified environments. Scale AI’s ‘Defense Llama’ is fine-tuned for combat planning and intelligence operations. These are not general-purpose consumer tools pointed at a defence problem. They are systems being built for the specific conditions under which defence software delivery operates.

What this means in practice is that an operator in a classified environment, unable to bring an engineer into the room, can describe a problem in their own language through a properly architected LLM layer. That description can be translated with enough fidelity for an engineer to understand the operational context, ask better questions, and build something closer to what is actually needed. The engineer does not need to have been in the room. The operator does not need to become technical. The gap narrows.
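To make that concrete, here is a minimal sketch of what one piece of such a layer might look like. Everything in it is an assumption for illustration: the call_llm stub stands in for whatever accredited model endpoint exists in the environment, and the ProblemStatement fields are one guess at what preserves operational context, not a standard or a description of any fielded system.

import json
from dataclasses import dataclass

# Hypothetical stub: in an air-gapped deployment this would wrap whatever
# locally hosted, accredited model endpoint is available. Not a real API.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("Replace with the approved model endpoint")

# Illustrative structure for what the engineer receives. The fields are
# assumptions about what carries operational context, not a standard.
@dataclass
class ProblemStatement:
    mission_context: str        # why the problem matters, in the operator's terms
    observed_behaviour: str     # what the system currently does
    desired_behaviour: str      # what the operator actually needs
    workarounds: list[str]      # the informal fixes already in use
    open_questions: list[str]   # ambiguities the model could not resolve

PROMPT_TEMPLATE = """You are translating an operator's description of a problem
into a structured statement an engineering team can act on.
Preserve the operator's own language wherever possible. Do not invent detail.
Anything you are unsure about goes in "open_questions", not in the other fields.

Operator's description:
{description}

Respond with JSON containing exactly these keys:
mission_context, observed_behaviour, desired_behaviour, workarounds, open_questions
"""

def translate(description: str) -> ProblemStatement:
    """Turn a plain-language operator description into a structured statement,
    keeping unresolved ambiguity visible rather than smoothing it over."""
    raw = call_llm(PROMPT_TEMPLATE.format(description=description))
    fields = json.loads(raw)
    return ProblemStatement(**fields)

The design choice that matters is the open_questions field. Rather than letting the model smooth ambiguity into confident-looking text, the layer is asked to surface what it could not resolve, which bears directly on the tension discussed below.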

The Honest Tension

This is not an uncomplicated story, and it would be dishonest to present it as one.

LLMs introduce their own distortions. They are confident when they should be uncertain. They smooth over ambiguity rather than surfacing it. They can produce structured output that looks authoritative and contains errors that a technically naive user will not detect. In a safety-critical or operationally sensitive context, that combination of fluency and fallibility requires careful management.

The question worth sitting with is whether those distortions are worse than the ones the traditional model already produces. A requirements document is also confident. It also smooths ambiguity by forcing it into a format that does not accommodate nuance. It also produces artefacts that look authoritative and contain errors that persist through multiple programme reviews before anyone notices. The difference is that LLM-generated translation is faster to produce, test, and correct. The feedback loop is shorter. The cost of a wrong turn is lower.

That does not resolve the tension. It reframes it. The question is not whether LLMs are perfect translators. They are not. The question is whether an imperfect translation with a short feedback loop yields better outcomes than one with a long one.

The Same Logic at a Different Layer

I have been describing a problem that exists at the human layer: the gap between the operator who understands the problem and the engineer who can solve it. The same logic applies one level down, at the layer where AI models meet operational data.

In deployed military environments, AI systems face an equivalent fidelity problem. The model needs to be close to the data to be useful. In denied, disrupted, intermittent, and limited (‘DDIL’) conditions, the data cannot reliably reach a central model. Bandwidth constraints, volume, latency, and the realities of contested environments all make centralised AI architectures fragile at precisely the moments they are needed most. The response, as with the human proximity problem, is to invert the direction of travel. Instead of moving the data to the capability, you move the capability to the data. You take the AI to the fight.

That is the subject of the next post in this series, and it is where the central argument arrives at its sharpest point. The principle is the same at every layer. Always take the capability to the problem. Never assume the problem will travel to the capability intact.

What Fidelity Actually Requires

Proximity was the best mechanism we had for preserving information fidelity between people who think differently about the same problem. It remains valuable, and nothing in this post is an argument against forward-deployed engineers. The argument is that proximity was always a means to an end, not the end itself, and that LLMs change what is achievable when proximity is not an option.

What fidelity actually requires is a mechanism that can absorb the real problem as it exists in the operator’s mind and carry enough of it, intact, to the engineer who needs to act on it. Physical co-location does that reliably. LLMs, used carefully in the right architectural context, may do it well enough to change what is possible in the environments where co-location is not.

I am genuinely uncertain about where those limits are. The research is early, the deployment examples are sparse, and the failure modes are still being discovered. What I am more confident about is the framing: the problem was never really about distance. It was always about what survives it.

Written by Seb Matthews

Author, speaker, and advisor on leadership under pressure and organisational performance.

Related Posts

Take the AI to the Fight

Cloud AI made a bet that data would flow to the model. In military operating environments that bet fails, for two distinct reasons. This post explains why, and closes a three-part series on the design principle that runs through every layer of defence AI.
