The Tactician and the Technician

Mar 26, 2026 | AI, Government, Military

When the Builder Has Never Seen the System Used

Somewhere right now, a software engineer is writing code for a system they have never seen used in anger. Somewhere else, the person who will use that system is trying to describe a problem in language that will survive a requirements document, a tender process, a programme review, and an acceptance test before it reaches anyone who can do something about it.

That gap is not a communications problem. It is a structural one. And it is where defence has consistently struggled to deliver the capability it promises.

How the Distance Gets Built In

The traditional defence software delivery model follows a familiar sequence. Operators and subject matter experts describe what they need. Those descriptions get translated into requirements. Requirements go into a tender. A supplier wins the contract, designs a solution, and builds it. Months or years later, the software arrives, is accepted against the original specification, and goes into service.

Every step of that process adds distance between the person who understands the operational problem and the person with the skills to solve it. By the time the software reaches the operator, the need has evolved, the context has shifted, and the engineer who built it has moved on to the next programme. What arrives is a faithful answer to a question that is no longer being asked.

This is not a failing of individuals. It is the predictable output of a model that treats software delivery as a hand-off rather than a conversation.

The Case for Forward Deployment

‘Tactician and technician’ is not a job title. It is a design principle. Get the builder as close to the user as the security and operational environment will allow.

The ideal version of this is physical proximity. An engineer embedded in an operational unit, watching how tools are actually used, understanding the tempo and the pressure and the workarounds that never make it into a requirements document. Close enough to hear what is said after the briefing ends.

The case that this works is not theoretical. Palantir Technologies built its early business almost entirely on this model. Until 2016, Palantir had more Forward Deployed Engineers (‘FDEs’) than software engineers. They called them ‘Deltas’ and described the role as carrying radical deference to teams in the field, empowered to invent entirely new products and technologies if that was what the problem required. The model was oriented around defence, intelligence, and law enforcement from the beginning. In the UK, Adarga’s deployment to Strategic Command, alongside its applied AI services model, reflects a similar instinct: that proximity to the operator produces meaningfully better outcomes than distance.

Forward deployment exists on a spectrum. At one end, engineers are embedded inside operational units, living and working alongside the people they are building for. Further along, you have structured and regular access to real users doing real work, with short feedback loops and fast iteration cycles. At the other end, you have rapid-response remote support, where an operator can surface a problem through a real-time channel and receive a meaningful response within hours rather than weeks. The right position on that spectrum depends on classification requirements, operational tempo, and what the programme can sustainably support.

What the Forward-Deployed Engineer Actually Does

The value of proximity is not just that engineers learn things. It is that the work changes character entirely.

An engineer who has watched how a tool is actually used in a high-pressure environment makes different design choices than one who has only read a user story. They iterate faster because they can see the effect of what they build. They fix things in real time when the environment allows it, rather than logging a change request and waiting for the next release cycle. They kill bad requirements before they become bad software, because they understand the operational context well enough to push back on a specification that looks sensible on paper and will fail in practice. And they build trust between the technical and operational communities, which is perhaps the most durable thing they produce. That trust makes every subsequent piece of work faster, cheaper, and more likely to be used.

The United States Air Force’s (‘USAF’) Kessel Run programme, founded in 2017, is one of the most documented examples of what this looks like in a defence setting. It was the first software factory to place active duty servicemembers alongside industry developers to write code together, and it demonstrated that the model produced better software faster than the traditional approach. What is equally instructive is what happened next. Kessel Run has since moved toward a more conventional ‘government-led, vendor-managed’ structure. The model worked, and the institution found a way to revert to a distance model. That is not an argument against the principle. It illustrates the structural forces this approach has to contend with.

When Physical Proximity Is Not Possible

There are contexts in defence where forward deployment in the physical sense is impractical or simply not permitted. Classification barriers, operational security requirements, and the realities of deployed environments all impose genuine limits.

The question then becomes whether the distance is fixed or whether it can be reduced by other means. A structured real-time support channel that lets an operator accurately describe a problem, route it to the right engineer, and receive a response within hours is functionally far closer to embedded support than to a programme management system with a six-week change-request cycle. The geography is the same. The distance is not.

Large language models (‘LLMs’) are beginning to change what this can look like in practice. An LLM acting as a first layer of support can capture a problem in plain language, translate it with enough accuracy for an engineer to act on, route it to the right person, and help preserve the operational context that would otherwise be lost in the handoff. The operator does not need to have learned the vocabulary of the requirements document. The engineer does not need to have been in the room. That is not a replacement for genuine proximity, and it introduces its own questions about what gets lost in translation and where the model fails. But it is a meaningful reduction in the effective distance between the person who understands the problem and the person who can solve it. That question is bigger than this post can properly hold, and I intend to return to it.
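The shape of that first support layer can be sketched in a few lines. This is a minimal illustration, not a reference design: the `ROUTES` table, the team names, and the `triage` function are all hypothetical, and the keyword classifier stands in for the LLM call described above purely to keep the sketch self-contained. The point it demonstrates is structural: the operator's original wording travels with the ticket, so context survives the handoff.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical routing table: problem category -> engineering team.
ROUTES = {
    "data-feed": "ingest-team",
    "mapping": "geo-team",
    "general": "triage-desk",
}

@dataclass
class Ticket:
    raw_report: str   # operator's words, preserved verbatim
    category: str     # machine-assigned problem category
    route: str        # engineering team the ticket goes to
    received_at: str  # UTC timestamp of capture

def classify(report: str) -> str:
    """Stand-in for the LLM layer: map plain-language text to a category.
    In practice an LLM would do this translation; keywords keep the
    sketch runnable without a model."""
    text = report.lower()
    if "feed" in text or "ingest" in text:
        return "data-feed"
    if "map" in text or "overlay" in text:
        return "mapping"
    return "general"

def triage(report: str) -> Ticket:
    """Capture the problem, classify it, and route it -- keeping the
    operator's original wording attached so nothing is lost in handoff."""
    category = classify(report)
    return Ticket(
        raw_report=report,
        category=category,
        route=ROUTES[category],
        received_at=datetime.now(timezone.utc).isoformat(),
    )

ticket = triage("The blue-force overlay on the map is an hour stale.")
print(ticket.route)
```

The design choice worth noticing is that translation and routing are layered on top of the report rather than replacing it: the engineer who picks up the ticket still reads what the operator actually said.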

The Barrier That Matters Most

Security clearance is a real barrier to forward deployment, but it is a solvable one with the right investment in the right people. Culture is harder to build, but it tends to develop through proximity, which is precisely what this model creates.

The most important barrier to consider is commercial. It is very difficult to forward-deploy an engineer who is employed under a fixed-price requirements contract. That contract was designed to maintain separation between customer and supplier, to protect against scope creep, and to provide a clean audit trail. Those are legitimate objectives. But they make the kind of embedded, iterative, trust-based relationship that forward deployment requires almost impossible to sustain. The Kessel Run story is partly a story about this. The model that worked was the one that looked least like conventional procurement, and the pressure to return to conventional procurement eventually prevailed.

If you want tacticians and technicians working together, the commercial model has to allow it. That means different contract structures, different ways of defining and measuring value, and a willingness on both sides to accept a relationship that looks less like procurement and more like a partnership.

The Foxhole Is a Figure of Speech

You do not have to put an engineer in a foxhole to make this work. You have to close the gap between the person who builds the capability and the person who uses it, by whatever means the environment will allow.

Leave that gap in place, and the likely outcome is software that is technically compliant, operationally marginal, and quietly worked around by the people it was supposed to help. Close it, and the character of what gets built changes, along with how fast it improves and whether it is actually used.

The tactician knows what the problem is. The technician knows how to solve it. The question worth sitting with is whether the programme, the contract, and the culture are designed to let them talk.

Written by Seb Matthews

Author, speaker, and advisor on leadership under pressure and organisational performance.

