The Decision Nobody Audits
I spent a significant part of my recent career as the chief technology officer (‘CTO’) of one of the largest single-organisation information technology (‘IT’) estates in Europe, a government and defence environment. It was a complex, high-stakes environment, and I have returned to it in my thinking many times since.
What I consistently noticed there was a particular kind of failure. Not a technical failure, though there was plenty of that. The deeper failure was a decision at the leadership level, rarely made explicitly and almost never audited: that delivering something poor on time was preferable to delivering nothing. Chase the date. Honour the financial milestone. Ship the thing, regardless.
That preference, seldom stated but consistently acted on, produced a compounding effect that, from the outside, looked like a series of technology problems. It was not. It was one human decision, repeated at scale, cascading through an entire IT estate.
I have watched a version of that decision play out in every technology transformation since.
When Everyone Looks at the Wrong Thing
The data on artificial intelligence (‘AI’) deployment is by now familiar in its bleakness. Research published in 2026 suggests that between 70 and 85 per cent of generative AI deployments fail to meet their intended return on investment (‘ROI’). When organisations examine why, the explanations cluster reliably around technical causes: the data was not clean enough, the model was not suited to the use case, the integration was more complex than anticipated.
These are real problems. I am not dismissing them. But they are where the failure shows up, not where it starts. The failure started earlier, in the decisions made before the technology arrived, and in the decisions made about the technology once it did.
The tendency to reach for a technical explanation when a technology investment underdelivers is a form of decision-making error in itself. It is more comfortable to question the model than to question the governance. It is easier to re-run a data pipeline than to ask who decided what this was supposed to solve and whether they got that right. The technology becomes the available explanation, and the actual decision goes unexamined.
The Three Decisions That Determine Everything
In my experience, AI deployments live or die on three human decisions that have nothing to do with model architecture.
The first is deciding what problem you are actually trying to solve. This sounds obvious, and it is almost always handled poorly. The framing ‘we are deploying AI to improve efficiency’ is not a problem definition. It is a direction of travel without a destination. The question underneath it is the one most organisations defer or leave implicit: what specifically will be different, for whom, and how will we know? When the deployment disappoints, that deferral is usually where the trouble began.
The second is the decision about what to do when AI output contradicts what your team already believes. This is the trust decision, and it is where I have seen more deployments quietly stall than anywhere else. An AI system surfaces an insight, a recommendation, a pattern. A senior person in the room does not like what it implies. The question is no longer ‘Is the AI right?’ It is ‘Does this organisation have the culture to act on something that challenges existing judgment?’ In organisations where it does not, the AI gradually becomes a system that confirms what people already think, or stops being consulted at all.
The third is the decision about what happens when the AI is wrong. This is the accountability decision. When an AI-assisted outcome is poor, who is responsible? If the answer is unclear, the organisation will quietly stop using AI for anything that matters. Accountability cannot be offloaded to a model. Someone has to decide whether to use the model’s output. In organisations where that has not been established, AI ends up in a corner of the operation where it is safe but irrelevant.
None of these is a technology question. They are questions about accountability, trust, and decision-making culture. The organisations that have answered them clearly before deploying AI are the ones getting real returns. The ones that have not are generating the data behind the 70 to 85 per cent figure.
The Mirror, Not the Transformation
AI does not create new problems in organisations. It reveals and accelerates existing ones.
If your decision-making culture is slow and committee-heavy, deploying AI will not change that. If trust between leadership and teams is fragile, introducing an AI system that produces outputs people are not sure they believe will widen that gap rather than close it. If your organisation has never established clear accountability for technology-assisted decisions, AI will make that ambiguity visible in ways that are harder to manage than before. None of this is AI’s fault. It is the organisation revealing itself under new conditions.
I have observed this pattern across every technology wave of my career, from the internet through to cloud, mobile, and now AI. The technology changes. The underlying dynamic does not. The organisations that extract genuine value from a transformation are rarely the ones with the best technical capability. They are the ones that already had clarity on how they made decisions, who was accountable for what, and what they were actually trying to build. The technology rewards the operating model that was already fit for purpose. It exposes the one that was not.
What Cloud Already Taught Us
I ran a managed cloud hosting business in the early years of commercial cloud adoption. Our customers arrived, almost without exception, with the same primary question: how much cheaper will this be than what we are running now?
The honest answer was that it depended entirely on what they were willing to change.
Workloads with consistent, medium-to-high utilisation, running steadily around the clock, rarely resulted in meaningful cost reductions. The economics did not shift because nothing else had shifted. What I used to describe as the laws of IT thermodynamics applied: something has to change to affect the cost. Moving the same workload to a different location and expecting a different outcome is not a strategy. It is a hope.
The workloads that offered genuine opportunity were those with bursty, metronomic, or scheduled peaks of utilisation: systems that ran hard for defined periods and sat largely idle in between. These could be re-architected around cloud economics in ways that delivered real returns, but only if the customer was prepared to change the architecture, refactor the application, or alter the process around how the system was used. The customers who were not willing to change anything concluded, reliably, that the cloud had not worked for them. The customers who treated the technology as an invitation to reconsider the work itself got a different result.
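To make the arithmetic concrete, here is a minimal sketch of the kind of comparison we used to walk customers through. Every number is hypothetical, the pricing is invented for illustration, and the function names are mine rather than any provider's; the only point it demonstrates is that the shape of the utilisation, not the location of the workload, is what changes the cost.

# Illustrative only: assumed prices and utilisation shapes, not real provider rates.

HOURS_PER_MONTH = 730            # average hours in a month
MONTHLY_COST_PER_SERVER = 400.0  # assumed fixed monthly cost of owning one server
HOURLY_INSTANCE_RATE = 0.60      # assumed pay-per-hour rate for an equivalent cloud instance

def fixed_estate_cost(servers: int) -> float:
    # Owned capacity is paid for whether it is busy or idle,
    # and it has to be sized for the peak.
    return servers * MONTHLY_COST_PER_SERVER

def pay_per_use_cost(peak_instances: int, peak_hours: float, idle_instances: int) -> float:
    # Cloud cost for a workload that scales between a peak level and an idle level,
    # assuming the application has been re-architected so it can scale down.
    idle_hours = HOURS_PER_MONTH - peak_hours
    return HOURLY_INSTANCE_RATE * (peak_instances * peak_hours + idle_instances * idle_hours)

# Steady workload: ten servers busy around the clock.
print("Steady, 24/7 workload")
print("  fixed estate :", fixed_estate_cost(10))                     # 4000.0
print("  pay per use  :", pay_per_use_cost(10, HOURS_PER_MONTH, 0))  # 4380.0 -- no saving

# Bursty workload: ten servers needed four hours a day, one server otherwise.
print("Bursty, scheduled-peak workload")
print("  fixed estate :", fixed_estate_cost(10))              # 4000.0 -- still sized for the peak
print("  pay per use  :", pay_per_use_cost(10, 4 * 30, 1))    # 1086.0 -- the saving is in the shape

With these assumed figures, the steady workload actually costs slightly more on a pay-per-hour basis, while the bursty one costs roughly a quarter of the fixed estate, and only because something about the architecture changed. Different numbers move the crossover point; they do not change the logic.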
The same dynamic is playing out with AI right now. Organisations that want the efficiency gain without changing how they make decisions, govern outputs, or restructure work around what AI does well are running the same play. They will reach the same conclusion. The organisations that treat AI as an invitation to reconsider how work is done, rather than a cheaper way to do the same work in the same way, are the ones that will find the real return.
The Question That Cuts Through
If you want to test quickly whether an AI strategy has been thought through at the human level, I have found one question that tends to cut through even a polished presentation. The scale adjusts for the size of the organisation, but the form is the same:
If you had ten, or a hundred, or a thousand, or ten thousand extra people available to you right now, what would you do with them?
If a leadership team cannot answer that question clearly and specifically, they do not yet know what they are asking AI to do for them. They have a technology investment without a problem definition. The question removes the AI from the room entirely and asks what the organisation is actually trying to accomplish. If the answer is sharp and concrete, the AI conversation that follows tends to be productive. If it produces a pause or a general statement about doing more with less, the deployment that follows will produce exactly the kind of outcome the data already predicts.
The question works because it reframes AI in human terms before the technology enters the discussion. Most AI strategies start with capability (what the model can do) and then work backwards toward application. This question forces the reverse: start with the actual problem, the actual gap, the actual constraint, and then ask whether AI is the right response to it.
Before You Deploy, Decide How You Decide
The hard part of AI is not the model. Every organisation I have spoken with in the past two years has access to capable AI. The hard part is the choice to confront what deploying it demands: clarity on what problem you are solving, a culture where people can challenge what the AI produces without career risk, and named accountability for what happens when the AI gets it wrong.
Most organisations skip this work because it is not technical and it is not comfortable. It requires leadership to examine decision-making cultures, trust gaps, and accountability structures that have often been left implicit for years. The technology makes that deferral harder to sustain, because AI deployed into an organisation without that clarity will find every crack and make it visible in real time.
I am not arguing against AI adoption. I am making an argument about sequence. The organisations that will get the real return are not the ones that move fastest. They are the ones that answer the human questions first, before the technology forces the issue.
The question I keep coming back to is this: if your AI strategy were to fail, what would the postmortem actually say? If the honest answer is that it would blame the data, the model, or the integration, the decision that needs examining probably has nothing to do with any of those things.
I could be wrong about this. If you have seen organisations succeed with AI despite unclear decision-making or fragile governance, I would genuinely like to know how. The pattern I have observed is consistent, but it is not exhaustive. What is your experience?


