When Progress Feels Busy but Not Better
Strategy days feel decisive.
Ambition is clear. Priorities are articulated. Trade‑offs are debated. Slides are refined. Language is tightened. Alignment appears strong.
Six months later, the organisation is undeniably busy. Initiatives are underway. Reports are circulating. Executive forums review progress with regularity. And yet a question gradually begins to surface:
Why aren’t we seeing the progress we expected by now?
It’s not that work isn’t happening. It is.
It’s not that people aren’t committed. They are.
The discomfort is harder to pin down. The original intent of the strategy - the shift in performance, capability, risk posture, or market position - is difficult to recognise inside the activity now that it is underway.
Different parts of the business seem to be running their own race. Product-related initiatives appear to be advancing, at least on the surface. Finance keeps tightening targets and pushing for short‑term performance, sometimes clawing back investment elsewhere in the name of optimisation or 'stretch targets'. Technology focuses on infrastructure that rarely translates into visible improvements for employees. Transformation shifts focus as priorities evolve. HR refines processes and programs to ensure efficient delivery, while longer‑term capability and culture building quietly lose funding.
Each function can explain what it is doing. Each can point to effort. Each can also point, if pressed, to somewhere else as the source of friction or lack of progress.
Collectively, however, the organisation can feel less aligned than it did on the day the strategy was launched.
It's not that the strategy has been abandoned. It has been gradually reshaped in translation.
In a previous piece, I argued that impact measurement often disappoints because intent and work design drift apart. One reason for that drift is simpler - and more structural - than most organisations acknowledge:
Intent rarely survives intact as it moves through layers.
What often looks like weak execution is something more structural: the strategy itself has changed form as it moved through the organisation.
The Easy Diagnosis: It Must Be Execution
The default explanation is usually execution.
Perhaps the strategy cascade wasn’t clear enough.
Perhaps middle management didn’t fully align.
Perhaps priorities became crowded.
A familiar diagnosis is made: somewhere between articulation and action, discipline weakened.
Leaders go looking for answers in places like employee feedback. Annual engagement surveys are opened with renewed intensity. A red score on “strategy clarity” becomes the headline. Patterns are quickly inferred. Conclusions are drawn with confidence.
If employees say they don’t see strategy reflected in decisions or resource allocation, or that change isn’t being managed well, the interpretation becomes: they don’t understand the strategy.
Data points begin to reinforce existing narratives. Rarely does anyone pause to ask whether those responses might reflect structural tension rather than misunderstanding.
From this perspective, the remedy is equally familiar. Clearer messaging. Sharper KPIs. Stronger accountability. Tighter governance.
Another common response is to double down on delivery. If outcomes aren’t yet visible, the instinct is to accelerate activity. Increase cadence. Tighten milestones. Add reporting. Push harder. After all, progress must come from effort.
What’s rarely asked in that moment is an important (and sometimes politically uncomfortable) question:
Are we still solving the right problem?
It is easier - and often more comfortable - to intensify existing activity than to reconsider direction. Especially when significant time, budget, and professional identity are already invested.
So organisations become more efficient at executing work that may no longer reflect the original intent. From the outside, this looks like commitment. In practice, this is where drift accelerates.
Over time, the organisation can become highly disciplined at delivering a version of the strategy that no longer reflects the original choice. More often, what has shifted is the meaning of the strategy in day‑to‑day decisions - long before anyone labelled it an execution problem.
The risk is subtle but significant: leaders end up judging progress against an intent that has already been reshaped inside the system.
How Strategy Changes Without Anyone Admitting It
While strategies are meant to articulate clear competitive positions, not all begin life this way.
Some begin as pillars, themes, or focus areas. Important, sensible directions, but not always anchored in a distinct theory of how the organisation intends to win.
When the starting point is already activity‑shaped, drift begins almost immediately. The 'strategy' is translated into projects before it has fully crystallised as intent.
Even when the strategic intent is sharp at the top, something else happens.
Intent rarely collapses in a single, dramatic moment. It degrades incrementally.
A strategy articulated at enterprise level is necessarily broad. It speaks to positioning, performance, resilience, or transformation.
The problem isn’t that people ignore the strategy. It’s how they interpret it.
Interpretation is not a flaw in the cascade - it is the point of it. Senior leaders are not meant to dictate activity at every level. They are meant to set direction and allow others to apply judgment. The risk emerges when intent is not sharp enough to constrain that judgment. In the absence of clear strategic choices that meaningfully narrow options, interpretation gravitates toward existing beliefs, incentives, and local priorities.
And so, the next layer translates their interpretation of the strategy into portfolio priorities. The next translates those priorities into initiatives. The next into deliverables, milestones, and performance objectives.
At each translation point, interpretation enters. Language narrows. Ambiguity is resolved. Assumptions are filled in. Some elements are emphasised. Others fade.
Simultaneously, incentives and constraints filter what receives attention. Local leaders attend to what they are measured on, what they have capacity to influence, and what reduces immediate risk (be it to the organisation or to themselves). Work that aligns cleanly with those realities is amplified. Work that doesn't fit quietly recedes.
When interpretation drifts too far from original strategic choice, the organisation may still look busy, but it is no longer competing on the terms it set for itself.
None of this is sabotage. It looks like focus and planning.
The problem is that this focus and planning happens locally. When each part of the organisation optimises for its own priorities, overall coherence weakens unless something is deliberately holding it together.
By the time intent reaches the level where impact is assessed, it may already mean something different.
This is structural drift.
Structural drift is the gradual reinterpretation of strategic intent as it passes through layers of translation, incentives, and local focus without anyone formally deciding to change the strategy.
It also explains why isolated data points can be misleading. When intent fragments across layers, survey results, KPIs, and performance metrics begin to describe local focus rather than overall system coherence. Viewed in isolation, they appear rational. Viewed together, they often reveal misalignment.
And if intent has shifted in form, what exactly are we evaluating when we later ask whether the strategy “worked”?
Why This Keeps Happening (Even in Well‑Run Organisations)
We've explored how drift shows up. The harder question is why it is so consistent.
Structural drift does not require incompetence. It only requires normal organisational mechanics and human psychology.
None of this is unusual. It is what happens when broad intent meets layered governance, performance cycles, and human judgement.
Firstly, translation often reduces complexity. Senior strategy is deliberately broad because it must accommodate uncertainty. But broad intent is uncomfortable at operational levels. Teams need specificity. They need milestones, owners, deadlines. So ambiguity is resolved, and in resolving it, intent is narrowed.
Secondly, incentives create gravity. Performance systems typically reward what can be measured within annual cycles. Budgets are approved in discrete increments. Targets are reviewed quarterly. Even leaders with long‑term ambition operate inside short‑term accountability structures. Over time, those structures exert more influence on behaviour than the original strategic narrative.
Thirdly, identity and sunk cost play a role. Once a direction has been chosen and resources committed, it becomes psychologically difficult to revisit underlying assumptions. Questioning whether the work still reflects intent can feel like questioning prior judgment. So momentum continues.
Finally, local optimisation feels like a responsible step. Teams solve the problems in front of them. They protect their own performance metrics. They manage risk within their remit. Each decision is rational in context. Collectively, however, those rational decisions compound into divergence.
None of these forces are dramatic. They are steady, which is why drift is rarely experienced as a rupture. It is experienced as gradual reinterpretation — one you don’t notice unless you’re specifically watching for it.
By the time the annual review arrives, the strategy itself has not necessarily changed. What does shift, though, is how evidence of success is framed in reporting. Teams scramble to demonstrate achievement against visible targets (revenue, margin, product launches) while less tangible shifts (quality, change) are supported with thinner data and more optimistic interpretation. The gap may be recognised internally, but emphasis gravitates toward what is defensible to the Board within the remuneration cycle.
Why the Obvious Fixes Often Make It Worse
When leaders sense misalignment, the instinct is understandable.
Clarify the strategy cascade.
Tighten governance.
Increase delivery discipline.
Add recognisable metrics.
Each of these responses appears responsible. Each signals control. Some of these levers address structure. Others intensify performance. The latter are easier to apply, but more dangerous when intent has already drifted.
Let's look more closely at what they actually do.
Clarifying the cascade can be an effective way to realign wayward efforts if strategy awareness is low. If it's not, then there can be unintended impacts depending on who is doing the clarifying. If it is reinforced from the top, it can simply highlight the growing gap between original intent and current reality — without addressing why the gap emerged. If it is reinforced at functional levels, it re-communicates the already translated version of the strategy, further embedding the drift rather than correcting it.
Tightening governance often feels responsible - a way to rein in personal projects and refocus efforts on strategy aligned work. But it can also centralise decision rights around those who have already shaped the operational interpretation. It reduces variance, but it can also reduce the space for teams to question whether the chosen initiatives are still the right ones.
Increasing delivery discipline improves consistency when translation is sound. The risk emerges when discipline sharpens activity without revisiting the strategic direction. Milestones tighten. Reporting improves. Activity becomes more visible. Whether that activity is plausibly delivering the intended shift is rarely examined with the same rigour.
Sharper metrics can clarify poorly defined outcomes. But metrics tied to visible consequence quickly dominate attention. Under pressure - particularly in the lead-up to Board reviews and remuneration discussions - leaders gravitate toward measures that are clean, defensible, and favourable. Outputs travel upward and complications fade.
Once resources, identity, and reputational capital are invested in a direction, it becomes psychologically and politically harder to question it. Challenging the direction can be interpreted as resistance. Over time, dissent narrows. Conversations become safer. The system protects the story it has already told.
From the outside, the organisation looks decisive and aligned. Internally, it may be reinforcing a translation of the strategy that no longer reflects the original intent.
The issue isn’t weak execution. It’s that many of the standard fixes strengthen control over activity without re-examining whether that activity still reflects the strategic intent.
What This Means for Leaders
If structural drift is predictable, the implication isn’t to abandon cascade processes or demand heroic execution. It is to recognise that intent does not self‑preserve.
Clarity at the top is necessary. It is not sufficient.
A small number of organisations manage this well. They anchor relentlessly to a small number of non‑negotiable strategic choices and allow those choices to constrain decisions consistently over time. Reinforcement is cultural as much as procedural. Trade‑offs are surfaced repeatedly, not only during annual planning cycles.
Others rely on formal strategy processes to do the work of preservation. Once the slide deck is approved and cascaded, intent is assumed to be embedded.
The difference between these two types of organisations isn’t communication volume. It is whether the organisation treats intent as something that must be actively maintained against the natural pull of local optimisation and short‑term pressure.
That work is less about intensity and more about coherence.
The Real Question We End Up Avoiding
Impact measurement struggles not only because work and incentives drift from intent.
It struggles because intent itself rarely survives intact long enough for impact to be assessed against it.
When leaders later ask whether a strategy “worked,” they are often evaluating outcomes against a version of intent that no longer exists in the system.
Execution did not necessarily fail. The reference point moved.
Until organisations recognise structural drift as a predictable force — and treat preservation of intent as an active discipline rather than an assumption — debates about execution and measurement will continue to circle the wrong question.
In the next piece, we’ll look at what really happens when leaders try to prove that an initiative “worked”, and why the answer is often far less clear than the Executive Report suggests.