From Strategy to Scorecard
A new five-year strategy has been announced. The language sounds familiar: resilience, disciplined growth, and long-term value creation. Within weeks, it is translated into divisional plans, broken into initiatives, assigned metrics, and filtered into an enterprise dashboard that will govern progress.
Along the way, something happens. Intent is clarified, then simplified. Trade-offs are flattened into corporate targets. Ambiguity rarely survives the journey upward, and so is translated into numbers that feel discrete and familiar. By the time the strategy reaches the board pack, it already looks neat.
Twelve months later, a bold expansion has delivered double-digit growth. An immaculate board deck with a clean narrative arc is presented: strategic conviction, disciplined execution, and measurable uplift. The leader behind it is praised for judgement and promoted within months.
In another division, a structural redesign lands just as demand softens. Margins compress during the transition. The initiative is described as “disruptive,” the timing is questioned, and the leader is quietly moved on.
Both outcomes are treated as evidence: one proves strategic clarity, the other flawed judgement. The discussion moves on.
What's missing from the discussion is what else was moving at the same time:
- A competitor exited the market.
- Regulatory conditions shifted.
- A prior investment matured.
- Sector demand softened.
- Timing, as it often does, played a role.
The numbers are not wrong, but the story that forms around them is selective. Once it hardens, it doesn’t just explain the past. It begins to define what “good” looks like.
In an earlier piece, I argued that impact measurement often fails when intent drifts. This piece looks at how that drift can shape reporting and success measures when results appear to be good.
Why Simple Success Stories Win
Organisations like linear explanations: A decision was made. Capital was allocated. Execution followed. Outcomes improved. Therefore, the decision was good.
The reverse feels equally logical.
This model is attractive because it is easily governable. Boards can defend it, executives can reward it, and performance reviews can reference it. It fits inside annual reporting cycles and compensation frameworks. It rewards what can be understood quickly. On the other hand, measures that require nuance often struggle to secure airtime.
Clean stories also travel well. They are highly promotable, and create clarity in complex environments. They give investors, regulators, and employees something reassuringly coherent. Nuance, on the other hand, does not travel as well. “It worked, but partly because of favourable external conditions that may not continue” rarely survives the first slide deck review.
And so, like rising ice cream sales and increased shark attacks, the organisation compresses correlation into causation. Not deliberately, but very efficiently. Unfortunately, such efficiency has consequences.
When explanations become streamlined and simplified, accountability narrows to what can be easily reported. The individuals best positioned to shape the narrative — those presenting to boards, investors, or promotion panels — begin to influence how “strategic impact” is defined, what is considered successful, and what is not. Over time, the definition of success subtly narrows to what can be clearly attributed to visible leadership action. Less visible contributors, such as timing, inherited advantage, system strength, or collective capability, recede into the background.
The story does more than explain success. It allocates credit. And when credit is allocated consistently to the same kinds of visible leadership actions, it consolidates influence around those who are seen to have caused the result. Over time, those individuals are not just rewarded. They become reference points for what leadership should look like.
When Results Start Defining Leadership
Attribution error is usually framed as an analytical problem: complex systems produce messy causality. Or in organisational shorthand: “There are multiple factors at play, but we can’t report all of that, so let’s report what we can currently measure.”
The deeper issue is a structural one. When simplified success narratives become the dominant explanation for results, they do more than misinterpret outcomes: they determine which behaviours are elevated.
If visible boldness coincides with good numbers, it looks validated. If the numbers are down, the same boldness looks reckless. If cautious moves coincide with deteriorating conditions, they look weak. If conditions improve, the same caution looks considered.
Over time, the organisation is not just rewarding outcomes. It is selecting for a particular style of leadership. Every promotion, funding decision, and executive appointment quietly communicates what “good” looks like.
It also communicates what is acceptable to question (and what is not). If strategic impact is defined primarily through confident causal stories, those who complicate the story risk being labelled obstructive, overly cautious, or “not commercially aligned.”
No one needs to silence dissent directly. The boundaries of disagreement deemed acceptable narrow on their own, and what looks good is often what can be told cleanly. Meanwhile, what cannot be told cleanly starts to look like underperformance.
Behavioural Selection Effects
When this happens, certain behaviours compound advantage.
Leaders who can tell clean, confident stories rise faster. Their narratives align neatly with outcome goals. Their decisions appear decisive, causal, and attributable.
Nuanced operators face a challenge. They talk about conditions, constraints, trade-offs, and risk exposures. They describe second-order effects. They hedge where evidence is incomplete. In a performance review, that can sound like hesitation.
Caution begins to resemble doubt. Reflection begins to resemble lack of conviction or a tendency to overcomplicate. Conditional language travels poorly when promotion panels are scanning for employees who delivered “strategic impact.”
No one needs to instruct leaders to behave this way; the system makes the pattern obvious. If confidence and conviction are repeatedly rewarded while caution and caveats are quietly sidelined, people will adjust their behaviour or leave.
How A Winning Story Builds Power
The change doesn't happen like a sudden storm. The effect is so gradual most organisations don't realise it's happening until something goes wrong.
- Year one: A series of bold restructures, reinvestments, or cost reductions produce visible end-of-year uplift. The improvement is attributed to decisive leadership and disciplined execution. The attribution becomes the reference point for what effective leadership looks like.
- Year two: The leaders associated with those moves gain greater influence over what gets funded and approved. Investment concentrates around protecting momentum. Work perceived as disruptive or less immediately visible struggles to secure backing.
- Year three: Underinvestment in foundational capability begins to create friction. Delivery slows. The emerging strain is attributed to operational leaders “not executing,” rather than to the earlier design choices that narrowed where attention and capital flowed.
Once success has been declared, dissent becomes socially costly. In most organisations, no one wants to be the person who challenges a winning story. Questioning the narrative can look like questioning the competence of those now elevated by it, and proven winners are rarely destabilised.
When success stories gain narrative gravity, they begin pulling decisions, promotions, and capital toward them. Investment allocation narrows around familiar approaches. Promotion patterns align with leaders associated with visible wins. People attempting to deliver initiatives that interfere with “successful teams” are seen as impractical or uncommercial.
Four years in, the organisation appears to be running efficiently. It knows what “good” looks like. It funds variations of the same theme and appoints leaders who resemble prior successes.
Five years in, something starts to surface. Strategic optionality has narrowed: investment in the foundational work required to sustain growth was redirected into maintaining the conditions that made the last success possible. As a result, the organisation is less adapted to conditions that meaningfully differ.
Leaders start to look and sound the same. Confidence is everywhere while curiosity is harder to find. Ideas that don’t fit the established formula struggle to get airtime, because they don’t resemble the kind of moves that have previously been rewarded.
From the inside, this feels disciplined and focused, but from the outside, it can look like the organisation is doubling down on a direction that no longer makes sense. Organisations rarely realise they've become over-dependent on one formula until the market changes and the conditions that made it work disappear.
Why Activity Is Easier to Defend
This dynamic is further amplified by how performance is reported. It’s easier to defend what you did than to interrogate what actually drove the result.
Activity-based reporting is common because it is clean. It focuses on things like initiatives launched, milestones delivered, revenue uplift achieved, or costs reduced. All of this is attributable and fits neatly into ownership structures.
Outcome-based reporting is more awkward. If done well, it resists the urge to compress complex performance into a single, confident explanation. It asks not just what happened, but what conditions made it possible. That means acknowledging market tailwinds, regulatory shifts, and legacy strengths alongside deliberate decisions. It also means recognising that most outcomes emerge from choices to invest in longer-term capability rather than optimise short-term wins. Credit and accountability become less cleanly owned. Causality becomes harder to summarise in a few sentences. That is why this form of reporting introduces friction: it makes the narrative compression discussed earlier more difficult.
It also forces uncomfortable questions about repeatability. Activity tells a story about what we did, while outcome asks what would have happened anyway. One is much easier to highlight and celebrate. The other is much harder to compress into a promotion case.
In addition, activity reporting does more than simplify measurement: it protects the narrative. If the story is that bold leadership drove success, then reporting focussed on initiatives delivered and programmes executed reinforces that same story of bold leadership driving success. It sustains the identity of those associated with it.
Outcome reporting, by contrast, can destabilise the narrative. It asks uncomfortable counterfactual questions, surfaces external dependencies and distributes credit more diffusely.
In systems that rely on stable leadership identity and defensible board narratives, stability is usually prioritised. Not because anyone is manipulating the data, but because complexity introduces a political risk that leaders are reluctant to take.
Why Systems Default to Simplicity
In most cases, none of this is by deliberate design. Organisations operate under reporting cycles, investor scrutiny, and governance expectations that all favour clarity. Compensation structures demand differentiation. Leadership pipelines need visible evidence of impact to justify promotion.
Since ambiguity is difficult to reward and conditional success is hard to justify, organisational systems and processes default to clear-cut answers and clean explanations. This simplicity feels like control, and control feels like competence. Over time, the preference for clear-cut over nuance becomes embedded.
Those closest to capital allocation and board communication tend to influence which explanations travel upward. They shape what directors recognise as credible leadership, and therefore what they back, reward, and protect.
Blind spots rarely announce themselves as blind spots. They are created when one explanation becomes so widely accepted that alternative explanations stop being explored. Over time, repeated reinforcement turns a partial truth into an unquestioned assumption about what drives performance, and what no longer gets examined becomes invisible.
When assumptions become embedded at board level, they stop being debated and start being defended. Directors often believe they are rewarding performance. In reality, they may be reinforcing the explanation of performance.
What This Designs Over Time
When this is the case, the issue is not that attribution is complicated, but that simplified success stories become a cultural design force. They influence who rises and shape how capital flows. They recalibrate risk tolerance and teach emerging leaders what to display and what to suppress.
An organisation may believe it is selecting for performance, while in practice it is selecting for confidence over judgement. If it repeatedly translates correlation into causation, it will gradually reshape:
- Its risk tolerance
- Where it allocates funds
- Who it promotes as leaders
- Its definition of strategic competence and performance
It may believe it is building strength while quietly narrowing the range of leaders capable of challenging it.
Strategic fragility rarely comes from a single bad decision. It emerges from years of reinforcing one successful pattern until alternative patterns become unfamiliar, underfunded, or politically unsafe to pursue. When conditions eventually shift (as they always do) the organisation can find itself led by leaders selected for yesterday’s environment, operating within capital structures optimised for past assumptions, overseen by boards accustomed to a particular narrative of success. At that point, correction is possible, but it is slower, more expensive, and more destabilising.
Attribution error, in isolation, is an analytical inconvenience. Repeated over time it becomes a selection mechanism, and selection mechanisms design culture more powerfully than any values statement ever could.
When simplified success stories quietly determine who rises, what gets funded, and which risks feel legitimate, reporting stops being descriptive and becomes formative.
The question, then, is no longer about whether attribution is complex to report. It is whether the stories your organisation tells about success are shaping the kind of leadership, and the kind of resilience, it will have when the environment no longer cooperates.