

Why the Concepts that Test Best Still Fail After Launch
By Jodie Shaw

In large organisations, concept approval is often treated as a settled science. An idea goes into testing, clears a purchase-intent threshold, shows enough differentiation to look credible, and moves forward. A build cycle is funded with the confidence that demand has been validated.

When adoption underperforms after launch, the explanation tends to arrive later and elsewhere. Justifications like “the execution was uneven,” “the category shifted,” or “competitive pressure intensified” are bandied about in post-mortem meetings. Rarely is the original approval logic interrogated, even when the pattern repeats.

This is not about dishonest respondents, bad research, or incompetent delivery. It is about a structural mismatch between what purchase intent measures and what adoption actually requires. Purchase intent captures stated openness to an idea in isolation. Adoption requires replacing something already embedded in daily behaviour. The distance between those two conditions is where many concept-led initiatives quietly lose a quarter of build time.


How intent became decisive

Purchase intent did not become dominant because it is uniquely predictive. It became dominant because it is administratively effective. It produces a single number that can be benchmarked, compared, and defended. It fits neatly into stage-gate processes and portfolio reviews. It allows decisions to be made without requiring judgment about behavioural change.

In organisations where accountability is distributed across functions and timelines, defensibility matters. A concept that clears an intent threshold appears justified. The decision to fund it can be explained upward and laterally. When performance disappoints, responsibility migrates. The metric remains intact.

This is not a flaw in intent measurement. It is a consequence of how metrics are used inside governance systems. Intent was elevated from signal to gatekeeper because it simplified approval, not because it resolved feasibility. Once that elevation occurs, the organisation implicitly treats appeal as a substitute for adoption readiness. That substitution is rarely made explicit, but its effects are visible after launch.

According to Bain & Company, roughly 80% of new product launches fail to meet internal revenue targets. Harvard Business School professor Clayton Christensen estimated that 30,000 new consumer products are launched each year and 80–95% fail. The precise percentage varies by category, but the pattern remains the same. Organisations validate appeal, while markets enforce replacement.

The behavioural reality intent does not capture

Concept testing typically evaluates ideas under conditions that remove cost. Respondents are not required to abandon an existing solution, reconfigure routines, or absorb operational risk. They are not asked to live with the consequences of adoption. They are asked whether the concept makes sense and whether they would consider buying it.

In real settings, adoption is rarely additive. A new product displaces something, even when positioned as an enhancement. That displacement carries effort. Sometimes it is financial. More often, it is cognitive, procedural, or reputational.

Users may recognise the logic of a new concept and still resist it because what they use today works well enough. They may hesitate because learning something new consumes time they do not have, because early performance is uncertain, or because reversing a visible decision carries social or professional cost. None of these constraints is meaningfully captured by intent measures.

This is why intent often predicts interest but not behaviour. It reflects how attractive an idea appears before any change is required. Adoption begins only when replacement is unavoidable.

Switching cost as the binding constraint

Switching cost is frequently discussed as a pricing or contractual issue. In practice, it is the total effort required to move from one state to another. It includes the time required to learn a system, the disruption to established workflows, the risk of error during transition, and the uncertainty surrounding whether promised benefits will materialise.

For many users, the most significant component of switching costs is not the expense but the exposure. Adopting a new solution often requires admitting that the current one is insufficient, committing to a learning period, and accepting the possibility of regret. These costs are invisible in concept stimuli but decisive in behaviour.

This is where intent fails as a gatekeeper. It cannot distinguish between concepts that integrate into existing behaviour and those that require users to reorganise how they work. Two ideas may test similarly on appeal while carrying very different switching costs. One will travel. The other will stall.

The quarter that is lost

For senior product and marketing leaders, the consequence of misjudging adoption is not reputational; it is operational. A funded quarter represents a finite allocation of engineering capacity, marketing focus, and organisational attention. When that capacity is spent advancing a concept users will not change their behaviour to adopt, the loss is permanent.

The organisation does not simply reset. Other opportunities are delayed or displaced. Over time, repeated outcomes of this kind degrade confidence in innovation programs and in the research that supports them.

This is why the gap between concept testing and performance has become a leadership issue rather than a technical one. The problem is not that the research failed to execute. It is that the approval process relied on a metric that avoided confronting the primary constraint on adoption.

Where concept testing falls short

Concept testing evolved in an environment where novelty and differentiation were often sufficient. In those conditions, measuring appeal captured much of what mattered. As categories mature, the primary barrier shifts: awareness becomes less scarce than willingness to change.

Many concept testing practices have not adjusted accordingly. Purchase intent remains central because it is familiar, scalable, and easily summarised. Switching cost, by contrast, is contextual and uncomfortable. It varies by user, role, and workflow, and it resists reduction to a single number.

The result is a systematic bias toward what is easy to measure rather than what determines outcomes. Concepts are optimised for attractiveness rather than adoptability.

An illustrative decision scenario

Consider an illustrative example, abstracted from recurring patterns across large organisations. Two concepts test within a few points of each other on purchase intent. Both are seen as relevant. Both appear differentiated. Either could plausibly be funded.

The first concept integrates into existing tools and routines. It requires limited setup and produces a visible benefit early. Adoption does not require users to abandon what they already do; it modifies behaviour incrementally.

The second concept promises greater long-term value, but only if users replace an entrenched solution. It requires onboarding, configuration, and a period of adjustment before benefits are realised. Switching cost is substantial, even if the eventual upside is compelling.

On intent alone, these concepts appear equivalent. In practice, they are not. The first is likely to accumulate usage. The second may attract trial but struggle to convert it into commitment.

When purchase intent functions as the gatekeeper, this distinction is lost. When switching cost is examined explicitly, it becomes determinative.


Reframing adoption friction

Adoption friction is often broken into discrete checklist items: learning curve, trust, workflow compatibility, time to value. In reality, these compound into a single question: how much effort must the user absorb before the concept becomes worthwhile?

Learning curves, trust thresholds, workflow compatibility, and time-to-value are not independent variables. Instead, they compound. A concept that delays payoff increases perceived switching cost. A concept that requires trust before delivering a benefit increases the risk of regret. A concept that disrupts routines magnifies cognitive and operational burden.

Treating these as discrete checklist items understates their effect. Treating them as expressions of switching cost concentrates attention on the constraint that actually governs adoption.

The political difficulty of change

Replacing purchase intent as the gatekeeper is not analytically difficult. It is politically uncomfortable. Intent allows approval without forcing explicit judgment about behaviour change. Switching cost does not.

Once switching cost is acknowledged, trade-offs surface quickly. Either the organisation invests deliberately in reducing it through design, integration, or incentives, or it accepts slower adoption. What becomes harder to justify is funding concepts on the assumption that appeal will compensate for the effort required.

For research leaders, this requires a shift in posture. The role moves from validating attractiveness to interrogating feasibility. That posture can create tension in systems that reward momentum over constraint. It also restores credibility with executives who have funded too many strong concepts that failed to travel.

What changes in practice

In practical terms, concept testing must move closer to observed behaviour. Every concept should be evaluated against what it would realistically replace. Measures should surface perceived effort, exposure, time-to-value, and interest. Outputs should clarify trade-offs rather than smooth them away.

This does not require abandoning quantitative discipline. It requires redirecting it. Instead of optimising for appeal, research must illuminate constraints.

A concept should advance because switching costs have been confronted and deliberately addressed, not because they were invisible in testing.

The cost of maintaining the status quo

When a quarter is spent building something users will not switch to, the organisation does not recover that time. It absorbs the loss and proceeds. The approval logic remains intact. The metric continues to function as designed.

The question is not whether purchase intent has value.

It is whether appeal should continue to outrank replacement in determining what gets built.

If this pattern looks familiar, the solution is not more intent data. It is better judgment about adoption.

Kadence International works with product teams who are tired of approving concepts that test well and stall later. Our approach to concept testing does not stop at appeal. We surface switching cost, replacement behaviour, workflow disruption, and time-to-value before funding decisions are made.

That means confronting feasibility early. It means quantifying friction, not smoothing it away. It means identifying whether a concept integrates into existing behaviour or requires users to reorganise how they work — and what that will realistically demand.

If you want concept research that protects build capacity rather than justifying it, we should talk.

FAQs

Why does purchase intent often fail to predict real adoption?

Purchase intent measures stated interest in isolation. Adoption requires behavioural replacement under real constraints. Those are not the same condition.

In testing, respondents are not required to abandon current tools, reconfigure workflows, or accept performance risk. In market, they are. That gap explains why strong intent scores routinely translate into weak uptake, a pattern reflected in Bain & Company’s finding that roughly 80% of new product launches fail to meet internal revenue expectations. Interest is cheap. Replacement is not.

When organisations treat expressed openness as proof of behavioural readiness, they measure appeal and discover resistance later.

What is 'switching cost' in product adoption?

Switching cost is the total effort required to move from one operating state to another. It includes time to learn, workflow disruption, integration complexity, performance uncertainty, and reputational exposure if the change fails.

Most of it is not financial.

For many users, the largest cost is admitting the current solution is insufficient and publicly committing to a replacement that may not deliver. That exposure rarely appears in concept testing but governs real decisions.

If adoption requires reorganisation before payoff, switching cost is the constraint, regardless of how attractive the concept appears.

How can concept testing better account for adoption friction?

Concept testing must evaluate ideas in the context of what they would replace, not in isolation from existing behaviour. That means explicitly measuring perceived effort, time to value, integration burden, and risk of regret alongside purchase intent.

When friction is measured directly, trade-offs surface that intent alone conceals.

Research should clarify whether a concept integrates incrementally or demands behavioural reorganisation, because those are fundamentally different adoption profiles even when headline scores are similar. If that distinction is not visible before funding, it will surface in usage data after capacity has already been spent.

Metrics do not eliminate judgment. They make its absence visible.

Why do large organizations continue to rely on purchase intent metrics?

Because purchase intent is administratively efficient. It produces a single, defensible number that fits neatly into governance systems, stage-gate reviews, and portfolio comparisons without forcing explicit decisions about behavioural feasibility.

It simplifies approval in distributed organisations where accountability is shared across functions and timelines.

Once embedded, the metric protects the process as much as it informs it. Replacing it requires confronting uncomfortable trade-offs about adoption speed, integration investment, and risk tolerance. Few systems volunteer for that friction.

Institutional convenience is often mistaken for predictive strength.

What should senior leaders evaluate before funding a new concept?

They should ask one direct question: what must the user stop doing in order to adopt this?

If the answer involves replacing entrenched workflows, retraining teams, or accepting delayed payoff, then switching cost is the primary risk, not awareness or differentiation. Data from the U.S. Bureau of Labor Statistics (2023) shows average employee tenure at just over four years; within that window, people optimise for stability, not experimentation.

A concept should advance only when the organisation has a credible plan to reduce or absorb switching costs through integration, incentives, or staged rollout. Without that plan, intent is noise.

Appeal does not override effort.