Why Your Path-to-Purchase Data Is Built on a False Assumption
Jodie Shaw

A consumer brand paused retargeting in two matched regions for six weeks as part of a routine incrementality test. Spend was reduced to zero, channel conversion tracking collapsed, and internal dashboards showed an immediate, severe drop in attributed revenue. Channel owners flagged the change as a crisis. Budget reallocations were discussed and forecasts revised.

But here’s the thing: actual revenue did not move. Order volume held steady, and new customer acquisition remained within historical variance. From the perspective that matters most, the business was unaffected, and only the reporting narrative had collapsed.

Results like this surface when marketing teams run properly controlled removal tests rather than relying on attribution models. What they expose is not a measurement glitch but a structural flaw in how modern marketing teams interpret their own data. Attribution systems do not measure what causes demand. Instead, they measure where demand becomes visible. They capture the final observable steps of a decision process that began elsewhere and then recast those steps as influence.

Retargeting is the clearest example. It follows users who have already demonstrated interest, places itself at the end of the journey, and absorbs credit for purchases that were already likely to happen. When such a channel is removed, and revenue remains stable, the implication is not subtle. The channel was harvesting existing demand rather than generating new demand.

This pattern is not rare. It appears across categories and spend levels when subtraction is properly tested. The persistence of attribution-led budgeting in the face of this evidence is not an analytical oversight. It is a governance choice.


What Attribution Is Actually Measuring

Attribution systems map recorded touchpoints and distribute credit among observable interactions. That function is useful for reporting and coordination. It was never designed to determine which activities led to a purchase.

When a system assigns value based on how close a touchpoint sits to conversion, it automatically privileges whatever happens last. Channels that capture intent late in the journey will dominate dashboards even if they had no influence on whether that intent existed in the first place. Retargeting benefits most from this structure because it is architected to follow users who are already leaning toward purchase and position itself at the moment of action.

This design choice quietly reshapes which channels look valuable and which ones get cut. Late-stage capture channels appear indispensable. Early-stage demand-shaping activity appears inefficient because its effects are diffuse, delayed, and difficult to observe within conventional reporting windows. Over time, this pushes spending away from demand creation and toward demand harvesting, regardless of which actually changes outcomes in the market.

The error is not that attribution models are poorly calibrated. It is that they are being asked to answer a question they were never built to answer. They can tell you where the behavior occurred. They cannot tell you what caused it. Treating those two things as interchangeable has created a systematic distortion in how marketing budgets are set.

Once that distortion becomes embedded in planning cycles, it stops being a measurement problem and becomes a strategic one. Channels that do nothing to expand demand continue to receive budget increases because they dominate dashboards. Channels that shape demand upstream struggle to defend their budgets because their contribution cannot be captured by the same reporting logic.

The Financial Distortion This Creates

Attribution outputs do not remain descriptive for long. They become prescriptive. Apparent ROI becomes justification for increased spend, and marketing teams learn quickly which metrics protect their budgets and which do not.

When late-journey channels dominate attribution dashboards, they gain automatic leverage in planning conversations. Spend flows toward whatever appears closest to revenue. Over time, this builds a portfolio skewed toward capture rather than creation, even when the former adds little incremental value.

Channels that harvest demand produce clean short-term returns, which makes them look reliable. Channels that shape demand often require sustained investment before results appear, which can make them appear volatile. The result is a slow migration of capital away from activities that expand the market and toward activities that simply skim it.

As upstream activity loses funding, overall demand growth weakens. As demand growth weakens, late-stage capture channels become even more dependent on harvesting the same shrinking pool of high-intent users. Their apparent ROI remains high because attribution continues to over-credit them, even as the underlying business stagnates.

What looks like optimization is often just an internal reallocation within a fixed-demand envelope. Revenue appears stable, and dashboards look productive, but the brand quietly loses its ability to generate new demand at scale.

Why Multi-Touch Does Not Fix the Problem

Weighting models are often presented as a corrective to last-click bias. In practice, they operate on the same underlying traces and reproduce the same structural distortion in a more elaborate form.

Multi-touch attribution takes recorded behavior and recasts it as implied influence. It assigns percentages to touchpoints based on position, recency, or frequency without any evidence that those dimensions correspond to persuasion. The model may look more sophisticated, but the causal assumption remains unchanged. Observed sequences are still being treated as if they represent influence rather than documentation.

There is no empirical basis for treating impressions as independent variables or for assuming that persuasion accumulates additively across touchpoints. There is no proof that the journey itself is the persuasion rather than the residue of a decision already underway. Multi-touch models simply spread credit across the same biased dataset and call the result more balanced.
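A toy illustration of that point, using made-up channel names: both rules below operate only on the recorded sequence of touches, so the "credit" they return is a function of position, not of whether any touch changed the buyer's mind.

```python
# A single recorded journey: four observed touches, no causal information at all.
journey = ["paid_social", "organic_search", "email", "retargeting"]  # retargeting happens to be last

def last_touch(touches):
    """All credit to whichever touch sits closest to conversion."""
    return {touches[-1]: 1.0}

def linear_multi_touch(touches):
    """Equal share per touch: a positional rule, not evidence of persuasion."""
    share = 1.0 / len(touches)
    return {t: round(share, 2) for t in touches}

print(last_touch(journey))          # {'retargeting': 1.0}
print(linear_multi_touch(journey))  # every touch gets 0.25, persuasive or not
```

Swapping the first function for the second changes how the same documentation is divided up. It does not add a single bit of evidence about influence.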

These systems persist not because they are analytically superior but because they are organizationally useful. They generate ranked channel lists, produce defensible ROI narratives, stabilize internal budget politics, and allow leadership teams to claim control over complex systems through apparently precise numbers.

In practice, multi-touch attribution functions less as a measurement upgrade and more as a governance mechanism. It protects existing budget owners, converts correlation into defensibility, and preserves the same capital allocation bias while giving it a more technical vocabulary.

A Second Subtraction Failure

The same pattern appears outside consumer marketing.

A mid-sized B2B software firm paused programmatic display advertising across two matched enterprise segments for eight weeks following a procurement dispute with an agency. Spend fell by ninety percent. Attribution dashboards showed an immediate collapse in pipeline contribution. 

But pipeline volume did not materially change. Lead quality held and conversion rates remained within historical variance. The only measurable effect was a modest increase in direct traffic from corporate IP ranges and a slight uptick in organic branded visits. The revenue variance remained within the same quarterly noise band as before the test.

Internal post-mortem analysis showed that more than eighty percent of users who had previously interacted with display ads still entered the funnel through direct navigation or email links during the blackout period. The display channel had not been generating new demand; it had been intercepting existing interest.

As with retargeting, the channel looked indispensable under attribution because it sat close to conversion. But when it was removed, the business outcome barely changed.

This is the difference between documentation and causation. Attribution models faithfully recorded where behavior occurred. They failed to identify whether any of those behaviors had altered the probability of purchase.

Where the Model Actually Breaks First

The claim here is not that late-stage channels never move volume. There are environments where they do.

Time-boxed promotions, flash sales, distressed inventory, impulse retail, and heavily discounted consumer goods operate under different demand mechanics. In those contexts, retargeting and programmatic display can surface offers at moments when customers are genuinely undecided and price-sensitive. Removal tests in those categories do sometimes show material lift.

What does not generalize is the assumption that those mechanics apply to subscription services, durable goods, enterprise software, healthcare, financial products, or any category with long consideration cycles and high perceived risk.

In those environments, the bulk of the decision work happens before the last click. Removal tests consistently show that late-journey capture channels mostly redistribute credit rather than generate incremental outcomes. They accelerate decisions that were already likely to happen rather than creating new ones.

This boundary matters because most marketing teams implicitly assume universality. They apply the same attribution logic across fundamentally different demand regimes, treating a flash-sale retailer and a B2B firm as if persuasion unfolds the same way in both.

That assumption leads teams to import budget logic from impulse environments into considered-purchase environments where it does not belong.


The One Rule That Makes Causation Operational

If attribution does not measure causation, it cannot be allowed to allocate money.

The operational rule is straightforward: a channel earns budget only if removing it demonstrably reduces revenue.

This rule replaces credit assignment with subtraction, replaces reporting with testing, and turns “high ROI” from a fact into a hypothesis that must survive removal.
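As a rough sketch of what "surviving removal" can mean in practice, the example below compares revenue in matched holdout regions where a channel was paused against matched control regions where it kept running, and puts a simple bootstrap interval around the estimated lift. The data, thresholds, and function names are hypothetical; a real test would also handle seasonality, pre-period matching, and the business's normal noise band.

```python
from statistics import mean
import random

def removal_test(control_revenue, holdout_revenue, n_boot=10_000, seed=0):
    """Estimate channel lift from a removal (subtraction) test.

    control_revenue: revenue per matched region/period where the channel stayed on.
    holdout_revenue: revenue per matched region/period where the channel was paused.
    Returns the observed lift and a bootstrap 95% interval around it.
    """
    rng = random.Random(seed)
    observed = mean(control_revenue) - mean(holdout_revenue)
    boots = sorted(
        mean(rng.choices(control_revenue, k=len(control_revenue)))
        - mean(rng.choices(holdout_revenue, k=len(holdout_revenue)))
        for _ in range(n_boot)
    )
    low, high = boots[int(0.025 * n_boot)], boots[int(0.975 * n_boot)]
    return observed, (low, high)

# Hypothetical weekly revenue (indexed) for two matched sets of regions.
control = [102, 98, 101, 99, 103, 100]   # channel kept running
holdout = [101, 97, 100, 98, 102, 99]    # channel paused

lift, (low, high) = removal_test(control, holdout)
# The channel "survives subtraction" only if the interval excludes zero.
print(f"lift={lift:.1f}, 95% CI ({low:.1f}, {high:.1f}), survives={low > 0}")
```

With this toy data the interval straddles zero, so the channel would fail the test; a real decision would rest on a properly powered experiment rather than six data points.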

When retargeting fails a subtraction test, it stops being treated as a growth engine and becomes what it actually is: a demand-harvesting utility. When a channel survives a subtraction test, it earns strategic status not because it looks productive on a dashboard, but because it measurably changes market outcomes.

Attribution software cannot enforce this rule because it was never designed to do so. It can only describe where behavior occurred, not what caused it. Controlled removal tests, market holdouts, and sequencing trials become the budget gatekeepers instead.

This change does not require new tools. Instead of allowing attribution outputs to justify spend, organizations allow subtraction results to veto it. Channels that fail removal tests lose funding regardless of how impressive their dashboards look.

Who Loses When Subtraction Becomes Law

The moment removal tests become budget gatekeepers, existing budget owners lose control. Channel leads who built influence on proximity metrics lose their leverage, reporting teams lose their authority, and CMOs lose the ability to present clean, ranked channel narratives to boards.

Budgeting becomes slower and more contentious. Experiments take time and results arrive with error bars. 

Teams that relied on late-stage metrics see their budgets shrink, and teams that operate upstream gain influence only if they can prove lift. The company stops rewarding whoever shouts “ROI” the loudest and starts rewarding whoever can survive subtraction.

None of this is comfortable. But that discomfort is the point. 

The Institutional Contradiction

Most boards say they want disciplined growth. Most CFOs say they want causality. Most marketing teams say they want accountability. But none of them are willing to accept subtraction.

Attribution persists not because it is analytically convincing but because it satisfies all three constituencies. It gives boards neat dashboards. It gives finance teams plausible ROI narratives. It gives marketing teams a defensible way to protect their budgets.

Subtraction threatens that equilibrium. That is why attribution remains in charge of capital allocation even in organizations that intellectually agree it is non-causal. 

Where This Leaves Leadership

The argument does not end in a recommendation. It ends in a choice.

Either continue allocating money based on systems that cannot distinguish creation from capture, or accept that some of your highest-performing channels will fail removal tests and lose funding.

Choosing subtraction means choosing to live without clean narratives, ranked channel lists, and comforting ROI tables. It means accepting slower decision cycles, messier forecasts, and visible uncertainty.

Continuing with attribution-led budgeting means accepting the opposite tradeoff. It preserves internal stability, protects existing budget owners, and locks capital into channels that look productive but do not change demand.

A Practical Path Forward

For organizations that want to move beyond attribution fiction, the first step is not to buy new software. It is to change the questions that dashboards are allowed to answer.

Subtraction needs to be institutionalized as a budget veto, not treated as a side experiment. Channels should face periodic removal tests as a condition of continued funding. Budget increases should be gated by demonstrated incrementality rather than by attribution performance.
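One way to picture the veto, continuing the earlier sketch: the removal-test result, not the attribution ROI, decides whether a channel keeps its funding, loses it, or must be retested. The thresholds and return values below are placeholders for whatever governance rules an organization actually adopts.

```python
def budget_gate(lift_low, last_test_age_days, max_test_age_days=180):
    """Hypothetical budget gate: subtraction results, not attribution ROI, unlock spend.

    lift_low: lower bound of the lift interval from the latest removal test.
    last_test_age_days: time since the channel last faced a removal test.
    """
    if last_test_age_days > max_test_age_days:
        return "retest before any budget change"     # periodic removal tests as a funding condition
    if lift_low > 0:
        return "fund; eligible for increase"         # demonstrated incrementality
    return "no increase; schedule defunding review"  # failed subtraction, however good the dashboard looks

print(budget_gate(lift_low=-0.9, last_test_age_days=45))
# -> no increase; schedule defunding review
```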

Qualitative work needs to be reintroduced into the causal stack. Longitudinal customer interviews, decision diaries, and post-purchase reconstructions are not soft add-ons. They are necessary to interpret why removal tests succeed or fail and to identify where persuasion actually occurs in the decision process.

What This Means for Brands Working with Kadence International

For brands working with Kadence International, the focus shifts from optimizing attribution dashboards to building causal evidence. That means designing subtraction experiments that can survive executive scrutiny. It means pairing those experiments with qualitative inquiry to decode what actually changed in the buyer’s head. It means triangulating lift data with decision narratives to distinguish demand creation from demand capture.

Kadence’s role in this system is not to produce prettier reports. It is to design research architectures that can falsify channel claims, surface hidden drivers, and translate uncertainty into usable rules.

That work is slower than attribution reporting. It is messier. It also produces decisions that actually change market outcomes.