
Why Global Campaigns Underperform Across Markets
By Geetika Chhatwal

Global campaigns do not fail because brands misunderstand localisation. They fail because the core message is approved before it can be properly tested across markets.

A global campaign is approved against a fixed rollout window, not against a fully validated market response. Creative is signed off with directional ad testing, enough to confirm it works somewhere, not enough to prove it will work everywhere.

Local teams receive assets after the core message is fixed. They can adjust execution, language, casting, and media, but not the campaign that was never fully validated in their market.

The Swedish brand IKEA approaches the problem differently by delaying what most brands lock in too early.

Case Study: IKEA — Preventing the Wrong Message from Scaling

The constraint IKEA solves is not cultural misunderstanding. It is testing how people actually use space before the campaign is finalised.

For IKEA, the risk lies in how homes are represented worldwide.

A global catalogue built on Western layouts, like open kitchens, defined dining areas, and generous floor space, can be approved centrally and distributed at scale. The structure is consistent, the products identical, and the imagery aligned.

In markets such as China, where urban homes are typically smaller and more multifunctional, the way space is used changes how furniture is selected and arranged. A kitchen is not simply smaller; it is also organised differently. A dining area, for instance, is not always fixed and often serves multiple purposes.

If the catalogue reflects the wrong model of space, the product remains visible but becomes harder to place within real use.

At its peak, the IKEA catalogue reached more than 200 million copies globally, produced in dozens of versions across markets. According to Quartz, a single edition was printed in 72 region-specific versions, developed over an 18-month production cycle.

Before finalising creative, IKEA tests how products are used within real homes in each market. Layouts are then adjusted within the same creative structure. Floor space is reduced, and objects are repositioned. The product remains the same, but the environment around it reflects how it will actually be used.


Caption: The same kitchen adapted before rollout: open layouts in the US, reduced floor space and tighter framing in China.

These changes are small, but they come before rollout, not after underperformance.

They do not rebuild the campaign; they remove what testing shows will not translate.

Most localisation happens after the campaign is already built. IKEA does something different. The catalogue is not redesigned from scratch. It is adjusted just enough to ensure that what is shown can be recognised and used within each market.

Preventing the Wrong Message from Being Approved

Most campaigns do not fail because brands misunderstand local markets. They fail because the decision to proceed is made before testing can invalidate the message.

By the time creative is reviewed centrally, ad testing is incomplete. There is enough signal to move forward, but not enough to challenge the campaign’s core message.

So the campaign moves forward. At that point, the most important question is no longer asked: whether the message performs across markets.

If it does not, the campaign does not fail outright.

It underperforms consistently across markets. Each market responds slightly below expectation, but not enough to trigger intervention.

By the time that pattern becomes visible, the campaign is already in flight.

Correction at that stage is possible, but rarely pursued. Changing the core idea requires rework across assets, alignment across teams, and disruption to a coordinated rollout. Most organisations absorb the underperformance rather than rebuild mid-launch.

This is where timing separates ad testing from market research. Used after launch, research explains underperformance. Used earlier, it stops the wrong message from being approved. Ad testing is not absent from global campaigns; it is extensive. But it is rarely designed to stop a campaign from moving forward. Research shows that creative quality accounts for around 47% of a campaign's sales lift, making it the single largest driver of advertising effectiveness.

Why Most Ad Testing Doesn’t Prevent This

Ad testing exists, but it is designed to confirm, not to reject. Most global campaigns are evaluated to optimise execution, not to challenge the campaign itself. Copy, visuals, and formats are assessed in controlled conditions, often with pre-defined audiences and limited exposure. The objective is to improve performance, not to determine whether the campaign should proceed.

This creates a false signal. A campaign can perform well in isolation and still fail when it enters a real market context, where usage, constraints, and expectations differ.

Timing reinforces the problem. By the time multiple markets are involved, the cost of change is already high. What should function as a decision point becomes a validation step.

Campaigns are refined, not challenged. They move forward with confidence that is not grounded in how they will actually be received.

Before creative is approved, research can challenge whether the campaign reflects how the category is actually used in each market. It can surface when a product plays a different role, when a behaviour is assumed but not established, or when a message is interpreted differently than intended.

Once the message is fixed in the creative, research cannot change it without triggering a rebuild. It can only optimise around it.

The role of research is not to optimise campaigns. It is to determine whether they should run at all.

Without that intervention, campaigns move forward with partial validation. The same message carries across markets, and the same limitation repeats in each of them.

Most brands do not operate this way, not because they disagree, but because the structure does not allow it.

Campaign timelines are locked to global launches rather than local validation cycles. Creative is approved through layered stakeholders, which makes late-stage change difficult. Ad testing is typically scoped for optimisation, not for rejecting the core idea.

Local teams and agencies are then expected to adapt what has already been approved. They can refine execution, but they cannot correct a campaign that is built on the wrong foundation.

Local teams are often asked to adapt campaigns they did not shape and cannot change. No amount of localisation can compensate for a message that was never validated in the first place.

Rebuilding creative mid-cycle introduces cost, delay, and internal friction. By the time local insight becomes clear, the campaign is already in flight.

Case Study: McDonald’s — When the Product Locks the Campaign

The same issue appears at a different level when the product is fixed before it is validated locally.

A global campaign can be adapted in tone, casting, and setting. But if the product it is built around does not fit local behaviour, there is little left to adjust.

McDonald’s expansion exposed a constraint that most global campaigns cannot address at the creative level: if the product does not align with local behaviour, the campaign cannot compensate for it.

In India, the issue was not awareness or positioning. It was that the product embedded in the campaign did not align with what a large share of consumers would consider purchasing.

If the product had not been adapted before rollout, the campaign would have been constrained by a structural limitation. The campaign would run, but it would not convert into repeat behaviour.

The adjustment happened before that point. McDonald’s redefined its core offering within the market by introducing chicken and vegetarian products that aligned with local consumption patterns. This move replaced the foundation on which the campaign would be built.


Advertising followed what people were already willing to buy. The message reflected familiar eating occasions, shared contexts, and established expectations around food.

The global brand remained recognisable, but the product it represented changed.

If the product had remained unchanged, the campaign would have scaled awareness without building repeat behaviour, a failure that no amount of creative localisation could correct.

This is the constraint that most localisation strategies cannot solve at the communication level.

If the product does not align with behaviour, the creative cannot compensate for it. The campaign can be adapted, but it cannot be corrected.

What McDonald’s avoids is the same failure seen in global advertising: locking the wrong message too early.


Localisation Is a Decision Point, Not an Adjustment

Across both cases, the pattern is clear.

Localisation does not fail at execution; it fails because it is never allowed to influence the campaign itself.

Teams can adjust execution, messaging, visuals, and placement, but they cannot change the message that has already been approved.

IKEA intervenes before that point by adjusting how products are placed within real environments. McDonald’s does the same at the product level. In both cases, the change happens before the campaign is locked.

That timing determines performance. Campaigns do not need to be rebuilt for every region. But they cannot be approved without being tested in the environments where they are expected to perform.

Understanding how meaning shifts across markets requires more than instinct. It requires evidence: how people behave, what they prioritise, and how they decide.

At Kadence International, we help brands uncover those patterns through mixed-method research, cultural analysis, and in-market testing, so campaigns are built to perform from the start.

If a campaign is scaling globally, the question is not how to localise it. It is whether the message has been tested where it needs to perform.

FAQs

Why do global advertising campaigns fail to resonate across markets despite strong performance metrics?

Global campaigns often appear successful at a surface level: reach, impressions, and even conversions may hold steady. The failure lies deeper. Consumers interpret meaning through local context, and when messaging does not align with cultural norms, living conditions, or decision drivers, engagement becomes shallow. The campaign is seen, but not internalised.

What is the difference between translation and true localisation in advertising?

Translation adapts language. Localisation adapts meaning.
True localisation considers how a product fits into daily life, what motivates purchase decisions, and how cultural context shapes interpretation. It influences not just copy but also visuals, positioning, and even the role the product plays in a market.

How can brands balance global consistency with local relevance in advertising?

The most effective brands maintain consistency in core elements, such as brand values, visual identity, and strategic positioning, while allowing flexibility in execution. This includes adapting tone, cultural references, casting, and use cases to reflect local realities. The goal is not identical campaigns, but consistent meaning across different contexts.

At what stage should market research be used in ad localisation?

Market research should inform localisation at every stage:

  • Before development: to understand behaviour, context, and category role
  • During development: to test message interpretation and resonance
  • Pre-launch: to validate creative effectiveness across markets
  • Post-launch: to track performance and refine execution

Localisation driven only by post-campaign data is reactive and often too late to correct underlying misalignment.