If your growth strategy still assumes the shopper will arrive on your product page, it is built on a journey that no longer defines how products are considered.
Products can perform well across every visible metric and still disappear, not because they underperform, but because they never enter the decision at all. That loss lies in how options are defined before any comparison takes place, in systems that filter what is shown to the consumer.
Consumers are starting inside AI interfaces that reduce the market before it is seen. A question replaces a search, and context replaces keywords. What comes back is a narrowed set of options, often just a few recommendations assembled before the shopper has seen what else exists.
“What’s a low-sugar electrolyte for workouts?”
“Which detergent works for sensitive skin?”
The input is natural. The output is filtered, shaped by signals the consumer cannot see and the brand does not fully control.
Research from Ipsos shows that roughly one in three Americans is open to using AI shopping agents, with significantly higher interest among younger consumers. At the same time, consumers are not handing over the final decision. They are using AI to reduce effort, not replace judgment.
How AI Shopping Agents Are Filtering What Consumers See
The agent's role is to narrow the field before evaluation begins.
Consumers are comfortable letting AI reduce the field because it removes friction. It turns a broad category into something manageable before evaluation begins. What changes is where the options are defined.
This behavior is already embedded across markets. In the UK, comparison platforms such as Compare the Market and MoneySuperMarket structure what gets evaluated. In China, super apps like WeChat and Alipay shape the journey before search begins. Across Southeast Asia, platforms such as Shopee, Lazada, and TikTok Shop determine what is seen. In Japan, trust signals such as reviews and brand familiarity filter options early. AI agents make that filtering more explicit and more consistent.
Platforms are now being built around this model. Amazon’s Rufus answers product questions and recommends options directly within the shopping experience, reducing the need to browse across multiple listings. OpenAI and Stripe have introduced the Agentic Commerce Protocol, while Google is advancing a Universal Commerce Protocol to connect discovery, checkout, and post-purchase workflows. Retailers are embedding agents directly into their ecosystems, with Walmart and Target enabling shopping inside AI environments. The agent is becoming the layer that determines what enters the decision at all, and what appears depends on how clearly a product can be interpreted.
Consider a product positioned as “hydration support.” In testing, it performs well. It converts when seen. But consumers do not search that way. They ask for “low-sugar electrolyte for workouts” or “hydration drink without artificial sweeteners.” If the product attributes do not align with that language, it does not appear.
Where information is incomplete, agents narrow further rather than compensate. A shopper might overlook missing details or imperfect information and still proceed. An agent currently does not.
From the consumer’s perspective, the set appears complete and the process feels faster and more direct. From the brand’s perspective, the field has already narrowed before they had a chance to compete.
Why Products Are Being Excluded Before Consumers Compare
Most performance systems assume the brand has already entered consideration, so they track what happens after a shopper engages.
What they don’t capture has always existed. Brands have never had full visibility into the options consumers exclude before they start comparing.
In an agent-shaped journey, that invisible stage expands. The set is defined earlier, often before search, before comparison, before any measurable interaction. If a product is not included at that point, it does not register as a missed opportunity. It does not register at all.
Research from Bain & Company shows that consumers now consider far fewer brands than they did a decade ago, often narrowing choices to just three to five options. Performance metrics continue to move, even as a growing share of decisions are being shaped elsewhere.
AI agents accelerate that compression without exposing it. The blind spot isn’t new, but it is becoming more consequential. Brands optimize what is visible, while the conditions for being included sit outside the data they rely on.

How Brands Must Structure Product Data to Be Included in AI Recommendations
The response starts with understanding where visibility is needed and ensuring products can be interpreted clearly in those moments.
Not every product needs to win every query. The task is to define the need states that matter and ensure the product appears when those needs are expressed. Inclusion is shaped by how clearly the product can be understood by systems that rely on complete and interpretable information.
In practice, this means structuring product data around how consumers express needs, not how products are categorized.
Products exist for an AI agent only if they are clearly defined in the language of the need.
A protein bar described as “nutrient-dense” is harder to interpret than one labeled “high protein, low sugar, under 200 calories.” A yogurt positioned around “wellness” competes less effectively than one explicitly tagged “high protein, lactose-free, no added sugar.” The difference is not branding; it is interpretability.
Agents do not interpret vague positioning. They match explicit attributes to explicit requests. Use cases, functional benefits, ingredient signals, and constraints such as sugar content or allergens are not supporting information. They determine whether a product is included at all.
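A minimal sketch makes the matching logic concrete. The product names, attribute schema, and request below are purely illustrative, and real agents apply far richer language models, but the inclusion rule is the same: explicit attributes that cover the request get matched, vague positioning does not.

```python
# Hypothetical sketch of agent-style filtering: explicit attributes are
# matched against an explicit request. All names and schemas are invented.

products = [
    {"name": "NutriBar", "attributes": {"nutrient-dense"}},  # vague positioning
    {"name": "LeanBar", "attributes": {"high protein", "low sugar", "under 200 calories"}},
]

# A consumer need expressed as explicit constraints.
request = {"high protein", "low sugar"}

# Include only products whose declared attributes cover the request.
included = [p["name"] for p in products if request <= p["attributes"]]
print(included)  # ['LeanBar'] -- the vaguely described product never appears
```

The vaguely positioned product is not ranked lower; it simply fails to match and drops out of the set.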
The same applies to availability and pricing. If a product is inconsistently listed across retailers, missing from key channels, or priced in a way that varies widely, it becomes fragmented. Fragmentation reduces confidence, which in turn affects inclusion.
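One way to picture fragmentation is as a simple consistency check across retailer listings. The retailers, prices, and the 10% threshold below are all illustrative assumptions, not a known platform rule; the point is only that a wide relative spread is the kind of signal a system can read as inconsistency.

```python
# Hypothetical sketch: flagging a fragmented listing by its relative price
# spread across retailers. Retailer names, prices, and the threshold are
# invented for illustration.
from statistics import mean, pstdev

listings = {
    "RetailerA": 4.99,
    "RetailerB": 5.09,
    "RetailerC": 7.49,  # outlier price widens the spread
}

prices = list(listings.values())
spread = pstdev(prices) / mean(prices)  # coefficient of variation

# Treat a relative spread above 10% as one signal of fragmentation.
fragmented = spread > 0.10
print(round(spread, 2), fragmented)
```

A consumer might shrug off the outlier price; a system that scores consistency will not.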
This is not new, but it is now enforced. Historically, a consumer could bridge gaps. They could infer, reinterpret, or search again. An agent does not currently do that. If the data is incomplete or inconsistent, it narrows the set.
That shifts ownership. Inclusion is no longer driven primarily by marketing. It sits with product, merchandising, and the systems that define and distribute product data.
Internal product categorization reflects how a product is built, while agents respond to how a need is expressed. A brand might organize around “snacking occasions,” while the agent resolves queries like “high protein snack under 200 calories with no artificial sweeteners.” Misalignment between the two reduces the likelihood of inclusion.
You can see this directly. Take the same need state and vary how it is expressed: “protein snack,” “high protein snack under 200 calories,” “low-sugar protein bar for the gym.” The products that appear shift with the phrasing, even though the underlying need does not.

How Market Research Reveals What Consumers Never See
Agent-driven systems do not behave like open shelves. They rely on structured inputs, clean signals, and interpretable data. Where that breaks down, products are not downgraded; they are just omitted.
As these systems scale, they standardize how decisions are constructed. The same signals get rewarded. The same gaps get penalized. On Amazon, for example, visibility is shaped by what the system can interpret and prioritize, including conversion signals, availability, and commercial factors such as margins and stock levels. Labels such as “Amazon’s Choice” reflect those signals, not necessarily a complete view of consumer preference.
Over time, visibility concentrates among brands that are easier to interpret, not necessarily better suited to the need.
Brands no longer control whether they are considered, and omission leaves no trace in traditional data. If a product is never included, there is no impression, no click, and no conversion to analyze. The question shifts from why a product was chosen to whether it was ever considered at all.
This cannot be solved through better tracking. It requires understanding how inclusion is determined in the first place.
That can be observed directly. Testing the same need state across different prompts, platforms, and contexts reveals which products are consistently included and which are not. A product optimized for “running shoes” may not be included for queries such as “cushioned running shoes for long distance” if those attributes are not explicitly defined.
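The probing described above can be sketched as a small test harness. The catalog, attribute tags, and prompts below are invented for illustration, and a real study would query live agent interfaces rather than this toy matcher, but the structure of the test is the same: hold the need state fixed, vary the phrasing, and record which products are included.

```python
# Hypothetical sketch: probe inclusion for one need state across prompt
# variants. Products, attributes, and prompt parsing are all illustrative.

catalog = {
    "SpeedRunner": {"running shoes"},
    "CloudStride": {"running shoes", "cushioned", "long distance"},
}

# Each prompt reduced to the explicit attributes it asks for.
prompts = {
    "running shoes": {"running shoes"},
    "cushioned running shoes for long distance": {"running shoes", "cushioned", "long distance"},
}

results = {}
for prompt, required in prompts.items():
    # A product is included only if its declared attributes cover the request.
    results[prompt] = sorted(n for n, attrs in catalog.items() if required <= attrs)
    print(f"{prompt!r}: {results[prompt]}")
```

The generic prompt returns both shoes; the more specific one silently drops the product whose cushioning and distance attributes were never made explicit.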
This is where market research becomes critical. Not as a retrospective tool to explain performance, but as a way to map how decisions are shaped before they appear in reporting.
Most frameworks were built to understand preferences within a defined set of options, reflecting only what was surfaced in the first place. What is missing is how that set was constructed.
The focus shifts to how consumers express needs in natural language and how systems interpret those inputs to determine which products are included.
This requires methods that capture behavior as it unfolds. Digital ethnography allows researchers to observe how consumers interact with agents in real contexts, including how prompts are formed. Passive behavioral tracking shows what is included, what is ignored, and how often products appear across repeated interactions. Decision journey reconstruction maps how a shortlist is assembled across platforms, moments, and need states.
These approaches enable testing whether intended positioning translates into actual inclusion. A product may be designed to meet a specific need, but if that need is expressed differently by consumers or interpreted differently by systems, the product may not be included at all. Once that gap is visible, it can be traced back to how the product is structured, described, and interpreted across systems.
The role of research shifts with it, from understanding what consumers chose to understanding what they never had the chance to choose. That understanding becomes the foundation for staying in the market at all.

How AI Is Changing What Brands Need to Optimize For
The optimization problem has changed. For years, visibility meant ranking well in search results. SEO was the primary lever, and Google was the system to optimize for.
Visibility is now determined across multiple agent-driven environments, from ChatGPT and Claude to retail systems such as Amazon’s Rufus, each interpreting and filtering information differently. There is no single index to optimize against, and no stable set of rules to rely on.
This is where a new layer of optimization is emerging. Generative engine and agent optimization are not replacements for search. They sit alongside it. The task is no longer only to be found, but to be interpreted correctly across systems that decide what enters the decision at all.
That requires a different approach. Products need to be structured, described, and maintained in ways that align with how needs are expressed, not just how brands define them. This is not a one-time adjustment. It is an ongoing discipline, shaped by how different systems evolve and how they interpret signals over time.
The complexity is the point. Each system applies its own logic, and those systems are continuously changing. Optimization becomes less about ranking in one environment and more about ensuring consistent inclusion across many.
Products that are not included do not compete, and by the time the impact becomes visible, the loss has already compounded.