AI’s Great Divide: East vs. West

Why Asia is embracing AI — and why the West is still on the fence.

Introduction

Artificial intelligence is no longer an emerging trend; it is already shaping work, commerce, and communication across the globe. In our recent global study conducted across the United States, the United Kingdom, China, Japan, and India, 88 percent of respondents said they already use AI in some form. But high usage does not mean high trust. Only one in three consumers in the US and UK say they trust AI, making trust both the first obstacle and the last mile. This report reveals that AI is not being judged only on what it can do. It is being judged on how it makes people feel.

[Download the global AI study]

Consumers and employees evaluate AI as if it were a character in their story. Some see it as a hero, others as a sidekick, and many as a wildcard. These metaphors reflect far more than whimsy. They signal confidence levels, emotional alignment, and expectations about the future. These emotional interpretations are grounded in data from the Pew Research Center. In the US, nearly half of adults in 2024 reported feeling more concerned than excited about AI’s rise, while only 11 percent felt predominantly optimistic. Globally, this emotional divide is even more pronounced. According to a 2025 House of Ethics report, 53 percent of respondents across Asia-Pacific expressed excitement about AI. Meanwhile, 50 percent globally also expressed nervousness, reinforcing the character lens of hero, sidekick, and wildcard as more than a metaphor. 

[Figure: Emotional Sentiment Toward AI by Region]

What emerged from the data is that AI is personal. It is filtered through cultural norms, generational habits, and industry-specific knowledge gaps. In markets like India and China, optimism runs high. In Japan, skepticism prevails. In the US and UK, where privacy concerns dominate headlines, emotional hesitation often outweighs technical capability. This matters for any brand deploying AI tools to reach or serve customers. Adoption is not enough. Curiosity alone will not drive long-term engagement. 

Without trust, AI becomes a feature rather than a force. To move past pilot programs and isolated use cases, brands must connect AI to the real aspirations of their users—creativity, clarity, balance, and speed. AI today is not just a technological shift. It is a human test, one that challenges companies to build not only better models, but better relationships.

How People See AI

[Figure: AI Personas by Region]

Consumers are not engaging with artificial intelligence as a neutral tool. They interpret it through stories, assigning roles that reflect emotional proximity and functional expectation. 

The hero represents confidence. This is how AI is seen by those who trust it to improve their work and decision-making. They believe AI enhances creativity, accelerates learning, and delivers a competitive edge. These users are more likely to understand how AI works and actively seek out new applications. In our study, this perspective was most common in India and among younger generations, particularly millennials.

The sidekick suggests familiarity but with limits. These consumers are comfortable using AI for support but not for leadership. They may use AI to automate routine tasks or streamline research, but they still want human oversight. This lens is common among older professionals who have witnessed previous tech cycles and prefer to maintain a sense of control. This was the dominant view in Japan, where 74 percent of respondents saw AI as a supporting character rather than a driving force.


The wildcard signals caution or confusion. These individuals use AI but don’t yet trust it. They may worry about unintended consequences, data misuse, or overreliance. Their knowledge of AI is often limited, and they are less likely to believe AI is relevant to their role or industry. 

What’s important is that these archetypes are not static. As familiarity grows, wildcards may become sidekicks. With the right training and successful use cases, sidekicks may shift toward seeing AI as a hero. But this movement depends on how brands and businesses shape the AI experience, not just what it does, but how it’s communicated and supported.

These character lenses are more than just metaphors. They are predictors of adoption, indicators of trust, and signals of where engagement efforts must be focused. Understanding who sees AI as a hero, sidekick, or wildcard can help organizations tailor strategies that meet users where they are and move them toward deeper, more confident use.

[Figure: How People See AI]

Insight for Brands: Tailor messaging to user archetypes. Emphasize empowerment for ‘heroes’, simplicity and reliability for ‘sidekicks’, and transparency and support for ‘wildcards’.

Trust: The First Obstacle and the Last Mile

Trust emerged as the most powerful variable in shaping how people interact with artificial intelligence. Without it, even the most advanced tools remain underused or misunderstood. In our study, trust consistently influenced whether consumers viewed AI as a hero, a sidekick, or a wildcard. It also dictated how open they were to learning, adopting, and benefiting from AI in their roles.

The data reveals a clear divide across markets. In India, trust levels are notably high. Respondents frequently described AI as empowering, helpful, and intuitive, supported by strong self-reported expertise and interest in AI upskilling. In China, the picture is more layered. Confidence in AI’s capabilities is high, but emotional alignment lags. While adoption is widespread, trust appears more cautious, shaped by uncertainty. In contrast, the United States, United Kingdom, and Japan exhibited significantly lower trust, with a greater emphasis on risk, regulation, and the potential for harm.


These trust gaps reflect broader trends beyond this study. According to the 2024 Edelman Trust Barometer, only about one in three consumers in the US and UK say they trust AI technologies, compared to over 70 percent in China. The decline in trust is most pronounced in markets where innovation is moving fastest, suggesting that speed alone does not breed confidence. In fact, it can have the opposite effect.

[Figure: AI Trust Levels by Country]

The reasons for this mistrust vary. Privacy concerns top the list, cited more frequently than job loss or skill displacement. Many consumers are wary of how their data is being collected, stored, and used by AI systems. Others point to a lack of transparency or accountability in AI decision-making. These concerns are compounded by limited public understanding of how AI works and what it can or cannot do.

Brands operating in low-trust environments cannot assume that technological fluency will close the gap. Building trust requires consistent messaging, visible safeguards, and a clear demonstration of human benefit. Transparency is no longer just a compliance issue. It is a competitive differentiator.


Where trust is established, engagement follows. People who believe AI is designed with their interests in mind are more likely to explore its features, ask questions, and recommend it to others. Those who do not trust it are more likely to limit their interaction or disengage entirely.

Trust is not a static metric. It can be gained or lost depending on how AI is introduced, supported, and communicated. The challenge for brands is to treat trust not as a barrier, but as a strategic lever. When pulled correctly, it can unlock higher adoption, deeper usage, and stronger emotional alignment.

Why Generation Matters

How people relate to AI is not just a matter of geography or exposure. It’s generational. Age shapes not only how confident someone feels about AI but also what they expect from it and whether they believe it belongs in their life at all.

In our study, millennials and Gen Z respondents emerged as the most enthusiastic adopters. They are more likely to describe AI as a hero, trust it to make decisions, and seek out new ways to integrate it into their daily tasks. They are also more curious. When asked how interested they were in learning about AI’s role in their job, younger participants consistently scored higher. This is not simply a function of digital fluency. It’s the result of growing up in a world where intelligent systems—recommendation engines, voice assistants, algorithmic feeds—have always been part of the environment. For these generations, AI is less of an introduction and more of an evolution.

[Figure: AI Expertise by Age and Gender]

By contrast, older generations, particularly Gen X and Boomers, express more caution. Many see AI as a sidekick, useful for support but not for leadership. Their concerns are rarely about functionality. Instead, they revolve around control, comprehension, and unintended consequences. In the data, these groups were more likely to say they lacked expert knowledge and less likely to believe that AI could enhance their role. This doesn’t mean resistance. It means hesitancy, often grounded in a desire for greater clarity, training, and assurance.

[Figure: How Age and Gender Shape AI Attitudes]

This generational divide also plays out in sentiment. While 98 percent of those who see AI as a hero report feeling positively about using it at work, that number drops among those who view AI as a wildcard. For older users, that uncertainty often stems from limited exposure or a lack of clear, relatable use cases. Without strong onboarding, AI can feel like a moving target—too fast, too opaque, too impersonal.

Brands must recognize that adoption curves are emotional as much as they are technological. A solution that feels empowering to a 28-year-old might feel destabilizing to a 58-year-old, even if it performs the same task. The same language, features, and UX will not resonate equally across age groups. A universal rollout strategy may be efficient, but it’s not effective.


To close this gap, organizations must meet each generation where they are. For younger employees, the focus should be on exploration and ownership, giving them opportunities to stretch AI’s potential and shape its application. For older employees, the path is clarity, security, and support, delivered through structured training, visible safeguards, and real-world proof points.

AI will not be adopted evenly. But it can be adopted inclusively. That starts by recognizing that age is not just a demographic marker. It is a lens through which trust, utility, and relevance are defined.

Insight for Brands: Segment onboarding and UX strategy by generation. Offer exploratory AI use cases for younger users, and reassurance-focused training for older users.

Cultural Shifts in AI Adoption

AI may be global in reach, but it is local in meaning. Cultural context plays a defining role in how consumers engage with intelligent systems. In our study, clear regional patterns emerged that reveal both the depth of AI integration and the underlying beliefs that shape its use.

India stood out as the most optimistic market in the study. Respondents were more likely to describe AI as a hero and reported higher levels of confidence and preparedness. About 7 in 10 Indian participants viewed AI as a positive force shaping the future of their industry. For many, AI was seen not just as a workplace tool but as a growth enabler, something that could amplify personal and professional capability. This optimism is backed by real investment in AI education and policy. From school curricula to government partnerships with tech companies, India is creating infrastructure to support trust from the ground up.

China showed similarly high levels of confidence, with strong technical knowledge and high reported usage. Yet even with this confidence, over half of the Chinese respondents still saw AI as a wildcard. This tension suggests that rapid adoption can coexist with uncertainty. It’s a reminder that momentum alone does not resolve consumer unease. Cultural narratives around progress and competition may fuel adoption, but do not eliminate the need for transparency and emotional reassurance.


Japan offered a striking contrast. Despite being a leader in robotics and automation, most Japanese respondents viewed AI as a sidekick. Only a small minority saw it as a hero. Preparedness levels were also among the lowest across all countries surveyed. For Japanese users, trust is earned through demonstrated utility, not promise. Practical training, industry-specific applications, and visible human oversight are essential to closing the confidence gap.

The United States and the United Kingdom sit somewhere in between. Both markets show relatively high adoption, but lower emotional alignment. Concerns around privacy, ethical misuse, and job displacement persist. In these markets, AI is often seen as powerful but opaque—effective in theory but hard to navigate in practice. Consumers here are more likely to demand proof before participation. They want visibility into how decisions are made and clarity about where the human stops and the machine begins.

[Figure: AI Preparedness by Country]

These cultural patterns should not be treated as fixed identities. They are fluid, shaped by policy, media, education, and experience. But they are real, and they matter. For brands building AI-powered tools or campaigns, success depends on more than localization. It depends on cultural listening. A message that works in Bengaluru may fall flat in Boston. A user journey that resonates in Shanghai may not translate in Tokyo.

To build trust, drive usage, and unlock value, brands must understand the emotional climate of each market they serve. Cultural fluency is not an add-on to AI strategy. It is a prerequisite. For example, what works in India’s high-trust, high-growth environment may fall flat in Japan’s risk-averse corporate culture.

Insight for Brands: Craft culturally tailored AI messages. In India, emphasize growth and opportunity. In Japan, demonstrate reliability and control. In the West, highlight privacy and transparency.

AI as Amplifier, Not Automation

When people talk about artificial intelligence, the conversation often begins with automation. But as our study shows, that is not what users want most. They are not asking AI to do their jobs for them. They are asking it to make them better at what they already do.

Across all markets, respondents expressed a desire for AI to enhance creativity, accelerate learning, and support better work-life balance. This vision positions AI not as a replacement, but as a collaborator. The most valued use cases were those that helped people think more clearly, act more efficiently, or feel more in control of their time. The benefit, in short, was personal.

[Figure: What Do People Want from AI]

When asked to imagine AI granting them a single superpower at work, the top answers included faster learning, improved creativity, and better balance. These are not technical outcomes. They are emotional and cognitive states. They reflect a deep-seated belief that the role of technology is to elevate human potential, not just optimize output.

This desire is strongest among those who already trust AI. In that group, 72 percent said AI would save them time, 67 percent believed it would enhance the customer experience, and 66 percent expected it to give their company a competitive edge. These users are not just willing to adopt AI; they are invested in it, because it supports their own goals.

[Figure: Future Impact of AI on My Job]

Even among those who do not view AI as a hero, the themes remain consistent. The gap lies not in what people want from AI, but in whether they believe AI can deliver it safely and reliably. Misuse, misunderstanding, and lack of training are the primary obstacles—not lack of demand.

This shift in mindset has major implications for brands. Positioning AI solely as a productivity tool misses the mark. Consumers are not inspired by efficiency alone. They are inspired by the promise of doing more meaningful work. That promise must be communicated clearly, demonstrated consistently, and reinforced through experience.

The most effective AI products are those that feel like extensions of the user’s mind, not machines that dictate outcomes. They offer guidance, not control. They create space for creativity, not just shortcuts.

This is the emotional frontier of AI adoption. Brands that understand it will build stronger relationships with their users, not because their tools are smarter, but because they make people feel smarter, more capable, and more human.

Insight for Brands: Position AI as a creative partner, not just a productivity booster. Showcase how AI helps users feel more capable and human, not just faster.

Barriers Holding AI Back

Despite high usage and rising curiosity, artificial intelligence is still not fully embedded in the workplace. Behind the optimism lies a quieter story of hesitation—one shaped not by technological limits, but by human concerns.

In our study, the most commonly cited obstacles to AI adoption were not about access or cost. They were about preparedness. Respondents pointed to a lack of training, limited expertise, and uncertainty about how to use AI responsibly. 


Industry context further amplifies these barriers. In financial services, 71 percent of firms report using AI in some capacity, but only 41 percent apply it extensively across operations. The internal mood is mixed—47 percent of professionals view AI as an innovation driver, while 39 percent worry about displacement (KPMG, 2024). In contrast, AI in retail and consumer goods is viewed more optimistically. A 2024 NVIDIA report found 69 percent of CPG businesses using AI reported revenue gains, and 72 percent saw reduced operating costs, especially through customer-facing applications like queue analytics and virtual try-ons. Even in markets with high trust, these practical gaps are slowing momentum.

[Figure: Barriers to AI Adoption]

The number one concern, across every demographic, was data privacy. It was mentioned twice as often as job displacement. This reflects a growing recognition that while automation can reshape roles, it is data misuse that undermines trust. Consumers are asking hard questions. Who owns the data? How is it used? Can it be reversed or explained? In markets like the US and UK, where privacy debates are active and visible, these concerns translate directly into restrained engagement.

The disconnect between public perception and expert confidence only adds to the confusion. While job loss remains a top concern for many, recent analysis by the International Labour Organization suggests that only 5.5 percent of global employment is at high risk of automation. The larger impact may be role evolution rather than elimination, particularly in clerical and administrative sectors. 


Similarly, while 76 percent of experts believe AI will benefit them personally, just 24 percent of US adults feel the same. This optimism gap reflects a breakdown in communication. The people building AI and the people expected to use it are not speaking the same language.

[Figure: The Great Disconnect in AI]

What consumers need is not more hype, but more help. They want guidelines, not guesswork. In our study, 54 percent said it would be very helpful to have structured onboarding through standard workbooks or internal AI guides. Without these tools, many users are left to navigate AI on their own, deepening confusion and raising the risk of misapplication.

[Figure: What Workers Want from AI]

The solution starts with transparency, training, and storytelling. Brands must demystify AI’s capabilities and limits, not just through features, but through relatable narratives. Demonstrating how AI works in simple, familiar terms reduces fear. So does showing how it has helped others in similar roles.

AI will not scale if it remains intimidating. It must become visible, teachable, and human-centered. Until then, its full potential will remain locked behind a wall of uncertainty that no algorithm alone can break.

Insight for Brands: Invest in visible, human-centered onboarding. Reduce fear through plain-language narratives and role-specific training materials.

Case Studies in Building Trust

Lowe’s — Human-Centered AI for DIY Empowerment

[Image: Lowe’s Mylow AI assistant]


Background:
Lowe’s, one of the largest home improvement retailers in the United States, has been investing in AI to enhance both customer experience and frontline associate performance. In early 2025, Lowe’s launched Mylow, an AI-powered virtual assistant developed in partnership with OpenAI and deployed across all 1,700+ stores.

Approach:
Mylow acts as a digital advisor for DIY projects, offering guidance on materials, tools, and step-by-step instructions via kiosks, handheld devices, and mobile platforms. It is designed not to replace human help, but to supplement it with personalized, conversational support. Associates also use Mylow to answer customer queries more confidently.

Outcomes:

  • Lowe’s reported improved associate confidence in product recommendations and faster customer interactions.
  • The company also adopted AI-powered computer vision systems (developed with NVIDIA) to reduce checkout errors and automate stock management at self-checkout lanes.
  • The strategy aligns AI with emotional benefit, framing it as a supportive sidekick that empowers rather than replaces.

Perfect Corp — Visual AI for Trust in Beauty

[Image: Perfect Corp AI]

Image Credit: Heaptalk

Background:
Perfect Corp is a Taiwan-founded, NYSE-listed company providing AI and augmented reality solutions for the beauty and wellness industry. Its platforms are used by global brands such as Estée Lauder, Bobbi Brown, and Madison Reed to offer real-time virtual try-on experiences.

Approach:
The company’s AI tools enable users to preview makeup and skincare effects with precision, using facial mapping, AI skin diagnostics, and AR overlays. The user experience is interactive, visual, and personalized, designed to build consumer trust by eliminating trial-and-error risk.

Outcomes:

  • In 2024, Perfect Corp reported 12.5% year-on-year revenue growth and expanded its AI diagnostic tools beyond cosmetics to hair care and wellness.
  • New tools like the Frizzy Hair Analyzer and Skin Quality Index were launched globally, expanding trust-based personalization in physical and digital retail environments.
  • Its approach blends visual credibility with emotional reassurance, especially for high-involvement, identity-driven purchases.

These case studies show that trust is not just a matter of compliance or infrastructure. It is an outcome of experience. When AI helps people solve problems they care about, without making them feel vulnerable or out of control, it becomes not only more useful, but more welcome.

The common thread is emotional alignment. Whether in home improvement or beauty, the brands that are leading in AI adoption are those that design for both function and feeling. They make AI feel like an enabler, not an experiment.

Insight for Brands: Trust comes from real-world benefits. Demonstrate how AI tools support personal agency, whether in choosing lipstick or fixing a sink.

The Road Ahead for AI Engagement

Artificial intelligence is no longer a novelty; it’s an expectation. But its success won’t be defined by adoption alone. It will be defined by how it makes people feel. As this report shows, trust remains the currency of progress, and emotional alignment—not just technical capability—will separate the AI winners from the rest.

The future of AI is not about more features. It’s about more fluency. Fluency across generations, across geographies, and across emotional states.

For brands, this means evolving from deploying AI to designing relationships with AI, where every chatbot, virtual assistant, or recommendation system reflects cultural nuance, personal aspiration, and emotional clarity.

Insight for Brands: The next era of AI is human-first. It belongs to companies that recognize AI not as a tool to control consumers, but as a medium to collaborate with them.

Explore the Full Findings

Dive deeper into the regional data, generational shifts, and trust-building strategies shaping the future of AI. Download the full report to access complete insights, charts, and brand implications.

[Download the global AI study]