AI adoption succeeds when people feel safe, informed, and empowered. In this session, experts from HP and BI Norwegian Business School unpack how narrative, knowledge, and trust shape the way employees embrace AI — and what leaders can do to accelerate responsible adoption across their organisations.
Watch the webinar recording here.
Transcript
[00:00:00]
Introduction
Janine: Thank you everyone for joining us. Welcome, I'm excited to be hosting our first webinar in the Kadence AI Adoption Engine series. I'm Janine Michalek, Insights Director at Kadence International's Americas office, and I'll be hosting the next half hour along with my brilliant colleague Jenni Coggan, who you see on screen here.
Janine: At Kadence we've been really studying how businesses and employees are experiencing and adopting AI or artificial intelligence. We ran a global survey in the US, the UK, China, Japan, and India. What we found was very striking. 88% of middle to upper management are already using AI in some form, probably not surprising.
Janine: And that was pretty consistent across all of the countries I mentioned. But what we found was that the way people feel about AI - their perceptions, their attitudes, their emotions, the way they talk about it - differs dramatically from east to west.
Janine: In Asia, in particular in China and India, employees are really excited and they're embracing AI. The narrative there, and their mental state, is about growth, about empowerment, about the possibilities for the future. In our research, one of the things we did was use metaphors, and employees in those eastern countries describe AI as a hero.
Janine: They also describe AI as a Swiss army knife. It's something that amplifies human productivity, but also goes beyond that amplifying creativity. In the west, the emotions we found were very different there. The conversations are more towards risk, regulations. The mental state of the employees is much more cautious, less optimistic.
Janine: Here, AI is not described as a hero; it's described as a sidekick. And it's not described as a Swiss Army knife; it's described as a calculator. So employees do see it as a useful tool, but it's somewhat limited and not transformative in their thinking. So this gap that we're seeing is not just cultural.
Janine: What we found in the research is that it's directly affecting adoption rates and comfort levels. The way employees talk about AI really shapes their trust level, and that trust level is what's driving usage. Ultimately, that trust determines whether employees will embrace AI as a partner for progress in their organisations and in their personal lives.
Janine: I'm going to turn it over to Jenni to talk a little bit more about our research.
Jenni: Thanks so much Janine. And hello to everybody. Yeah, so just to add to that, in our research we identified what we call the AI adoption engine, and really this is three distinctive cogs, or if you like, three levers that turn the wheel of adoption forward.
AI Adoption Engine: The Three Cogs
Jenni: So the first cog, which is really what we're speaking about today, is shaping the narrative. This is about the stories, the metaphors, the conversations, and how that narrative influences acceptance. Our language and culture, our actual way of being in the world, have a significant impact on how we embrace new innovations and technologies.
Jenni: Our internal narratives really do shed light on whether we approach with cautious deliberation - or catalytic adoption. So that's our first cog. Our second cog is really about building knowledge. So that will be our second webinar. This explores why training and education are the ultimate game changers in closing the AI learning gap.
Jenni: And the last one is about trust; that will be in our third webinar, which really unpacks trust as the gateway to AI success. The different adoption levels we do see between the east and the west come down to trust, and we'll be digging into that in our third webinar. Each cog is essential, but it all starts here today with the narrative. If the story sparks fear, adoption stalls. If it sparks empowerment, adoption accelerates.
Janine: Okay, so let's get our discussion started. I am super excited to introduce our two amazing guest speakers. We have Luke Thomas and Dr. Vegard Kolbjørnsrud with us. Luke joins us from HP. He's a renowned AI transformation strategist and innovation leader, and he is currently leading global initiatives that integrate AI into customer understanding and workforce innovation at scale at HP. Vegard is an associate professor of strategy at BI Norwegian Business School. He is an expert in new organisational forms, digitalisation and business models. His research focuses on how AI transforms leadership, decision making and collaboration within organisations.
Janine: So together over the next 25 minutes, we're going to unpack how the right narrative can really set the stage for adoption, trust, and the long-term success of AI within organisations.
Janine: We will have a Q&A at the end of the session, so feel free to drop any questions or your perspectives in the chat anytime along the way, and we'll get to those in the end.
Panel Discussion: Narrative and Industry Perspectives on AI Adoption
Jenni: All right, so let's start with the basics on the topic of how the way we talk about AI affects its adoption in the workplace. Vegard, we're going to start with you. From your research on AI and decision making, how does language shape employees' willingness to collaborate with AI systems and workflows?
Vegard: I think language matters and we tell stories about, you know, what gives meaning in life, generally, and also then about AI. But I think also, it's not only how language affects behavior, but also what affects language. Because what we have found in our research, is that there are some fundamental differences in values and mindsets between east and west.
Vegard: And actually there's a north-south dimension too, because it seems like northwestern Europe is the most skeptical along some dimensions. And when a high value is placed on privacy, accountability, and transparency - and by extension explainability - then you start asking a lot of critical questions, and that, I think, shapes narratives. The narratives also have to fit people's values and priorities in order to shift them a little bit, because if they're completely off, they won't reach their audience, right? So I think it's a two-way thing.
Vegard: Is a more critical stance a problem? It depends, I think, because if it stops experimentation and adoption, it's a problem. But if you adopt and experiment to try out, then actually your critical stance is an asset because you're avoiding the bad uses of AI, the misuses of AI, et cetera, which are also important risks, right?
Vegard: So I think it's more of a nuanced thing. If your skepticism or your questions are a barrier to adoption, that's a problem. If they're something you apply as you adopt, that's a good thing.
Jenni: So values underpin the behavior, which in turn drives whether people will adopt or not. Thank you so much, Vegard.
Jenni: Luke, on your side, these behaviors probably also differ from industry to industry, not just on a personal level. You're joining us from HP, and I wonder: is the narrative different between different types of companies?
Janine: Your company, for example, is involved in building technology, versus other companies that are maybe involved in healthcare or finance. Do you think the narrative and the path to adoption there is different?
Luke: I think the media has done a fantastic job of giving this perception that AI is going to take away everyone's jobs. Right now you're seeing more and more research showing that most of these pilots are failing when they come to commercial instances. And you see more and more executives saying, Hey, most of these layoffs are happening not because of AI but probably because we over-hired a lot of folks during COVID, right?
Luke: So I think we are coming to a point - a self-realisation - that, be it any vertical, be it tech, healthcare, finance, it all depends on the data. They say if you don't have a good data strategy, you really don't have a good AI strategy. So I think the focus is really on the data right now.
Luke: And that also means that we have to redesign the workflows. You cannot apply all your AI stuff to existing workflows because it just wouldn't make sense. I think enterprises are now really thinking through what that means. For example, in the past, sales reps spent a lot of time on scheduling and onboarding, trying to figure things out or doing a lot of data entry and all that good stuff.
Luke: Now they can free up a lot of that time by using some of these AI assistants. And we ourselves at HP have built an RFP tool, which helps these sales reps respond to prospective customers as quickly as possible, which has always been seen as a very positive sign.
Luke: You know, responding to a proposal takes a lot of time. So all these efficiencies that we have gained by using AI have helped the worker to a large extent rather than replacing them. So rather than talking about how it's going to replace all jobs, it's replacing a lot of the tasks that are entailed within the job.
Luke: And I think, as long as you bring forth that message to the folks at any organisation, I think people will embrace it. That's what we're seeing right now in the industry, where a lot of focus is on the data side of things and how to make sure that the data that you have is good enough to yield good outputs.
Luke: At the same time, embracing a culture which fosters this adoption of AI tools in the existing workflows, and also redesigning workflows to ensure that AI is pretty much glued into them, so that it is optimised to give you the results you want. Otherwise, you are asking for something where those desired outcomes will not be possible, because if you put garbage in, you only get garbage out.
Janine: Thank you so much, Luke. That's perfect. And I would love to take that and turn it into something actionable for the organisations in our audience. You're right on topic: AI, especially in the West, is seen as a replacement rather than a resource, and there's fear about job replacement and all of that.
Janine: So I'd like to know how organisations can reframe that narrative to human-plus-AI, where we're amplified by it, not reduced by it. If you had to pick a single word or a single metaphor for AI that would help build optimism, what would it be?
Luke: I think for me, AI is an exoskeleton. It doesn't replace you, it augments you. It gives you more strength, more power, more reach, more capabilities, while you stay in control. This is the case with any disruptive technology, or even any product as such. You know, when I was a kid, I used to memorise all those phone numbers…
Luke: Now I sometimes don't even know my kids' phone numbers, because it's all there in the address book. It doesn't mean that I'm not intelligent enough; it means more of my time is assigned to the things which matter most, because the other things you can easily look up.
Luke: So I think for me, that's how I look at AI at this point. If you direct it, it really amplifies what you want to do. And I think that's the approach I would take towards adopting AI technology now.
Janine: I love that. That does make you feel positive about it, thinking about it that way. Vegard, in Asia, AI is often framed as a partner for growth, while in the West, again, the conversation is very different. In Asia, government initiatives are really embracing AI at a very early age. India has a CBSE program for secondary education that is built to create an AI-aware workforce.
Janine: At a very young age, these children are being taught not just how to use AI, but how to build AI models and AI agents. Singapore has AI for Kids. China's government just did a huge AI textbook rollout. Is there one narrative that really resonates in Asia that Western companies could borrow?
Vegard: I think what you describe here is that their actions match their narrative. What we have found in our research is that the real antidote to AI fear - fear of replacement, for instance - is not more information, but what I call exposure therapy. Basically, you learn some practical skills in how to use AI, and use it in your work, because then you realise that you're an actor; you can affect that.
Vegard: You see that you have a role to play even with these powerful tools, and you even recognise some of their limitations, so you realise that you can reinterpret your own job and your role. So [how we talk about it] has to match the practice. Skills, not just information, mitigate fear. So it has to really match.
Jenni: That is really good. I love the way you said it, Vegard: exposure therapy, having practical skills and tools. That creates the extra step that moves AI from something fearful to something that amplifies. And I want to go there now.
Jenni: So we've heard a lot of employees say things like, AI is going to replace me. How can organisations reframe the narrative to "AI-plus" - AI actually empowers me - the way Luke described it, almost like your exoskeleton? Perhaps the inner dialogue has to shift from AI is going to replace me to What tools can I develop to embrace AI, enhance my career, and therefore drive business success and profitability?
Jenni: So Vegard, you've written about AI as a collaborative colleague. Very nice terminology. What structures or practices help people work effectively with AI rather than compete against it?
Vegard: What research shows is that if you use, particularly generative AI, as a sparring partner or as a sounding board, you're more effective in complex problem-solving and creative work than if you just use it as a ghostwriter.
Vegard: And then you have a much more active role, and you interact with that technology in very different ways than we've done traditionally. With a traditional Google web search, if you don't get a hit, you have to try again.
Vegard: And here [with AI] you go into a dialogue: you can shape, you can narrow, you can widen, you can redirect along the way, which is much more like how you interact with people. It's not about consciousness or anything sympathetic; it's more about how we collaborate.
Vegard: And it has some degree of agency; it's more than just a tool or infrastructure. It's kind of like an actor, and I think we should treat it as such. It doesn't necessarily mean that you will invite it to the Christmas party, but you get my point.
Jenni: Although you might ask it what to do for your Christmas party, I guess.
Vegard: But you probably wouldn't like the transcript, right?
Jenni: I love how you said it should be a sparring partner. It's almost, How can we use it to amplify us? rather than, What can it do? or, Where is it going to replace me?
Jenni: I want to hear from you, Luke, in terms of your employees. How can you help them or how do you help them see AI as this partner, as the sparring partner rather than as a threat?
Luke: Yeah, so Jenni, the biggest challenge for any large organisation, including HP, is that sometimes the employees aren't aware of the tools we've made available to them, right? That's the biggest challenge. They go by what they see in the media, and then they have this perception stuck in their head, and at the same time they're not even aware of all these tools that are available to them.
Luke: So one of the biggest emphases we have on our team is to embrace as many AI tools as you can that are compliant with HP's cyber and privacy teams, and at the same time to see how you can actually use them in your current workflow.
Luke: If you don't know how to go about doing it, see if there are existing tutorials within the company databases. If not, maybe talk to a mentor. From a top-down perspective, our leadership made it very clear in our recent earnings call that we are positioning the company as an AI-plus company rather than a plus-AI company.
Luke: So that means we are taking a very AI-centric approach to everything that we are going to do, especially in the future of work. And when you talk about the future of work, what it also entails is how you embrace some of these AI agents as part of the workforce, as part of the org structure, and what that means for the company in the future.
Luke: I think these are the kinds of discussions that executives, leaders, and mentors within an organisation should have with the workforce to help them better understand, rather than going by what they see in the media. And you've seen this at a lot of companies as well. For instance, Klarna first said they were going to be an AI-only kind of company, and then they did a U-turn on that.
Luke: So it has come to a point, at least in the short term, where you always need a human in the loop to validate the outputs coming from these AI systems. At the end of the day, if you are going to embrace [AI in] the culture – especially how it needs to be part of the org structure and what that entails – people are more intrigued to figure out, okay, How can I use it as a partner rather than a threat to me personally from a productivity standpoint?
Luke: Even writing simple emails is so simple right now. You just write whatever you feel like, then you run Copilot, and it fixes all the typos, fixes all the grammar. You used to spend so much time doing all that in the past, so I've saved so much time on all these things.
Luke: And the same thing goes for meetings. Sometimes in the middle of a meeting you just say, Hey, recap the meeting so far, and it really, really helps. And I think spending more time on training and upskilling our employees, making them aware of all these tools, and giving them an overview of the key trends - not only what's happening right now but also what's to come in the future - helps them broaden their minds about the whole aspect of adopting a new technology.
Luke: Otherwise they may be fearful and resistant. So we spend a lot of our time on change management within the organisation. It might look very silly at times, but sometimes it's really not a technology issue - it's more to do with how it impacts the humans who play a huge role in the workforce, along with the existing workflows that we have. And once you have all that in place, you will not see stats like 95% of all AI projects failing. I think you'll see a big difference in the coming years.
Luke: So I think fostering a culture that embraces AI within an organisation, especially when it comes from the top down, helps employees to a large extent. And especially when you emphasise that, yes, of course there are layoffs going on in the tech industry, but more and more folks are now saying it's not purely because of AI - a lot of other factors come into play.
Luke: So when you reinforce those kinds of things in the organisation, it helps people rethink how they look at AI, in a way that will help them, you know, turbo-boost their job, their productivity, and their efficiency, and probably get them a fantastic bonus and appraisal in the end as well. That's how I look at it.
Janine: Luke, just continuing on that leadership and top-down approach. What we do know is that leadership language really matters. I recently read an article about Novo Nordisk: a year ago they had no global strategy for AI use whatsoever, and in the past year they've been building a top-down global AI strategy, really treating it as a marketing program within their organisation.
Janine: They're marketing AI: they're having town halls, they're showing TV ads throughout the buildings, really communicating it. From a leadership position, based on your experience, Luke, what would be the wrong way for leaders to talk about AI? Let's look at what not to do for a moment.
Luke: The wrong way to do it is saying that 50% of people's jobs are going to go away in the next couple of years because of AI, that all the layoffs are happening because of AI.
Janine: So instilling fear would be the wrong way.
Luke: The wrong way, right. I think leaders should say - and I want to emphasise this - Hey, is it going to replace a lot of the tasks that you do within your job? Yes, that will definitely happen. And then also talk about: if most of these tasks are going to be replaced, how do you emphasise re-skilling your employees and making sure they are AI-ready for the next, new type of work - which we still don't know - that they're going to do?
Luke: Right. So as long as companies come up with commitments or pledges saying, Hey, we are going to spend X billion or X million dollars on upskilling our current workforce because most of the existing tasks can be done by AI, I think that brings a lot of confidence to folks in an organisation: the company really cares, not only about our current aspirations, but about our future trajectory as well.
Luke: So I think those are the kinds of things that leadership needs to talk about. But again, we are in a phase right now, a state of flux, right? There was so much expectation three years ago when all this came about, and we are at an inflection point now where everyone's already talking about agentic AI without even knowing what generative AI can do.
Luke: There's a lot of confusion, even among leaders, about the difference between an AI agent and agentic AI, because that's the buzzword right now in the industry, and the media does a fantastic job of confusing people too. So I think leaders need to take a very pragmatic approach and say, Hey, phase by phase, this is how we think this is going to evolve.
Luke: Because we are at a stage where the adoption curve is on two fronts, right? You have digital AI, with all your generative AI and AI agents, and at the same time you have physical AI, with robotics, which is booming in China.
Luke: So what does that mean? What does it mean for the youth unemployment that's going on in China, which is also trickling down to the US? How do we repurpose all those graduates who were promised a dream job after they finished their degrees?
Luke: I think those are the kinds of narratives we need to support the younger generation, and also the generation already in the workforce, who are sometimes resistant to change and to new things, because change takes a lot of time, effort, and energy, and at times it might also impact their job.
Luke: So as long as you have the right narrative from those leaders, and especially if they know what they're talking about, that's the most important thing…
Janine: So we're running out of time here, and I just want to quickly make sure we cover everything. Vegard, from your leadership and governance lens, what differentiates organisations that successfully embed AI culture from those that stall at the pilot stage?
Vegard: To build on what Luke just said: realising that managers and executives, this time, are not only leading other people's digitalisation, they're also leading their own. They have to get exposed to this technology themselves – not to become experts, but to know what it's like, so they ask the right questions and have relevant references. People will see through it if they just repeat some prepared talking points. They have to have real exposure.
Vegard: And then actually thinking about the organisation, what makes it intelligent? You know, how people and tech can be used in concert. I think in a lot of organisations, in most organisations, the intelligence in use is not as high as the intelligence of the people there, because we use people for a lot of things that don't really require intelligence.
Vegard: So we can release some of that and automate some of the basic stuff, the boring stuff, even some of the dangerous stuff, so people can use more of their abilities and actually get to do more interesting work. I think that's a really promising and uplifting narrative, if you like, but it has to be based on – it's not like a story, like a fairytale – it has to be based on how you practice and how you invest and how you build your company.
Jenni: I think just to recap and then I guess we'll close, it's been a really stimulating conversation. Thank you to both Vegard and Luke. We obviously started out saying how the narrative affects everything, underpinned by our values, which leads to the behavior, which leads to the adoption.
Jenni: There were some really nice things coming out, like the need for positive narratives. I love the idea of "exposure therapy," of seeing AI as a "sparring partner," and what Luke said just now about the pragmatic approach: how do we get our students future-fit and ready, so that AI, as Vegard says, becomes part of the workforce that ultimately helps amplify us as human beings, in our careers and our businesses as a whole, so that we are all future-fit and can be more profitable.
Jenni: Thank you very much. I'll hand over to Janine to close for us.
Janine: And yes, we're at time and closing right now. Thank you so much Vegard and Luke for joining us and spending time with us and sharing your insights. And thank you all for joining us. This is just the beginning of our series, so stay tuned for our next webinar where we'll explore that second Kadence cog of AI adoption that Jenni spoke about – closing the knowledge gap through training and education – which we touched upon here as well.
Janine: Until then, thank you for being part of the conversation.
Want deeper insight into how employees across the US, UK, China, Japan, and India think about AI?
Read our complete multi-country study, AI’s Great Divide: East vs West, for the data behind these adoption gaps and what they mean for leaders. Access the full report here.

