Ep. 23: AI Systems That Actually Serve Your Customers

In this episode of The Elusive Consumer, host Ellie Tehrani sits down with Ilyas Iyoob, Head of Global Research at Kyndryl, to explore how enterprise leaders can master the delicate balance between AI automation and human expertise in mission-critical environments. From implementing AI in the world’s largest IT services infrastructure to understanding when automation enhances versus undermines customer experience, Ilyas shares insights on strategic AI deployment that treats technology as a “rookie with speed and precision” rather than a replacement for human judgment. He reveals how organisations can leverage AI’s capabilities while preserving the irreplaceable value of human insight and domain expertise.

Transcript


00:09
Ellie Tehrani
Welcome to The Elusive Consumer, Ilyas Iyoob, Head of Global Research at Kyndryl. I am so happy to have you here today.


00:19

Ilyas Iyoob
Thanks for having me.


00:20

Ellie Tehrani
Ilyas, before we start going into your personal and professional background, in a few short paragraphs, could you explain to our listeners what Kyndryl does and where your role specifically sits within that?


00:35

Ilyas Iyoob
Sure. Kyndryl is the largest IT services provider in the world, actually. So most of the largest banks, the airlines, hospitals, all of their technology, all of their data runs through a Kyndryl data center. Kyndryl was part of IBM previously and then spun out a few years ago, and some people call it a $20 billion startup. Since the spinoff, I have taken over as Global Head of Research. We are sort of the lookout on behalf of all of our customers: we keep an eye on emerging technology, bring it in, test it, validate it, build around it, and then make sure that our customers always stay competitive.


01:25

Ellie Tehrani
Understood. And we’re going to talk all things AI and new technologies throughout this session. But before that, I wanted to read through something that I’ve seen you post quite recently about AI. So I’m quoting here. You mentioned that AI isn’t a rival taking over your role; it’s a rookie that you’re passing the baton to, a rookie with speed, precision, and tireless energy, but zero experience. It doesn’t know the nuance, the edge cases, or the “this is how you really get things done” tricks. And that’s where we as humans come in. So the best teams don’t just run faster; they know when to pass the baton. And AI can take over the stretches that drain our time and focus so we can sprint ahead where human judgment, creativity, and strategy matter the most. Could you expand on that a bit for us?


02:26

Ilyas Iyoob
Absolutely. So the quote isn’t that unique; I would think that many people out there realise the balance between human intellect and AI automation. Even though with the latest technologies AI is getting more and more intellectual, there’s always a part for humans to play. And the best organisations out there are the ones who know how to balance, where to draw that line, and for what purpose. So, for example, I was just talking to a friend of mine, and in the context of this podcast I thought it would be relevant. This friend was buying an espresso machine. He was all excited about it. I’m a huge coffee fan myself, but I wouldn’t say I’m all the way out there to go buy the fanciest espresso machine.


03:29

Ilyas Iyoob
So he goes and buys this espresso machine, and he called me over to test it out. And then he was telling me something. He said that as soon as he bought it, he started getting all of these ads for coffee beans and for all the accessories, the pods that go with it and all of that. And it seemed a little synthetic, if we want to call it that. It seemed very, you know, meh. We get that that can be done with Gen AI. And these are his words: he said it would have been nice if, instead of all of that, he had gotten a link to some brewing tips.


04:17

Ilyas Iyoob
Or even an invitation to a masterclass, a barista masterclass. That would have done so much more to take the consumer along the journey, right? So that’s just a consumer example, but the same thing happens in enterprises as well. On one extreme, you could go apply Gen AI everywhere, which all of us know is not the answer. On the other hand, we also know that we can’t keep running things the way we did. But you have to know when something requires more empathy, more emotion; even though Gen AI can do those things, it would be better for a human to do them. And then there are actions that are more repetitive. But just because something is repetitive doesn’t mean you have to give it to AI to do it, right?


05:14

Ilyas Iyoob
What if along the journey you learn something? While you’re doing this repetitive motion, you learn something along the way. For example, you have a toddler at home. Yes, you’re feeding the toddler every day, but with every conversation the kid is learning new words, and you want to be the first person to hear them say their first word, right? And then gloat to your significant other when they come home from work, saying, hey, I heard it first. So sometimes the journey itself is worth it, and you as a human may want to take part in it. So for different tasks, we should know where to pass the baton and where not to.


05:53

Ilyas Iyoob
And eventually that’s what gives us that right blend between, yeah, we want to be monetarily better, but we also want to give that great experience to our customers as well.


06:04

Ellie Tehrani
And that customer experience, knowing where to use automation and where human insights come into play, and also the personalisation that you mentioned in your friend’s case, is something very interesting that we’ll be getting to later in this episode. But before all of that, let’s dive a little bit into your journey into AI specifically and how you became interested in understanding consumer behaviour.


06:39

Ilyas Iyoob
So unlike many others, I’d like to say that I was a pro athlete and then kind of stumbled into AI, but my background actually is in applied mathematics; that’s what I studied. I did my PhD here at the University of Texas. The field was called different things at different times: it was called data mining for a while, and then analytics for a while, and then optimisation. So it was a field that I was already in; I don’t have a fancy story for it, actually. But leveraging AI at different stages over the years is what has given me the ability to see where it plays well and where it doesn’t, because this is just another tool in a huge toolbox.


07:35

Ilyas Iyoob
There are analytical tools that perform better, there are programmatic tools that perform better than AI. There are machine learning tools that perform better than Gen AI. There are optimisation and mathematical programming tools that perform better than genetic algorithms in different places. That’s one of the reasons I enjoy working with different teams. Kyndryl Research has many parts, of which AI is one; we have quantum, we have digital twins, and we have all of those. Coming from my background in applied mathematics, we get to see the entire spectrum, and this gives us an opportunity to know what tool to use where and what not to. Right.


08:16

Ilyas Iyoob
Instead of the proverbial: if all you have in your hand is a hammer, everything looks like a nail. So I did come from that background, but I had the opportunity to be part of a startup early on in my career. And so in addition to the technical skills that I had to keep up, there were also all of these business skills. I had to do R&D first, then go into product development, then go do marketing for those products, and then go sell. I sold the first deal for our company in that space, the cloud computing space.


08:55

Ilyas Iyoob
And then finally, as our startup was maturing and getting to the point where it was ready for acquisition, I was building financial models for the company, valuation models. And then we successfully sold the company to IBM a few years ago. So through that whole journey, there’s a combination of business skills and technical skills that I’ve been fortunate to pick up, and that’s what got me to where I am today.


09:24

Ellie Tehrani
I would like to touch upon two of the things that you mentioned. You talked about seeing AI as just another tool in the toolbox, and you compared it to, for instance, just having a hammer. A lot of people in business today, I think, have the perception that AI is a sort of Swiss army knife, right, that you can use AI for a bit of everything. What would you say to those individuals who have that perception, and how do you see it differently?


09:55

Ilyas Iyoob
First of all, I have to agree with them to some extent, right? For the longest time, AI was just another tool in the toolbox; it wasn’t anything special. But when OpenAI threw a simple chat interface in front of these models, which had been there for a while, that’s what made it consumable by the masses. And so, yes, it is in many ways a Swiss army knife that can do multiple things. We all find ourselves going to it for many different things, right? For personal use, for work, for travel, for booking tickets, for so many things. So it would be remiss of us to say that it can’t do many things. But we’ve also heard the phrase: don’t take a knife to a gunfight.


10:47

Ilyas Iyoob
So there’s also the other side of it, which is, okay, yes, let’s use it, but where is the gunfight? In the enterprise world, remember, Kyndryl comes from us solving problems for large enterprises, and these are mission-critical applications. We’re not talking about presenting the wrong ad to the customer at the wrong time; what’s the worst that’s going to happen? But if your landing gear failed or your wing fell off when you were landing yesterday at Toronto Pearson on a Delta airline, that would be a huge problem.


11:26

Ilyas Iyoob
So in the enterprise world, you want to make sure that when the system makes errors or goes off topic, that is bounded and taken care of. And it’s not just about putting up guardrails: can you make those guardrails actually support your decisions, so that the system gets better over time? So in the enterprise world, you do have to put a lot of support systems around it, and then, yes, you can still have a mini Swiss army knife in the middle of the entire engine. That’s fine, provided you have all the supports.
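
To make the guardrail idea concrete, here is a minimal sketch in Python of a check that both bounds the failure mode and logs every rejection as labelled feedback, so the guardrail itself accumulates data to improve over time. The topic list, the classify_topic stand-in, and the generate callable are illustrative assumptions, not the systems discussed in this episode.

```python
# A minimal sketch, not Kyndryl's implementation: a guardrail that bounds
# errors AND feeds its rejections back as labelled data, so the check
# "supports the decision" and improves over time. All names are invented.

ALLOWED_TOPICS = {"billing", "outage", "account"}
feedback_log = []  # rejected drafts become future fine-tuning/eval examples

def classify_topic(text: str) -> str:
    """Stand-in for a real topic check (a classifier or a second model)."""
    lowered = text.lower()
    for topic in ALLOWED_TOPICS:
        if topic in lowered:
            return topic
    return "off_topic"

def guarded_answer(generate, prompt: str, max_retries: int = 2) -> str:
    """Bound the failure mode: retry a few times, then escalate to a human."""
    for _ in range(max_retries + 1):
        draft = generate(prompt)
        if classify_topic(draft) in ALLOWED_TOPICS:
            return draft
        feedback_log.append({"prompt": prompt, "draft": draft})
    return "This request is being routed to a human agent."
```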


12:10

Ellie Tehrani
Yeah, right. So the reliability piece obviously comes into play. The other question I had: you mentioned your experience working across academia, VC, and various industries. How has that shaped your view of product development, and what differences have you noticed in how these sectors approach innovation?


12:38

Ilyas Iyoob
Yeah. On one side, I’m on the end of listening to pitches from different startups coming and saying, hey, we apply Gen AI to solve this problem and that problem, which on one side is encouraging. But at the same time, you have to look at: okay, once you build this, then what? It has to get deployed, it has to get implemented. And when you start to think through where it’s going to get implemented, and why somebody should use something like this, the barrier to entry is already very low, right? If somebody says, hey, I built something, it’s just a matter of a few weeks before somebody else automates that. And then that’s the next thing, and that’s the next thing, right?


13:23

Ilyas Iyoob
So on one side we want to appreciate and encourage innovation, but on the other hand, the duration for which that innovation lasts is getting shorter and shorter. So you really have to think through how you’re going to monetise it way ahead of time, before you’re even done with production. That’s one side; on the VC side we get to see a lot of that, because we see what’s happening on the startup side. On the enterprise side, what we’re starting to see is a better result when teams leverage these tools on the back end as opposed to exposing them on the front end. For example, just this morning I was talking to a team.


14:19

Ilyas Iyoob
Without giving out too many details, this team was loading up private content, internal content, in an effort to speed up future responses to contracts. So for example, an RFP comes out. And all of us love responding to RFPs; it’s so much fun to sit down and write a 40-page document that most likely nobody’s going to read. So it would be nice to use all of the previous contracts to be able to generate something like that. But just because you can do that doesn’t mean you should build it and then let the end users, the people who are issuing these RFPs, say, okay, why don’t I upload the RFP and just get the response myself? Why do I even have to send it? Right?


15:09

Ilyas Iyoob
So instead of doing that, the team can keep this as an internal tool and say, hey, look, I’m the one who has to respond to the RFPs. Let me use this tool on the back end, load it up, create a response, and then I can go and tweak it, modify it, make it better, make it sharper. Knowing the customer, knowing where this is going to go, I should be able to really move the needle on this and then push it out there. Similarly with code development: instead of using AI just to write code, we can use AI to not only write code but also test code, do all of that on the back end, and then present only completed, hardened, well-thought-out code to the end users.
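
As an illustration of the back-end pattern described here, the following sketch retrieves passages from past contracts and drafts an RFP response that a human then edits. It assumes the sentence-transformers library and a generic llm_generate callable; it is in no way the actual team’s tool.

```python
# Hypothetical sketch of an internal, back-end RFP drafting tool:
# retrieve relevant passages from past contracts, draft a response,
# and hand the draft to a human for editing. Stack is an assumption.

from sentence_transformers import SentenceTransformer, util

embedder = SentenceTransformer("all-MiniLM-L6-v2")

past_contracts = [
    "We provide 24/7 managed infrastructure with 99.99% uptime SLAs...",
    "Our security team performs quarterly penetration tests...",
    # ...loaded from the internal contract archive
]
contract_vectors = embedder.encode(past_contracts, convert_to_tensor=True)

def draft_rfp_response(rfp_text: str, llm_generate, top_k: int = 2) -> str:
    """Return a DRAFT only; a human sharpens it before anything ships."""
    query = embedder.encode(rfp_text, convert_to_tensor=True)
    hits = util.semantic_search(query, contract_vectors, top_k=top_k)[0]
    context = "\n".join(past_contracts[h["corpus_id"]] for h in hits)
    prompt = (
        f"Using only this prior material:\n{context}\n\n"
        f"Draft a response to this RFP:\n{rfp_text}"
    )
    return llm_generate(prompt)  # internal draft, never sent directly
```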


15:49

Ilyas Iyoob
So that’s the balance: yes, we want to run fast, but why don’t we run fast in the right places and shield the end users, the end consumers, from the instability, the inconsistencies of Gen AI. By the way, inconsistency in Gen AI can be a good thing, right? Every time we talk to our AI tool of choice, we want it to be creative; we don’t want it to say the same thing over and over again. But guess what: if you’re using Gen AI for code execution, you do want it to be consistent every single time, right? You don’t want it to be creative.
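
The creativity-versus-consistency dial usually maps onto sampling parameters. Here is a small sketch using the OpenAI Python client, where the model name and the two tasks are assumptions: temperature 0 (optionally with a fixed seed) for the path that must be reproducible, a higher temperature for the creative one.

```python
# Sketch of the creativity-vs-consistency dial; model and tasks are
# illustrative assumptions, not a recommendation of a specific provider.

from openai import OpenAI

client = OpenAI()

def ask(prompt: str, creative: bool) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        # High temperature for brainstorming; 0 plus a fixed seed when
        # the output feeds code execution and must be reproducible.
        temperature=1.0 if creative else 0.0,
        seed=None if creative else 42,
    )
    return response.choices[0].message.content

ideas = ask("Suggest three taglines for a coffee subscription.", creative=True)
sql = ask("Write a SQL query counting orders per day.", creative=False)
```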


16:27

Ilyas Iyoob
So those are some of the lessons we learned from startups and from academia that we can then take over to industry and say, hey, when we are building for the enterprise, there’s zero margin for error here. How do we cater to that? Right? And we have more lessons that we could share over time as well.


16:47

Ellie Tehrani
And in terms of industry, are there any particular verticals that you’ve noticed being slightly ahead or slightly behind in using AI for consumer insights, from a Kyndryl perspective?


17:04

Ilyas Iyoob
We are heavily in the financial sector, with a lot of banks, insurance companies, things like that. And we are seeing that any industry that already has a treasure trove of data, whether it’s publicly available or private, is, as expected, ahead of the curve, because they were analysing data anyway prior to these tools, and now these tools have just made it easier. And of course, the industries that have long struggled are continuing to struggle in spite of having data, like the healthcare operations sector, for example. There are a number of companies that are trying to use AI for note-taking at the doctor’s office, for putting the content into the Epic form structure so that they can load it into EHR systems, and things like that. It is happening, but it’s very sporadic. Right.


18:06

Ilyas Iyoob
And so there’s a little bit of policy that has to play a role there before we can see more.


18:10

Ellie Tehrani
Right. You talked about industries that traditionally have gathered data. Now, if we talk about how AI has changed the way we do research, what would you say are some examples of unexpected insights that AI has helped uncover that traditional research wouldn’t have found?


18:37

Ilyas Iyoob
Traditional research depends heavily on how well certain topics have been amplified by publication or by media, whereas with Gen AI, we are able to go find those hidden articles, hidden pieces of work that may not have had the opportunity to get published in a tier-one journal, or maybe weren’t done by very popular researchers. So it helps uncover work that was done by smaller players. Secondly, it also gives a fairly equal weight to work, to products and solutions, to code that was written by somebody anywhere in the world; geographic location has no impact there anymore. So from that perspective, we’re able to speed up a lot of that work regardless.


19:46

Ilyas Iyoob
So there’s a good amount of debiasing that has already happened before we pull the information. And then secondly, there’s the ability to test. With some of the newer tools, we’re able to test certain scenarios. I’ll give you an example. We are currently working with the University of Waterloo, and this is public information, in testing how well quantum-safe algorithms work. Everybody says, hey, increase your key size; there are all these different solutions for beefing up your data to be protected from quantum hacking. Well, when we first started that activity, our first step was to do research on the research.


20:40

Ilyas Iyoob
So we were just doing research on all the different benchmarks that everybody else has already run, and then aggregating that. Survey papers can be written in hours now, as opposed to weeks or months. That way, replication of work is very rare now. You can’t hide from the world, sit over here doing some work, and have it be considered novel, because it’s not anymore. So I’d say the speed of research is a little bit faster, we’re able to find information from hidden places, and any new work also gets shared very easily with the rest of the world. And hopefully, with more data lineage, the right people will also start to get credit for their work.


21:28

Ellie Tehrani
Right? Right. One of the issues that comes with AI, but also traditional research obviously is gathering too much data and not really being able to make sense of it and connect the dots. How do you think that companies can ensure that what they’re gathering is meaningful consumer data rather than just more data?


21:53

Ilyas Iyoob
This actually goes back to way before Gen AI, or even AI, and there’s no shortcut to it, as much as we would like to speed things up: taking the data that you have and putting context around it, mapping it, tagging that data to some business KPIs that you really care about. That tagging of information can be sped up, by the way; you can train a Gen AI tool to go and speed up your tagging. But the initial work has to be done by humans to be able to get business-valuable information. For that piece there’s no way around it, and that’s where you have to pay top dollar. The best data scientists out there are the ones that never take their hands off the data.
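
Before the conversation continues, a hedged sketch of the workflow just described, where humans do the initial tagging and a Gen AI tool speeds up the rest: a few human-labelled examples become few-shot context for tagging the remaining records. The KPI tags and the llm_generate callable are invented for illustration.

```python
# Sketch of "humans seed the tags, Gen AI speeds up the rest".
# Labels, examples, and llm_generate are hypothetical.

HUMAN_SEED = [  # labelled by analysts who know the business
    ("Customer cancelled after third outage this quarter", "churn_risk"),
    ("Upgraded to premium tier after the demo", "expansion"),
]

def tag_record(text: str, llm_generate) -> str:
    """Tag one record with a business KPI, guided by human-labelled examples."""
    examples = "\n".join(f"Text: {t}\nKPI tag: {k}" for t, k in HUMAN_SEED)
    prompt = (
        "Tag the record with one business KPI, following these "
        f"human-labelled examples:\n{examples}\nText: {text}\nKPI tag:"
    )
    return llm_generate(prompt).strip()
```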


22:50

Ilyas Iyoob
You won’t be able to pry that data out of their cold, dead hands even if you wanted to, because they made sure that they understand exactly what’s happening. And that is how they understand, for those who are familiar with AI terms, feature engineering: knowing what data is going to connect with what outcome. The only way you find that out is if you are playing with the data and continuously morphing it into something different; that’s when you’ll realise, hey, this is really it. If you remember Nate Silver’s book The Signal and the Noise: how do you identify the signal from all the noise? Too much data is the noise.


23:34

Ilyas Iyoob
And if you invest in good data engineers or data scientists, they will be the ones that have their hands in the data, are able to detect that signal from the noise, and then make sure that it shows up everywhere. While AI is good at highlighting the signal, it is not good at identifying that something could be a signal. You need the human to say, look, among all of this, this could be a signal, this could be a signal, this could be a signal. And then AI can come and say, yeah, among all of those, this is the best one. So that investment in domain-knowledgeable data scientists is something you really want to get ahead of early on, right?
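
One way to read that division of labour in code: the domain expert nominates candidate signals, and the machine ranks them. The sketch below uses scikit-learn’s mutual information scorer on synthetic data; the feature names and the churn outcome are made up for the example.

```python
# Sketch of "human nominates the candidate signals, AI ranks them".
# Data is synthetic; feature names are invented.

import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(0)
n = 1000
churned = rng.integers(0, 2, n)  # the outcome we care about

# Candidate signals proposed by a domain-savvy data scientist:
candidates = {
    "support_tickets": churned * 3 + rng.poisson(2, n),  # real signal
    "login_hour": rng.integers(0, 24, n),                # likely noise
    "tenure_months": rng.integers(1, 60, n),             # likely noise
}

X = np.column_stack(list(candidates.values()))
scores = mutual_info_classif(X, churned, random_state=0)
for name, score in sorted(zip(candidates, scores), key=lambda p: -p[1]):
    print(f"{name}: {score:.3f}")  # AI picks the best among human nominees
```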


24:18

Ellie Tehrani
Which was actually my other question, regarding trends and being able to distinguish actual upcoming consumer trends from more short-lived fads. How do you use the AI agents and the humans, the data scientists, as you mentioned, to best determine what’s an actual emerging trend?


24:44

Ilyas Iyoob
On one side, it’s getting harder, to be honest. It is getting harder to tell the difference between something that is generated by AI versus otherwise. So I’ll give you an example of where and how agents can help. Again, I’m giving consumer examples because they’re easier to understand, and then you can extrapolate them out into enterprises as well. AI gives choice, but it’s also very good at recommending: recommending products, recommending solutions, recommending anything like that. So for example, I was buying a pair of running shoes recently.


25:35

Ilyas Iyoob
I’ve been buying Brooks Running for a long time, and as it was about time for another pair, I get these recommendations, of course, from my browser and all my browsing history: here’s the next one, or here’s the new one that you should try. Well, it would have been nice as a consumer to know what is new in this area. Is there some new breathable technology? Is there something else that makes these shoes lighter? And with that, what are some other features that I should be looking for as I’m going to buy? Yes, I know that AI could tell me exactly the best thing that I should get, but I would like to be informed about some of the measures that were used along the way.


26:27

Ilyas Iyoob
So you have that on one side. On the other hand, sometimes AI tends to be a little too quick to jump to conclusions. I mean, it serves us well in the sense that it gives us instant gratification, right? It’ll kind of jump and give you what you want. But sometimes a little bit of time is not a bad thing, to space things out a little; that anticipation sometimes helps. There’s Solgaard. Are you aware of the brand, the backpack brand Solgaard? They’re really popular among frequent travellers. And I saw recently that they had a short-term release.


27:23

Ilyas Iyoob
They were selling this one travel backpack, and they made such fanfare about it being designed by Simon Sinek, the motivational speaker. Because he travels a lot, he’s like, hey, here’s a little pocket for this, and here’s how it should be, and all of that. They made such fanfare about it that it sold out in hours. And for those of us who bought that one specific bag, it didn’t matter that it took weeks to come, right? The anticipation kind of built the brand. So, back to the topic of this podcast, consumer behaviour: yes, AI can provide personalised ads.


28:16

Ilyas Iyoob
Sure, AI can recommend the best solution it thinks you should take, or it can get it to you right away. But sometimes, as a company, you want to take your customers through the experience. You want to build the anticipation; you want to retain the brand, the reason they should be with you. It comes back to when and where you want to hand over the baton to AI to take over.


28:44

Ellie Tehrani
That whole personalisation journey, and balancing the need for personalisation with privacy concerns: do you think consumers today are generally more accepting of their data being tracked if they know that the outcome is a better experience in return?


29:11

Ilyas Iyoob
It comes and goes in waves, right? You’ll see a wave of people willing to give up some information in return for getting something else, and then, as that becomes more and more popular, they say, okay, hold on, I can’t give you any more information. So it goes up and down, and I think we are now in a cycle where we’re trying to protect some of the information. Maybe early on with GPT, as we got lazier, people were willing to put more and more information out there. But then, when we realised what that information was getting used for, we kind of scaled back.


29:51

Ilyas Iyoob
But this is where, Ellie, this is why you want to partner with companies like Kyndryl, whose job it is to protect the large customers. Security and privacy are not an afterthought; they are there by default. Without that, your data doesn’t even make it in. Yes, you might have to take a hit on performance, you might have to take a hit on the results that you get, but you want to start off with something like that. Whereas if you’re looking for a recipe and you want to take a picture of everything that’s in your fridge, you know what, maybe it’s okay. Right? So that’s how you draw the line: depending on the criticality of the mission you’re working on.


30:38

Ellie Tehrani
Right, right. And in terms of that oversight that’s needed, what role do you think human oversight will play in AI-driven personalisation? As AI turns into AI agents, the expectation is that they’re doing tasks on their own, and as it becomes more and more complex, what level of oversight do you think companies will need to implement?


31:10

Ilyas Iyoob
It’s a good question, because sometimes people get confused here. We all know that with more automation, there’s less that needs to be checked, right? You have an agent that’s writing code, another agent that’s testing the code, another agent that’s testing the tester, another agent that’s writing the test cases, and so on; you have the same example in any industry. But just because there’s less to test doesn’t mean the impact of not testing is the same. The importance and the risk of getting something wrong get higher and higher the more you hand off to agents. So the amount of work that a human needs to do to test will be less over time, but the criticality of those decisions is also getting higher over time. Right.


32:09

Ilyas Iyoob
Think of it like an air traffic controller. By the way, I was doing a project with an airline here in the United States, and one of the toughest jobs they are having to find replacements for is air traffic control, because the level of stress and the level of attention that a person needs to have doesn’t exist in our generation of ADD-filled kids running around. More and more jobs are going to become like that: high-stress, high-risk evaluation.


32:51

Ilyas Iyoob
Now, I’m sure better tools will come up over time to catch human errors more and more, but I don’t think it will ever get to the point where, in mission-critical situations, you can take the human out of the loop. And even when systems do take a human out of the loop, a human is always able to intervene where needed and take control of the system when things don’t go right.
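
A minimal sketch of what risk-tiered human oversight can look like for agent actions, following the point that the criticality of each check grows even as their number shrinks. The tiers, the spot-check rate, and the ask_human callback are assumptions, not a prescribed design.

```python
# Hedged sketch of risk-tiered oversight: the gate keys off the
# criticality of the action, not the volume of actions. All invented.

import random
from enum import Enum

class Risk(Enum):
    LOW = "low"        # e.g. drafting an internal summary: auto-approve
    MEDIUM = "medium"  # e.g. emailing a customer: spot-check a fraction
    HIGH = "high"      # e.g. touching production config: always gate

def execute(action, risk: Risk, ask_human):
    """Run an agent action, blocking on a human where the stakes demand it."""
    if risk is Risk.HIGH and not ask_human(f"Approve {action.__name__}?"):
        return "blocked by human reviewer"
    if risk is Risk.MEDIUM and random.random() < 0.1:
        if not ask_human(f"Spot-check {action.__name__}?"):
            return "blocked on spot check"
    return action()
```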


33:21

Ellie Tehrani
Speaking of humans, one of the core subjects that always comes up when we talk about AI is the resistance from internal teams, sometimes even from company leadership. You work with enterprises in your role at Kyndryl, and you work with startups in VC. Do you still see today, in 2025, resistance from either leadership or internal teams in trusting AI with their data and their customer insights?


33:55

Ilyas Iyoob
The resistance is there with respect to trust, and it should be there, right? You want that resistance. Anybody who’s building a solution should be expected to prove themselves. I don’t know if you saw these posts that I had put up: it’s guilty until proven innocent. In today’s world, we just have to agree that any data, any content we see out there is fake unless it proves itself to be true. Similarly, for anybody that builds any solution, the burden of proof is on them. And so that healthy distrust is important. However, I think there’s something else going on as well. I think everybody, however senior they are in what they do.


34:48

Ilyas Iyoob
I think on a personal level, everybody’s playing around with these tools all the time anyway. And so, while at work, because I’m dealing with mission-critical applications for large enterprises, I have to put up all these guidelines; at the same time, I also know the art of the possible, because in non-risky situations I am playing around with these tools myself. Previously, it used to be that leadership would fight a certain technology because they were not looking forward to it; they were finding reasons not to do it. But today’s leadership, they have resistance, but it’s for a good reason.


35:38

Ilyas Iyoob
Recently our CEO at Kyndryl, Martin Schroeter, was on, I think it was Yahoo Finance, and he was talking about this right after CrowdStrike’s debacle, where they had a patch issue that brought all the airline systems down. He mentioned that at Kyndryl we were able to bring a bunch of our customers back online within 24 to 48 hours, get them all back on, because that’s what we do: we have to keep them running, regardless of what they use. But he said something very interesting, and I’m paraphrasing here: AI and Gen AI, you can apply them anywhere, but if you take tools that are meant to be garage experiments, they’re not ready for enterprise-grade implementation yet.


36:31

Ilyas Iyoob
And so one of my responsibilities is to take these garage experiments from where they are, find the right place, harden them appropriately, and then bring them into the right places to be used at the enterprise level.


36:44

Ellie Tehrani
And that takes me to the next set of questions I want to ask, around product development and how these agents are transforming the product development process. In terms of using AI for product concept testing, are there any particular methods that you recommend, or that you’ve seen work exceptionally well?


37:10

Ilyas Iyoob
Unfortunately, no. I’m still trying to find the answer to that. I see the ways that it is being done today; I wish I had a better answer, but I don’t. The only tool that I see everybody use is: every time I want an opposing view or another persona, I just throw another agent at it, right? So I have one agent that’s collecting user input requirements, another that’s converting the requirements into stories, another that’s converting the stories into actual functional and non-functional requirements, and then another, and so on. It just keeps going on and on. But I’m not seeing the end result come out any better than when humans were doing it. So my running hypothesis is this: when humans were doing those same tasks, humans had the ability to change the hypothesis very quickly.


38:16

Ilyas Iyoob
Many times during the process, for example, somebody might say, hey, I think I want to build something like this, and this is what I need. And this is what design thinking sessions were meant to be, right? You go in, you try to see what they’re thinking, what they’re feeling, get to the real root of what is really bothering them and why they really need this, and then take that and convert it into a product. So I think AI has a long way to go before it can get to that point. It can gauge simple products; I think it would do very well there. But when it comes to complex products that require a team of people from different roles to work together, I think it struggles.


39:01

Ilyas Iyoob
And the only thing I see people doing is throwing agents at it, which I don’t think is solving that problem; even when they do that, they don’t end up with anything much better than otherwise. By the way, for those companies who are looking for ideas, for something to solve, I think this is a good place, and I’m not seeing any good tools there yet: being able to gather requirements in an intelligent way, capture the intent of what people are really concerned with, and separate out what they say versus what they really mean. Again, remember, AI can only understand what it can read; it can’t sense it, whereas humans are able to do that.


39:43

Ilyas Iyoob
So I think this is probably one of the fields where you are going to continue to need humans early on in the process. And once a product is built, then, yeah, you can automate the heck out of it, right? You could put in monitors everywhere you want for feedback, automate the development, automate the deployment, and keep going on and on. But I think that initial stage still requires heavy human investment.
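
For reference, the agent chain described a moment ago (requirements to stories to functional specs) is trivially easy to wire up, which is part of why everyone builds it. A hedged sketch, assuming a generic llm_generate callable; note that no stage can revise the original hypothesis the way a design thinking session can, which is exactly the limitation being described.

```python
# Sketch of the "throw another agent at it" pipeline. Each stage only
# sees the previous stage's text, never the user's intent. Role prompts
# and llm_generate are assumptions.

from functools import reduce

STAGES = [
    "Extract the user's requirements from these notes:\n{input}",
    "Convert these requirements into user stories:\n{input}",
    "Convert these stories into functional and non-functional "
    "requirements:\n{input}",
]

def run_pipeline(raw_notes: str, llm_generate) -> str:
    """Chain the agents: the output of each stage feeds the next."""
    return reduce(
        lambda text, stage: llm_generate(stage.format(input=text)),
        STAGES,
        raw_notes,
    )
```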


40:10

Ellie Tehrani
And then another danger often talked about in AI-driven research is bias. What do you think can be done to watch out for known biases in AI-driven research?


40:28

Ilyas Iyoob
I think this was something our director of research would say during my time at IBM: there’s no good bias or bad bias, there’s just bias. And because it’s a function of time, what somebody thinks is not very good at a certain time is all of a sudden fine, and then becomes not so fine again. So trying to chase after the bias is a tough problem; I’m not saying it’s right or wrong, it’s just never-ending. But you can flip it to the other side and say, you know what, let the model build itself from the data that it has, and instead of fixing the model, find data points that address the bias issue.


41:26

Ilyas Iyoob
So you have a certain amount of data, and if the data says this is all that’s happening, that’s all the model is going to say. But you have monitoring put in place, and you have measures to keep the model in check, and then you work on making sure the data has more variety in it, so that it applies across multiple contexts and things like that. That would be the best practice anyway. But in general, I think models have come a long way already; there are already a lot of provisions. I don’t know if you’ve tried, because as researchers we are always pushing things to the limit, just to see what a model can’t do.


42:13
Ilyas Iyoob
And so there were a lot of issues with models early on, and we’re starting to see that many of them have already been taken care of. But I would encourage companies to think of debiasing from the perspective of their business. Sometimes companies will come and sell you tools and say, hey, our models remove bias. If somebody says that, it’s inherently problematic, because what bias did you remove if you don’t know my business or my data? Right. So leverage the existing tools out there, treat bias as an internal issue, and say, first of all, let’s see how much bias we really have. For example, if it’s an HR solution, yes, it probably needs to have a lot of cross-checks for biases.


43:05

Ilyas Iyoob
But if it’s a messaging tool, what are you going to be biased towards? People who send messages at 11pm versus 8am? Nobody cares. So keep putting bias in its place, without overemphasising it; putting in the guardrails should take care of it for the most part. And never expect to outsource your bias avoidance. That’s always something that has to be done in house.
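
A sketch of what “treat bias as an internal issue” can look like in practice: measure outcome rates across groups in your own data before buying a debiasing tool. The column names, the toy data, and the choice of a selection-rate gap as the metric are all assumptions.

```python
# Hedged sketch: a crude demographic-parity check run in house on
# your own data. Columns and threshold are invented for the example.

import pandas as pd

def selection_rate_gap(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Gap between the highest and lowest positive-outcome rates per group;
    domain experts decide whether the gap actually matters."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())

# Toy HR screening data:
df = pd.DataFrame({
    "region": ["NA", "NA", "EU", "EU", "APAC", "APAC"],
    "shortlisted": [1, 1, 0, 1, 0, 0],
})
gap = selection_rate_gap(df, "region", "shortlisted")
print(f"selection-rate gap: {gap:.2f}")  # flag for review above a threshold
```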


43:32

Ellie Tehrani
Right. And also always question where the data originally came from, right? What are these algorithms built on? What sort of data fed the algorithm? Now, in terms of companies building their teams for the future, using both AI and continuing to use humans, the data scientists, the researchers: how do you think they should structure their teams to best utilise AI in understanding consumers while also making the most of their resources?


44:08

Ilyas Iyoob
There was a problem when we were scaling AI teams in the previous iteration, about seven, eight years ago. If you walked into a data or AI conference, half the conversations would be about how to deploy and scale, deploy and scale. You know why that was? Because you had data scientists that were building models, and then they would throw the model over the fence to engineering and say, here you go, have fun with it. And engineering, first of all, wouldn’t understand IPython notebooks, wouldn’t understand Jupyter notebooks. They knew it was in Python, but they didn’t know how to use it, and they didn’t really care. And so there was a big problem of deployment, and machine learning engineers came along as a new role to solve that problem.


45:05

Ilyas Iyoob
But in this cycle, with Gen AI, it’s actually the opposite problem. The problem is not that you’re not deploying; the problem is that you’re deploying way too much. You’re deploying stuff that’s not even baked yet, and you’re throwing stuff out there because it’s easy to deploy, right? Everything is available as an API. Use Gen AI to write garbage code that can script something, throw that out there, and see what happens. So the problem is less about deployment now; the deployment still has to be organised, but the problem is more about what you should build and how you should build it.


45:48

Ilyas Iyoob
Previously, maybe 30 to 40% of a team would be data engineers, the rest would be data scientists, and there would be maybe one DevOps person, poor guy or girl, who had to deploy all of that stuff. But now you probably need fewer of the hardcore AI modellers. You still need them, but they themselves can work much more efficiently; they’re much faster at what they used to do. So you could probably cut down some of your traditional data scientists. And that doesn’t mean you have to hire new ones; you just have to retrain some of them, if they’re not already on it, to be good prompt engineers and to be able to build agents in the right modular way.


46:44

Ilyas Iyoob
Everybody can build agents, but that doesn’t mean you’re building them correctly. They have to be fairly modular so you can reuse them over time. And then, of course, you still have the poor old DevOps person. But the one piece that I suggest teams don’t cut out, or in fact maybe expand, is the data engineers and the data analysts, because those are the people, like I said before, that know your data. They know what’s happening, and for anything that comes out the other end, they’re the only ones who can say if it makes sense or not. I actually know a lot of data scientists who went back to school to learn data pipelining skills, just because they realised they weren’t becoming very good feature engineers.


47:32

Ilyas Iyoob
They couldn’t identify the signal because they didn’t touch and play with the data, and they couldn’t do that because they didn’t have those skills. So over time, having teams that are heavily data engineering, plus a little bit of the rest with enough scripting, and leveraging tools, I don’t know if teams are used to this, but tools like n8n for building agentic workflows, things like that, not starting from scratch. You don’t have to go down to Python libraries anymore; Hugging Face connections are available through these low-code platforms. People being able to rapidly prototype, with guardrails again, would be the way to go. And I think most companies already know this; it’s not very surprising.


48:27

Ilyas Iyoob
The only thing that I would add, net new to these teams, is a small but strong testing team. Because you’re not testing the code anymore; you are testing the output. There’s no problem with the code coming out: it’s syntactically correct, but it’s not functionally correct. And so you need strong testing teams. This goes back to what you were asking me earlier about verification and validation; humans still have to be part of that. So you may start to see a bigger chunk of people towards the end of the pipeline now, who have to validate a lot of things before they go out.
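
To illustrate testing the output rather than the code: property-style checks on an LLM-generated artifact (here, a SQL string) that gate what reaches the human validator. The specific checks and the sample query are invented for the example.

```python
# Sketch of "test the output, not the code": syntactically valid output
# can still be functionally wrong, so check the properties that matter.

import re

def functional_checks(sql: str) -> list[str]:
    """Return a list of failures; empty means the draft passes this gate."""
    failures = []
    if not re.match(r"(?is)^\s*select\b", sql):
        failures.append("must be a read-only SELECT")
    if re.search(r"(?i)\b(delete|drop|update)\b", sql):
        failures.append("contains a mutating statement")
    if "order_date" not in sql:
        failures.append("missing required column order_date")
    return failures

generated_sql = "SELECT order_date, COUNT(*) FROM orders GROUP BY order_date"
problems = functional_checks(generated_sql)
print(problems or "passes functional checks; forward to human validator")
```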


49:14

Ellie Tehrani
Very interesting. And I know we’re running short on time here, but is there any other advice before we wrap up that you would want to give to companies just starting to use AI for their consumer insights or any other topic that we haven’t touched upon today?


49:32

Ilyas Iyoob
Yeah, one of them is: find a good partner to keep yourself up to date. I see a lot of companies fall into the FOMO mindset, right? Oh my gosh, every day there are three new tools and new LLMs, and every time they talk to somebody, they hear, oh, have you heard of this? Have you heard of that? And they’re constantly depressed. And you know what? They shouldn’t be. They should focus on their business, not on the latest LLM from the latest provider. So find a good partner who can take that over from you, who can save you the time, so you can rest assured that they’re handling it for you. And they’re not going to just take anything that comes up.


50:20

Ilyas Iyoob
They’re going to take it, test it, validate it, and make sure that it’s good for you before they even introduce it to you. So this is something I tell a lot of companies: just because you can learn something over the weekend, great, go use it for your science project with your 12-year-old. But please don’t bring that to work. It’s good for your own knowledge, and it’s nice, like we said before, for executives to know what the art of the possible is. But don’t mix that up with enterprise-grade production. That really requires all of these other pieces.


50:58

Ellie Tehrani
Absolutely. Thank you so much, Ilyas for your time today. It’s been an absolute pleasure having you on our podcast and we hope to see you again soon.


51:22

Ilyas Iyoob
Same here.

About Our Guest


Ilyas Iyoob is a distinguished technology leader and AI expert with over 20 years of experience in research, analytics, and artificial intelligence. As the Global Head of Research at Kyndryl, he leads a team of senior scientists developing Generative AI solutions with a strong focus on ethics. He is also a faculty member at The University of Texas at Austin, where he teaches Data Science and Artificial Intelligence. As a venture capitalist, he advises multiple technology companies, bringing his expertise in AI implementation and product development. His unique perspective combines academic rigour with practical business applications, making him a valuable voice in the evolving landscape of AI technology.