Episode 3 – Preventing Human Bottlenecks
Chapters
00:00 Introduction
01:00 The Impact of Human Bottlenecks on AI Implementation
03:23 Signs of Human Bottlenecks in Organizations
06:16 Capacity Challenges and AI Implementation
15:44 Resistance Challenges and AI Implementation
29:25 Building a Culture of Optimisation for AI Integration
35:11 Conclusion
Welcome, everyone, to the Humans and AI in the Workplace podcast. Over the last few years, it's become clear that artificial intelligence (AI) is one of the most impactful and disruptive transformations in the workplace. As a leader, you may be wondering how to get started, and how to do it in an intelligent way. Or you may be stuck on how to overcome some of the people issues and human bottlenecks your AI has crashed into.
We are here today with Dr. Debra Panipucci and Leisa Hart from AI Adaptive to discuss today's topic on how leaders can address the human bottlenecks that cause problems for AI.
Thank you for joining us. I'm Leisa Hart. And I'm Dr. Debra Panipucci. And in this episode, we're going to share our insights about how human bottlenecks hold back AI in businesses. That's going to be a great topic.
Many leaders that we meet are under the impression that the people side isn't something they need to put money and time behind. It's just something that will happen; they don't need to invest in it. What they really need to invest in is the technology, the infrastructure, the platforms, the data centre. But in reality, the human bottlenecks make the implementation of AI take much longer than it should, and can sometimes result in the actual failure of the AI project. If you're looking to bring AI into your workplace, there are some human bottlenecks that you can start removing beforehand so that they don't stall your progress.

So I guess we should start with: what is a human bottleneck? This is a term we use ourselves in our business, but many people out there listening would be going, I've never heard of that before. It's that bottleneck in your organisation where things crash, slow down or stall.
And when it comes to humans and AI working together, it's from the human side: all those people issues and complaints and resistance that come out. For us, when it comes to AI, these are the types of bottlenecks that your AI is going to crash into. So you don't have to wait till this is a problem. There are many bottlenecks that you can pre-emptively look at in your business, and that's what we'll cover in the episode today, so that you're in front of what this might mean for your people and your organisation, and the efficacy of what you're trying to do with AI in your business, so that you don't waste that time and energy and effort and enthusiasm.

And Leisa, I think we should talk for a minute for our listeners about how they even know they have human bottlenecks. You know, what are those signs and signals in their organisation that signify
or symbolise that they have a challenge they need to start looking at. So I guess, just kicking one off the top of my head, the first one that I see quite a lot is duplication and rework. You have same-but-different data across teams. For example, we worked with an organisation where the finance and the customer service teams were measuring the same metric, but it came out with different answers all the time. It didn't really bother anybody until you're looking at something like AI, where you want one answer, one source of truth. And then you need to really delve down into why we are getting different numbers. Often it's because it's from a different source, and the different sources exclude certain factors. If you're looking at employee numbers, one team might be gathering information from payroll, which only includes permanent employees. Another team might have a spreadsheet where they're actually tracking who's physically present that day, which can include contractors and casual and non-permanent staff.

What about surprises at the leadership level? That's a good one. Yeah, we unfortunately see this a little bit in organisations, and it's when things are not dealt with proactively and all of a sudden they just appear as a surprise. Unfortunately they generally escalate into a crisis within that project. It's often referred to as rosy reporting, or sometimes called a watermelon: in the reports, everything looks green and great on the outside, but when you really dig into it, it's quite red on the inside. And the challenge with those is that quite often they don't need to get to that point, if they're looked at, things are addressed, and people have the skills to have those crucial conversations early.
Or to have mechanisms to feel safe to say, hey, look, I don't think this is right. What can we do? If you don't have those systems and processes in place for people to address it, then that's quite often why these things get to a point where they escalate, pop out of nowhere, and become a surprise for leaders. And that's going to become more and more important as businesses bring AI into their organisation. Because the more you experiment with AI and other intelligent technologies in your business, the more you need to quickly understand how it's tracking in order to pivot, or change the use case, or look at the ethical considerations of what's being presented, or where the data is coming from and whether that's the right data. Is that what we're expecting to see? So you need to really up the pace on that feedback loop, and act as quickly and proactively as you can on some of those things, as well as addressing them as they arise. And if you have rosy reporting, where people are watering down information as it gets higher and higher in the organisation, then you're less likely to hear about some of those issues with your machine learning algorithms.

What about decision-making, Deb? That's one of your hot favourite topics.
Yeah, definitely. Slow decision-making, and wasted energy and resources spent on workarounds and workshopping approvals across the business, are also human bottlenecks. These are areas that can slow down your AI and cause it to stall. So those are some of the signs that there are problems, and some of the symptoms of what might occur in businesses.
So if you're a leader listening to this right now, you might be saying, that feels very familiar, or I've seen that before, I know that will happen in my team or in my business. So then let's get into what those human bottlenecks are that are causing this. Yes, yeah. The first one is the one we see the most, which is capacity challenges. Organisations are doing a lot, and they're doing it at pace. So capacity challenges in businesses are a huge issue that we see, and one that will be accentuated through the introduction of AI and intelligent technology into businesses. And you're probably thinking, yeah, everyone's busy. How's this going to be a problem for AI? Yeah. So specifically, capacity challenges around AI for leaders will be that sense of overload and overwhelm.
People are overloaded generally with the volume of information, how quickly it's delivered, and how to navigate sources of truth. That's right, Leisa, and leaders need to have capacity as well to address the fear of AI. Yes, because one of the big opportunities in addressing that fear is building the digital literacy of the business, as well as leaders themselves understanding what the technologies are
and what the language is. Our first way of learning anything new comes from a base of language: understanding the terms, what people are saying, and the meaning of different phrases, words and acronyms. So to be able to have some level of influence on the capacity, there needs to be really clear communication, based on an understanding of what that technology is, and that absolutely needs to come from leaders: role modelling and sharing. Yeah, that's right.

And leaders today can't hide from the fact that they need to be really strong change leaders. They need to be aligned as a group and on the same page in terms of what the business is trying to achieve with its AI, because far too often we see that capacity human bottleneck appear in the form of leaders just not having the capacity to really get clear and aligned on their communication or their expectations. And so there becomes this miscommunication across the business, because people are hearing different things from different leaders, or they have mismatched expectations, depending on which team they're in, about what the AI will or won't do. And I think that generates some of that fear as well, in terms of, well, it's going to take over my job. Actually, the machine learning algorithm is only programmed to do a particular task. And sometimes people don't understand the decisions that are being made around the AI, because the leaders, or they themselves, don't have the capacity, the time and the attention to sit in all the meetings where all the decisions are being made.

A big issue that we've seen around capacity too is that key person dependency. Yeah, we see that a lot. You know, a lot of pressure on one or two people to represent a whole department. A whole knowledge source. Yes, yes. So that puts a lot of pressure on that particular person. Quite often those people like that, because it's an acknowledgement of their expertise and sometimes their tenure. But it can also work against an organisation, specifically in the context of AI, because you need the knowledge to be shared across the business at a level of depth. You don't need it hoarded or held specifically in one area. So it needs to be uplifted and shared more broadly. You have to acknowledge that sometimes, initially, those people might feel even more overloaded, but the long game is that it's better for everybody, including them and their capacity.
Yes, yes. And we know that you're probably out there listening to this and thinking, yeah, this is great, we all know this, we've had these challenges for a long time, but what do you actually do about it? The first thing we often do is really look at that knowledge management. You need that single source of truth, that knowledge shared across the organisation, so you don't crash into that bottleneck in particular, because data and knowledge are essential for AI to work effectively.
And we know that there just aren't enough subject matter experts in your organisation. So the first thing we try to do, hopefully before you put the AI in, but if you've already started putting AI into your organisation, getting in and starting to address that challenge is one of the most important things you can do. It'd be great if we had a phone-in episode where people could ring us and share their challenges and insights around this particular topic, because it is a hot topic in a lot of organisations: there's pressure on productivity and resourcing. There always has been. So to address that subject matter expert capacity, we start by spreading that knowledge around. Find somebody in the organisation who can shadow them: sit with them, follow them around, join them on projects, and just start to soak up some of that knowledge. Consider, at least, paying them a bonus to train other people in their knowledge. A training bonus, in a sense, because even just that little bit of incentive to spread some of their knowledge around can make a huge difference.
It will definitely be cheaper than replacing that person or dealing with the impacts of that person hoarding their knowledge, guaranteed.
And then, you know, you might have that leadership capacity problem. Sometimes they turn up, but they don't have the capacity to absorb or to participate. They're physically there, but they might be on their laptop, or checking their emails on their phone, or just not able to truly be involved and participate. We see this quite a bit: as we get closer to go-live type scenarios, that's when they truly start to pay attention. It's the nature of human beings. And then some things might come up that would have been helpful to have earlier, just because they didn't have the capacity to truly participate when you needed them to. So you really need to look at whether people are not just physically present, but actively engaged. Are they thinking through what's required of them? What do they need to contribute to what's happening? Otherwise, that's a human bottleneck that you will crash into at a later point, because that's still about their capacity, if you haven't proactively dealt with providing them the capacity to participate.

What you can do is build a really good change strategy, which maps out how you are going to approach and target the people and groups that you need participating in some shape or form. And you need to do it in a way that fits into their schedule. Often we see projects that have their own schedule, and they try to get leaders and employees across the business to fit into the project schedule. We actually advocate for the opposite: instead of getting them to bend to you, you need to bend to them. You need to fit into their schedule, to make it easier for them to participate, to access the information, and to know what decisions are being made. It's not about it being easy for you; it's about it being easy for them. And if they understand why this is being prioritised, the why of introducing that particular technology, what the use cases are, and what the hoped-for outcomes are, they're more likely to bend to you as well. Because if you can capture hearts and minds early at that leadership level, with the story and the narrative and the really important why, they're more likely to move, and move in a direction that gives them some capacity, as well as you bending to them, as you just said, Deb.

Yeah, this kind of leans into our second human bottleneck, which is resistance challenges. Because many people out there are probably listening to this and going, oh, that's just resistance, we get that in the organisation. But often it's not; people aren't actually resisting.
I really struggle with that word being used consistently to describe a whole range of behaviours around change and change adoption. Quite often it's not resistance, it's just capacity. People don't have the capacity to be a part of it, and they haven't made that decision in their brain to resist. But in some instances they do make that decision, and that is our second category of human bottleneck.

Yeah, so that active pushback. We see it in the context of fault-finding and that perfectionism: all of a sudden, if it's not perfect first go, I'm not going to support that technology or its outputs. They don't necessarily see it as something that augments their work; they see it as a threat. There's a whole mindset behind that. I've been working with an organisation that has this challenge in their culture, where they say, we mark each other's homework. So they go to meetings and there is this fault-finding tendency of trying to pick out all the errors and flaws in what each other does. And this is that active pushback, that resistance challenge, that is going to be a problem for your AI program as you try to put AI in. You want people to be helping you to solve the issues, to make it better, and to get that algorithm and the training data right, not trying to find all the reasons why it's not going to work and why the old way is better than anything new technology could bring. People definitely need to see that it's in service of them and their work, or that their customers get a better experience.
So you have to do the groundwork to get them into that right mindset and to feel like that's the case. The other is that passive avoidance. Yeah, that's a big one. It's sort of underground: well, yeah, I get what that's doing, and I'm still going to just have my own little Excel spreadsheet running on the side here. I might have downloaded that to my desktop so it's not on the SharePoint, the enterprise solution. That's that real passive, I'm-not-yet-fully-bought-into-this stance: I'm not going to be vocally resistant, but I'll resist in a much more subtle and passive way. Sometimes people get stuck in that, and sometimes they change over time, based on their level of confidence that their role in the future will be something better as a result of this technology coming into the workplace.
That's right. And another reason why these types of resistance come out is when there's a history of poorly managed change. You know, far too often when we're talking to an organisation, they say, yeah, we need help with change because we don't do it well here. In this day and age, you should be doing it well. It's not rocket science, right? It's just about, you know, what you always say, Leisa: go slow to go fast. So get in there, do some thoughtful planning and thoughtful assessment to really work out how to make this change roll out in a way that's going to be easy and successful for people. And like you were saying earlier, Leisa, you really need to have a clear reason why you're doing this. Because far too often we are still seeing organisations trying to put AI in as part of that fad status. There's so much media attention around AI at the moment that organisations are looking at this like, oh, we should just bring in the AI platforms and let our people go nuts with it. They can just experiment in this sandbox and do whatever they want. Or, we heard of a director requesting at a board meeting, can't we just bring ChatGPT into our business and effectively get rid of our call centre?
Gartner has this great research on what they call the hype cycle, and they have one for each particular technology. There's a generative AI hype cycle, and basically it's triggered by the release of a new technology. We would all have felt that hype with ChatGPT. What happens is it creates this really steep slope up to what's called the peak of inflated expectations. It's the point where people expect this technology to be all things to all people and solve all the wicked problems in a business. You know, it's the classic: this will be a silver bullet, this will deliver so much value at so little cost and with so little effort from us. And that's inevitably followed by a big dose of reality once things start to be introduced into a business, if it's not thoughtfully planned for in advance. That's called the trough of disillusionment, and it's where reality really sets in: okay, this is much harder, it's not as easy to implement within our architecture as we thought. Or, most likely, what we see is that this is where it crashes into those human bottlenecks, because there hasn't been that thoughtful planning; it was all the excitement and those inflated expectations that led up to it. And on the people side, that's the really good point to intervene, to avoid the fall into that trough of disillusionment.

What also happens, according to Gartner's research, is that in that trough two things occur. Companies either give up, and what we sometimes see in organisations is just the messaging of, we're pausing for now, with no definite plans for making any progress. Or they learn how to harness it, how to iterate and choose the right use cases, and proactively plan for engaging their people. And that's then called the slope of enlightenment. So that's the slow learning of what the realities of this intelligent technology actually are. What does it mean for our data? What does it mean for how our people need to interact differently? What mindset do we need from them?
And then over time it hits the plateau of productivity, where you get the productivity or growth outcomes that you need. I'd encourage you to look at that Gartner hype cycle, because it does start to help manage expectations, and quickly, as a leader, you can look at it and go, wow, this is where we are as an organisation. You might be in the trough and at that point of making a decision: do we stop, or do we re-evaluate, get more realistic about our pathway out, and choose different use cases or a better way for our people to engage with the technology and with each other?

That's right. And also make sure that you're really clear on the reasons why you're even embarking on bringing AI into your organisation and trying to integrate it into that human and AI team, and that it's not just because it's a fad in the market and you don't want to be left behind. We see this as another great opportunity for leaders to step into this part of their role, which is absolutely fundamental, not just for introducing intelligent technology into their business, but for any kind of change. If you've got leaders who have the ability and skills to lead themselves and others through change effectively, who can clearly articulate the why, engage strongly, create safety for people to raise concerns, and be really clear about the pathway, what's going to happen as much as you know it, when things will be updated, and what the consequences are for not changing at both an organisational and an individual level, that's where we see leaders with those skills as a core foundation. They'll always thrive, particularly in the context of intelligent technology, because they'll be looking at those bottlenecks early and going, okay, how can we proactively address those?

And Leisa, we've talked now about human bottlenecks that are driven by capacity challenges and resistance challenges. But by far the biggest, and hardest to shift, is the human bottleneck related to culture challenges. Yes, culture.
You know, that notion of how we work here in an organisation. It's the shared way that we deliver work. It's the way that we agree we'll communicate with each other, we'll engage, we'll support each other. Minimum standards. It's all those things that chunk down into habits of individuals that are the same across the business. So culture is one of those things that's definitely hard to change.
But it doesn't have to be, if you have a really clear strategy and pathway, take the time, and are really targeted about where you put the effort, in order to help people shift the right habits at the right time, collectively. And culture can either be an enabler for your AI program or a massive bottleneck. The first one that I want to raise is silos. If you have silos in your organisation,
your AI program will crash into them and stall, and possibly even fail. Yes, because now more than ever, you need teams to be working across departments, across teams. Across a single source of truth. Yes, and you need them to be sharing information and sharing their concerns, but doing it collaboratively.
And silos in organisations are often characterised by protectionism of one's own team. You often see duplication of data across different teams, or duplication of metrics that give different results. You'll also see different leadership priorities, different agendas coming out of different teams. So there's more of that distrust between teams, and things break down between teams.
Sometimes we hear complaints of things being thrown at other teams, or thrown over the fence, without proper consultation and engagement. And, you know, coming from a bit of an HR background and a real focus on engagement surveys, we often see indicators in these surveys that employees are really favourable towards their own team. Their team is great, their manager is great.
But other teams in the business? Oh, there's so much work for them to improve. You know, they're really critical of other teams in the organisation.

One of the other human bottlenecks we see in relation to culture challenges is that lack of accountability. It's quite often where you'll see employees effectively sitting on the sidelines and throwing in their opinions and advice, but not necessarily willing to get in, have skin in the game themselves, and be part of the solution. Just wanting to poke holes in it points a little to perfectionism as well, and to that fixed mindset. Something for leaders to watch out for here is the behaviours that you're rewarding. Often we see individuals who hold back their knowledge, or throw opinions on the table without really leaning into their responsibility to help avoid problems, because they get a sense of reward and recognition from being the hero, from coming in at the end to solve problems without taking on any responsibility or being linked to the problem itself. If they stepped in and helped before it became a real problem or crisis, then things would be a lot smoother and easier for your AI. Yes, you're effectively trying to avoid the I-told-you-so moment. When you do engage those people early and give them responsibility and accountability for helping design and being involved from the get-go, sometimes that requires work to help that person integrate into a project team, if they've got a reputation of being the person who throws rocks at the bus rather than getting on the bus with everybody else. But if you can make that happen, you actually get the best outcome for everybody.

So what is the best culture to have for your human and AI integration? We advocate for a culture of what we call the optimisation mindset,
and it's the opposite of perfectionism. Far too often, organisations have built up this culture of perfectionism, where they want their employees to do amazing work with no mistakes, and people pride themselves on doing reviews and getting rid of all the errors and flaws before they send something to their boss, so that their boss sees something developed that is perfect and can be used straight away.
Whereas when you're bringing AI into the organisation, you really need more of an optimisation mindset, where you're building something based on a set of training data, and over time, as it's being used, that's when it gets better. The humans are there optimising it, governing it, making sure that it's doing what it needs to do, but it's learning and developing and evolving over time.
So it's that notion of: we are always in a state of optimising and getting things better, and we are never in a state of perfection. So what to do about it? The first thing you need to do is really work out what you're tackling. Be really clear about what those human bottlenecks in your business are. So if you're thinking about introducing AI into your business, get really clear about why it's coming in and for what purpose.
But also, what does it mean? What will you need to address in preparation for, or during, that particular change coming into your organisation? Then start to build the interventions that will shift those behaviours, and the habits of that culture, over time.
I'm not going to lie, it does take time, but it is doable, with thoughtful planning and being really clear and mindful of the things we've already talked about. Far too often we see reports and surveys come back saying that culture and resistance are the biggest challenges for AI. So you can start today, as a leader, with the things that are in your control.
So look at your leadership shadow. What's your knowledge of AI? What's your AI fluency? We've talked about that a lot in a previous episode. How are you making sure that the language you're using isn't introducing more and more fear into your organisation, and that you're sparking curiosity and starting to build that understanding and knowledge across the business? That's all within your control right now.

It's been a big episode. There's so much
in this conversation that we've had today, Leisa. You know, I think if we sum it up for everybody: there are human bottlenecks that your AI can crash into in your organisation. But the good news is you can start working on removing them. You don't have to wait until they've stalled, or even stopped, the progress of your AI program.
And we categorise these human bottlenecks into three: capacity challenges, resistance challenges and culture challenges. Some of the things that you'll see in relation to these are key person dependencies, or subject matter expert problems, where you've got a limited number of people, or their knowledge is stuck in their heads and not spread across the organisation.
You might see leadership and employee capacity problems, where they just don't have the time, or even the interest, to pay attention, because it's not a priority for them, or they may not be able to get involved. Or you might have emotional baggage from previous changes that were poorly managed. You could have a real lack of understanding as to why you're even bringing AI into the organisation, or mismatched expectations, or, like you were saying, Leisa, that hype cycle of people having the wrong impressions of it and then that crash at the end. You might have silos that your AI will crash into, or that lack of accountability, or that perfectionistic culture. You know, your AI is going to crash into all of these, and it's going to make integrating that human and AI element into a strong team so much harder.
So if you recognise any of these in your business, you can start addressing them today, so that AI lands in your business in a way where you really get the benefits, and it doesn't harm your people or your culture.
Our experience is that if you set people up for success, engage them early, keep them informed, give them opportunities to contribute, help them understand how to be successful in the future state, and make it really clear what support is available, but also what the consequences are and where the accountabilities lie for not changing, then generally people do the right thing. This change of intelligent technology just makes all of that way more important, because of the impact of AI and intelligent technologies on workplaces, on roles, on tasks. So it heightens all of those important steps to take to set people up for success and to make sure that you're addressing the bottlenecks proactively. There are also more resources on our website for leaders who are a bit more curious and want to take further action beyond what we've suggested here. So thanks for listening. Thank you.
Humans and AI in the Workplace is brought to you by AI Adaptive. Thank you so much for listening today. You can help us continue to supercharge workplaces with AI by subscribing and sharing this podcast and joining us on LinkedIn.