Chapters
00:00
Introduction to AI in the Workplace
03:07
Human Bottlenecks and Learning Curve of AI Tools
06:22
Governance and Security Considerations
08:38
Cultural Shifts for Successful Adoption
13:09
Privacy and HR Implications of AI Tools
18:05
Resistance and Willingness to Experiment
20:09
Training and Democratization of Learning
26:00
Different Applications and Use Cases of AI Tools
30:47
Challenges of Data Storage and Collaboration
37:20
Timeframe and Investment for Adoption
41:38
Risk Framework and Mitigation
45:12
Wins, Opportunities, and Watch Outs for AI in the Workplace
Welcome everyone to the Humans and AI in the Workplace podcast. Over the last few years, it's become clear that artificial intelligence, AI, is one of the most impactful and disruptive transformations in the workplace. As a leader, you may be wondering how to get started and how to do it in an intelligent way. Or you may be stuck on how to overcome some of the people issues and human bottlenecks your AI initiative has crashed into. We are here with Dr. Deborah Panipucci and Leisa Hart and a special guest, Matthew Shadwick, program manager, to talk about the opportunities and challenges when introducing Gen AI tools like Microsoft Copilot.
Microsoft Copilot is described on the Microsoft website as supercharging productivity, amplifying creativity and providing trusted security. But it's important to recognise that without a thoughtful plan for sustainable adoption, your organisation will crash into human bottlenecks and will struggle to take advantage of what that collective intelligence of human and technology can deliver in the workplace.
For those that don't know what Copilot is, it's an AI assistant that combines large language model technology with the organisation's data. You can ask it to do a whole bunch of things, like summarise notes from team meetings, draw up actions, or draft regular everyday emails. It's now available within your Microsoft applications such as Word, PowerPoint, Excel, Outlook and Teams. And according to Microsoft's own survey results from November last year, early adopters were reporting productivity gains of up to 70% among some of the early Copilot users. So today we're delighted to have a chat with Matthew Shadwick, who will share a program manager's people change insights for introducing Microsoft Copilot. So a little bit about Matthew. Matthew is a highly skilled program manager and product owner who's been at the forefront of technology delivery for the last 20 years. Is that fair, Matt?
Yeah, it's pretty fair. And Deb and I had the pleasure of working with Matthew for about five years when we worked at UniSuper. Matthew, one of the things we always enjoyed about working with you was your ability to understand the importance of the people side of whatever was being implemented, engaging people early and managing change proactively, and you were one of our early participants in the change leader internship that we had running at the time. Matthew, it's lovely to talk to you today. It's great to see you both too. We're excited to hear some of the learnings that you've seen in implementing Copilot into a workplace. Let's start with a quick question. You know, we often talk about the human bottlenecks that AI can crash into and the limitations they can place on getting a return on the investment that you're making. And we know that you've recently rolled out Microsoft Copilot in an organisation. So we would love to know, when you witnessed some of those human bottlenecks, what did they look like? With M365 Copilot, or any generative AI tooling at the moment, it's a big learning curve for end users. Ultimately we're teaching people new skills, basic things like how to write a prompt. You know, we've got a workforce who are quite skilled around things like Google searches, keyword searches, that kind of thing. With generative AI, people have to rethink what they're trying to achieve with the tooling and how to prompt the system to get to the end result they're anticipating. So it's learning a new skill. Ultimately, the starting point with the workforce is getting people on board with that new skill. So the bottleneck is essentially right there.
So you're talking about getting people to move away from going to a web browser like Google for an answer, or even a large language model like ChatGPT or one of the others, to using Copilot as their preferred source of help and support, but to do that in a much more sophisticated way. Can I add to that? Because it sounds also like when you put something into Google to search, you just put words in there and it comes back with a whole list of options and you pick the one that is most relevant for you. But when you're prompting Copilot or a large language model, you put words into it and it creates something or summarises it for you. So you don't actually get a whole range of options, do you? Correct. If you think of a Google search, sometimes less is more. The more words you put into it, the more it can constrain the output, to the point where you might struggle to actually get to the information you're searching for or the outcome you're trying to achieve. Ultimately, with a large language model in play, you have to actually think about the objective that you're trying to achieve and prompt the large language model with that outcome in mind. So you're being more prescriptive in the way you talk to it. You have to think of it like an intern or a graduate that you've got sitting next to you that you're instructing to do something. You have to be very prescriptive about that, based on what you're trying to achieve. Otherwise the result could be quite varied, from not helpful at all to not getting the right result.
The whole concept of prompt engineering as a discipline is something that has, I suppose, come up with generative AI tools, and it's where a lot of the investment of time goes: retraining the workforce away from how they deal with a keyword search engine versus a large language model. Think of it as being like ChatGPT, but embedded in the Microsoft Office suite; that's essentially what it is, to echo your introduction. And it's also got some access to corporate data, which helps ground what it can actually do. So it can be a bit more contextual as well, based on where you sit in the organisation and what data you've actually got access to. So you had to do quite a bit of work around that in preparation, didn't you, in terms of putting fences around particular kinds of data that are sensitive in organisations?
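The shift Matthew describes, from keyword search to objective-first prompting, can be made concrete. A minimal sketch in Python (the function and field names here are illustrative scaffolding, not part of Copilot or any vendor API), showing the "brief it like an intern" structure: who the assistant should act as, what grounding it has, what to do, and what shape the answer should take.

```python
def build_prompt(role: str, context: str, task: str, output_format: str) -> str:
    """Assemble an objective-first prompt: role, grounding context,
    the task itself, and the expected shape of the response."""
    return (
        f"You are {role}.\n"
        f"Context: {context}\n"
        f"Task: {task}\n"
        f"Respond as: {output_format}"
    )

# A keyword-style query, as you might type into Google, gives the
# model nothing to aim at:
vague = "meeting notes summary actions"

# An objective-first prompt is prescriptive, like briefing an intern:
prescriptive = build_prompt(
    role="a project coordinator",
    context="the transcript of today's 30-minute steering committee meeting",
    task="summarise the decisions made and list each action with an owner",
    output_format="a short summary paragraph followed by a bulleted action list",
)
print(prescriptive)
```

The point isn't the code itself but the habit it encodes: the outcome is visualised upfront and stated explicitly, rather than left for the model to guess from keywords.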
Yeah. I suppose the thing is that the technology is new, so there is a risk associated with adopting it. Even if you're speaking to the likes of Microsoft or the vendors, this is still essentially software under rapid development at the moment. So governance around adoption matters at the starting point, to make sure you're doing things safely and not getting ahead of yourself until you're really comfortable with the technology and what it can and can't do, what it's good at and what it's not so good at. That all needs to be given some consideration. But at the same point, you want people to be able to experiment. You want them to do it freely, but also to be considered when they're reviewing the results, so they're not just taking things verbatim. You have to be very considered about the response of the large language model: are you looking at valid output, or are you experiencing hallucination in the result set? So guardrails, absolutely. But it needs to be done in a way that doesn't stifle people innovating with it as well, because the whole thing of getting people to learn this stuff is getting them to experiment. They just need to be aware of the limitations and the risk of what they do with the resultant information coming out of it. That's one of the things that actually came up in a previous episode when we were talking to Kylie: that whole notion that, now more than ever, critical thinking as a skill has to be something that people are encouraged to develop in the context of this new technology. So making sure that what it is delivering is actually valid and correct, to a point. And that's one of the key skills that people need moving forward.
The governance topic that you raised as well, I'm super curious to know, is that a challenging conversation to have? To get people to understand what we call the go slow to go fast later, but to be really thoughtful about,
okay, how do we set this up from a guardrails perspective? With those conversations, it sounds like guardrails and skills, because you're saying that it takes quite a bit of work to retrain the way people think about these types of things to get it set up for success going forward. Which all says to me, it's not just about efficiencies and cutting costs and productivity straight out. There's actually a slow piece that needs to be done first.
The key thing to start off with, I think, is that development of skill and, to your point, Leisa, critical thinking around what you're actually seeing in the resulting data that you get back from using something like Copilot or any other Gen AI tool, so that...
I think there are some good case studies at the moment, some quite prominent in Australia, but also some I've seen from overseas, where people have accepted the results of generative AI and got themselves into some serious trouble from the perspective of not being critical: what is it telling me? Is this actually valid from a business point of view? So I think having that risk hat on is really key. And from an organisational perspective, making sure that you've got all levels of the organisation on board to consider the adoption, and how to do it safely, is really important. Yeah, it's not something that can be accelerated. I know from the experience of having gone through an initial rollout of this, the initial thinking from executives and the like is benefits, and how do we make things go faster or get efficiencies in various teams, which is all good. It's a good reason to be exploring the technology. But at the same point, I think the biggest learning for me is that it's not something that can be pushed too far. You need to accept the fact that you're bringing a workforce along a journey of doing something actually very, very new, and there are going to be mistakes made. People need to be critical about the way they think about what they're trying to achieve with the tooling, and the organisation needs to be equipped for when things possibly go a bit wrong. We were talking the other day about, if you do have AI transcribing all of your meetings and that all goes into a database that people can search, whether I could go in there as an employee and start searching for all the instances where somebody's mentioned my name, whether they were aware that they were being recorded or not. And some of the HR implications for the kinds of things that could come out of that kind of search and discovery. So I think certainly on that particular topic...
It's education for people. So if you look at M365 Copilot, one of its core uses is that it integrates with Microsoft Teams. You can get it to record and transcribe a meeting that you're in. So everything is there in black and white, and there's a certain level of etiquette and training that needs to be instilled in people, that things are being persisted. Some organisations have gone down the path of blocking meeting transcription out of fear of, I suppose, what you're talking about, Deb. I think a more sensible way of going is that the risk gets considered and people get educated on the use of the tooling in those sorts of use cases, to do it responsibly. And certainly the technology will mature. It's also about policies from the organisation: if you're using it, when is it appropriate and when is it not? What is the retention policy around that sort of data as well? Because you're generating more data out of the utilisation of this sort of technology. That's something that needs to be thought about too, because I know in a lot of the organisations that we've worked in, meetings aren't necessarily the most productive part of the organisation. They're often full of lots of talking and not a lot of action. So, you know, I imagine there are some leaders out there who are probably rubbing their hands together going, great, we can now assess the productivity of every meeting in our workplace, because we've now got it all transcribed and we've got AI that can compare meetings, and we can hold people to account on whether these meetings are worthwhile or not. And that could have some major implications too. For me, it's another example of how there are so many benefits of various forms of intelligent technologies in organisations, and also how it will put more pressure on your human-to-human relationships.
That's another use case example. All of a sudden, if someone's late to a meeting and the others are talking unfavourably about them, that person can rock up and go, hey, pop into Copilot: I was late to the meeting, was my name mentioned and what did they say about me? And there it all is; there could be a whole raft of things that have been said. If you have a culture where people don't talk favourably about each other, or they don't speak about a person as though that person is in the room, then this type of technology is just going to blow that up and make it transparent, to a degree. So again, that's that pressure on human-to-human relationships, because the technology has gone, hey, here, without any emotion, is what they said about you. And so I think for me, this is another example of why getting the foundations right is really important. And what I have enjoyed about our conversations with you, Matt, about your experience with M365 Copilot is that you did the work to think about the human impact of this technology. I know you understand technology and you're really good at it, but you're also very thoughtful and very good at engaging stakeholders. So you're straddling both those worlds. And still there were things that you were learning as you went, because it's always going to be context-specific in an environment. But it highlighted for us again the importance of doing the go-slow thinking upfront about the governance and the guardrails. And for us, it's what's in your culture now that will enable this technology, and what's going to be disrupted by this technology, or disrupt the adoption of the technology, and those human bottlenecks. So I think the privacy one's a really big one as well, from the perspective of surveillance, people feeling like they're being watched and monitored. Did you come across that in your experience? Look, not so much.
I think if you look at the use case of meetings, one call-out is that the ability within Teams or other meeting platforms to record or transcribe a meeting has been around for quite some time. It's not necessarily a Copilot thing. What Copilot now allows you to do is use the large language model to scan the transcript and pull out pertinent information. So it sort of amplifies it, because that's quite powerful. Like, I don't know many people, certainly not in the work I do as a program manager or as somebody who's in a lot of meetings...
The ability to have a robot go and pull out your meeting minutes is quite a powerful thing. It saves a lot of time. Also, used correctly, it can actually change the way meetings run: people can be far more fluid in a meeting, not having to think so much about capturing every last bit of information or actions, because that can be done at the end. But I suppose the key thing there is that you're educating people around the etiquette of a meeting and the fact that the tool is actually doing that work. It's an educational piece as much as anything else. I suppose organisational culture is also an important factor: the technology being used responsibly, in a way that's considerate of the people in meetings. So it's not used to spy on people and...
Yeah, look, using it to police productivity is probably not a great idea. I'd be questioning the organisation's culture before anything else if that was such a concern, because the technology is so powerful. In the context of meetings, you can actually get to a better outcome and make the time far more efficient with the use of the technology. And I think that's where organisations should be focused on exploring, not using it as a means to spy on people.
Tell us about your experience with resistance when you were rolling out Copilot. We had very little resistance. I think, generally speaking, with AI technology being very much in the media and very topical, people for the most part are accepting of the fact that this is the future, and that people need to retrain to utilise the tech. I think there's a general sense of that. In a couple of instances it was more about people's time: a human bottleneck around capacity, where they just didn't have the capacity to take part. I think as the project progressed, what became more apparent from a resistance perspective, and I think this is more about some users, was their willingness to experiment
and be accepting of the fact that the technology, one, isn't perfect and is still under rapid development. And also, at the same point, as an end user there's a lot of time investment to actually skill up: actually thinking about what problem I'm trying to solve and where the generative AI tool fits in that. So I found with some users, they weren't quite so willing to experiment. They just wanted to be told, ultimately, how to use it. Yeah, so what I'm imagining as you say that, Matthew, is that you've got a number of different roles and people doing different things, and sometimes people need help to contextualise how the tool can support their particular role and how they're delivering work. Is that a fair description? That is a fair description. I think, going back to the point, the adoption of generative AI is all prompt engineering at the end of the day: learning how to get the system to achieve an outcome that you're having to visualise upfront.
That's actually quite a leap for most people. I know for myself, when I first started playing with things like ChatGPT and started throwing my first prompts in there, I was so far off being able to have reasonable skill in getting the tool to do what I wanted it to do. It takes a bit of time, and being able to succeed and fail and keep at it. Some people are so busy with their day-to-day jobs that finding the time to invest is hard. Sometimes it's also time you're not spending at work: if you've got some time at home, you might play around with it, use it for things that you're topically interested in, just to get that creative process going. Because it is very much a creative process using the tools, along with the general acceptance that you're not always going to be successful with it and you need to keep chipping away to improve the way you prompt.
So that takes a lot of time, and you get to a point where you get better and better at it. And then suddenly that's where you start being more productive with it; that's where you see those productivity gains. They certainly don't happen upfront; I think it's a bit of a fluke if people manage it straight up. And did you notice any differences in willingness to get in and have a play across different generations working in the workforce? Did you notice
anything that was typical of one generation or another? I don't want to pigeonhole people, but we're also mindful of the fact that we're at a point in time where we have four, sometimes five, generations in the workplace, all of which have had a different desire to play with technology in different ways. So we're kind of curious as to what you're seeing versus what we're seeing.
Yeah, it's quite an interesting one. I wouldn't say the youngest generations in the workforce are necessarily the ones who pick it up the quickest. That certainly wasn't my observation. I think people who've gone through iterations of change, who've seen the adoption of different technologies in the workforce over time, were a bit more willing to experiment and accepting of the fact that the technology is not perfect. People who've grown up in, I suppose, the smartphone generation were maybe not necessarily the best adopters early on, because they expected it to be perfect first up. And certainly with the generative AI tools, certainly something like Microsoft Copilot or even ChatGPT, they keep saying their products are under rapid development. So you have to be accepting of the fact that they're not perfect, and this is a very long journey. Yeah. I just think there's so much in thinking about what you can provide people to give them options, so they feel like they can upskill and opt in and experiment in different ways that get them more comfortable with the technology over time, rather than frustrated when it doesn't go completely the way they want in the first instance. One of the things I was just reflecting on, Matt, with you saying before about the difference between Google search and these generative AI tools: when I put words into a Google search, it doesn't matter how many times I do the exact same search, I'm going to get the same answers out. But the other day when I put words into ChatGPT, I put it in once, I got one answer out and I'm like, hmm.
Not real sure about that. Let me try that again. I did it another time and I got different answers out. So it's like each time it's trying to reformulate its answer for you. And so I did it a third time and I got a slightly different answer again. And then I went through and took those three answers and merged them together into the answer that I actually wanted. If you play with M365 Copilot, it's exactly the same thing. So think about it in terms of: I've got a prompt and I've got five people in a room, and I ask each person to respond to that prompt. Each response back, being humans, is going to be a little bit different. The generative AI tools are very much the same as that. It's not a Google search, which will give you a consistent response; with a generative AI tool, it can be a little bit different each time. Sometimes you may use a combination of the responses in what you're trying to achieve. So yeah, it's quite an interesting nuance there, which also comes down to people understanding that the generative AI tool works that way, that there isn't a... A single right answer? No.
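The variation described here comes from the sampling step in how these models generate text: each call draws from a distribution of plausible completions, so identical prompts can produce different answers. A toy simulation (no real LLM involved; the "completions" are invented stand-ins) illustrates both the run-to-run variation and the merge-several-answers tactic Deb describes:

```python
import random

# Stand-in for a generative model: several plausible completions for one
# prompt. Real models sample tokens from a probability distribution, with
# a "temperature" setting controlling how much variation you see.
COMPLETIONS = [
    {"decision_logged", "owner_assigned"},
    {"owner_assigned", "deadline_noted"},
    {"decision_logged", "deadline_noted"},
]

def generate(rng: random.Random) -> set:
    """One 'call to the model': a sampled completion."""
    return rng.choice(COMPLETIONS)

rng = random.Random(42)  # seeded so this sketch is reproducible
runs = [generate(rng) for _ in range(3)]

# Repeated identical prompts can yield different answers, but the union
# of several runs recovers more of the picture, much like manually
# merging three ChatGPT answers into the one you actually wanted.
merged = set().union(*runs)
print(merged)
```

The design point: treating a single response as incomplete and combining multiple samples is a known, practical way of working with non-deterministic tools, in contrast to a keyword search engine, where the same query reliably returns the same results.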
And also, it's real-time information. So if someone else in the organisation has input something into that data set, either directly or indirectly, it's going to pick it up in real time, isn't it? Yeah, I think it really comes down to the use case of generative AI. In some cases, you may be training a large language model around a particular business domain. That's certainly one use case, where you want it to have very intimate knowledge of that domain and to be quite consistent in the responses you get. Here, you've got a large language model embedded behind your Microsoft Office suite. The extent to which it uses the organisational data will provide some context, but at the same point, you will get some variation in the responses coming back out of it, depending on which aspect of the tool you're using. I suppose it's important to call out that, because it's embedded into things like Word, PowerPoint, Outlook, Teams, et cetera, each application has its own nuances. If you take the use case of Microsoft Outlook, Copilot has very intimate knowledge of your email data.
So the responses you get from that will be very specific. If you're using it in Word and you're asking it to help you build a business case for the rollout of M365 Copilot, and you do that a few times, it will be quite varied, because it's actually using the large language model in a different kind of way: different context, different data. So one of the big challenges in getting people to adopt it is the fact that it's a suite of tools. Depending on how you're using it, or which application you're using it in, it behaves a little bit differently in each case. So there's quite a learning curve from that perspective. Yeah, I've had that experience too, just going, okay, right, this looks different, and it requires you to think differently, specific to that application. What I loved, though, was the integration across applications. So I could take something from Word and put it into PowerPoint, and create a PowerPoint using the Word content. It can do all of that, just as one use case. But I think also, along those lines, for me there's a bigger mindset shift for people, to start to realise that this is the beginning of our own agent, effectively, whereas before these things weren't agents. It's like a personal assistant. Everyone can have their own personal assistant. It's like having a Microsoft employee that you can call up and go, hey, I've just written this in Word, can you turn it into a PowerPoint for me, the best way to do this? So it's kind of like the infancy of agents in organisations in that way. In one of our previous podcasts, we spoke with Patrick Perot, who works in the Middle East, and he was talking about how that's the future: as humans, we'll have agents working for us, probably a group of agents, maybe a master agent. And I'll be able to sell what my agent does to other people, et cetera, because I've trained it.
But taking a step back, this is getting people to think about it differently, as a step into the future of having that personal agent or a set of personal agents. And I liked the way that you described it before: we kind of have to take a step back and go, I have to train this to some degree, so I'm speaking to it like it's someone new in the workforce, someone who's new to how I do my work. You kind of have to think of it as something that will learn quickly and evolve, but you do have to do some level of learning and evolving with it as well. Quite a few years ago we were working on a modern workplace initiative, and one of our biggest challenges was getting people to stop saving stuff on their local drives and save it in a space where other people could collaborate on it together, and there wasn't a lot of openness to being that transparent and collaborating with other people in that way. Did you experience any resistance in those kinds of spaces? Absolutely. Look, in order to get the maximum value out of the Microsoft suite, it does expect things to be stored in a certain way. So documents need to be on things like SharePoint or in your OneDrive, cloud hosted, which is a big deal for organisations where that may not have been considered. So if you've got an organisation with big repositories or big file shares on on-premises storage, Copilot's not necessarily going to be your friend there.
So that needs to be considered: where does the data sit, so that it's got a good chance of being indexed, and also from the perspective of being able to collaborate on it. So that is an education piece for the organisation, and the risks around shifting stuff into the cloud need to be considered for end users too. So to your point, Deb, with the modern workplace example, it's an ongoing education for people, getting used to storing data in a way that aids collaboration ultimately. Now off the back of that, there's also a risk that needs to be considered: what data do you want to share and what don't you want to share? People actually need to understand how things like permissions are set on documents, and start appreciating that a bit more, because things like Copilot amplify, I suppose, a risk that already exists today for most organisations: people quite often have access to more stuff than they ever realise. And especially if they've been in an organisation for a long time, they could have moved around between roles and have access to all kinds of things which, arguably, in their current position they shouldn't have access to. So being able to use a generative AI tool to look for information can certainly be something that amplifies that risk, or realises it: people have actually got too much access. And there's some great technology to help identify those vulnerabilities now, from an internal perspective, who's got access to what relative to what they should have, but also from an external cyber security and information security perspective. It's a very important issue, and one of those good hygiene factors organisations need to be across well before they even start with the Copilot suite. It's also hard for employees to accept, though.
I know, having worked in a lot of different organisations, that you always want to make a really good case to get access to files and drives. If you're an executive assistant, or if you're in HR and your reach is across the whole organisation, you want to have access to the information, the data, the systems.
But now is the time where that's not actually such a good thing, because that gives you, like you were saying, Matt, too much ability to search and discover things that you probably shouldn't have visibility over. So there are some roles that reach across the organisation but are still going to have to get comfortable with not having the access that they have had in the past.
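The over-provisioned-access risk being discussed here can be checked mechanically before a rollout. A hedged sketch, assuming you can export two lists: what each user *can* currently reach and what their current role says they *should* reach (the users and share names below are invented; a real audit would pull this from SharePoint permission reports or your identity platform):

```python
def find_excess_access(actual: dict, intended: dict) -> dict:
    """Return, per user, the resources they can reach but shouldn't.
    Permissions accumulated from old roles show up as set differences."""
    return {
        user: resources - intended.get(user, set())
        for user, resources in actual.items()
        if resources - intended.get(user, set())
    }

# Invented example: an employee who changed roles kept their old access.
actual = {
    "alex": {"finance_share", "hr_share", "project_x"},
    "sam": {"project_x"},
}
intended = {
    "alex": {"project_x"},  # current role only needs this
    "sam": {"project_x"},
}

excess = find_excess_access(actual, intended)
print(excess)  # alex is flagged for finance_share and hr_share; sam is clean
```

This is the same idea as the prompt-based "treasure hunt" Matt mentions later, just done with data rather than with Copilot itself: either way, the goal is to surface stale access before a generative tool makes it trivially discoverable.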
And look, I think if you took the approach of "we're not going to roll out generative AI until all those things are sorted out", that's not the right way to go either. I think it's important for organisations to understand the risk of adopting the technology and to have considered what the mitigation strategy is if that risk is realised, in order to respond to people possibly seeing stuff that maybe they shouldn't. I'm just very conscious of the fact that if you tried getting it perfect, getting the world perfect, before you start adopting the tool, you've generally got organisations with such a long history of permissions and access control around some of this information that you'd never start. It certainly needs to be considered carefully, and there are certain areas you want to go in and do a bit of a sweep on to make sure you're not opening yourself up. But at the same point, it's very important that with the adoption of these tools you don't go so far the other way that you become so risk averse that you really slow down the possibility of adoption as well. I think there's a balance there that needs to be struck. And it sounds like, as an initiative that's been signed off in a business case and then implemented across the business, it's a bit of a different type of initiative for the senior leaders. Whereas in the past, we've always seen projects started, and if at any point it looks like they're not going to deliver according to their specifications, they get stopped, or there's some intense scrutiny and they get revamped and relaunched. This feels like it needs more of an optimisation type of mindset at that senior level as they're watching the rollout: okay, we've got some risks here that we need to mitigate against. Let's deal with that, but it doesn't mean we stop everything.
So you kind of have to think about it as: yes, there are some things you can do pre-emptively to set things up, but you also have to be prepared to be responsive and adapt in the moment when something comes up unexpectedly. You know, that's often what we're talking to organisations about as well, how to build that adaptive capability in their team to go, we've done the best we can with the information we have, or the time we have. But in that scenario, something might be presented. So how do they, as a team, respond to that quickly? It doesn't turn into a fault-finding or blaming session; it turns into, this is what's happened, here are some options we might have thought about in advance that we could pull, or if we haven't thought about anything, that's okay, because we've got the right people in the room to quickly adjust and make the right decisions based on that new information.
One thing I've heard about, which is probably quite a good idea for people considering an adoption project, is getting a bit of a SWAT team together to basically run some prompts where you're deliberately looking for stuff that people shouldn't have access to. The idea is to uncover some of the higher-risk things, playing with the tools in the real world to see how good or bad the permissions are. That might mean putting together, as part of the project team, a cross-section of people from the organisation and having a couple of days where you lock them in a room and get them to...
have a bit of a play, purely from the perspective of seeing how good or bad the situation is. Seeing what they could break. So actually what they can break, what they can see, what they can get exposure to, and is it appropriate? Like a treasure hunt. Yeah, exactly, that kind of thing. It's a great idea. We've certainly seen that before with clients as well. So these things, we know, realistically take time in an organisation. How long, Matt, did it take? You know, what was your experience of how long it took to get comfortable in the organisation, from those very early days to people going, yeah, I've got this now, this is fine, this is just part of my everyday work? So the project I ran was essentially...
The main part of it was a three-month test-and-learn project. That involved getting people across every single business unit to experiment with the technology and start uncovering high-value use cases. At the end of that 12-week period, I wouldn't say you essentially had a workforce trained up to the point where you want to let them loose. I suppose what it really uncovered was that there's good value in the tool set, but then it's the next step: how do you really start embedding the use of the tools, and, from a benefit realisation perspective, how do you start really mining out specific business benefits? And how do you choose what you want to scale across the business? Yeah. So as we spoke about before, you've got executives who want to see certain things being true, or want productivity gains. How do you start looking at the use cases where you can actually mine out where there are inefficiencies and where the AI tool set really helps and makes a difference? That is quite an investment of time. So one of the things for organisations is making sure you have a sufficient project team, with people who can look at the business processes that exist today and where the AI tool sets actually make a difference, then consider how you actually start embedding the tools in, and have the mechanisms in place to compare the as-is today against how productivity improves as you introduce the tools. That's also where you consider prompts. How do you build a library of prompts that are safe to use across various business units to start building up those skills? I think one of the keys to getting benefits out of the generative AI tool set is equipping people with prompts that they see immediate benefits from.
That's something that needs to be thought about: give them a good starting point they can start iterating from and being creative with, as opposed to just throwing them a tool and hoping for the best, because that's a very long road and probably not a very successful one, from my observation. We typically see it takes two to three months to get a true sustained shift in behaviour. So when we're talking about shifting the way people work, it's going to take time for them to build the habits, as much as the technology is also learning as well. The habit of not going to a web browser but going to Copilot as that first port of call for help, just little things like that. I think the other thing is people building a level of trust in what the tools are doing as they go through that journey of adoption. Initially, and it's a good thing, you don't want them to trust everything the tool is doing; at the same time, you want them to be confident enough to keep experimenting with it. So getting that balance right, where you may consider having some project resources that just help guide that process and the experimentation early on, will aid adoption. That's an important thing. I don't see it being a successful journey if you just give people M365 Copilot and hope for the best. It needs to be a little bit more considered, covering the development of new skills but also making sure there's safety around how the organisation is adopting the tools. It's a really, really important thing to be thought through.
You don't want to be the organisation in the media where somebody's sent something out to a client or customer and then says, but the AI tool did that. A key thing with Copilot, and I think Microsoft push this point to no end, is that it's a copilot, not an autopilot. I've heard that thrown around a few times. The human needs to be in the loop the whole time, being the critical thinker about what the tool is actually doing. So in essence, the human is accountable for what they're using and they can't just blame the software. In the case of a copilot tool, that has to be the way. Just getting back to a point you were raising before about how it takes time to learn and get there: one of the things we always do in organisations is that whole social learning piece. Making sure we help democratise the learning process, so that peer to peer, across the organisation, there's a social element of people helping others learn. It's not always just a trainer training people; it's actually everybody. It's the democratisation of the learning experience, because there's such richness in people sharing their experiences and their quick tips for how they've used it. If you think of projects where you're putting in a brand new system or workflow, there's usually a process guide and people have to learn to do things in a very prescriptive kind of way. Copilot, or Gen AI generally, is not that kind of thing. From the perspective of the learning curve, it will outlive the change managers, project managers and other resources working on embedding it.
You need to get the organisation invested, with a community of people using the tools to actually share learnings, and change champions, that long-running sort of culture in the organisation where knowledge and learning are shared and built up, to be really successful. Because the question before was how long do I see it taking? Well, the test-and-learn was three months, and I was doing it for nine months; at the end of it you've got some skills, but there's a long way to go. And would an organisation fund it the whole way? I don't think there's an end point with this technology; it's always rapidly evolving, and organisations need to become very self-sufficient learning organisations where knowledge is shared and there's that ongoing evolution. It's not a project with a finite start and end and a change management piece. The adoption of AI technology is something that's going to go on for a very, very long time. Yeah, it's about those cultural foundation pieces. Yeah. Hey, what's your WOW? We always ask every guest: what are your wins, your opportunities, and your watch outs for AI in the workplace?
Look, I think if I was looking purely at the rollout of something like M365 Copilot or the Microsoft offerings, the immediate wins are getting people using things like the Teams functionality and never having to write another lot of meeting minutes again. That's an immediate one, it's a no-brainer. I had people asking me for Copilot licenses just because they'd seen it in action and wanted it. So go for those quick wins. Go for the quick wins. Absolutely.

The other thing, I suppose, is opportunities. Where people do want to see business benefits, be prescriptive about what benefits you're going for. If it's in a certain area of the business, you need to start thinking about it and also investing in it. It's not enough to just go and buy some licenses; it's an educational shift for the organisation, so having appropriate resources to aid that adoption has to happen. You can't just throw it to chance. So yeah, the opportunities are there. I certainly saw some really great ones where I was working, but it requires investment; it doesn't come for free, especially at this stage where you're trying to develop skills. The watch out for organisations is things like having a good risk framework wrapped around the adoption of these sorts of tools. The risk framework and governance certainly need to be considered, especially for people in financial services or other regulated businesses. Your regulators are very much all over the adoption of this technology, and some thought needs to go into it, not just from a regulatory perspective, but also to make sure you're doing the right things by your customers. So certainly thought needs to be put into that and it needs to be well considered. But at the same time, it's about getting the balance right: how do you achieve a good cadence of adoption for the tools while being realistic about it? I think the benefits will certainly come, I have no doubt about that, but it takes patience around how quickly those benefits will be realised. And in order to get there, as I said before, it doesn't come for free; it needs a level of investment.
And the erosion of benefits as well, I imagine. If they start locking things down because people are not using the system the way they'd hoped, and they start walling things off, then I expect those benefits will start to be eroded. And how do you sustain it as well? Both how do you get the benefits and, I suppose, how do you find out who those people are and bring them along the journey, because it's here to stay. You want to see people be successful with their utilisation of the Gen AI tools, certainly for their longevity in the workforce. It's not going to disappear.
Thank you, Matt. That's been a great conversation, lots of insights. One of my favourite things that you've said is don't leave it to chance. Yeah, me too. Why would you do that? It's such an important time in our lifetime in organisations to be looking at how we set people up for success, give them the skills to get access to the benefits, and put in the guardrails, guidelines and frameworks that keep people safe, as much as you need to keep your IP safe and your customers safe as well. Doing that slow, thoughtful planning beforehand, not to stifle or stop things, but to do it in a really considered way, just like you want people to be critically thinking about what they're getting out of the Gen AI technology. So the critical thinking needs to happen in multiple places across the business now. Thank you so much for coming and hanging out with us today. It was great. Thank you so much. That's been really awesome, leveraging your experience, and knowing that you care just as much about the people experience as we do. It's nice to hear your perspective as well. So thank you. Thank you.
Thank you.