Season 1 Episode 2 AI Literacy and Responsible AI with Kylie Walker, CEO, ATSE
Chapters
00:00
Introduction: Importance of AI Literacy and Responsible AI
01:22
ATSE's Role and Purpose
06:18
Business Leaders' Acceptance of AI Agency
08:07
Pace of AI and the Importance of AI Literacy
12:12
Building Digital Literacy and Fluency in Leaders
15:39
Upskilling and Career Progression in the AI Era
21:29
Leaders' Duty of Care in AI Change Management
23:31
Inclusive AI Design and Data Sets
27:06
Impact on Call Centers and Customer Service Teams
29:03
Respecting and Supporting Employees in AI Transition
32:13
Strengths of Marginalized Individuals in the Workplace
36:00
Supporting AI Literacy and Learning from the Team
40:36
Wins: AI Applications and Transcribing Meeting Notes
41:57
Opportunities: Innovations in Research and Public Transportation
43:22
Watch-outs: Privacy, Data Sovereignty, and Unintended Consequences
44:58
Conclusion: Building AI Literacy and Supercharging Workplaces
Welcome everyone to the Humans and AI in the Workplace podcast. Over the last few years, it's become clear that artificial intelligence, AI, is one of the most impactful and disruptive transformations in the workplace. As a leader, you may be wondering how to get started and how to do it in an intelligent way. Or you may be stuck on how to overcome some of the people issues and human bottlenecks your AI has crashed into.
We are here today with Dr. Deborah Panipucci and Leisa Hart from AI Adaptive and our special guest today, Kylie Walker, the CEO of the Australian Academy of Technological Sciences and Engineering to discuss today's topic of the importance of AI literacy and responsible AI.
Thank you for joining us. I'm Leisa Hart. And I'm Dr. Deborah Panipucci. We're excited to talk to Kylie today as she's an innovative and disruptive thinker and a distinguished expert in the intersection of technology and society who connects scientists and technologists with leaders in politics, business and the media. We heard Kylie speak about the importance of AI literacy and the Responsible AI essays.
And we were interested to explore this pivotal topic with her, in particular, why AI literacy and responsible AI are crucial for leaders in business. Hi, Kylie, welcome to the Humans and AI in the Workplace podcast. How are you? Great. Thank you so much for having me. Today, we thought we'd check in with you and get your insights as the CEO of the Australian Academy of Technological Sciences and Engineering, obviously abbreviated to ATSE, A-T-S-E. Tell us a little bit about ATSE's role and its purpose. So we're a learned academy. There are only five learned academies in Australia and we've got a fellowship of around 900 people and they're leaders in applied science, technology and engineering.
So they're people who've made a significant impact in those fields and who've taken really strong leadership in those fields. The way that we work with our fellows is they volunteer their expertise so that we can advise decision makers on how to apply science, technology, engineering evidence essentially to tackle complex problems.
And we also have a range of programs to support a diverse and thriving STEM skilled workforce, working with people from year seven right through to mid and senior career. That's great. And when you say decision makers, who are you referring to? Well, everyone from federal politicians and state politicians through to business leaders, community leaders, people who are in charge of legislation, in charge of funding, in charge of policy making to really supporting those people to bring technology and applied science to support complex problem solving. That's a really important group. Yeah, it is. It's a big job. I've got just a really privileged position. I work with these incredibly clever people and a really committed and dedicated group of people. You know, most of our fellows are really invested in making the world a better place. So it's really exciting. I'm learning from them all the time.
Yes, what a lovely talent pool to be part of. Yeah, indeed. Recently, ATSE and the Institute of Machine Learning released the Responsible AI Collection and I've read through them and there's some really interesting insights there from key leaders in this space around intelligent technologies and the different considerations, particularly from the lens of responsible AI on society as well as in the workplace. What are your hopes for people reading the essays? I guess we really want people to understand the ways in which AI is already being applied in our daily lives, and sometimes, I think, in really surprising ways for people who've only come to AI through the advent of things like ChatGPT and those text-based large language models.
And to, I guess, improve understanding of the vital role that AI has to play as a tool for human decision making, a tool that's evolving very, very quickly. What we'd love to happen, not just by leaders and decision makers, but by society more generally, is to understand that we have an opportunity and we have agency in terms of shaping how this very rapidly evolving technology goes in the future. Where are we going to apply it? What are the questions we need to ask ourselves in order to make sure that it is a tool for good? How do we craft the guard rails to ensure that it is being used as responsibly as possible? What are the things we need to be aware of?
Where are the potential pitfalls, but also where are the enormous opportunities? And they are enormous. And I think as well, I acknowledge that there's a rapidly closing window of opportunity right now to think about who are the people, how are we crafting the workforce that is building these technologies and that is evolving these technologies. Because we know that they are only going to be as good as the people who build them and the people who use them.
I have a question. We really care about the people side. So it was lovely to hear you talk about that quite in depth, but I'm really curious as to whether you think business leaders are ready to accept agency for the AI in their business. Agency for AI, did you say? Yeah, for what you mentioned just then about how there's the opportunity for agency, and AI has permeated throughout their business now in different forms.
And so connecting the two, do you think that business leaders are ready to accept the agency for the AI that's in their business? I mean, I was really talking there about agency for people, but are business leaders ready to accept that? Look, I think they know they have to. It's kind of not really a choice anymore, I have to say, because this cat is well and truly out of the bag.
So you either get with the program and work out how to use it to advantage and how to guide it and how to keep pace with the possibilities it brings as well as the risks that it brings. Or it overtakes you, I suppose. And that's not just the case for business leaders, that's the case for educators and for governments and for councils and for communities too. One of the essays that we were particularly drawn to in the Responsible AI essay collection was your essay arguing that an AI-literate community will be essential to the continuity of our social democracy. And obviously that pervades into organisations as well. I think what you were just saying was if you don't get into this now, if you don't start building your literacy, you'll be left behind. Now that sounds very forceful and dramatic, but it's just the reality because this is moving at a pace, particularly the last two years, let's say, at a pace that we know leaders aren't ready for in their organisations, which is part of the reason we've been looking at and working with organisations in this space since 2019. What are your thoughts on the pace of that and where that AI literacy comes into it from that perspective, Kylie?
So regulation and community uptake always lags technology. That's just how technology works. You know that there'll always be a lead time and a catch-up to be done. I think it's particularly acute with AI technologies because they are evolving very, very quickly. There's a whole suite of digital technologies that are moving much more quickly than technology has in the past. The pace of technological change in general globally is getting quicker. And so those big bureaucracies, which governments necessarily have to be, and I also count really big corporations in that category as well, any big organisation or institution is necessarily going to move more slowly than the pace of technological change. So just acknowledging that and recognising that and thinking about, okay, if we know that that's the case, how do we then lean in and interact in a way that enables us to make the most of this? Firstly, to mitigate the risks, and secondly, to understand what the implications and the opportunities are. I think it's a really important decision for leaders to make.
I remember actually growing up, my dad used to tell me this really apocryphal story. I'm sure it was completely made up, but he had this beautiful idea about the last annual general meeting of Cobb and Co. And, you know, these folk on the floor, the shareholders on the floor, you know, asking for change and railing against the introduction of the motor car, because, you know, the horse and cart was about to go out of fashion. And people who were investors and had stayed true to the horse and cart were determined that they were going to stick by this old technology because they knew it and they loved it and they put their money into it. And a small group of people stood up and said, well, that's lovely, but actually, this is going to be obsolete in a couple of years' time. So if you really want to make the most of your investment, what you do is you shift it and you focus on the new technology as it's emerging.
This is really no different. I mean, I think people are a little, potentially a little frightened of AI because they don't understand it, and because of certain movies that may have come out in the past that potentially paint a very dystopian idea of what an AI-driven future might bring.
But rather than be, I mean, there are two things you can do when you're afraid of technology. You can run away and pretend it doesn't exist, or you can acquaint yourself with it and equip yourself with that knowledge. And that's the agency I'm speaking about here, because it is incredibly empowering to learn about it and then think about how you want to respond to it once you've learned about it. You don't have to engage with it necessarily once you've learned about it, but at least then you've got an informed choice to make.
So I think that is the case here. AI is already changing the way that we live and work. It is going to keep changing the way that we live and work, whether we like it or not. And there's always the choice to make to just kind of be carried along by it. But if you are a business leader, if you are an educator, if you are a government decision maker, then I think that would be a very foolish choice to make because you will be left behind and it will overtake you.
I really love that, and there's a much-needed shift in business to really understand the difference between repeating back information that you've found versus comprehension and building knowledge. And I think that as generative AI has emerged and become part of the landscape and the way that we work and live, there's a new way of working on the horizon. And people in business, because that would be our audience who's listening to us today, would be listening to this and going, that's a really interesting point: how do we make sure that we're not just asking people to play back things that they've read, but that they're really understanding it?
And look, it's a hugely powerful tool as well because I think about the applications in research and there are all kinds of scenarios in which you might need to do a bit of research and there's plenty of business where you need to do some research in order to inform decision making. But you can do it so much more quickly and more comprehensively if you know how to use these tools really well. And so then rather than spend all of your time and your energy finding all of the information you need, you can have it at your fingertips and you can spend your energy, your intellect, your time on taking it to the next level and really supercharging your creative response to a given situation rather than kind of burning yourself out, just gathering the info. So that's just one example. And certainly when we think about it in terms of R&D, oh my goodness, the potential to leapfrog development is incredible.
And once these tools are really up and running and are starting to reach their full power, I think it's going to accelerate things like drug development to a huge degree. And there are even just little examples that we come across where, in terms of the number of people-hours, if you do this sort of checking in an analog way, it can be so people-intensive.
But if you can take that away, so for example, if you're examining photos and you're looking for a particular thing coming up in those photos all the time, you might be looking at an invasive species. Until recently, individuals have done this: you devolve it out to students or to interns or whatever to physically look one by one at photos and identify which ones have that invasive species appearing in them or which ones don't. Now, an AI can do it in seconds, something that might have taken weeks or months previously. So that opens up so many more opportunities, I think, to build and to innovate and to develop new products, new tools, as well as just finding really quick solutions and shortcuts to that sort of everyday mundane drudge stuff that is part of any business. Yes, we've seen quite a few use cases where AI is doing a lot of that initial image recognition work and making a recommendation, whether it be in the healthcare setting or whether it be in insurance around claims. So we get quite excited about that as a way to do some of the more repetitive, higher-volume work that the technology systems can take over. I guess the trick with that, from a responsible lens, is to make sure that for the people whose roles are being disrupted, there is a clear pathway for career progression and use of their skills in a more sophisticated way, or upskilling. We know that we need a lot more digitally skilled workers over the next few years in particular.
I heard a crazy figure the other day about the number of data scientists we still need in this country over the next few years, as well as just general digital workers. And so there's a really good opportunity, and we advocate for leaders to think about this in their workplace: be really clear about why you want this technology in your business, be realistic about how it can deliver on your why, but more importantly, think about, well, what does that actually mean for my business, my skills, my culture, my organisational structure, and have a really clear, respectful, thoughtful plan for that from a change perspective? Because there are so many opportunities. We did a whole episode on fear of AI because it's so prevalent at the moment every week. And yes, you mentioned some movies, but there are increasing stories, and a lot of that is driving fear, and we're hearing people very concerned. We advocate for leaders to be able to have conversations with their team that are grounded in that leader's fluency with AI. And so it's a similar concept to your digital literacy: develop your understanding as a leader of what this technology is and what it isn't yet, and be able to confidently have that critical thinking mind and look at what vendors are possibly wanting to deliver to you, what you've already got in your organisation, how is it already appearing, what are the impacts? And so we really strongly advocate for them to be able to have those conversations proactively and thoughtfully, but that it's grounded in their understanding and their critical thinking about the technologies. In terms of where this intersects with your awesome essay: what are some of the ways that you recommend for leaders to build that digital literacy or to develop their fluency so they can speak the right language to their people? And also how can they help navigate that fear and that level of job insecurity that it's starting to drive?
So I mean that fear that the robots are coming for my job has been around for a long time actually, you know, since the advent of manufacturing. It's not a new fear; the technologies are changing, but that underlying fear and threat is an old story. And actually it's never quite been borne out to be true, because what it will do, as you mentioned, is change the nature of those jobs, but the jobs themselves or the number of jobs doesn't change all that much. In fact, as you mentioned, the Tech Council is predicting 1.2 million more digital jobs by 2030. That's a lot of new jobs and we've got a lot of training to do if we're going to meet that demand. And so training and change management 101 really are the keys. So communicate, communicate, communicate. You cannot be too transparent and too inclusive about how you're going to manage that change if you have a business that has a reasonable prospect of digital transformation.
There are lots of different ways to upskill your workforce and to find pathways for existing workers whose jobs might change to be able to continue to stay abreast of the skills that they're going to need to be able to evolve with that change. I do think that leaders have a responsibility to equip themselves with the knowledge and the tools to be able to instil that confidence and to make sure that they understand what tools are out there and what benefits they might bring to the work that they do and the work that their team does. And then make the time and the cultural conditions right for people to be able to engage with them and to upskill. Obviously that's really, really important. You can't expect somebody to change their job skills overnight without support, without time, without investment.
I'm lucky enough to be surrounded by leaders in AI and leaders in technology more generally through the fellowship of our academy. So to me, I see this as very exciting and very filled with opportunity. And it's in a whole range of different sectors as well. We sort of tend to think about technology as the tech sector, but we see AI in action every day when we go to the supermarket and go through the self-managed checkouts, which are getting better and better at detecting misuse, shall we say. They're not the ripe bananas. These are the ripe bananas. Indeed. We see it in action in the mining sector at the moment. There's this incredible centre in Perth for the remote management of those gigantic machines which used to be driven by
by people on site. And the benefit that that brings in terms of workers' safety and in terms of the ability for them to stay in the city where their family is and not have to fly in and fly out, it's not always a negative equation. It also opens opportunities. And it can be, potentially, a game changer in terms of work-life balance as well. So I think understanding the benefits as well as the risks and not being afraid to communicate those to your team is really important for leaders. You don't have to know how all of the programming works, of course, but being a little bit literate is a good thing, I think. Yes, and we really push hard on leaders to take that seriously. And we call it a duty of care that they have, to not only be using the right language grounded in solid understanding versus some hearsay, but also to think about the impact of that and what their words and actions mean for people, and to take people through that experience of change in a really responsible way, which I see paralleled in some of the Responsible AI essays as well around thinking about the impact of biases already built into processes potentially, looking at the data. We talk about it from a perspective of leaders being more inclusive and having to work as a tighter group of leaders in an allyship sense, because they have a responsibility to be more inclusive across the organisation and to break down silos and that perception of perfectionism that people want to hold, and to bring knowledge forward.
And, you know, because sometimes they're doing that because they're fearful of their role being disrupted significantly without a clear way forward. So we really sort of advocate for leaders quite strongly in the words of this is your duty of care as a leader. You are the steward of that organisation. So it comes with that responsibility and that duty of care to think about the impacts.
I think one of the lovely things about such rapidly evolving technology is that there really are no established experts in many organisations. And so you can, to an extent, quite safely put your ego aside and admit that you don't know everything. And that's actually a really, really important part of that duty of care that you're speaking about, to be able to be generous enough and confident enough to say, I don't have all the answers, and we actually all need to work together on this. This is a shared responsibility. I'm really pleased that you raised the concept of inclusion because I think it's important here on a couple of fronts. One, we know that organisations are going to be much stronger if they bring everybody along on this change journey. And there will be opportunities and unintended consequences that will come to the fore if you do genuinely consult with, listen to, and bring everyone along on that change journey. Because no one person or no small group of people can know everything or see everything.
So the more people who have those different experiences of working at your company and interfacing with your customers or looking at your supply chains or whatever it is that they're specialising in, they're going to bring valuable perspectives that you will never have thought of as a leader. So it's always important to be inclusive and to listen. But inclusion, I think, on a second front, and I did allude to it a little bit earlier, and that is that we've got this really important window of opportunity right now to get AI right in terms of who we include in the building of the technologies themselves. What we know is that once you set machine learning on its journey, it will continue to iterate, it will continue to learn, and it will continue to respond to the information that it is fed and to the rules that it has been governed by. So getting those rules right at the beginning, getting that information set, that data set, right at the beginning is crucially important to building inclusive and responsible AI itself. And I think, you know, if we're looking at a society in the near future in which AI is utterly embedded into our social systems, into our education and our health systems, into our, I don't know, our public transport systems, you think about logistics, many aspects of our lives already are or very soon will be touched by AI and increasingly managed by AI-driven systems. And so if they're not built with that inclusion in mind, with the perspective of people who've been marginalised included in that build, then the potential for them to increasingly exclude is going to continue to grow. So...
I think, you know, when we've got marginalised populations, we really need to have them at the table. Having those people who are marginalised at the build table and thinking really critically about the data sets that we're training AI on, I think are really crucially important. And we do have a window of opportunity right now. You know, we think, for example, about the fact that most...
Giant data sets of voices are based on the male voice. And so you have a situation in which cars, some of the really new technology in cars that is voice activated, it's not going to respond so well to a higher register. So that's just a really kind of blunt-force example of what can go wrong when you don't include the right people at that build table and you don't include the right kind of breadth of data when you're teaching your machines how to think. It's a really interesting point and it's something that Leisa and I have been looking into in a sense of the impact on particular groups of AI entering into the business. And one use case we've been looking into is call centres and customer service teams, as conversational AI comes in and takes over the agent's role.
A lot of the lower-level agents are students, people who have not a lot of tenure in the organisation, part-timers, mums, women, migrants; minority groups tend to be in those roles. So if they are pushed out of the business by the technology, then you do have fewer voices in the organisation designing and building the rules for these technologies in that workplace. Yeah, it's really important to consider, and we know that whenever there's a big change or a big challenge to a sector, the first people to go are often the people who are the most vulnerable. They tend to be on those short-term contracts or those casual contracts. And, you know, we saw that through the pandemic recently. A lot of the people who lost their jobs first were people who were already vulnerable. So it is absolutely important to think about that when you're managing that transition. And sometimes those people in those roles know a lot about your customers. So they're the perfect people to be pivoted into design thinking roles, roles centred around new experiences or new products and services, based on their experience of what their customers have struggled with previously. That's one of the things that we really advocate for: not just looking at, okay, significantly reducing your head count, if in fact you will get that outcome. There's also that unrealistic expectation sometimes of technology being a silver bullet. But to actually look at things and go, okay, well, what skills have we got? What knowledge do these people have, and where can we value-add on top of what we're doing? If we've got the technology as that intelligent digital worker, that teammate, well then where can we harness the potential of that human genius and sort of take it up a notch to that next level of, okay, they've got that background and that experience in our business.
And part of what we were saying before about the duty of care is if you go through that whole process and you've still got people who you no longer need in your business, there's still a very respectful way to do that. There are pathways to help people pivot into other organisations or other industries. You can.
You don't just have to say, okay, off you go. You can actually make that a really respectful transition. And we've done that before in organisations. And the thing that that does is not only give that person the best possible experience of transitioning from one role to another, so they feel supported, but the people who remain look at that and go, our leaders put people at the heart of what we do here. So it preserves the culture around care and respect for people, even if some people do leave the business. So it's one of the things that we always advocate for strongly. It's like, be very clear: if you do want to reduce your head count, that's okay. Be realistic about that, but also do it in a really respectful way. So I think that's one of the things that we want leaders to really grab hold of, but that's all part of them being able to understand realistically what the technology can and can't do and how to set it up responsibly, how to be inclusive through that process. And just because some people haven't got the AI machine learning skills yet, it doesn't mean they can't get them. They may have just never had that opportunity to learn. So, you know, there is always the opportunity to build skills when you free up staff from their existing roles.
Absolutely, and I think, you know, on top of the advantages that you've already identified for those people and what they might bring to a business, I would add a couple of things. You know, I think people who have a big juggle, you know, people with caring roles, people who are in part-time work, people who've returned to the workforce after a long break and people who've had to break down barriers in order to attain those jobs, they are incredibly resilient and resourceful people.
Now that's a huge strength that they bring, and I think it's often an invisible strength, because it's tempting to think of people who've had to break down barriers to enter the workforce as, I don't know, the equity folk, and to be condescending about it, if I'm blunt. But really, honestly, the depth of resilience required to push through biases, discrimination or challenging life circumstances and show up and do a good job, that ought to be, I think, respected, and it is potentially a huge strength for an organisation. That's so powerful. I really love that, because it's one of the things that can just skip along under the surface and be taken for granted in terms of what someone contributes to an organisation or what their perception is of their contribution. I think that's really powerful. I think about it a lot because we've got a scholarship program for women and non-binary people to go to university and study STEM in subjects where women are underrepresented and where there's a great workforce need. So there are a lot of folk that we're supporting to do engineering degrees and IT degrees, for example.
And because we're deliberately filtering for, well, we have a huge range of different people who apply for these and get these scholarships, but we're deliberately looking for people who might otherwise struggle to go to university for various reasons. They are just incredible in terms of the strength and the resilience and the creative thinking that they bring, because they've had to be. If you've grown up in a remote community, if you've never encountered anyone in your family who's finished high school, let alone gone to university, if you've spent 15 years out of the workforce caring for someone with a disability, or if you've decided that you want to go and get a higher degree after working as a truckie for 15 years, and all of these examples are drawn from people who are actually on our program, then you are somebody who's got imagination, you're somebody who's got determination, you're somebody who's got flexibility and creativity and strength, and they're pretty incredible traits. We always say at our organisation, you can teach skills, but you can't teach character. Yeah. We always say you can teach skills, but you can't teach mindset. Yeah, same thing, right? Yeah, exactly. And absolutely, if you've got that solid foundation of curiosity, or you're really okay with having a go and stuffing something up and going, okay, ouch, that might have hurt my ego a little bit, but I'm going to get on with it and have another go.
If you've got that growth mindset, grounded in Professor Carol Dweck's work on growth mindset, well, you can learn anything. That's the whole premise of that mindset: at any point you can choose to be or learn whatever you want. You just know that it will take effort and perseverance and persistence, and you know that it will take stumbles and getting up and keeping going. So yeah, we get excited about seeing that potential in organisations.
That's usually one of the places we start to do work with leaders around change, and particularly AI change: look at the mindset, look at your mindset, look at the mindset of the people around you. How do you foster more of that curiosity? Because curiosity displaces fear in the brain. So we know that if you can get people to a place of being curious, they're less likely
to be worried and fearful and lose access to the beautiful parts of our brain that help us navigate change. When we're in a state of fear, we lose access to our ability to think more broadly and to collaborate effectively, to think creatively. So we want access to that. So the pathway to that for leaders is around stimulating curiosity and inviting people in, helping people feel included and heard, but also really encourage them to play and experiment and to have a look and learn and share as they go. I really like that. I'm going to remember that bit about curiosity displacing fear. That's great. So we use a lot of integrative neuroscience principles in our work. And because it's so practical, once you understand it, it's a really good foundation to layer on other practical skills and learning.
As a leader at the Academy, what are you doing to support your people to build their AI literacy, and how does that come to life, Kylie?

Well, look, I mean, we're a pretty privileged and small workforce at the Academy, because we have the opportunity to work with and learn from the people who are actually leading in this space. Having said that, doing the research and the theory and the advocacy and the roadmaps is quite different from applying AI in our day-to-day work. I am always, always interested in learning from my team. I never imagine that I'm the person with all of the answers. And I think that any time you walk into a room and think you're the person who's got all the answers, you've stopped learning. You've really cut yourself off, not just from other people, but also from your own development and opportunities.
So I learn from my team every day. They push me to use new technologies every day. Some of them, you know, work pretty well for me, and others I struggle with and have to make a bit of an effort to get to know. But we have regular sessions where we teach each other, and different members of the team from all levels actually hold professional development sessions for the whole team, which is a nice opportunity to hear from people at all levels in the organisation and to remember that we all have something to bring, some value to teach. We're very interested in trying new technologies when they come. We don't always stick with them, but we always try them. And as I say, we're very, very privileged to have access to some of the absolute leaders globally in this space.
In terms of the theory and the big-picture stuff, we learn a lot every day, and I think that's been pretty exciting. We've been trying out a few different tools over the last year. We've been using transcription tools. We've been using some interesting Photoshop and other associated tools. We're not using ChatGPT to write our reports, but we've tried it.
Actually, we did a strategic plan, which was pretty interesting. But it's good to interact with the tools as well, to understand what the outputs look like. And I've found that quite instructive, because it doesn't take you too long to develop a bit of a filter for what's come from a human being and what's come from a machine. You're not going to develop that filter if you don't play with it yourself. So jumping in and having a bit of a go is always interesting. I think a lot of what we do, though, is really open the door for our incredible fellows to share their messages and open their opportunities to other leaders and decision makers, so that we can amplify the potential benefits of understanding how to engage critically with these tools.

That's excellent. We advocate for people to play, especially with the large language models that are publicly available, but to be very cautious and not expect them to deliver 100% of what you need. It's just not realistic. Depending on the complexity of your task, it might give you maybe 70 or 80%, but you've got to make sure it's factually correct and not hallucinating. You've got to contextualise it. You've got to make it sound like you, if it's something you're putting your name to. But it can give you a really good head start if you're stuck on how to write something. It'll just kick you off, and then you go, actually, I don't want to say it like that, but now you know what you do want to say.

Yes, it takes away the fear of the blank page. But I also always ask the team, as I do with my teenage kids actually, don't forget to read it really carefully, as you say, and personalise it, because gee, some absolute corkers can come out. The technology is by no means perfect.
So you really have to know what it is you're submitting before you submit it. And it's a pretty easy catch. Yeah.

So one of the things we ask each of our guests on the show, thinking about humans and AI in the workplace, with the audience being our leaders: what would be your WOW? The WOW stands for the wins, the opportunities and the watch-outs. Yeah.
So in terms of humans and AI in the workplace, I think there are a couple of wins. One is in the everyday, like we've just been saying: the ability to transcribe meeting notes so you can actually focus and pay attention to what people are saying, and trust that you can go back and check the transcript later. They always need fixing up, but it really does free up that brain space so you can actually be a part of the meeting. So I think that sort of work is a win.
But the big-picture, really exciting potential stuff for me comes with research, and in particular scientific and medical research: the ability to accelerate the pace of innovation hugely, by decades, essentially overnight. I find that incredibly exciting, because we've got so many people out there working on so many big ideas. If we can harness their imagination and connect it with the power of rapidly evolving AI, I think we're going to see the pace of innovation leapfrog even faster than it has over the last decade.

That kind of brings me to the opportunities part. There are some opportunities that are exciting like that, and there are also opportunities to take away some of the daily hassle of life. So, you know, the idea of applying really terrific, incisive artificial intelligence to something as simple as coordinating an effective public transport system, I think, will be a game changer for many people. We know that public transport systems have mostly grown up organically. They were designed in the deep dark past around a model of life that isn't so relevant anymore, where you've got a homemaker at home, who's perhaps not leaving the house so much or who has the car to do so, and you've got your commuter coming into the CBD to do their job and then coming back again. In many places around the world, our public transport systems have been designed around this way of life, which isn't actually the way that people live and need to use public transport. And increasingly, particularly if we're going to get more environmentally responsible, with people reducing their car use and increasing their public transport use, we need to be able to respond to the ways people actually need to use it: to get to healthcare, to get to the library, to get to education, to get to work and to get to play, and to do that at different times of the day, not glutting it up at sort of eight to nine and five to six.
So I think that could be really game-changing in terms of saving people money, time, hassle and environmental footprint as well. And I guess, when it comes to the watch-out, the final W in the WOW, it really is about making sure that we don't get too taken in by the allure of the shiny tool, and that we really think critically about what we want out of this technology. How are we building it? How are we applying it? What are the unintended consequences or exclusions that we might be embedding through the way we're using it?
We need, I think, to very much remember the importance of personal privacy and data sovereignty. We need to remember the importance of empowering that critical engagement through education, not just at school, but for citizens more generally. And we need to make sure that we've got the appropriate checks and balances in place, so that when the technology is used for purposes that are not aligned with our values as a society, there are ways to address that.

That's a great WOW. I'm going to take away lots of notes on that one. Thank you so much again for your time, Kylie. We really appreciate it. Thank you both. It's been a real pleasure.

For our leaders listening, we hope that this episode with Kylie has given you some actionable insights for how to build your AI literacy, so you can unlock the superpower of humans and AI in your workplace. We have more on our website about AI literacy, and a link to the responsible AI essays. Thank you for listening.
Humans and AI in the Workplace is brought to you by AI Adaptive. Thank you so much for listening today. You can help us continue to supercharge workplaces with AI by subscribing and sharing this podcast and joining us on LinkedIn.