Chapters
00:00
Introduction to Responsible AI in the Workplace
02:50
Defining Responsible Technology
06:03
Atlassian's Approach to Responsible AI
09:06
Creating Accessible Guidelines for Teams
12:11
Principles of Responsible Technology
15:00
Implementing Responsible Technology in Practice
17:51
Continuous Improvement in Responsible AI
21:09
Engaging Teams in Responsible AI Workshops
24:00
The Future of AI Literacy and Governance
26:50
Conclusion and Key Takeaways
Welcome everyone to the Humans and AI in the Workplace podcast. Over the last few years, it's become clear that artificial intelligence, AI, is one of the most impactful and disruptive transformations in the workplace. As a leader, you may be wondering how to get started and how to do it in an intelligent way. Or you may be stuck on how to overcome some of the people issues and human bottlenecks your AI has crashed into.
We are here with Dr. Debra Panipucci and Leisa Hart and a special guest, Anna Jaffe, Director of Regulatory Affairs and Ethics at Atlassian, to discuss Atlassian's responsible AI approach. Artificial intelligence is rapidly evolving, with organisations harnessing its power to transform industries and improve lives.
AI has the potential to enhance well-being and quality of life and boost economic growth. However, as AI becomes increasingly pervasive, concerns arise regarding its impact on society. Evidence suggests that AI can pose risks to individuals, community groups and society as a whole, leading to harm. So it is critical that companies build responsible technology.
One example of a global company that's leading in this space, by creating transparency and accountability internally and by defining and sharing its learnings more broadly to drive thoughtful and careful development, is Atlassian. Today we're super excited to be speaking with Anna Jaffe, Director of Regulatory Affairs and Ethics at Atlassian. So welcome to the Humans and AI in the Workplace podcast. Thank you so much for having me. Awesome.
For our listeners, we'll kick off with a bit of a foundational question. How would you describe responsible technology?
I think it's a really great question, because we see a lot of different areas blooming, I guess, as we start to see AI and sort of grapple with the risks of AI that you outlined, but also the opportunities of AI. There's a lot of different terms and terminology for this. We chose responsible technology because we know that AI is not the only emerging technology that could pose some of these same questions or issues or concerns. Right now, we are thinking a lot about responsible AI and about AI as one of those key technologies, but this isn't going to be the only challenge that we encounter as individuals, as communities, as societies. And so we really think broadly about responsible technology as the way that we think about how we as humans and as teams interact with, influence and are influenced by technology.
Yeah, great, because AI is obviously the poster child of intelligent technologies, but there are many versions. So I like that you're thinking about it more broadly; that gives longevity to that conversation and that purpose. So I recently had the opportunity to hear you speak about the responsible technology principles and the No BS guide, which I love, and the reviews as well, the template for the reviews, which Atlassian's made freely available. So why are you and the team advocating for this so strongly, really putting it out there and trying to get it on everyone's radar, as well as using it internally, obviously?
I think, you know, one of the things that we started from is that this isn't just us sort of acting alone, and that's one of our principles actually. This is certainly something that a lot of people are grappling with, a lot of companies are grappling with, a lot of organisations, governments and regulators are all sort of encountering in similar ways and forms. So there is some form of consensus out there about, at minimum, the principles and key ideas that we need to bring to how we work with AI and how we think about AI. But for us, as I mentioned, we're a global technology company. We are by no means small, but we are not a big player in this ecosystem by any means either. And so we were really engaging quite deeply with what was out there, and what we saw was a lot of different principles, a lot of emerging regulations and rules and guardrails, and a lot of ideas on what kinds of practices and processes we should be bringing to bear when we're working with AI and similar technologies. But we sort of thought, actually, that there's an abundance of stuff out there. And how do we, first of all, make that present, translate that for
our people, and also, for companies of our size and smaller, make it make sense, you know, kind of cut through a lot of that and understand how we can do it in a way where we see the same outcomes of driving towards responsible technology, but perhaps with not as many resources as some of the larger players, or without necessarily having dedicated staff there thinking about these sorts of issues on a day-to-day basis, as in a big team. If you're the only person in your organisation, how do you start thinking about responsible technology? And so we sort of approached it from that perspective of saying, we're not reinventing the wheel here. Nothing in here is particularly new, I would say, but it is very us. And I think that that's kind of the way that we approached it, as well as sort of saying, look, if we learn something from this process,
let's share it, because we know that we can't be the only company saying we can't do the 50-page PDF with the five resources attached to it. So can we do something that still achieves those same aims, in a way that works for us and also meets the requirements that we're going to have to meet?
Yeah, amazing. And I love the place that Atlassian plays here. Going through the research around responsible technology after hearing you talk about it, it's so congruent with the way that you build stuff internally, experiment with it, refine it and then make it available, as in all the team plays and things like that. I find it really exciting that Atlassian consistently plays that role and tries to punch above its weight in the context of giving to society, improving not just technology but the human experience around it. And with your materials, they're also very people friendly. Is it a long refinement process that they go through to get to that point? Because I know I've worked with so many organisations where everything starts out so formal, particularly when it comes to things like ethics and responsibility. But when you look at your materials, you read them and you're like, my God, the language is just so accessible and fun and friendly. So is it a long refinement, or is everybody at Atlassian just in that head space of, let's just
talk clearly to people. Yeah, I think there's a couple of elements to that one. One is an element of our values and our culture and the way that we as an organization operate. And a little bit of that is unique to us, I would say. But in terms of these particular materials and these concepts, what we were really conscious of when we talk about this in the longer form guide is meeting teams where they are. You can't come to a team and ask them to conduct this responsible tech review.
in a way that is impenetrable, that asks them questions that they either don't know how to answer or, you know, are then inclined to answer a certain way because they're concerned about it, or, you know, they don't fully grasp the concept. We wanted everyone to be able to pick it up, take it as a team and do it with minimal need to consult an expert necessarily. We want to make this something that the team can feel ownership over. And so to do that, you really do have to adopt that tone
and think about how we make these concepts accessible. And that's a little bit of what the working group that I co-lead is there for. It's about making these concepts accessible, translating things that can feel very abstract or very high level, or not necessarily relevant to the day-to-day life of someone working in a different craft or on a different team or on a different project, and making it present and making it make sense.
Yeah, I love it, because one of the things that we often talk about is the human bottleneck that we see in a lot of organisations, where people are scared to have tough conversations and they're very protective of themselves and their brands and their reputation in the workplace. But what you just said, and what I see in your materials, is that the language and tone that's used really does remove the bias and the ego that need to go, no, I need to show that I'm good all the time. The language and the questions are just really, you look at them and you go, yeah, I can answer that, that's not threatening, kind of thing, which seems really important in this topic.
Yeah. And I think it's something that we're always working to improve as well. You know, we could even say in the template today there are a couple of questions that we would rephrase, given the opportunity, to give people the permission to answer in a way that, if you'd phrased it a different way, they might not. One example that we like to speak about a lot: in our template, we ask teams to complete, you know, what's the worst case scenario of this particular use case or this project or this tool. And we don't just stop there. We sort of then say, if you were a super villain, what would you do? And we love that question, because it really kind of gives teams permission to say, well, I'm not me designing this tool or this particular project or this feature for the ideal use case that I love and really think people would love and want to use. I'm now going to take a step back, take a step out, and think about it from a completely different perspective, where I have the permission to think, okay, how could or would this be misused if I didn't think through this answer?
Yeah, it's great, isn't it? We often advocate for organisations to do that as well when they're thinking about, you know, introducing intelligent technology: put themselves in the place of going, how could this be used for evil rather than good, and do that sort of pre-planning in their organisational context. So Atlassian's responsible technology principles are open communication, no BS; build for trust; accountability is a team sport; empower all humans; and unleash potential, not inequity.
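If you wanted to sketch this kind of review checklist for your own team, something as small as the snippet below would do. The principle names are quoted from the conversation, but the prompts, the way they are grouped under principles, and the helper function are illustrative assumptions, not Atlassian's published template.

```python
# Illustrative sketch only: principle names are quoted from the episode, but the
# prompts, their grouping, and this helper are hypothetical, not Atlassian's template.
REVIEW_PROMPTS = {
    "Open communication, no BS": [
        "What are we telling users, in plain language, about how this works?",
    ],
    "Build for trust": [
        "Does the intended data use match the expectations we have set with users?",
    ],
    "Accountability is a team sport": [
        "Who owns this system at each stage of its life cycle?",
    ],
    "Empower all humans": [
        "Who is affected by this, and how do they stay in control?",
    ],
    "Unleash potential, not inequity": [
        "What is the worst-case scenario for this use case?",
        "If you were a super villain, how would you misuse it?",
    ],
}


def unanswered_prompts(answers: dict[str, list[str]]) -> list[str]:
    """Return every review prompt the team has not yet answered."""
    missing = []
    for principle, prompts in REVIEW_PROMPTS.items():
        given = answers.get(principle, [])
        for i, prompt in enumerate(prompts):
            if i >= len(given) or not given[i].strip():
                missing.append(f"{principle}: {prompt}")
    return missing
```

Calling `unanswered_prompts({})` simply lists every prompt, which is one way to hand a team a blank review to fill in together.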
They're pretty awesome principles. How did you and the team arrive at them? Can you talk us through the process? Because I can imagine there'd be many, right?
Yeah, and as I think I mentioned, none of this is necessarily new. The core concepts at the heart of each of those principles, none of them are necessarily new. We deliberately didn't want to go out there and say, actually, despite all of these years of research and all these other principles that everyone's come up with, we have
the better, most unique, most different principles, because that's not what this was about. This was really about, first of all, you know, showing our commitment internally and externally. We had a lot of people internally who were thinking about issues of responsible technology, of responsible AI, of being ethical in their approach to AI. And they would often cite our company values, our company mission, in a way that made it very clear that they held them very closely, that it is something that again is quite specific to our culture, but maybe they didn't necessarily have that translation mechanism or know exactly how to bring those values into their work with technology, with responsible technology. And so we saw that as a bridge. These principles are really a way to sort of say, okay, well, here are our values and here is what they mean
when you apply them in this context. And also to say externally, you know, absolutely we are committed to these core concepts, which are very similar core concepts that repeat themselves. You'll see them in a lot of other contexts in the industry, but also in various government and other organisations: very similar concepts. The concepts that underpin each of those principles, which are obviously expressed in Atlassian language, very much like our values, are generally fairly well accepted; there is, you know, good consensus around them. Open communication, no BS, is a transparency principle. Build for trust is really about trust and privacy and security. Accountability is a team sport is about accountability over the life cycle of a system. Empower all humans is about being human centric. Those sorts of things are very much present in a lot of what's out there about responsible technology and responsible AI,
but we wanted to make them digestible for our audiences, which very often are Atlassians, our people. And so the way that we sort of went through it is we did take a big look at what was out there. We looked at the landscape. We looked at where this consensus was forming. And we said, how do we bring this to bear for us and for our people? So that means: what are the core principles we want to accept and commit to, what do those mean to us, describing what that actually means in our context, and what kinds of commitments are we going to make around that? Those are kind of the three things that we wanted to address with each principle. And so my team, which is our responsible technology working group, we really synthesised what we saw into the straw person, the first draft. And we went out to a lot of teams around the company. We said to them, hey, we're doing this process. Will you help us? Will you give us feedback? Will you come in and sort of test this with us, workshop it with us, tell us how you feel about it? And again, we were very grateful to those teams, and very lucky that our culture is one that is quite open and where people were really willing to give their time and their effort. We really had a great open process of developing these principles out, so that we felt really confident when we published them that we'd really spoken to a lot of teams, that we'd really kind of taken the pulse of where we were at, but also where the industry was at, where the broader landscape was at, and we were able to carve our own space within that.
Great. And how has it been received? What's been the response internally, and more broadly now that they're out there, and also from customers?
I think there are a lot of different things that you can say about having principles. I think...
In general, our view, and we speak to a lot of experts in the space, is that they're necessary but insufficient. And so I think having the principles was really important for us internally, more so than anything else: as I said, to have that kind of bridge from our values to our work on responsible technology, to have that kind of guiding light that you could point to of, you know, this is our vision of what we see, what we're committing to, what's important to us in this context. But they're not enough on their own. If you just have principles and stop, they're just words on a page. And so I think we all recognise that it was a great process. It was really interesting for all of us who were involved in it. And it was really helpful for a lot of people to be able to go through that and workshop it and understand what a lot of these concepts mean to us, particularly as they're now making their way into regulations and rules and much more concrete guidelines.
But we certainly didn't just stop there.
The guiding light is a really good metaphor, because I guess the opposite is reaching in the dark and hoping that you hit something. Whereas if you've got that guiding light of the principles, you can always check back and go, have we achieved this? Have we not achieved this? So for the leaders out there that are listening, coming back to it: if you really are looking at the responsibility of your technology and your AI, having the guiding principles is a good way to start. One of the things that I find super helpful about guiding principles is that they create clarity up front, priming people's brains to know in the moment, is this in line with one of our principles? Either explicitly or subconsciously, you're creating that opportunity for them to make a better decision. And I also hear what you're saying about not just stopping there.
And one of the other things that you have created is the guidebook, and then the practical templates, which bring it into the work. Now this is how we bring those principles to life, not just in conversations at the water cooler, but actually in the work. If we take one of those principles, like building for trust, can you tell us how this plays out in the tech review? There's the template which goes through each of the principles and asks those guiding questions. Just talk us through how you've seen that used in the workplace and within the teams, and how that's come across.
Yeah, so I think if we take this principle around build for trust, it's a really interesting one to use as an example, because it is one that intersects with a lot of other areas within a company, particularly legal areas, right? Because trust is where you encounter all of those other rules of the road and, frankly, laws and regulations that help to build trust, including in technology, and so that includes things like privacy and security. So when it comes to this principle, it's not a substitute for, and we say this quite explicitly in the guide, it's not a substitute for whatever processes and procedures and guardrails and policies you may have around matters like privacy and security in your organisation, but it helps to kind of pull that up a level and align with expectations. So in the template here, when we ask teams to conduct their responsible technology review, we really ask them to think about: does the way that this project or tool intends to use data align not just with those rules and policies, but with expectations that we have set about data use? When we ask about expectations, that kind of pulls it out a level. It's not just about cross-referencing a specific policy or a specific commitment or a specific contract, but thinking, what are the expectations that we have set with the user base of this tool or project, or the stakeholders that will ultimately be affected by it? What do they expect, based on what we have said and done in the past and will continue to say and do? And how does this particular project either align or not align to that?
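As a concrete illustration of that expectations check, here is a minimal sketch in Python. The idea of comparing planned data uses against expectations already set with users comes from the conversation; the class name, fields and example values are hypothetical, not part of Atlassian's template.

```python
from dataclasses import dataclass, field


@dataclass
class TrustReview:
    """Hypothetical sketch of the 'build for trust' question described above:
    flag any planned data use that users have not been told to expect."""
    planned_data_uses: set[str]
    expectations_set_with_users: set[str] = field(default_factory=set)

    def surprises(self) -> set[str]:
        # Planned uses with no matching expectation are the ones worth a closer look.
        return self.planned_data_uses - self.expectations_set_with_users


review = TrustReview(
    planned_data_uses={"improve search ranking", "train a suggestion model"},
    expectations_set_with_users={"improve search ranking"},
)
for surprise in review.surprises():
    print(f"Needs a closer look before launch: {surprise}")
```

The point is not the code itself but the shape of the check: whatever the team plans to do with data that the audience would be surprised by is exactly what the review should surface.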
And so that allows you to ask teams to cross-reference any of those internal policies and procedures. And certainly they can think about how those other pieces, maybe a privacy impact assessment or a security review, might be relevant to their answer to this question. But it also lets them take a step back and think about what the audience is going to expect. If we're thinking ultimately about the human at the other end here, how are they going to approach this? Is this going to meet the way that they're coming at this tool, this project, whatever it is, or are they going to be surprised by something there?
Yeah, I love that. For our leaders out there listening, I can't recommend enough going to the Atlassian website and reading through the principles and downloading the guide and the review templates, because what I love about it is what you touched on briefly there: it actually forces whoever is part of building this to stop and reflect and take a different perspective. And that is really hard for us to do as humans, particularly if you're building something that you've been building for a while and you're super passionate about and really excited about, and you go down this rabbit hole of what you deeply want it to be and do in the world, and can sometimes lose touch with the fact that there might be something that's not quite aligned to our principles, or to what the customer experience could be or should be. And so...
This is a really powerful tool to help force that perspective taking, which again, as human beings, we don't naturally go to a place of perspective taking. That's why there are whole industries around helping us do that, like design thinking and experience mapping, et cetera. So I love that this resource fundamentally supports it from that really practical perspective. There's also a line in the intro to the guide that says strong technology governance is built from the top down and the bottom up. This is such a loaded question: what does your ideal look like for leadership in this space?
Yeah, I mean, the ideal, I will say, you know, one of the things about our approach to the reviews and to the guide, and to releasing them as well, is that there isn't necessarily one ideal. What works for one organisation may not work for others. What works in one team may not work in other teams. And I think it's really understanding the context of your organisation and the way that teams within it work together, and making that kind of headline statement, which we believe to be true for everyone, work in that context. So for Atlassian, we are very distributed, but we also have a very open culture, very values aligned. And so the things that we adopt and do might be different to what other teams adopt and do. For one organisation, that might look like a combination of policies and stronger rules of the road and very clear guardrails to work within, and that's kind of the tone from the top, combined with bottom-up, sort of cultural and values-based initiatives that everyone's encouraged to do. For another organisation, it might be more focused on AI skilling across the board.
I think for everyone that does look like a combination of just acknowledging and being clear about the fact that responsible technology is important and it's not just the domain of one team, one person, one process, one step in the process. It really should be something that's present throughout the organisation and throughout your processes of developing or designing or deploying or purchasing technology.
And I think the mindset of no surprises, like you were just saying, does apply to everybody. Because I know in all the organisations I've worked in, everybody hates surprises, unless it's a birthday party. If it's in your work, you don't want a surprise, because the surprise is usually, hey, we haven't told you about this, and guess what, it's not very good. So yeah, I think that applies to everybody. If that mindset is across the organisation and everybody's looking to create no surprises, then these types of tools will help you get that. I think the pressure now is higher than it's probably, well, it feels to me higher than it's ever been, around negative stories hitting not just the front page of the paper, which we talked about with Stephen King; we talked about that as a test: is this going to make the front page of the Herald Sun as a bad story? Now a story can be quickly amplified globally and in so many mediums, so the pressure for things not to get to that point is probably higher. And again, what I'd say about the review template is that it is accessible to anyone, in any role that they're playing in the organisation, to pick it up and check through the principles and check through the questions and be able to go, okay, well, this piece of technology that I'm procuring, how do I know that it's in line with our principles? It's just super practical in that respect, which makes it really easy for people to pick up, and, as we like to say, you have to make it easy for people to do the right thing, and that's just a way of helping people do that. One of the other things that's really explicit in the guide is continuous improvement, you know, that progress over perfection mindset, which you guys obviously have as a strong theme within the organisation.
And there's the statement here to continue to evolve them as the AI landscape changes, and that's okay: unlike a JIRA ticket, responsible technology never moves to the done column. I mean, again, I just love the language and the way you guys write. Why is it important to continuously improve and iterate the reviews and revisit the principles? What are you thinking about there over time as well?
Yeah, I think coming back to the way that we described the principles, we said that they are a guiding light. And so I think the core concepts at the heart of the principles will probably never change. But as we as an organisation mature, and as the technology changes over time, you can easily see those other elements of the principles that I spoke about, what they mean and what commitments we are making, are going to have to change. They're going to have to evolve, and they're going to have to, frankly, improve. And so we wanted to really acknowledge that. Breaking that down, all of these things, we see, are going to change over time. The nature of the technology: you know, the AI that we are talking about today is not the AI that we were talking about two years ago, and it's not the AI that we will be talking about in two years, and maybe it won't be AI at all, it'll be a different technology as well. What our customers and users and people expect of us is going to change over time. What the law expects of us is going to change; that's always sort of evolving and moving. And down to the level of an individual
project, and we talk about this in the principles and in the reviews and in the guide, that has its own life cycle where you do constantly need to be checking in, because it can evolve, because it can grow, because it can do things maybe that you didn't expect when you started, because the uses that you put it to might also change over time. You do need to keep checking in, and so we talk about the review as a living document. It's not one and done. You should be coming back and looking at your reviews and saying, well, we said we were going to make these improvements; how did we go with those improvements? We said that this was our expectation of, you know, how the user base would use it; is that true? We said that we would be collecting and acting on this feedback; have we done that, and what kinds of things have we learned? Do we need to be adding to this? Do we need to be revising some of our initial assumptions and expectations? How do we keep taking this forward?
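One lightweight way to honour that "living document" idea is to treat the revisit rule as something you can actually run. The sketch below assumes a regular interval plus a set of triggering events; the six-month interval and the event names are assumptions for illustration, not Atlassian's actual cadence.

```python
from datetime import date, timedelta
from typing import Optional

# Assumed values for illustration only; not Atlassian's actual revisit rules.
REVIEW_INTERVAL = timedelta(days=182)  # roughly six months
TRIGGERING_EVENTS = {"new use case", "model change", "new user group", "incident"}


def review_is_due(last_reviewed: date, events_since: set[str],
                  today: Optional[date] = None) -> bool:
    """A review is due on any triggering event, or once the interval has lapsed."""
    today = today or date.today()
    if events_since & TRIGGERING_EVENTS:
        return True
    return today - last_reviewed >= REVIEW_INTERVAL


# A model change forces an early revisit, even well inside the interval.
print(review_is_due(date(2024, 1, 10), {"model change"}, today=date(2024, 2, 1)))  # True
```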
And if you set something formal in place for that, like if you said, I will come back in six months, or we'll test it and come back in a quarter or a year, have you got any thoughts about that sort of cadence or frequency?
Yeah, I think we talk about this in the guide when it comes to individual reviews, about the expectations for how often you'd revisit it. Sometimes there might be triggering events that mean that you should come back to it sooner, and sometimes it's just a regular check-in where you're like, how is this going? Have you updated your review? When it comes to the principles, that is something that, again, we're checking in on and revisiting, along with the review template and the guide, on a pretty regular basis. But when it comes to pushing an update, doing a comprehensive update, that's something that we want to obviously give a lot of thought to and make sure that we also adopt that workshop approach that we took with the original set of principles and with the review template as well. So that's a slightly more involved process, but certainly we have plans in train to take another look at the guide and at the template, which are both coming up on about a year old, and think about, yeah, what does the next stage look like?
Great. So you presented this at Atlassian Team Conference.
What's the response been in the room? Has it been, this is awesome, this kind of helps us ground our work? What have you seen? What have people come up to you with afterwards?
Yeah, I think it's been a really interesting process. First of all, just working out how to pitch something like this, because you don't always come to a tech conference and decide that you're going to go to, as we say, an interactive workshop about deploying AI responsibly. But we found it really interesting, because we have run it as an interactive workshop where we don't just say, here's our template, go download it. We get teams to form themselves in the room, we ask people to make a friend in the room and really talk through something that they might be grappling with. In some scenarios we give them a scenario, but it's always sort of more fun and more important and impactful when you bring a real-life use case, and we say, let's talk through some of the questions, let's think about this. And so it's had a really good response from attendees, who might not have been expecting something like this, which can be quite dry, to be an interactive session. And I think our real goal here is not to say, please go take the template and use it exactly as we wrote it, just fill it out as downloaded. Our intent there is actually to really inspire teams to take what works for them and apply it in a way that works for them. And so I think that doing it as that kind of interactive workshop is a good way to introduce the concepts and introduce that idea of being able to go, actually, this particular question is able to crystallise something that we're worried about, and we haven't had a way to do that yet.
And having been in a room where you've done that interactive experience, the person that I was partnered up with was building a product, and they were like, this is awesome, because these are things that I haven't thought about as explicitly before.
That is a good way to get people to think about the content and the principles, but also the practicality of applying it to themselves, which is the way that we generate insight, and hopefully, if it's strong enough, that translates into action. Having been in that room, I could see that happening. You can see those sparks of those conversations and people going, wow.
This is actually a lot more practical in that sense. Our last question is one that we ask every guest, which is: when you think about humans and AI in the workplace, and this can be as big as you want it to be, or about the responsible side, whatever works for you, what would be your wins, your opportunities and your watch outs? We call it the WOW, because... well, wow can be all the things. It can be wow as in wonderment, or it can be wow, underwhelming, or it can be wow, I'm not sure. And I think, you know, that can be the experience for any one of us at any different point around intelligent technologies, especially in the workplace. But coming back to it: what's your WOW for humans and AI in the workplace?
Yeah, I feel like I might be that last, in-between
wow. I think from my perspective, the wins are already sort of things that we're starting to see about how you can work with AI in the workplace, in just about any team. I think that's what's really interesting about some of the more recent generative AI solutions: they are available to almost anyone in any team. And that's super interesting, because there are so many different ways that you can use them to deal with a lot of, say, repetitive tasks. There are certain fields where these sorts of AI solutions are so helpful in terms of bringing new people up to speed, or not having to ask a question that you know has been asked a million times before. There's having more easily accessible knowledge, or that sort of not having to ask, you know, a colleague or a coworker, hey, can you just read this and help me work out how I can rewrite it to make it sound a little better. A lot of individual tasks that we can now ask of AI, in a way that's collaborative and lets you have a more natural-language back and forth. I think that there's a lot of win in there, and also opportunity.
And I think the opportunity I would say is tied to it, which is that now that AI can really be accessible to so many people, there's also a real opportunity to help all of those people understand what it means for them that they can now work with AI in this way: how best to work with AI, what it's really good at, what it's less good at, you know, what are its actual limitations? Because I think once you really understand the nature of the AI system that you're working with, and the things that it does really well and also the things that it does less well, that equips you much more clearly to actually maximise the opportunities to reach those wins. I think it really is about understanding, and not just thinking about AI as this big monolith, but saying, okay, with this system, this is what it's really good at, this is what it's less good at, and therefore I know I can give it these tasks and this will get me the best result from this particular system.
We're doing quite a bit at the moment around that old-school skill of critical thinking.
And I was really super excited the other day to see on TV that there was a school that was helping their students critically analyse and question what technology was presenting to them, whether it was on social media or whether it was through a large language model or generative AI. And I think if we go into the workplace, we've probably not empowered people to do that as much in the past, and it's something we actually need to bring back to the forefront now in a very explicit way. People are often just told to bring critical thinking; let's break that down. We're helping some of our clients break that down into, actually, this is what it means for people in the moment when they're making the decision in the flow of their work. This is what critical thinking actually is now, in this new landscape. Yeah.
And I think that that is the real opportunity space: AI literacy in the workplace. Because now is the time, now is the chance, because AI is becoming more pervasive in the workplace and it's not just the domain of a particular team or a particular solution. We're talking about these sorts of foundation models that can do any number of things. And so really knowing
how to use it, how best to use it, what the potential issues and risks and concerns could be, as well as what the upsides are. I mean, that's a little bit of what we try to do with the responsible technology review as well, in the sense that it's for every person in every team to be able to pick that up and understand, when they're dealing with an AI system, what are some of the downsides, but also what are the upsides and how do we maximise them?
And then the watch out. It's probably not going to surprise anyone, coming from me with my legal background, but with AI and responsible technology the landscape is always changing. And so that watch out would really be needing to be on top of what the rules of the road are, what things are changing, what concerns are emerging, what risks and harms are feeling very present in the AI that you're dealing with, and where the riskier areas are: where you might be deploying AI in circumstances where the AI that you're using is involved in more consequential situations or decisions. Is it affecting people's lives? Is it affecting their livelihood? Is it affecting their experience of work? And knowing where those hotspots are, what you need to watch out for, what you need to be aware of in those areas.
Such a good call out, because it is constantly changing, isn't it? As the technology evolves, we need to be readjusting what we're thinking about in that context, and that's really cool. I have another question which is kind of not on the list,
which is kind of in line with the watch outs in some ways, which is...
The EU AI Act obviously is in play now, and there was lots of talk about that. What I loved about it was the requirement that, within the first six months, any organisation in the EU, or shipping things to the EU, has its teams at at least a medium level of maturity around AI literacy and fluency. I think that's a really big opportunity in any organisation, and I advocate for that really strongly, because that's your ticket to participate, isn't it? That's the inclusion ticket for me: if you understand what the terms are, and therefore the risks associated, because you do understand what's at play around different types of technology. What are your thoughts on what we might get in Australia? Because there have been some grumblings about something happening in that regard, and I think it is really important for us to be thinking about it.
What are your thoughts on this? This is very futuristic.
So we at Atlassian do engage with a lot of these processes around the world, with the European process, with the Australian process. In Europe, we recently announced that we signed on to the EU AI Pact, which is a bit of a precursor to the AI Act, and it's where we're making very core commitments around that literacy point, around AI governance and around AI transparency, that we're going to be held accountable to report on in about a year's time.
And honestly, we're kind of excited about that report, because it feels very much like the way that we like to approach our, you know, you said our playbook, our responsible technology work to date, other publications like our sustainability report, where we like to really share how we've progressed, where we're at, you know, what we have to learn, what we have learned. And so we're really keen to be part of that process; we're really excited to be part of that process. In Australia, we've been engaging in a lot of these consultations around, you know, what do we do to prepare and ready Australia's legal frameworks for the challenges and the opportunities of AI. And certainly, we're really supportive of the direction that that's gone. It's one that has taken an eye towards, how do we fit in with what governments and regulators are doing globally, but also make sure that this works for Australia and within the context of what we already have, but also, you know, where we need to go, where the gaps are.
How does this fit in with other reform processes? We spoke in the trust principle about the intersection with topics like privacy, and we have privacy reforms underway. So all of that is to say, we're watching the Australian process with interest as well, we're very keen to see where that ends up, and we remain really supportive of that process.
That's really encouraging. I was hoping you'd be participating in those developing conversations, because it is so important to get that diversity of thinking across industries in the early stages of putting something like that together and getting it out. And there's a really good opportunity for us to learn, in workplaces, about intelligent technology, in a way that we are still learning about how social media has impacted our lives, that type of technology used in a slightly different context to what we're talking about in the workplace.
And so there are some really good opportunities to take that learning forward, which I know we've seen our government take some action on around social media. So it's great to hear that there's thoughtfulness about how you bring in something that is fit for purpose and robust enough, with the protections, that also still creates the opportunities to invite innovation in Australia, because economic welfare depends on that as well.
Anna, thanks so much for coming in and sharing your ideas and your thoughts around the responsible technology principles and the guide and the review template. We really appreciated your time and you sharing with us how this came together. So yeah, thanks. Thanks so much. No, thank you so much for having me. This was really fun. Humans and AI in the Workplace is brought to you by AI Adaptive. Thank you so much for listening today.
You can help us continue to supercharge workplaces with AI by subscribing and sharing this podcast and joining us on LinkedIn.