Transcript


Hello everyone, I’m Stuart P. Her, welcome to The Flow State Podcast. I’m joined today by Jen Arnold, who I’m sure you may remember from previous amazing conversations. Welcome back, Jen, pleasure to have you on again. It is always a pleasure to join, Stuart, we always have great conversations. Yes, we do. Hopefully, it’ll be another one today, and also hopefully, I’ll manage to not swear or say anything that I shouldn’t, given that we’re now live as well. So, bearing that in mind, we’re getting back together to talk about the implications of AI again, specifically with the lens of the relationship between our friends in the government and private industry.

To set the scene before we dive in, we’re obviously at a bit of an inflection point industry-wise at the moment, socially and economically. I don’t want to get too much into the politics; let’s remain neutral. But obviously, there’s a lot of pressure on various economies globally and locally at the minute, a lot of contraction in markets, some turbulence thanks to certain people in certain positions. And all that means a really super heavy focus on ROI, which obviously naturally leads to pressure on sales and marketing, as I’m sure everyone is familiar with. And that coupled with obviously the continued explosion of AI means that previously safe jobs like, you know, white collar jobs like development for example, are also now in danger. And our friends at Microsoft have just announced a huge round of redundancies and firings, which is a great way to, you know, kick off July.

Now, where this is going, Jen—I’m not just going to talk the whole way through—is what is your opinion on both where the government could or should be stepping in here? Like, is this good, is it fine that private industry is driving the direction of, you know, the uptake of AI and how we’re all using it, or is that super dangerous, and should we be seeing more regulation and protection given that we might all be like losing our jobs tomorrow, and you know, society could collapse and we’ll all be replaced by robot people, just to kick things off?

Yeah, just kick those off. There's a bit to unpack there. From a regulatory environment, as you said, there's a lot to keep up with, and it's moving so fast that putting government regulation in place that keeps up with the pace of technology is a huge challenge. Right now, they're looking at what we already have. We already have privacy legislation. We already have other laws that protect people from how and when their data is used. We've started to see more cases, you know, where people are creating deepfakes and it's harmful, and other laws and regulations stepping in to address that. So that's the immediate thing: what do we already have that can be applied, but also looking at how it's going to be used in future.

If you look at state and federal government, they do talk to each other, but there are different laws and legislation being put in place across the different state and federal governments. Federal government's been working on it for a long time. They've been using AI in its different guises for a long time. I think what most people think of now is generative AI, and, particularly from a government standpoint, agentic AI, where you're chatting with agents who are not human to be able to access services, and that's a little bit different. The traditional machine learning type of AI, government's been using for a long time. Organizations have been using that for a long time for decisioning, for automation of processes, etc. But the generative and agentic AI is what has been changing so quickly. So federal government's been working on some regulations and policy for themselves that'll be coming out soon.

The Australian government, compared to the private sector and other governments around the world, has been quite risk-averse in how they are applying it themselves. They got a little spooked by Robodebt, which wasn't necessarily AI but was a human oversight issue. And so, one of the big areas of focus for government is making sure that AI isn't being used in an uncontrolled way without human oversight, that there are checks and balances along the way. And particularly that decisions being made and information being shared are traceable, meaning that you can see how it came to a decision and there's accountability. So it's clear, you know, if the information is wrong, who is accountable for that. That's the challenge of putting something together, because with the AI tools we have, quite often that transparency is not there, and they want to be sure it is before they unleash it broadly.
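Jen's traceability point can be made concrete in code. Here's a minimal sketch of a decision audit trail where every automated decision records its inputs, the model version that produced it, and a named accountable human. All names and fields here are illustrative, not any government's actual system:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """One traceable, accountable automated decision."""
    inputs: dict            # the data the system decided on
    outcome: str            # what it decided
    model_version: str      # which model or rule set produced it
    accountable_owner: str  # the human answerable for this decision
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class AuditLog:
    def __init__(self):
        self._records: list[DecisionRecord] = []

    def record(self, rec: DecisionRecord) -> None:
        # No anonymous decisions: refuse anything without a named owner.
        if not rec.accountable_owner:
            raise ValueError("every decision needs a named accountable owner")
        self._records.append(rec)

    def trace(self, outcome: str) -> list[DecisionRecord]:
        """Answer 'how did we come to this decision, and who owns it?'"""
        return [r for r in self._records if r.outcome == outcome]

log = AuditLog()
log.record(DecisionRecord(
    inputs={"claim_id": 1234, "reported_income": 52000},
    outcome="claim_approved",
    model_version="eligibility-rules-v3",
    accountable_owner="benefits.team.lead@example.gov",
))
```

The point isn't the data structure; it's that "who is accountable if the information is wrong?" becomes a query rather than an argument.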

Yeah, and look, I think you've touched on a couple of the key issues for me there that transcend this conversation and are now becoming, or at least should be, rising to the top of everyone's list. The first is accountability, which is obviously increasingly challenging when, you know, people are getting fired left, right, and center, right? And the role, or roles, of who manages AI is diffuse, I would say, at present. And the second one's transparency.

So, digging into both of those, and starting with accountability, there's obviously a lot to say on that front. But, you know, hypothetically speaking, if I was someone like Sam Altman and I was like, look, AI is going to change the world, we think we can create sentience in a computer, which is a bit of a stretch in my opinion, but that's a conversation for another time. You know, if I'm all gung-ho for AI, it should be everywhere, let's just go for it. Is that a bad position to be taking? What are the risks around that? And to your point on who's accountable: is it my fault if you use it wrong, or is that your fault for not understanding the tool? Where should that sit?

Yeah, absolutely. And unfortunately, like so many other things we talk about from a sales and marketing perspective, it comes back to your data. If you build an AI tool and you're not feeding it the right data, up-to-date data, accurate data, etc., then what it's spitting out on the other end is not going to be accurate. So, you know, there's a huge amount of accountability for whoever is controlling that and pointing it to the right data and making sure that data is kept up to date. Things like pricing information, availability information.

There's been a whole raft of instances. I think there was one—I can't remember which one—one of the airlines in the US or Canada basically issued a whole bunch of incorrect information because it hadn't pointed to the most up-to-date information about rates or whatever it was. And they were actually taken to court. And the airline was saying, "Oh, it wasn't us, it was our agent". And it was like, well, you know, you're essentially using that as an employee. If you gave an actual human employee the wrong information that they passed on, you would be accountable. So why is a chatbot replacing that human any different?

Yeah, that's quite an interesting point actually, because I think, as you would well know, our friends in the US are particularly big fans of litigation as a solution to all these problems. And that blame culture, especially at a corporate level, is not helpful at the moment, is it? Because, like you say, you wouldn't point the finger at your own employee and be like, we're going to sue you because you haven't done your job properly. You'd just either retrain them or fire them, right? So it does feel like that's a bit of a risk where you're bringing in a, you know, a corporate replacement for a person.

And like you mentioned when we were chatting just before we jumped on, there's a dangerous line of thinking at the moment in that a lot of people are talking about treating AI like a member of the team, which it isn't. You know, you wouldn't pick a hammer up and be like, you're a member of our tradie team, guy, put a little shirt on it and be like, go get the slab at the end of the week, because why would you? So yeah, do you think that line of thinking is risky? Like, aren't we humanizing something that is inherently just a tool, no matter how humanlike it seems?

I think the thinking behind taking that approach is that the way you're interacting with it is more humanlike and in the right tone of voice, which means you're going to get the right response back in the right tone of voice. And to have it sitting there as a point of access for information, like you would if somebody was sitting next to you, or like you would Slack somebody or Teams chat somebody to get information, to treat it that way. But at the end of the day, you have to remember, again, it's just a tool, like any other piece of software. And also, like some of your employees, it can be unreliable. Everybody knows, you know, the guy that you're not going to go to for advice. Again, if the right information hasn't gone into that tool, you're not going to be able to trust it.

So yeah, you know, you need to be able to ask, again, is it accountable? You can't necessarily turn around and go, well, I made the wrong decision because Bob gave me that information, right? So it's challenging. And it also, I think, perpetuates the concern that AI is going to replace everybody's jobs, right? By treating it that way, it almost helps make you more expendable, because you're saying a nonhuman can do this job.

Yeah, yeah. And look, that's pretty interesting, because you've reminded me of something. To run with the human analogy, I'm sure you've worked with people like this in the past as well. One of the dangers with the current AI tools, whether you're putting accurate information into them or not, particularly the conversational ones, whether it's just a model or an agentic type solution, is that their default position is to tell you whatever it is you've asked them with an incredible degree of confidence, whether that's actually accurate or whether they've just made stuff up. And that really reminds me of the kind of people you would hate to work with, right? Everyone's probably worked with someone like that. Particularly if you're in a bigger organization, you might be dealing with, you know, higher-up people who don't really get the detail, but they're enthusiastic about stuff. There's always that one person.

They don't know, yeah, exactly. And there's always that one person who's just running around behind them, telling them everything they want to hear, like, "Oh no, yeah, we can definitely do that". Everybody hates that person, don't they? So that, effectively, is what you're saying the AI tools are becoming in the team, right? It's the smarmy, you know, I'm trying to think of a non-incredibly-offensive way to say it, the smarmy ladder climber, let's say. Yeah, that's right, the know-it-all on the team. The ethically bankrupt, career-oriented person. But the danger with AI is that, like you say, you might just get fired, because if someone turns around and goes, well, this computer can do your job, and it only costs me $20 a month, and you cost five grand, that's quite a simple maths game for someone in a position that's under pressure to generate a better return.

That's right. And back to the government lens on things: I do a lot of work with government departments. I work for a professional network of public servants around the world, and I go to a lot of events with them and have a lot of discussions with them. And in the Australian government conversations I've been having, whether state or federal, a lot of the discussion is: we really want to see this as an augmentation of the workforce as opposed to a replacement of the workforce. And you'll hear time and time again, particularly from federal government, productivity, productivity, productivity. They really want to lift productivity. So they want to introduce the tools in such a way that it frees people up from the mundane tasks so that they can focus on producing better citizen services. They want to give them tools that make it easier to do their jobs.

I was talking to a Deputy Secretary, or Secretary, of the Department of Education here in New South Wales. He said the intent is not to replace people in education and administration in schools, teachers, etc. It's to free them up from that stuff so they can spend more time creating great lesson plans, more time being creative, more time with the students, more time supporting each other. And, you know, the managers being able to give more support to their employees. So they're very much trying to approach it as: yes, it is a tool, it is not a new team member, and it's all about how do we make our existing people better at what they do.

Yeah, look, and I think that's a much more positive spin on what's happening, right? And all those use cases, I think, are super important, because going back to the point we were making at the start, there's obviously a lot of pressure in a lot of places at the moment. And the point you raised around education in schools is a big one, because obviously there's still a big teaching crisis here and in various other countries, right, where we're short-staffed, under-resourced, and schools are underfunded. So there is potential for AI-enabled solutions there. And they recognize that students are already using it, and teachers too. I know we're going to get onto the discussion of shadow AI in a moment, but they knew the teachers were already using tools, right? So, let's not try to fight it, let's look at it, but do it in an appropriate manner, and make sure that it is a useful tool.

And one of the great things I also see coming out of government, which I think the private sector can pick up too, is that they're very focused on co-design. They're not doing it to the teachers, they're actually working with teachers and saying, how can we make this work for you? They're working with teachers, they're working with parents. How can we make this work for you, rather than just going out and applying it without any consultation with the people who are going to be using it, and without knowing where it's going to get the best ROI. So that's a big focus for them too. And again, part of the reason why they're taking a somewhat risk-averse approach is because they also want to make sure that they're not investing in areas that aren't genuinely going to have a positive impact. They want to do the work first, they want to do the proofs of concept first, to make sure that they're putting the focus in the right areas.

Yeah, and I think that's really positive, right? Because the same thing seems to be emerging from the private industry side now, where I think everyone's a bit over the excitement of, you know, huge models that claim to do everything and be able to solve everything, because they just can't. They're so big that everything's pretty surface-level and generic. So a lot of the conversations we've been having around where the real value is are about those more closed, focused use cases. Like, we just want to address this specific problem. I've been talking to a lot of healthcare technology businesses recently, who are obviously dealing with a huge amount of very sensitive data, who just want to focus in on: how do we address this one healthcare-professional-related issue, or how do we address this one patient-data-related issue?

And that stuff, I think, is where you're going to see much more value and more useful outcomes, I guess. But then the flip side of that, which again we can touch on in a separate discussion, is how do you then standardize and connect everything together again? Because, the same way everything goes on the internet, right, there's fragmentation starting already in the AI tool world. And fragmentation times a million has already happened over the last 20 years in every area of digital technology. So it does feel like there needs to be a big focus on: where is the next mesh that brings all that back together in some sort of practical sense, so we can all use these tools together?

Yeah, absolutely. I mean, that interoperability element is really key. We were chatting earlier also about CX, right, and the fact that there's been a long discussion about the need for an across-the-top overview of CX, somebody having visibility across an organization of how and when people are interacting with customers. And there are still a lot of organizations that are very siloed in this. You know, customer support is having one conversation, marketing is having another, sales is having another. Nobody has full visibility. And AI is complicating that even more, because now you have additional channels that people are interacting on, and maybe even less oversight of what's happening, because it's all happening with a chatbot. Who's monitoring what that chatbot conversation is and what CX it's providing back to a customer?

You know, there's a lot more automation because of the ability for AI to go next best offer or next best response. Now, somebody needs to build that. Somebody needs to do the "if this, then that". But the further we get down the path, AI is figuring it out itself. It's looking at patterns, it's looking at responses, and it's going, well, we'll tell you what the best "if this, then that" is going to be. Unless you have oversight, there's a good opportunity to just lose control of the CX, particularly across all those multiple channels. And the tools are making it so much easier for people outside of the teams that typically had those interactions to now have those interactions, right? Because anybody can create content. It's complicating the whole CX-across-the-customer-journey element too.
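The "somebody needs to build the if-this-then-that" point can be illustrated with a toy example. This is a hand-authored next-best-action rule table, the kind a human writes down and can therefore review, as opposed to patterns a model infers on its own. All rules and field names here are invented:

```python
# A hand-built "if this, then that" next-best-action table.
# Because the rules are explicit, a human can audit exactly why a
# customer got a given action — the oversight Jen is talking about.
RULES = [
    # (condition on the customer record, next best action)
    (lambda c: c["open_complaints"] > 0,       "route_to_support"),
    (lambda c: c["days_since_purchase"] > 330, "offer_renewal"),
    (lambda c: c["cart_abandoned"],            "send_reminder"),
]

def next_best_action(customer: dict, default: str = "no_action") -> str:
    for condition, action in RULES:
        if condition(customer):
            return action  # first matching rule wins; the ordering IS the policy
    return default

print(next_best_action({"open_complaints": 1,
                        "days_since_purchase": 10,
                        "cart_abandoned": False}))  # route_to_support
```

When a model replaces this table with learned patterns, the behaviour may improve, but this line-by-line reviewability is exactly what gets lost unless someone rebuilds it as monitoring.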

Yes, yeah. And like we were saying earlier, that's an area that's already not well defined or owned in a lot of organizations, right, because it touches so many different areas of an organization. You must be the same, right? One of my current most hated chat experiences is the CommBank assistant, which is, if not the most useless, definitely in the bottom five most useless chatbots, because it never has the right answer, even to basic questions. The only time I ever use it is to basically hammer it with "I need to talk to a person" until it eventually connects me to a human agent behind the scenes. But it amazes me, with all the information they have, that they can't make it find even the basic stuff that I need. The biggest bank in Australia, I think, still, and they just can't get it right. It doesn't give me a lot of confidence that other organizations will be able to do it either.

And my big frustration around those as well, and this is from when I was working more in the CX space, one of the things I always emphasized, particularly around customer support issues, is having to then start the whole conversation over with a human, right? So you go into this long, complicated chat back and forth, trying to get clarifications, etc., with a chatbot, and it finally says, "I'm going to have to connect you with somebody because I'm not answering your question". And then you start from scratch. That drives me insane. Like, "Oh yeah, we'll definitely pass along your notes," and they'll review the conversation. You're like, "No, they won't, no, they won't". Yeah, that's pretty painful, I have to say.
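Mechanically, the failure being described is a chatbot escalating without passing its context along. A rough sketch of what a context-preserving escalation could look like; the data shapes and the summarize step are assumptions for illustration, not any vendor's actual API:

```python
def summarize(history: list[dict]) -> str:
    # Placeholder: in practice this might be a model-generated summary.
    customer_turns = [t["text"] for t in history if t["role"] == "customer"]
    return " / ".join(customer_turns)

def escalate_to_human(chat_history: list[dict], agent_queue: list) -> dict:
    """Package the bot conversation so the human agent actually sees it,
    instead of the customer starting the whole thing over from scratch."""
    handoff = {
        "transcript": chat_history,                       # full back-and-forth
        "summary": summarize(chat_history),               # quick read for the agent
        "unresolved_question": chat_history[-1]["text"],  # what the bot couldn't answer
    }
    agent_queue.append(handoff)
    return handoff

queue = []
handoff = escalate_to_human(
    [{"role": "customer", "text": "My card was charged twice"},
     {"role": "bot", "text": "I can help with statements."},
     {"role": "customer", "text": "No, I need the duplicate charge reversed"}],
    queue,
)
```

The fix is organizational as much as technical: the human agent's tooling has to actually surface the handoff payload, or "we'll pass along your notes" stays a polite fiction.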

It's very frustrating. Way back when I was much younger, I actually used to work for two—I won't name who they are—very large banks in the UK, in their lost and stolen cards department. And that process was terrible as an end customer, because at the time we were still using this DOS-based interface that connected directly to the bank systems. We were in the bank, like, in the bank systems. And all you could do at the time was cancel a card and then reissue it to the person's home address on their account. So, obviously, nine times out of 10, this is people who are traveling, who've lost their debit card or credit card. And they're like, "Can you send me a new one?" And I'm like, "Yep, to your house". That was it. And they're like, "I'll change my address". And I'm like, "You can't, you've got to go to a branch". And they were like, "Yeah, but I'm in, like, you know, Florence or something". And I was like, "Yeah, unlucky". Basically, that was the end of my ability to assist. I was like, "We can probably make sure that no one steals any of your money, but that's it, done". It was pretty amazing. Obviously, things have improved a fair bit since then. But yeah, it was pretty hard then, and obviously people were quite frustrated with that response. So they'd be like, "Oh, can you just transfer me somewhere, or can I talk to someone else?" And I'm like, "Yeah, you can, but they're just going to say all the same stuff that I've just said, in a longer way, basically".

But yeah, I mean, obviously there were some pretty significant reasons why that was the only way you could do it, but it doesn't make it any less annoying when you're there, lost, with no money and no ability to withdraw any money from anywhere either. Yeah, absolutely.

Yeah, getting a bit off topic there, Jen, but bringing things back around: to your point around customer experience, and drawing together that point about the pressure on companies to be generating a return, an ROI. I did read some alarming related comments, by the way, about how the C-suite broadly—not to disparage them too much—are all now talking about EBITDA upside as well as cost savings in relation to AI, which is a somewhat concerning change of direction in the conversation, back to your point around who we're replacing with computers. But I do want to talk about the shadow use of AI and what that means.

So, tell me: one, what is shadow AI use? And why should we all be worried about it? And what are your main concerns, from the lens you put on this through your chats with your government friends and across the CX world?

Yeah, I mean, shadow IT has been a topic that's been around for a long time, and shadow AI is the same sort of thing. Shadow IT started out as, oh, you're using mobile devices that weren't company provided, you're using email that wasn't company provided. Shadow AI is basically, you know, using public ChatGPT to work with company information. But the problem with that is, one, again, it's all about the data that you're feeding into it. You feed data into ChatGPT, and it can go into the future pool of information that ChatGPT can draw on. So if you're sharing sensitive information, or company sensitive information, into that tool, that's a huge risk. So that's one issue.

The other, again, is if you're drawing on it for information: where is the transparency around where that information came from? How did that decision get put together? How did that viewpoint get put together? How accurate is it, when you have no idea whether or not the data it was drawing from was accurate? So there's a huge issue, particularly if you're just purely cutting and pasting out of there and not doing any level of review. And there are instances where ChatGPT has just made things up, right? You say, give me the reference for that information, and it goes and makes one up, because it doesn't like to say, "I don't have one". So there's that hallucination issue as well.

So, more and more, the companies that are using AI properly, whether that be government or private sector, are basically building their own instances, where they do have control over the data that's going into it. They have traceability. They can actually see what information is being shared with whom. They make sure that sensitive information doesn't get out; customer data, for instance, would be a huge issue if they were putting it into the tool and it was going into public ChatGPT-type interfaces. So really, it's the security concern, the privacy concern. But it goes even down to having it use the right brand voice, the right brand information, and, as I said, up-to-date pricing information, etc. All of that accuracy is the other issue around it. So security and accuracy would be the two big ones.

Where I've seen it really being done well—so I was at a briefing with one of the big consulting and systems integration groups—is where they have basically built their own AI tools. One is an internal one, which finance and HR type people can access, that has very sensitive information, and there's a lot of control over what data goes into it, what systems are connected to it, and who is allowed to access it. But then they also have one for their sales and marketing team. It points to their own thought leadership content, their own brand content, etc. So they can use it to create sales campaigns and marketing campaigns, but they know it's drawing on the most up-to-date product and offering information. It's got the right brand voice, it's got the right brand visuals, all of that sort of thing. More and more, the companies that are doing it well are doing that. And it's connecting into their Salesforce systems or SAP systems or Oracle systems, etc., to make sure that it's drawing from appropriate data.
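The setup described here, separate assistants whose retrieval is restricted to the data sources a given role is allowed to see, can be sketched very roughly. Everything below (role names, source names, the in-memory store) is invented for illustration; a real system would sit in front of a vector store or CRM query:

```python
# Which internal data sources each role's assistant may retrieve from.
# The finance/HR assistant sees sensitive systems; sales/marketing only
# sees brand and pricing content — the split described in the briefing.
SOURCE_ACCESS = {
    "finance_hr":      {"payroll_db", "hr_records", "thought_leadership"},
    "sales_marketing": {"thought_leadership", "brand_content", "pricing_current"},
}

def retrieve(role: str, source: str, store: dict) -> list[str]:
    """Gate every retrieval on the role's allow-list BEFORE touching data."""
    allowed = SOURCE_ACCESS.get(role, set())
    if source not in allowed:
        raise PermissionError(f"role {role!r} may not read {source!r}")
    return store[source]

store = {
    "pricing_current": ["Widget A: $99 (updated this week)"],
    "payroll_db":      ["<sensitive>"],
}
print(retrieve("sales_marketing", "pricing_current", store))
```

The design choice worth noting: the check happens in the retrieval layer, not in the prompt, so the model physically cannot see data the role isn't cleared for.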

Yeah, look, I agree with your headlines there: security and accuracy are the primary concerns, based on what we've said already. And to your point, I know we weren't going to talk about this in great detail, but the lack of education around how to use the tools is clearly what's driving a lot of the issues, right? A mix of human behavior and just lack of education, I think, are the two real danger-zone areas, given that you've got people using all this stuff.

A couple of alarming stats along the same lines here. I was reading the "Oh, Behave!" report from a UK business, I think they're called CybSafe (apologies if I'm mangling your name). It's a really extensive report, pretty interesting, and there's a section on AI. And in there, the generations sharing the most sensitive information with AI tools without their employer's knowledge are, sadly, my fellow millennials. Kind of letting the team down there. 43% of millennials apparently have shared sensitive information without their employer knowing, and 46% of Gen Z. And this is only the people who've actually stated that they've done it, so I would imagine there's a much bigger proportion doing it and not telling anyone.

Or just not even knowing what constitutes sensitive information. Like, what is appropriate? It's not like you work in defense, where things are classified with strict levels, you know, top secret, classified, etc. When you get into product development, how much of that is sensitive, and what can and can't be shared? So there's a lot of data classification work where, again, somebody needs to be accountable for saying: this is appropriate, that's not appropriate for being shared.
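The data classification work being described ultimately boils down to labelling data and gating what can leave. A minimal, illustrative sketch; the label names, the ordering, and the fail-closed choice for unlabelled data are assumptions here, not any standard scheme:

```python
# Sensitivity labels, lowest to highest. Each business defines its own;
# these four are just a common-looking example.
LEVELS = {"public": 0, "internal": 1, "confidential": 2, "restricted": 3}

def may_share_externally(doc_label: str, max_external: str = "public") -> bool:
    """Gate for anything leaving the organization (e.g. pasted into a
    public AI tool). Only documents at or below the external ceiling pass."""
    if doc_label not in LEVELS:
        # Unclassified data is the real danger Jen points at: fail closed.
        return False
    return LEVELS[doc_label] <= LEVELS[max_external]

print(may_share_externally("public"))        # True
print(may_share_externally("confidential"))  # False
print(may_share_externally("no_label_yet"))  # False — unknown means blocked
```

The code is trivial on purpose: the hard part is the accountable human deciding which label every document gets, which no gate can automate away.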

Yes, look, totally. And I think, going somewhat against the grain of the modern conversation in this space, that's actually where government has quite a lot to teach other industries, because the way they manage that side of things is usually pretty solid, right? I know there are always amusing stories about people losing USBs and laptops and stuff, but if you have to actually lose a physical device for that to become a risk, they're doing quite well in terms of protecting sensitive data and making sure the policies are very clear about who can access what and why. And I know that's slipped a bit in various places due to things like Telegram and WhatsApp being used pretty prevalently. But to your point, every company I've ever worked in, you join, and they just give you access to the drive. Everything's on the drive. No one's got a clue what's where. I've been able to find contracts, proposals, private stuff, just by browsing around. And it's pretty amazing how poorly managed a lot of that is in 90% of private businesses. So, yeah, I do think there's a bit of growing up and maturing that needs to happen there.

Yeah, that's exactly right. But even then, in big corporates, when you've got thousands and thousands of employees, it's hard to keep track of what every single one of them is doing, right? So if some of that information gets saved onto somebody's personal hard drive, and they're drawing from that to put into ChatGPT, it's going to be very hard to have visibility of that. It's different if it's getting pulled straight out of a CRM; that's a little bit different. But even then, you have to be careful if you're all of a sudden giving people access to a lot more of these tools to be able to create content, etc. What data are they pulling from? What are they allowed to say, and what aren't they allowed to say? If you're telling ChatGPT, create an email for me to go out to a customer base about this topic, does legal have oversight of that? Of what you're allowed to say, what you're not, what you're promising, what you're not? Probably not, I would say.

Yeah, no, it’s a really good point actually. We probably don’t have time to talk about it in detail today, but the evolving role of IT is a tough one here, because they’re always the team everyone loves to hate, right? They’re constantly inundated with requests for resetting passwords, why people have broken their laptops, and various other things. But I feel like a lot of this stuff will be getting shoved in their direction from a data governance perspective, because they’re primarily responsible for the software side of things as well as the hardware. So, exactly as you said, I don’t think a lot of businesses would have the capacity or immediate ability to just pull in legal or privacy policies around all that stuff without a lot of manual work behind the scenes to enable it. And then, when you’re throwing in generative tools that you can just download stuff and throw it into, it adds a whole other element of risk that I guess IT teams will then have to try and manage.

But think about your average IT support person: they don’t necessarily know themselves what’s sensitive and what’s not. They’re not the ones who define what’s sensitive information. So it’s about making sure that there’s actually policy, process, and people on the business side working with the IT team to put those boundaries up, because essentially, the business side is going to need to be saying to IT, “This is appropriate information, this is not. This is where it lives, and therefore these are the systems and databases that you could point the tools to, and this is what’s appropriate and what’s not”.

Yeah, look, I’m quite an armchair follower of the general digital security industry, because I think it’s fascinating how fast it evolves and, without scaring everyone, just how many risks there are everywhere. But I do think that’s going to become quite a big part of this conversation because, going back to where we started, the increasingly rapid pace of adoption and evolution of AI tools is really hard to keep up with in a materially useful fashion. And coming back to what we were saying at the start, I don’t know if there even is a meaningful way to really regulate or legislate the use of those tools at present, because next week there might be a whole raft of new things that completely change the game again. So it does put governments in a difficult position, where you’ve got to balance that whole social contract, best-good-for-the-most-people position with, well, how do we actually let people just continue to use the tools as they are and work out what they can do with them?

I think a lot of it, like I said, comes down to making sure some of those fundamental regulations are in place. Think about it more as legislating the outcome rather than legislating the technology per se, right? Because the technology may change, but the appropriate use of data, the adherence to certain industry regulations, to data regulations, to privacy legislation won’t necessarily change, regardless of what tool the information is getting spat out of, or what tool people are interacting with. So if you look at what we have now and how that can apply through an AI lens, I think that’s the starting point for where we need to be in the immediate term, and the fastest thing, right? Because if they don’t adhere to privacy laws and your private data is shared inappropriately, do you care whether that was shared by phone or by a chatbot? The thing is, it’s out there. So it’s the outcome of it, not the actual tool. Yeah, we’re arguing about the cause instead of trying to deal with the effect, right? I think that’s what you’re driving at.

I mean, somebody needs to look at the cause, but from a legislation standpoint, it needs to be more about the outcome. Yeah, I get that. And I think it links neatly to another significant conversation that’s occurring at the moment.

So to recap there, if I’ve understood you correctly, Jen, winding back through what we just discussed: we’re saying security should be top of people’s lists. How are you addressing that? Do you have policies around it? What are the risks, and how do you start to manage them? Then accountability for the use of the tools: who’s ultimately responsible for making sure you’re doing the right things with them? Again, do you have policies around that? Have you got guidelines? How do you manage it? And then, to your third point, I guess a subset of that is your data governance policies: what is and isn’t sensitive? Are there different layers of that? How should or shouldn’t you be sharing information? Which I feel is a very under-addressed part of this whole conversation. Unless you’re in startup world, obviously, where everyone obsesses over the fact that they’ve come up with a new thing no one’s ever thought of and you can’t tell anyone. But beyond that, I think most people in their day-to-day jobs are, as you said, like, “whatever”. Unless something’s got a big red flag on it saying, “Don’t share this,” you just don’t really think about it.

Yeah. I mean, a lot of this is just the iteration of technology over time. We were having a lot of these discussions when the MarTech tools first came up, because it was the same conversation about crap in, crap out from a customer data perspective. The same thing around keeping a consistent brand voice. Being cognizant of customer privacy when it comes to their information, knowing where that creepy line is. They’re very similar conversations. And again, it comes back to the fundamentals. Get the fundamentals right, and whatever new technology you lay on top is going to be easier. But if you don’t have those fundamentals right, there are going to be security issues, privacy issues, brand voice issues, regardless of the technology you lay over the top. AI is just happening faster. It’s putting those tools in more hands, making it easier for people to do things faster and harder to contain, because it’s federating that access across a whole organization rather than just within particular teams and groups.

Yeah, and exactly on that point, look back to other times in history where this has happened, right? Most recently the Industrial Revolution, where suddenly people were like, “Oh, look, we can build machines that can do ten times what a human could do”. Or the invention of the wheel and the combustion engine, or rather the combination of the two, I should say; obviously they happened quite far apart. I meant to dig these out again, actually: I read a really funny advert from around the turn of the century trying to convince people to buy horses instead of cars. And it was absolute genius copywriting, because they were like, “Cars are dirty, you could crash them, they go too fast. A horse will last you a lifetime, whereas the car’s a dangerous nightmare that you don’t want to be driving around”.

And I was like, this is such an amazing perspective flip. If cars hadn’t taken off, you could see how we could all still be cruising around on horses, which, to a degree, I think would be quite cool. Obviously there are some considerations around how fast you can get places, and the fact that you would need to camp quite a bit, I guess. But it was really interesting, because things don’t just happen overnight. There was a good long period where people were still selling horses, and a lot of people would still have been like, “Well, I’ll never buy a car, I’m just going to have a horse”.

And that’s obviously still happening now, right? There’s a great, I can’t remember where I saw it, a reel or something like that, where people were putting up a whole bunch of quotes like, “The robots are coming for our jobs,” and things like that. And it turned out one of them came from 1965. So all of this scaremongering has happened before, and we have somehow adapted and survived. I think back to 20 years ago, when mobile devices were first coming out and I was working in an organization where we supported end-user devices, PCs and things like that. I was literally taking an iPhone into large enterprises and government organizations and saying, “Okay, Department of Immigration and Border Control, if we gave you one of these, what could you do with it?” And in many cases, we were hearing back from the CIOs, “Over my dead body are those devices coming into this organization.” All the security risks, no, no, no, never going to happen. And 18 months later, they’re deploying 100,000 iPhones across organizations.

So, yeah, it’s funny to think about that actually. You kind of forget how much has happened even just in our lifetimes. Like, the rise and fall of BlackBerry was all tied to that use case in government, right? Because they were like, “Oh, it’s the secure phone”. More secure than what, I forget; they must have done something behind the scenes. But everyone was like, “BlackBerrys are rubbish,” and anyone who was in government was like, “Well, you have to have one, because they’re secure”. Yeah, there’s encryption happening.

Yeah, in my first official job out of university, I was working at an agency and we had Compaq Computer as a client. So that’s going back a way. They were since acquired and acquired again, eventually becoming part of HP. But we had one computer on our end, and they had one computer on their end, that sent emails back and forth to each other. We used to take turns going to check the computer to see if any emails had come from the client that day. And we were sending out media releases by literally printing them out, putting them in envelopes, and posting them off to the media.

I remember that. That cracks me up. I wouldn’t dare to comment, Jen, on how old you are or aren’t; we’ll leave that as an enduring mystery of these podcast conversations. But a couple of weeks ago, I was having a very similar conversation with a guy at the library in town. I had to send something to the US, and there were only two ways to do it. One was to print a physical copy of the form, fill it in, and post it. The other was to send them a fax. So I called the library, and I was like, “This is a bit of a left-field question, but have you guys got a fax machine?” And he was like, “A what?” And I was like, “A fax machine”. And he’s like, “No, why would you need that?” And I was like, “Don’t even ask, I just thought I’d check”. He was like, “There might be one in one of the other libraries in one of the bigger cities”. And I was like, “I’m not going to travel to a city to find one, I’ll find another way”.

Yeah, it was pretty funny. When I was just getting into digital, we still had to fax stuff around. Clients had to fax us their contracts and POs and stuff. And I was trying to explain it to my son as well. He was like, “What is it?” And I was like, “Well, it’s sort of like sending an email through the phone, but it prints the email at the other end”. And he was like, “Why would you do that?” And I was like, “I don’t know. It seemed like a good idea for a brief period, for some reason”.

It’s what we had to work with at the time. Look, it wasn’t my favorite thing to do, but it was also an excuse. I’m probably just revealing how lazy I was, but you could make sending a fax last half an hour. Get a cup of tea. The fax machine’s probably broken. The line at the other end is busy. Classic. So inefficient. I mean, I suppose to your point, this is where AI actually does make things a lot more efficient, right? In that scenario you’d just be like, “Send the fax for me whenever you can and just let me know when it’s done”. That would be ideal.

I’m interested to see, though, because, you know, when housewives first got vacuum cleaners and dishwashers and that sort of stuff, the hours they put into domestic work didn’t actually reduce, because the standards of what was expected increased, right? They were just doing more within the same amount of time. And so I’m really interested to see, particularly if this is augmenting, right? Yes, you’re not doing all of those really basic, mundane things, but it’s not like it’s freeing up your time. You’re still putting the time in, you’re just doing different things. So while productivity may be great and you may be able to pump out more, in some cases more is not always better, in terms of the quality of what’s coming out, what the customer is getting at the end of the day, and what you’re replacing that time with.

I’m glad you said that, Jen, both to get back on track, and also because you’ve just reminded me of something else I was talking about last week. I think there’s actually a bit of false equivalence happening in a lot of places, exactly as you’ve just said: people feel like they’re doing more stuff, but they’re not necessarily saving any time. I think people are actually wasting a lot more time playing around with tools they don’t really understand, trying to do things they shouldn’t really be doing. And my current most hated buzzword, vibe coding, is a perfect example of this. If you’re not familiar with it, the notion is that it’s all about the vibes. It’s not actually about doing things more efficiently. It’s just about making your project plan look nice. Like when I used to doodle in my school workbooks instead of doing the work, right?

Yeah, so vibe coding is exactly that. You chat to a bot and it codes for you. You probably don’t understand code, so you don’t understand what it’s building. It sets stuff up, and you don’t understand what that is. It’s how everyone’s building prototypes at the moment. And they’re great for knocking together a quick front end of something that looks nice; I use them all the time for concept sorts of stuff, to be like, “Oh, if I was going to build this, what would it look like and how would it work?” But they don’t actually do anything, because there’s no data infrastructure unless you build that first. There’s no actual diligence around building databases, security, any of the core stuff you need in an application. You’re just painting the house before you’ve finished the structure, basically.

So they do have a place. But to your point around the housework example, I was having this conversation with one of my friends who’s a cleaner, because she was like, “I just wash up. I don’t use dishwashers, because you spend exactly the same amount of time rinsing stuff and stacking it and unstacking it as you would just washing it up in the first place. So is it any different?” I mean, your hands might be a bit smoother if you’re not washing up without gloves. But at the end of the day, you’ve just got a big machine in your house that’s doing what you could do in roughly the same time. And I don’t know about your dishwasher, but ours, thanks to its eco settings, takes a billion years to wash everything as well. So I actually could have had all this done in half the time if I’d just done it myself. But no, I’m using the dishwasher, it’s there, I’m putting everything in, right?

And it’s funny, I like using my dishwasher because it supposedly uses less water than hand washing. But does it? Supposedly. This is getting completely off track, and maybe this is a conversation for another time, but the environmental impact of using AI versus just basically Googling something is astronomical. The energy cost of using an AI tool for really mundane tasks is quite considerable, right? So there’s also a whole other question around ethical and appropriate use of AI when you take that into account, particularly for organizations that are trying to hit certain environmental targets. We’ve heard about B Corp equivalent companies that pretty much lost their certification because their massive use of AI meant they no longer hit their carbon targets. You’re hearing about big tech companies looking to stand up the Three Mile Island nuclear plant again to support the data processing requirements for AI. There are huge implications. And for a corporation that’s saying, “Oh, we’re a green corporation,” that’s a whole other level of consideration about what’s appropriate and what’s not, who gets to use it for what purposes, and what constraints you put around it.

No, I love that. And winding it back to the main theme of today, the government position versus private industry, I think that is exactly where government oversight is relevant and appropriate, because at a national, country-strategy level, whether you look at federal government or the local equivalent, that’s exactly where they should be. What are our energy policies, what are our economic policies? We’re getting super high level now, but the unchecked use of these kinds of tools, exactly as you’ve said, carries a bigger hidden cost to all of us that is probably not really being considered in the top-level discussions.

It has massive implications, because there’s been a lot of discussion around sovereign capability here, basically because of all the wars and the COVID supply chain disruption; it’s a major concern for governments and large corporations. And it’s been pushing a lot of this move, not just here in Australia but in the US and the EU too, toward sovereign capability. How can we stand on our own two feet a lot more, particularly around energy, which is a big one? That’s what was driving a lot of this green energy versus nuclear energy versus coal debate. Again, there’s massive environmental impact: massive data requirements that come from the use of AI tools, which are going to require building massive amounts of infrastructure, not only on the data center side of things, but also in the energy grid that’s going to be needed to power them. So there are massive implications for the Australian economy and the Australian people, in a lot of different ways that maybe are not being considered.

But one of the things I think it would be great for government to do: for instance, government has been pushing out a really big cyber awareness education campaign, helping people understand how to protect themselves. They’ve done a lot of work similarly around privacy and things like that. They’re doing a lot of training and education internally within government around AI usage and standards. It would be great to see them doing something similar externally, and, you know, there are a lot of AI institutes that universities are running and things like that. But it would be nice to see someone step up and do a public education campaign around some of these AI elements.

Yeah, I totally agree. Going back again to where we started, I think the messaging around it and, as you were saying, the ethical implications are very at odds with people’s use currently. There was a funny article about, I think it was actually Sam Altman, talking about one of the tools and saying that every time you say please or thank you in a generative tool, it actually costs them a ton of money. Even just that one word, right? The bigger the context gets, the more they need to process. And I think people just don’t think of that, because, going back to your point about treating them like people, it’s just presented as a chat. Like this, right? We’re just chatting, there’s nothing further. Nobody thinks about where the cloud is. My mom always used to ask, “Where is the cloud?” And I was like, “Just other computers, everywhere, basically”.

But you’re being given less choice. I don’t know about you, but I use the Google Suite, and the latest version has their Gemini AI tool built into it. There was no choice; it was just turned on. It’s just getting built in. It’s not like you purposely make a choice to go into ChatGPT and do something. It’s just there. Copilot is just there in your Microsoft system. Gemini is just there in your Google system. And I think I saw that OpenAI is looking to build a browser to rival Google’s as well, right? So it’s becoming so embedded that you’re not going to be able to avoid using it, and therefore avoid some of the downfalls of using it.

Totally. It’s funny you mention Google actually, because I was whinging to someone last week; I’ve got about four emails about them putting the price up on Workspace everywhere to cover the cost of pushing Gemini on everyone. But last year when they were trialling it, they were trying to charge, whatever it was, like $20 a month just for Gemini, I think, in its first rubbish iteration. Uptake must have been terrible, obviously. So instead they were like, “Oh no, we’ll just release it to everyone and put the prices up”. But then, what can you do? I don’t want to waste a week extracting everything from Google and changing to Microsoft; it’s a futile exercise because I’d have the same problems there. So, I don’t love that it’s getting pushed on everyone, for all the reasons you just stated, as well as that I don’t think the behavior should be encouraged at a foundational level. I think it should be a choice you can make if you want to use those tools, not have them just be part of your onboarding, like, “Oh, why don’t you use our new video tool?” Well, no, why don’t we decide if we actually need that first, before we start diving into creating videos of stuff. So it really annoys me that I’m constantly getting prompted to analyze documents. I’m like, “I am analyzing it, I’m reading the information that I want to see, leave me alone”. It’s like Clippy from back in the Microsoft days.

Anyway, a rant for another time. But I totally agree; the point you made is a really important one. And definitely next conversation, it’d be great to dig into the broader environmental impacts, because given that we’re in a declared climate crisis, again, this is where private industry seems to be rapidly diverging from our stated goals of reducing energy use and becoming more renewable. I think you can do some of these things in parallel, but you can’t have a technology racing away that requires so much energy at the same time as, to your point, trying to reduce our energy usage. Because I’m sure you’re the same: I’m like an idiot recycling yogurt cups or whatever, and then you’ve got OpenAI just burning through electricity at a rate of knots. Back to your point, why am I accountable for helping the planet when these guys are just trying to burn it down?

Exactly. And for a company, it’s real: I think this month’s edition of the Australian Institute of Company Directors magazine is all about AI, because it is a board-level discussion. It raises all these massive issues that go to the core of: as a corporation, what is our strategy? What do we have the social license to do? Does this adhere to our purpose and our values? So big companies are thinking that way, and governments are thinking that way. But again, they’re talking about it; what do you actually do about it? How do you make those decisions? Who makes those decisions? And how do you do that in such a way that it doesn’t put you at a disadvantage in the market? These are big-picture issues that are being grappled with. And they have to come up with answers quickly because, as you said, the technology is becoming standard; you can’t avoid it.

Yeah, totally. And look, I think that’s a good point to wrap on, to be honest, Jen. That, to me, is the crux of the challenge. Personally, I don’t think people take enough responsibility for their own decisions. I’m as guilty as everyone else here, right? I use AI all the time because of the nature of the work we’re doing and the company we’re building. But I’m also very aware of all the issues we’ve just talked about, because I’m talking about them all the time as well. So I’m in a bit of a pickle, I guess, because I have to use it, but it’s not super aligned with my personal values of trying not to be super damaging as I navigate life. I’m sort of stuck there, because I also have to live and make money. And it’s reassuring to hear that that conversation is happening at the top level as well. But those are the two things for me, right? The company you’re working for, and your personal perspective and standpoint on this stuff: are you aware of the actual impacts of what you’re doing? Which is something it’s important to be aware of in all areas of your life, right? But then you have shareholders that you need to answer to as well. Well, exactly. You’ve got someone breathing down your neck saying you’ll be fired if you don’t toe the company line and make them a ton more money for their next holiday home. So that’s, you know, good and bad.

But then, like you say, the other problem now is the economy is pretty crap. It’s not like you can just leave a job because you don’t like the values of your business, because realistically speaking, why would you? You’d be homeless with no money. So that’s where I feel we as individuals are a bit trapped in some respects. It can feel quite frustrating that there are only certain things you can do to really align your work life with your personal values. Or maybe you just don’t care about this stuff; this might all just be a load of nonsense, I don’t know. But the point I was trying to make, sorry, without getting too distracted there, is that it all comes back to regulation. And whether it’s at a government level or a company level, there’s only so much you can do to train people and inform them, and information doesn’t necessarily change behavior, right? So the point you made was a really important one. We just need to be focused on the outcomes that we want. And if we align the outcomes, then I think the behavior will change naturally. But it does feel like there are a lot of plates to keep spinning to try and make that happen, I guess, is my very long summary of what we just talked about. I don’t know, what do you think, Jen? If you were going to wrap that up, what are your top takeaways from that conversation?

I mean, for me, whenever we have a conversation about anything technology related, particularly in relation to sales and marketing, it always comes down to getting the fundamentals right. If you don’t have the fundamentals right around data quality, data standards, data management, privacy standards, and so on, and also a clear view about why you’re using a tool, what outcomes you’re trying to achieve, whether it’s actually going to make things better, whether you’re going to get ROI, whether you’re applying it in the right place, then it doesn’t matter whether you’re applying MarTech tools, sales automation tools, new types of devices, or AI: it’s going to fall over in a heap. So that’s what I always come back to when I’m working with organizations. And hopefully government is very focused on that too. There might be some frustrations that things aren’t happening as fast as people want, but that’s because they’re working, rightly so, to get the fundamentals right so that it doesn’t blow up in all of our faces. So that’s what it comes back to for me.

I feel like that was a much more grounded takeaway from that discussion, so thanks for that, Jen. Well, look, there’s a lot of ground covered there, as usual. So thanks so much for coming back on and chatting through all of that. I’m definitely keen to revisit some of those points, so I think we’ll be chatting again, if you’re keen to come back. Absolutely. But yeah, thanks for joining. There’s a lot to think about there. And I guess, just be aware of what you’re doing when you next dive onto ChatGPT, is the final takeaway from this discussion, hey?

Exactly, exactly. Thank you again so much for having me, it’s always a pleasure. A pleasure, Jen, a pleasure. I will chat to you again, I’m sure. Wonderful. All right.