Transcript
What is up everyone, it's me, Stuart P Turner. This is the Flow State podcast, running a tiny bit behind schedule today, but whatever, it's Friday. As I somewhat terribly predicted last week, the sun has finally come back out, which actually ties into the theme of today's episode in a lateral fashion that I'll get to in a minute. After I said the sun had come back out in New South Wales last week, it obviously just tanked it down again for two days, so sorry about that. I want to say it wasn't just me and my bad opening my mouth; I feel like the weather was partly to blame. But look, we're back, the sun's back, it's nearly the weekend, and today we're going to talk about a serious topic: are you being cyberbullied by AI at work?
Now I don't mean people going onto your Facebook profile and putting "yikes" under all your old holiday photos. What I mean is there's a lot of chat at the minute, and there has been for the past couple of years, about how AI is going to replace a lot of jobs. Those of us in the white-collar world, especially people who work in digital like me and my fellow digital people, were like: look, what we do is super complicated, no one's going to be able to understand or do it, so we're safe, right? We're doing mysterious internet things, and people still don't really know what my job is. Honestly, I can't be bothered to explain; I just do computers. But now AI is coming for us, isn't it? It can theoretically do our jobs.
What I wanted to talk about today (I'll do my best to describe what I'm looking at for anyone who's only listening, and as usual we'll share the links) is this: AI potentially is coming for a few of our jobs. A lot of the discussions I've been having over the past few shows, and a lot of research, now suggest, somewhat optimistically I think, that it might be replacing jobs. On a previous show we saw a European media publisher essentially forcing AI on everybody. C-suite people in your company are probably talking about it. They're probably trying to force it on you because someone's mentioned a new model or a tool or whatever.
So look, if you feel like you're being bullied into using AI, if you feel like AI could actually be being brought in to replace you, if you feel like the number crunchers are thinking "Oh, I could probably just draw a few red pen lines through some of these lines in the old P&L and replace them with a subscription": I want to talk about how you can fight back against that right now, and how you can lean into your human powers, the powers we have as human animals, to push back against the encroachment of AI on your role. Obviously, in order to do that, I'm going to use AI to explain how you can avoid AI taking your job.
Without further ado, I thought I'd do what we all do, right? Jump onto our old mate Google, still trucking along at the moment, though they're shoving a lot of AI crap into their search results. What you can see before you if you're watching, and what you'll need to imagine if you're listening (as I said, I'll share the link so you can click along), is a set of search results for the term "AI predicted to replace jobs". Starting pretty broad. What Google has to say about this, based on its own set of results, is interesting, I think.
What Google are saying, in various different ways, is that AI is predicted to replace millions of jobs, particularly administrative, manufacturing and customer service roles. I've got a few things to say about that that tie into my current focus in the world of B2B marketing, sales and revenue growth. It's a bit of a non sequitur as a prediction, because obviously AI is going to replace or augment a lot of jobs. What I find funny, though, is that a lot of the augmentation is driven by the behind-the-scenes agenda of a lot of AI businesses, which is not to actually help people; it's to make a ton of money before the bubble bursts.
If you are somewhat aged like me, you'll remember the great outsourcing of the late 90s into the early 2000s. Living in the UK at the time, we saw a horrifying shift in customer experience as loads of companies realized they could outsource a lot of their core customer service operations to cheaper markets. And this led to a lot of pretty poor racist chat, let's be honest. The reason for all of it was that these companies, and this happened a lot in the UK, were basically like "Oh great, we can just outsource all our call centers", with India a popular destination at the time. You had all these poor souls chucked into call centers in huge sweatshop-type conditions, super poorly trained, with language barriers and all the basic issues of working across two countries with very different languages and work cultures. Then you've got a core frontline service of your business that is irreparably damaged: damage to the brand, damage to people's experience, customers leaving. But all these companies saw was, well, there'll be a bit of short-term pain, but this is going to be cheaper in the long run, isn't it? They didn't care about the brand experience, right? They were just like, oh, how can we make this cheaper? We're seeing exactly the same situation with AI now, because you can even cut out the cheap outsourced people. You can just bring in a tool to do it, which delivers an equally terrible experience to your potential and current customers.
What's concerning me at this very first juncture in the discussion is that you see stuff like: customer service representatives are going to get replaced with AI, market research analysts, telemarketers. These are very human-centric roles that people want people in. And the pendulum has swung back the other way: 10, 20 years later, loads of brands now make a point of saying all their call center staff are onshore, not offshored, all their customer service staff in the same country. It's funny to me that that is a value-add proposition now, when anyone with any sense could have seen that wholesale outsourcing very complex relationships with your brand to another country that doesn't speak your language was not going to go well.
I guess reassuring point number one for me here: if you need examples of where this kind of massive race to save money has gone horribly wrong, offshoring call centers was a huge one. A totally crap experience for everyone involved, on the employment side and on the customer side. That is exactly what's going to happen with AI. You're going to get poorly trained, poorly deployed AI bots trying to service customers, and they're just not going to do it very well. Probably because all the people who could have managed that process got fired, replaced by AI as well.
Now if you're like, "Stu, that seems like a bit of a reach into the distant past, nobody's calling anyone anymore", I'll say fair play, I'll take that criticism, and give you a slightly more recent example, directly related to one of my companies from a few years ago. Step forward 10 years: we're over call centers. Everyone's like, "Oh, I can do live chat now." We've all got iPhones or Android phones, we can do live chat on websites, this is good.
What happened just before the explosion of large language models was that everyone took call routing from call centers and deployed it straight into live chat. We had little decision trees you could walk through: they'd ask you questions and lead you down a little journey. That was all crap as well, wasn't it, because they were all really poorly thought out and the journeys often ended in a dead end, and then you just had to keep asking to speak to a person. If you're like me, you like the idea of live chat but you just want to skip all the shit and get straight to talking to a person, so you're typing "talk to an agent, talk to an agent, talk to a person". That's the only way to get anything done, right?
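For anyone who never had the pleasure, those pre-LLM chat widgets really were just hard-coded trees. Below is a minimal Python sketch of the idea; the nodes, questions and branches are entirely hypothetical, purely to show why any input the tree's authors didn't anticipate lands you in a dead end rather than with a person.

```python
# Minimal sketch of pre-LLM decision-tree chat routing.
# All node names, questions and branches here are hypothetical.

TREE = {
    "start": {
        "question": "What do you need help with? (billing / technical / other)",
        "branches": {"billing": "billing", "technical": "technical", "other": "agent"},
    },
    "billing": {
        "question": "Is this about an invoice or a refund? (invoice / refund)",
        "branches": {"invoice": "agent", "refund": "dead_end"},
    },
    "technical": {
        "question": "Have you tried restarting? (yes / no)",
        "branches": {"yes": "dead_end", "no": "dead_end"},
    },
}

def route(answers):
    """Walk the tree with a list of user answers; return where they end up."""
    node = "start"
    for answer in answers:
        if node not in TREE:
            break  # already at a terminal node: "agent" or "dead_end"
        node = TREE[node]["branches"].get(answer.lower().strip(), "dead_end")
    return node

print(route(["other"]))              # agent: the one path that reaches a person
print(route(["billing", "refund"]))  # dead_end: a poorly thought-out branch
print(route(["talk to an agent"]))   # dead_end: unexpected input isn't handled
```

Note that "talk to an agent" isn't a branch anywhere, so the tree can't route it; that's the loop we all remember being stuck in.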
Again, a more recent example: that experience was 99% rubbish, 1% good. AI is not going to fix that. AI is just a way to do the same thing at a higher rate in a much worse way. Any company thinking, oh, we'll just wholesale outsource our customer support to AI tools, that is a crazy, terrible decision. Where there is some great augmentation is behind the scenes: you should be running interesting data collection and analysis to improve your frontline services and make them a lot more efficient. Can you just replace them? No, of course you can't. It's terrible; why would you even think about that? I thought this was a good example because it's one we've surely all experienced, and one I don't particularly want to experience again, to be perfectly honest.
The reason this is relevant is that we talk a lot about two different deployments of AI in our companies at the moment. One is the co-pilot approach, which is as it sounds: AI is there to assist you, as a co-pilot, through whatever you're doing. The other is baked-in AI done properly, actually deployed into the infrastructure of a business. Nobody is really there yet unless you're an AI business; everyone's on that journey at the moment. This is where the real danger is, going back to the theme of the episode, of you getting cyberbullied about this. Are people just saying "AI can do that"? Marketing gets a lot of flak for this all the time, right? People say you can just get an AI to do a marketing job now. But you can't, right? You just can't.
There are a few interesting stories here. Forbes have written their usual sort of "I'll just bang out some probably AI-generated, on-trend articles" about this. What I do want to talk about are a few interesting points in this World Economic Forum story that I'll share, on what jobs AI is going to replace and how. And this one: what three jobs are going to survive AI? Depending on who you read and where, Bill Gates predicts only three jobs will survive the AI takeover.
Let me share what the World Economic Forum is saying, because they've got some interesting takes on this stuff as well. This is an interesting article: why is AI replacing some jobs faster than others? If you're in the digital world like I am, this is probably not super surprising. Data-rich industries are prone to being disrupted. Obviously they are: AI models need loads of data. Data-poor industries are not.
What is interesting further down, though, is that our experience runs somewhat, well, maybe not directly, counter to the World Economic Forum's. The WEF thinks that because the internet is full of data, large language models are way ahead of the game. They are, to a degree, but as we talk about a lot in our businesses, just because there is a lot of data doesn't mean it's actually useful or good. Where we're hitting the limits of LLMs is that, yes, you can get some good stuff out of them, but it still requires a fair bit of time and a lot of critical thinking that cannot be replaced by a robot. You cannot, in any serious conversation, think you can replace human-centered critical thinking with a language model. It just isn't going to happen in our lifetimes, I don't think.
This article is interesting because they point out that Waymo, Tesla and others are investing millions of dollars into AI because they want self-driving cars and all this other stuff. But look at the deployment of self-driving cars. The only place full self-driving is successfully happening at the moment is a very small part of San Francisco, because there's been intense mapping of the streets, the behaviors and everything that happens there. And they still don't work properly. People were going and putting cones on top of Waymo cabs because that would just stop them working: the cars thought there was an obstacle. They can't handle unexpected input, changes to road rules, regulations. They just don't know what's going on. To me this is a perfect example of why this isn't working: the ability of any kind of AI model to ingest and use unexpected data simply isn't there.
Coming back to defending your position if you're getting bullied about how AI can be used: you've got to come back to this data piece. Garbage in, garbage out: that's rule number one. If you put loads and loads of crap data into something, you're going to get a crap output. Rule number two, to pick back up on things we've talked about previously, is data governance and security. Do you want to be putting all of your precious IP and first-party data into someone else's model, which they are using to make as much money as possible, not for you? The bigger your business gets, the more of a concern that becomes, particularly if you're a data-driven business: your data is your money, right?
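If you want a concrete way to picture rule number one, here's a toy Python sketch of the kind of quality gate you'd put in front of any model or analysis. Every field name and threshold is hypothetical; the point is simply that junk records flow straight through to the output if nobody checks.

```python
# Toy illustration of "garbage in, garbage out": a crude quality gate on
# records before they feed a model. Fields and thresholds are hypothetical.

REQUIRED_FIELDS = {"customer_id", "message", "timestamp"}

def is_usable(record: dict) -> bool:
    """Reject records that are incomplete, empty, or obvious placeholder junk."""
    if not REQUIRED_FIELDS.issubset(record):
        return False  # missing fields: can't be trusted downstream
    text = str(record["message"]).strip()
    if len(text) < 3 or text.lower() in {"n/a", "test", "asdf"}:
        return False  # placeholder or junk text
    return True

raw = [
    {"customer_id": 1, "message": "Refund never arrived", "timestamp": "2024-05-01"},
    {"customer_id": 2, "message": "asdf", "timestamp": "2024-05-01"},
    {"message": "no customer id or timestamp on this one"},
]

clean = [r for r in raw if is_usable(r)]
print(f"kept {len(clean)} of {len(raw)} records")  # kept 1 of 3
```

It's deliberately crude, but it's the difference between feeding a model everything you've scraped and feeding it something you'd actually stand behind.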
I thought this was a funny one: AI is like that kid in college who's got access to all the exams and the study guides. Of course they're going to crush the test compared to… well, not necessarily. Where this analogy falls over for me is that you can have access to all the information in the world, but if you don't bother to actually look at it and learn why it's useful, that makes you just as useless as someone who hasn't studied at all. This goes back to a constantly recurring theme on these shows: just because you can do something super easily doesn't mean you understand the why or the how of doing it. And if you don't understand that, you're not really going to get anywhere.
So point number three in your defense against AI-oriented cyberbullying: you've got to actually know what you're doing. The challenge at the moment, and where this analogy is also quite pertinent, is that universities and higher education are under massive attack from AI because people are just cheating their way through their degrees. They're not really learning anything; they're just ticking boxes, because a large proportion of people going to university, and we see this a lot in Australia, aren't there for the hallowed halls of education. They're trying to tick a box to do something else. That might be to get a job, it might be to stay in a country, it might be to jump to some faster level of their career. But just because you've got a degree doesn't mean anyone in the working world actually cares. They still think you don't know how to do anything. And if you've AI-ed your way through your degree, you're even more useless than students from my generation, and they were perceived to be incredibly useless.
What I also found interesting is that the World Economic Forum thinks customer support is another sitting duck, ripe for AI automation due to abundant data. I agree, to an extent. However, there still have to be people in customer support. I hate companies where you cannot speak to somebody. The value we delivered in my previous software startup was 99% based on the fact that anyone who signed up to our platform, or was in our platform, could speak to me or one of our team, because we were all there in live chat all the time. That is actually a massive differentiator now.
If customers do need to contact customer service, they should be able to talk to a well-trained person who can actually do stuff. That's just where it is.
Interestingly, the WEF points to sectors with a lack of data, like healthcare, as less exposed. It's mad that the World Economic Forum doesn't realize the healthcare industry is sitting on an absolutely huge amount of data; it's just very, very fragmented. The hard part is that they're dealing with a lot of very confidential data, which again means you can't just be chucking it into ChatGPT. That's "solve hard problems, win big rewards" territory, and there's a lot of interesting stuff happening in healthcare. I actually think it's one of the biggest opportunities for AI to deliver huge amounts of value, helping us all make better decisions about our well-being and our health. Education's in the same bucket, to be honest. There's a lot you could do with education that would be very beneficial but, similar to healthcare, privacy laws and dealing with minors mean there are a lot of guardrails that need to be addressed.
So, look, that’s what the World Economic Forum’s got to say about it.
The one thing I wanted to end on here, on a more philosophical note, is that we as people are really bad at predicting what's going to happen in the future. Being an efficient, effective user of AI tools myself, I jumped over to Perplexity and asked: why are humans so bad at predicting the future? This is the final reason, point number four, in your defense against AI getting shoved in your face everywhere: there is a big bubble going on in AI at the moment, driven largely, as usual, by technology investment and people trying to make a fast buck.
Here's why we're bad at predicting the future: cognitive and psychological biases. Everyone at the minute is saying AI is going to do loads of stuff, because everyone else is saying it. Not that many people are actually interrogating why they think that.
These reasons include:
- Overconfidence: we all overestimate what we know, and overconfidence leads to ruin in some cases. A lot of this AI hype is driven by people who want to make money, not by higher values or a desire to improve the human experience.
- Projection bias: I think stuff, therefore everyone else must think the same stuff. That's a basic human issue we all need to be aware of.
- Optimism bias: assuming overly positive outcomes, thinking things are going to be better than they are. Ultimately we all just think everything's going to be better tomorrow. We're all assuming AI is here forever now, that it's baked in, but we've thought that about a lot of things before and a lot of things have died on their asses.
- Complexity and uncertainty: we are very bad at understanding the factors that determine what happens next. Philosophically speaking, we don't necessarily want to know, because if we could predict the future with any degree of accuracy, we'd risk locking ourselves into a deterministic way of living, which starts to remove any real sense of agency over where things are going. Uncertainty is what makes things exciting.
The other factor I wanted to end on is emotional influence. The jobs I deal with most directly, people in marketing, people in sales, anyone in a growth-oriented role, the areas that me and my team operate in, are all very human-centric. We're trying to market things to people, we're trying to sell things to people. These are emotionally driven decisions and interactions that cannot be replicated by a machine.
The ultimate, firmest defense of this stuff is that no machine at present can understand my intent and the thinking behind why I’m doing what I’m doing. What ultimately drives me to do something is a black box trapped in my own mind. And that is where the human in the loop comes in. You need people to understand people and the tools that we have are just tools. If people are trying to force AI on you, you’ve got to defend those areas that only people can service and support.
B2B buying decisions right now are being made on trust, and they're heavily influenced by emotion. You cannot build trust and make that emotional connection using only automated, AI-driven slop, because everybody sees through it. You've still got to talk to a person at the end of the day.
I just want to be a voice here giving you the confidence to say: do you know what, I am a person doing person-oriented stuff. If we're not here, what is the point of anything, right?
I've been chatting to everyone we're talking to at the moment, doing a bit of market research across three areas:
- The B2B buying journey.
- Specifically identifying and engaging audiences as a precursor to that.
- The actual growth journey, and how that's enabled, assisted and augmented by AI, not replaced by it.
For now, I once again have been Stuart Turner. This has been the Flow State podcast.