Transcript
The Flow State Podcast – The Implications of AI 2: Robo-no They Didn’t!
What is up everyone, I am Stuart P Turner. This is the Flow State podcast. As promised, new format, shorter, more concise, possibly. Uh, returning to the theme of the implications of AI. Look, the subject that people just will not leave alone. AI is still everywhere. We drifted away from it a little bit in the previous conversations, but it was still there in the background creeping around. Um, so now we’re coming back to it a year and a bit on to see where we’re at, what’s changed, what hasn’t changed, you know, how is it being adopted, have any of those predictions that we made come true? We’re going to find out. We’re going to find out together. So look, this is the new format unless you’re listening, in which case it’s the same format going into your ears. We’re going to start doing the show live every couple of weeks. If you’d like to join me for a conversation, please let me know. I’ve got some guests returning from the previous two series, which I’m very excited about. But for now, today, this is a bit of a reintroduction slash update on where we’re at in our ongoing whirlwind romance/love affair with AI.
Talking through a few themes that are relevant to us at Flow State, um, and my businesses, as well as our clients, and just setting the scene for the next few conversations, basically. So, uh, without further ado, let me take you on a little journey of where we’re at with our friends, the artificial intelligence tools.
So, bit of a recap, um, from where we were in 2023 when we recorded The Implications of AI. Everybody was going mental for ChatGPT. It was like normal people had suddenly just discovered that you could talk to computers and that they could talk back, which was obviously great. There was a lot of weird stuff happening. Some of that weird stuff is still happening now. Um, but look, it was very exciting, slightly scary. Feel like that’s the same situation that we’re in now, but, you know, plus a mountain of, you know, grifters rushing to start AI wrapper businesses, but look, that’s a whinge for another time.
So what’s been happening since then and where are we now? So rapid fire, the main themes that have emerged and the big things that have been happening that have been affecting me and the people I work with are these:
- Rapid Fire Enterprise Deployment: We’ve gone from an exciting proof of concept—hey, I can talk to my computer and it sort of seems like it talks back like a person—to pretty rapid fire enterprise deployment. The speed with which AI has been adopted is pretty amazing. To see something be released and then be, you know, taken up so rapidly is both great and quite terrible at the same time. It’s happened really quickly.
- Job Security and Human Cost: There’s been a lot more firing in the tech industry as well and that impacts a lot of people. That coupled with the fact that AI may well be coming for your job is not great. There’s a human cost to this rapid fire evolution and it’s not something to be overlooked or laughed at.
- Multimodality: In normal people language, this means the AI tools have gone from just being text-based or conversational to now being able to handle various different use cases. You can produce video, you can produce audio, you can clone yourself creepily. You can also now upload stuff to AI tools and they can do more than one thing with it, such as producing a video and an article, or a podcast, from things that you upload to them. They’ve gone from being a single tool that does one thing really well to doing many things. This is pretty awesome because it’s really helping to evolve what previously was just process automation to a pretty impressive degree.
- Grifting Intensifies: I’m running Flow State and building a startup, and I’ve seen so many just crap businesses launched and apparently having millions of dollars piled into them that are literally just a wrapper for an AI model. I can’t see many of those being sustainable, but it’s happening everywhere at the moment.
- Sex Sells: There is a tragic combination of loneliness (intensified through COVID) and a lot of predatory people trying to take advantage of that while claiming they’re trying to help the greater human cause. Most recently, there’s been a huge spate of news stories about AI companion businesses. There’s weird thirst trapping going on, such as the idea that you don’t even need real women now, you can just make a virtual one. There’s a pretty horrible side to all this that was inevitable, alongside the usual, more exciting frontier side of it that is likely to turn into something useful.
- Agents Have Appeared (Agentic AI): Agents are essentially an evolution of the AI models. People are building virtual assistants—think Siri but actually useful—that can perform moderately complex tasks, like sorting out your shopping list, telling you where to buy groceries, or potentially even making purchases for you (though letting them spend your money unsupervised isn’t advisable just yet).
- Interoperability Has Started to Emerge: In any emerging industry, interoperability becomes an essential conversation, and it has started with AI.
- SEO Industry Identity Crisis: My original background was SEO, and the industry constantly needs to reinvent itself. What’s happening now is the battle between old school SEO and “insert new letter EO,” with people claiming they are optimizing for AI, conversation, or whatever. This is relevant because it will hugely impact how we are building and delivering content and how AI models are finding and using that information.
- Unpaid Interns (AI Models) Are Now Available 24/7: Instead of abusing a human person, you can subscribe for about 20 bucks now and have an AI intern that will work for literally nothing. This is good and bad; if we’re training AI to do stuff, we’re not teaching a future generation of actual human people how to come in and help in the industry. This, coupled with conversations like “we don’t hire juniors” from agencies, is not great if you’re trying to enter the digital workforce.
Where we’ve got to now is this: AI was a bit of a fun distraction. It’s very rapidly become an almost fundamental part of a lot of people’s day-to-day lives and seems to be embedded in business. A huge amount of risk is associated with all of this because people still don’t seem to understand how these tools work, what they’re doing with data, or the agendas of the companies behind them. Governance, compliance, and legality are incredibly important. You can’t just throw some unsupervised data harvesting tool into your highly regulated, highly secure technology stack.
Evolution of Use Cases and Investment: The use cases have evolved pretty significantly. Gen AI spend alone is predicted to increase by around 76% this year over last year, which is massive. US$644 billion is predicted to be invested in generative AI alone. Predictive decision support now outstrips process automation as a use case. The C-suite in particular are now, unsurprisingly, linking Gen AI to EBIT upside and not just cost savings.
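As a quick sanity check on the two figures quoted above (a minimal sketch, just arithmetic on the numbers from the talk): if this year’s predicted spend is $644 billion and that represents a roughly 76% increase, last year’s spend works out to about $366 billion.

```python
# Back-of-envelope check on the investment figures mentioned above.
predicted_spend_busd = 644        # predicted Gen AI spend this year, in $B
growth_rate = 0.76                # ~76% year-on-year increase

# If spend grew 76%, last year's figure is this year's divided by 1.76.
last_year_busd = predicted_spend_busd / (1 + growth_rate)

print(round(last_year_busd))      # ≈ 366
```

So the implied baseline is in the mid-$300-billion range, which gives a sense of just how steep that growth curve is.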
Industry-Specific Models: Industry-specific models are starting to replace general-purpose LLMs. The huge broad models hallucinate wildly, make stuff up, and sound so confident that it’s easy to believe them if you are not thinking critically. These broad models are so broad they are almost useless. What is now starting to happen is the rise of closed, segmented, or fenced-off models that are much more specific, trained on a really tight use case, providing more useful output that isn’t just “motherhood level” advice.
On-Device LLMs: On-device LLMs are now starting to account for a decent proportion of that investment. If you have a Google or an Apple phone, you will now have Gemini or Apple Intelligence, or similar device-level features like Microsoft’s Copilot on PCs. These are being deployed to devices to be a bit more personal and useful. The speaker finds these (like Siri and Apple Intelligence) very lackluster, but other AI models are useful for business aims.
Challenges and Concerns: The challenges and concerns include data and security and governance. High-risk compliance is incoming. If you are throwing a third-party AI tool into your tech stack, you should have done serious interrogation of the risk. A big risk is people giving unfettered access to data and uploading all their data without realizing where it is going.
Another concern relates to the vested interests of the lizard people and/or Illuminati. In reality, this refers to shady, behind-the-scenes characters, like Bezos. His agenda is not your agenda; he just wants to make tons of money, suck in your data, and use it to make even more money himself. A lot of people are racing to be on the forefront of technology without thinking about how this impacts their own business strategy or agenda.
Global Divergence and Fragmentation: AI tools are all evolving in isolation. As with social media, the walls have been raised, and nobody wants to let you out of their individual environment. Everyone is building a tool that only works inside their other tools, which is useless to end users and damages the idea of super efficiency. Walled gardens get even wallier because companies have no interest in letting users get out of their environment, as that is how they make all their money. This results in huge fragmentation, and the speaker does not believe claims of “super efficient” processes without human oversight.
Conflicting Predictions: There are a lot of conflicting predictions. Some research agencies (with references included in the deck) are saying that 300 million jobs are going to be disrupted (people may be fired or jobs significantly changed), but they are also predicting a 7% GDP uplift. The speaker questions where the top-level human strategy is for this, noting that if many white-collar workers are unemployed, where will the GDP uplift come from, and how will society cope.
Useful Starting Points and Interoperability Solutions:
The speaker suggests focusing on the nine dimensions of agentic interoperability, described in a Forrester article by Leslie Joseph and Rowan Curran, as a good starting point for thinking about strategies on one’s own AI journey.
Two big companies are trying to tackle interoperability:
- Anthropic’s Model Context Protocol (MCP): Anthropic optimistically used a USB analogy for this. Just as USB standardized ports across physical devices, MCP attempts to do the same thing across all the different tools and AI models, providing a standardized way of interacting that should make things much easier and more efficient.
- Google’s Agent to Agent (A2A): This is somewhat more ambitious than MCP. What Google is trying to do is build a way for agents to find and interact with other agents, not just with models and context. The idea is that agents all have a standard way of finding and talking to each other, allowing them to interact independently without human intervention.
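To make those two ideas a bit more concrete, here is a rough sketch (not official SDK code) of what “standardized interaction” actually looks like on the wire. MCP is built on JSON-RPC 2.0, so a tool invocation is just a structured message; A2A discovery revolves around agents publishing a JSON “agent card” describing themselves. The tool name, agent name, and URL below are hypothetical, purely for illustration.

```python
import json

# MCP-style request: MCP messages are JSON-RPC 2.0, and "tools/call" is the
# standard method for asking a server to run one of its tools.
mcp_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_weather",            # hypothetical tool, for illustration
        "arguments": {"city": "Sydney"},
    },
}

# A2A-style "agent card": the public JSON description an agent serves so that
# other agents can discover what it is and what it can do. Field names here
# are indicative of the idea, not an exhaustive copy of the spec.
agent_card = {
    "name": "grocery-agent",              # hypothetical agent
    "description": "Finds and prices groceries nearby",
    "url": "https://example.com/a2a",     # hypothetical endpoint
    "skills": [
        {"id": "find-groceries", "description": "Locate items at local shops"},
    ],
}

# Both are plain JSON, which is the whole point: any client or agent that
# speaks the protocol can parse and act on these without bespoke integration.
print(json.dumps(mcp_request))
print(json.dumps(agent_card))
```

The design contrast the speaker describes falls out of the shapes above: MCP standardizes how one client talks to tools and context, while A2A standardizes how agents advertise and discover each other.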
Predictions for the Next Phase of AI:
The speaker believes that standardization and interoperability are essential and inevitable. The following three predictions are offered:
- Optimistic Prediction: Actual sci-fi level interaction with technology. Users can throw away the crappy glass rectangle phone form factor and just talk to computers, which can do things for us. Technology would become more human-centric and less device-centric, perhaps involving a short throw projector and earpiece, or viewing information through glasses.
- Pessimistic Prediction: Imagine if Terminator, Blade Runner, and The Matrix had a baby, and that baby is our future society. This includes robot warfare (building on drone warfare). This would be a horrifying use of developed technology, times a million with AI thrown into the mix. There would be a huge new unemployed, disenfranchised group of people with nothing to do, because society obsesses over business instead of thinking about how to make society better.
- Realistic Prediction: It’s just going to be like business as usual. There are one or more bubbles expanding in this space, and one or more of them will pop, and something more useful will emerge, such as something more interoperable or some new technology. The speaker also predicts, in a “tragic comedy way,” that Donald Trump will be selling a phone in front of the White House before the year is out.
This new podcast format is shorter, more concise, and more focused, digging back into AI and bringing back the implications. The speaker plans to bring back previous guests and new guests to discuss these topics, returning every two weeks.