Hype-y New Year - How to Cut Through the AI Noise as a Product Manager
"We're all AI product managers now, Dave"
Before we get started today, I’m going to take this chance to wish you all a Happy New Year, wherever you are in the world. I hope you had a wonderful holiday season with friends and/or loved ones, and that you managed to switch off as much as possible.
I’m also going to take this opportunity to basically sponsor my own newsletter. Here’s some stuff you can do with me!
I’ve got a handful of coaching slots available as we enter 2025. If you’re a product leader looking to make an impact in your organisation or you’re a senior PM looking to make that step up into product leadership, let’s have a chat and see if I can help you.
Saeed Khan and I are continuing our two-man push to make product management teams work more effectively with sales teams. We are running another Maven cohort of our well-received “Working with Sales” course, as well as a more affordable self-paced option for those who don’t want live sessions. Check out both options here.
Berliners! I’m coming to Berlin in January for a handful of events and I’d love to see you there. Come to the pub, join our leadership lunch, or hear my predictions for 2025. Check out the events and register here.
Right - let’s get on with it (oh, and remember to subscribe to this newsletter if you haven’t already!)
It’s 2025 and the AI hype is stronger than ever
I wasn’t originally planning to write about AI again, after my last post on the subject. That said, ever since I started back on LinkedIn after Christmas, the AI hype has been strong. I don’t know if it’s because the algorithm is showing me different stuff, or because people are talking about it more, but it’s getting exhausting.
The hype is primarily focused on the idea that 2025 is going to be the year when, amongst other things:
Product management is going to die because developers can just crank out requirements with LLMs now.
This is lucky because software development is going to die because LLMs can generate code faster than humans.
User interface design is going to die because we can just scribble stuff on a napkin and get an LLM to generate working prototypes (including dark mode if we’re lucky).
But UX people are still out of luck because user research is also going to die because of “synthetic respondents”.
Management is going to die (because what the heck do managers do anyway?) and we’ll just have “super individual contributors” running the show.
But not for long, because employment itself is going to die, and we’ll just have HR teams managing AI agents that do all the work.
But don’t worry, because SaaS is going to die (thanks, Satya!) because everything’s agentic now, and we don’t need applications.
According to many commentators, this is all now inevitable because the pace of change in AI is exponential. Apparently, everything’s getting better all the time and we’re going to have AGI (Artificial General Intelligence) by summer. Did you hear that the latest OpenAI model scored 88% on a benchmark you’d never heard of before you read the news article about it? It’s just a matter of time before we’re all doomed.
Is any of this stuff true?
In a word, who knows? We’re one announcement away from every single sceptic being proved wrong. But, you can’t plan your future based on vibes, and so far, most of the hype comes from certain groups of people:
By far the largest group of hype-mongers are the people who stand to make money from the hype (or are trying to recoup their investment). This is inevitable (see also: crypto) and also fairly easy to spot.
You also have people who are just generally techno-optimists and want this stuff to work, and I get that. Having worked on AI solutions in the past, I’m already blown away by the strides that have been made so far. On a purely technical level, I want this stuff to work. It’s exciting!
Inevitably, you have people who don’t really know what they’re talking about but have consumed so much hype from the first two groups that they want it to be true too. Not because of the technology itself per se, but because the idea of “number go up” is incredibly attractive to them.
These people write incessant thought pieces about the inevitability of major disruption. Anyone who questions any of it is a “Luddite” or a “laggard” or “has their head in the sand” or they’re simply “an AI hater”.
Whatever these systems can eventually do, and however good they eventually get, they’re certainly not there yet (although they’re still really good compared to what came before!). Even the new “almost human-level AI” OpenAI model has not yet been released to the public, so we have to take a lot of the claims with a pinch of salt. Some people also suspect that OpenAI has taught to the test. Even if it hasn’t, benchmarks are often the solace of people who want to prove that their system is good at everything when, in real life, you’re not working on benchmark exercises. In the meantime, where is GPT-5? Where is AGI?
When it comes to “AGI” itself, the definition has always been muddy, but OpenAI has been lowering the bar. They are now defining it as when they “develop AI systems that can generate at least $100 billion in profits” - which is both insufficient as a technical answer (there are many ways to try to generate $100 billion in profits) and unsatisfying even from a business perspective (since OpenAI is years away from turning a profit at all). It’s a long way from the sentient superpower that we were all promised and the human-level reasoning that we appear no closer to achieving.
Working with what we have today
Meanwhile, people are still trying to make the best of a variety of, let’s face it, functionally identical LLMs. None of them can be relied upon to give a straight answer, and they’re just as confident when they’re talking nonsense as when they’re bang on the money. They can give the appearance of reasoning because everything is very nicely typeset and polite and sprinkled with big, trustworthy words. Frankly, it still feels like magic to me watching these systems crank out text.
On the other hand, it doesn’t take long to get them to break down, reliably. Try this test: take a subject you know inside out and start asking detailed questions about it. I did this with DOOM 2, and it was soon merrily inventing false details about the game and the monsters within it. So why would I trust it not to make things up about topics I don’t know (say, the migratory habits of an African swallow)?
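If you want to run the test yourself, here’s a minimal sketch, assuming the official OpenAI Python client; the model name and questions are placeholders, and the fact-checking step is deliberately left to you, the human expert:

```python
# A minimal sketch of the "expert test", assuming the official OpenAI
# Python client. The model name and questions are placeholders; swap in
# a subject you know inside out.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Detailed questions about a subject you can verify from memory.
questions = [
    "Which new monsters did DOOM 2 introduce over the original DOOM?",
    "Which level of DOOM 2 introduces the Arch-vile, and what does it do?",
]

for question in questions:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any chat model works here
        messages=[{"role": "user", "content": question}],
    )
    # The crucial step is the human one: you, the domain expert,
    # checking each confident-sounding claim against what you know.
    print(f"Q: {question}\nA: {response.choices[0].message.content}\n")
```

The point isn’t automation - the loop just batches the questions. The real work is you checking each confident-sounding answer against what you actually know.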
That said, it’s easy enough to pooh-pooh the idea that these things reason just because it doesn’t feel like they do. Anyone can be sceptical of anything, and they’ll often be right by default. But Apple went one step further and published a paper arguing that the illusion of reasoning is just that, an illusion, and that what looks like reasoning is LLMs regurgitating patterns from their training data. One might argue that Apple only put this paper out because Apple Intelligence is terrible and they need an excuse, but at least they’ve done the work.
The three things I use LLMs for most are transcript summarisation, spitballing, and general research. I thought transcript summarisation was a godsend because who has time to read all their meeting notes, right? Well, one day, when I did have time, I decided to go back and double-check to make sure ChatGPT hadn’t missed any nuances.
Spoiler alert: It had! I’ve now given up relying on summarised transcripts alone, and I make sure to jot down the key points I think are important live on the call. I work with clients with complicated problems… I don’t want an LLM to average those out or miss something important. Do you really trust an LLM not to miss something important?
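For the curious, here’s a rough sketch of what that double-check can look like, again assuming the OpenAI Python client; the file name, model, and key points are hypothetical, and a naive keyword match like this catches outright omissions but not lost nuance - which is exactly why I still take notes live:

```python
# A rough sketch of the transcript cross-check, assuming the official
# OpenAI Python client. The file name, model, and key points below are
# hypothetical placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarise(transcript: str) -> str:
    """Ask the model for a summary that keeps decisions and action items."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system",
             "content": "Summarise this meeting transcript. Keep every decision and action item."},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content

def flag_missing(summary: str, key_points: list[str]) -> list[str]:
    # Naive keyword check: catches outright omissions, not lost nuance.
    return [point for point in key_points if point.lower() not in summary.lower()]

transcript = open("meeting_transcript.txt").read()  # hypothetical file
summary = summarise(transcript)

# key_points are the notes jotted down live on the call.
for point in flag_missing(summary, ["pricing review", "Q2 migration deadline"]):
    print(f"Summary may have dropped: {point}")
```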
For the other two use cases, LLMs have so far held up well, because you actually have to do some work with them to dig into the topic you’re investigating, and there’s no real expectation that you’ll just fire and forget. Using them to explore the space around a topic, to generate jumping-off points for further investigation (always verify your outputs!), and maybe to spot missing connections between concepts can be a real accelerant. But you still have to do the work to get good results.
But, if we have to verify everything that comes out of an LLM, and the outputs are so unreliable, are we really going to see the death of all those different types of jobs as claimed? How does that even work?
When good enough is good enough
Actually, it doesn’t matter whether it’ll work or not, because people are going to try anyway. Budgets are tight, expectations remain higher than ever, and the hype is as noisy as it has ever been. And, to some extent, that’s fine. There are plenty of use cases where people are prepared to sacrifice human-level quality for something that’s “good enough”. And this is assuming that they were getting human-level quality from their human employees in the first place! After all, drinking a glass of dirty water is better than dying of thirst.
So, how do you survive and thrive as a product manager in this AI-hyped world?
Well, firstly, don’t keep your head in the sand. These tools are here and the genie is out of the bottle. Learn about them and understand them. When I say “understand”, I don’t mean get down and dirty with the ins and outs of transformer architectures and big data pipelines, but learn what they can and can’t do, and how far they can be trusted. This is crucially important for two different use cases:
The tools you use to help you make an impact in your job
The tools you put into your products to help your customers
I don’t care if you call yourself an “AI Product Manager” or not (OK, I do care a little bit), but keep your focus not on the ins and outs of the technology but on the use cases it unlocks and whether you’re using the best tool for the job. Find ways to test small before you invest big. Be curious, but keep your eyes open and your judgment sound. Ultimately, make sure your work delivers value to you, your organisation and your customers.
The current hype cycle isn’t just hype - there are real, tangible benefits that people are getting out of these tools. You can too. But know the limits and check your work. Stay informed, but look beyond the headlines and the incessant hyping of people who, lest we forget, have a vested interest in all of this. Make sure you’re not being hoodwinked, and do your best to be the translation layer between reality and those in your organisation who have been sucked into the hype.