Product Management Isn't Dead and It's Not AI That's Going To Kill It
But where's the humanity!? Oh, it's still there, and it needs to be.
Product management is a deeply human craft. It requires empathy, vision, influence without authority, and nuanced decision-making. These are all things that AI, and specifically LLMs, are terrible at.
I want to get two things out of the way before we continue:
I use generative AI on a daily basis and find it an indispensable tool for getting “stuff” done.
I realise that it will be very ironic if I’m wrong about all of this stuff and you’re reading this 6 months from now from the comfort of your Matrix-style human mulching cylinder.
Let’s get into it. Feel free to subscribe before you scroll!
There's been a lot of chatter recently about the rise of generative AI, and the impact it’s going to have on everyone’s jobs:
We don’t need developers anymore because LLMs can write code for us!
We don’t need designers anymore because LLMs and diffusion models can make wireframes for us!
We don’t need product managers anymore because LLMs can write our requirements and even our strategies for us!
These are extraordinary claims, and extraordinary claims require some evidence. So, is this true? We have Sundar Pichai claiming that 25% of Google’s code is “AI-generated”, and if you can’t trust the CEO of a company desperate to catch up with the AI market, who can you trust? Even that claim is caveated with the fact that human engineers are still reviewing (and presumably fixing) the generated code. I’ve also seen claims from Google insiders that the “AI-generated code” is no more than glorified autocomplete and boilerplate. Now, to be clear, this is still useful, but I don’t think Google will be firing all of its engineers any time soon.
OK, but what about product management? Well, here are some of the areas in which LLMs have been touted as revolutionary for product managers:
Summarisation and categorisation of meetings and research interviews
Synthetic respondents for user interviews (!)
Creating product visions and product strategies
Producing requirements documents and tickets for engineers
There are, of course, plenty of tools looking to satisfy all of these needs, with more coming every week, as well as established tool vendors rushing to get as many AI features into their legacy products as possible. But, are these tools really going to kill product management?
The Disadvantages of Generative AI
I’ve been working on and with AI products since way before they were cool. I’ve built my fair share of AI models to try to work out what they can do. I’m even working through a book called Build a Large Language Model (From Scratch) right now. I say this to try to emphasise that I love playing with this tech. But, we do have to remember one central fact:
LLMs can output very convincing, intelligible, nicely typeset text, but they fundamentally have no idea what they’re talking about.
Now, a buddy of mine was a guest on my podcast once, and said something to the effect of “It doesn’t matter if they don’t know why they’re picking the words they pick, as long as they’re the right words”. Outcomes over outputs, baby! And, there’s something to this, but you’ve got two basic choices:
Trust but verify - accept that LLMs are going to get it kind of right, but that the output needs to be checked and moulded by a human with actual expertise.
Don’t use any expertise - just trust that you’re going to be lucky and that you’re never going to roll snake eyes.
The paradox of overreliance on LLMs is that you either know how to do the work yourself (and LLMs speed you up) or you don’t know how to do it but the output looks kind of fine, so you just go with it. The latter case has the added advantage that you never even have to learn how to do things!
My fundamental belief is that, for anything important, generative AI should be used for inputs, not outputs. Like, sure, use DALL-E to make a fun picture of a robotic overlord for a newsletter, but is that the standard you want to set for your actual work? I’ve yet to see a single compelling output from an LLM that I would be happy to put in front of a customer or use unmodified for anything important.
Or, to put it another way… I speak restaurant German, have listened to all of Rammstein’s albums and can easily recognise German text if you show it to me. But, I wouldn’t ask an LLM to write me an important document in German and then just send it. So why would I use it to produce a strategy if I don’t know how to make one myself?
But, Surely It’s Going To Get Better?
Well, maybe. There are compelling arguments that LLMs are already peaking, and that throwing more computational horsepower at them isn’t going to help. There’s already talk of model collapse because they’ve already exhausted the publicly available content (which, by the way, is awash with AI-generated slop, causing the models to “eat themselves”). There’s talk of using hyper-specific LLMs rather than generalist ones, which might eke out some performance gains, but what if LLMs are already roughly as good as they’re going to be? Now, obviously Sam Altman is saying we’ll have AGI by 2025, but wouldn’t you say that if you needed people to keep investing in your company?
There’s also another snake in the grass - the fact that these models are insanely expensive to train (for increasingly marginal returns) and the computing resources needed to run them are crazy. Others have written very detailed articles about the amount of money needed to keep these things running, and how companies like OpenAI need constant investment (in the order of billions of dollars) to keep their lights on. Now, I’m very sure people will keep pouring money in, but how long before they want to see a really big return? This is notwithstanding the various lawsuits being filed against AI companies for misuse of copyrighted materials. No one knows where they’ll end up, but they’re not stopping any time soon.
None of this makes using LLMs for product management bad per se, but are LLMs going to improve enough to kill product management?
Who’s Going To Kill Product Management?
For all the talk of product management being dead, there sure still seems to be a bunch of product managers out there, and the product manager job market is picking up too. But, there are still a lot of people saying that product management is going to go the way of the dodo.
The problem here is separating reality from hype. Anyone who’s used an LLM to do anything important will know that this stuff can be inconsistent, bring up incorrect facts, or just be so… vanilla. That’s barely surprising when we consider that these models are able to generate convincing text based on the sum total of existing human knowledge but are fundamentally unable to reason.
Now, I’ve seen articles saying that LLM-generated strategies were rated (by panels of product people) as higher quality than human-generated strategies. On its surface, this is compelling, but dig deeper and you find that these often aren’t strategies at all, but just a list of features that you could build (all of them obvious). Personally, I come away from these types of discussions more concerned that product managers seem to be pretty bad at strategy than that LLMs are good at it.
But, here’s the thing, it doesn’t actually matter if this stuff is any good if people think it’s good. The narrative that we can automate everything via generative AI is already out there. Many business leaders really have no idea what “product management” even is (and quite a few are pretty bad at strategy too). If they boil product management down to a bunch of mechanical tasks that can be easily automated, then they’re going to be very keen to reduce that cost.
But, that’s not what product management should be. LLMs can’t sit in a meeting with Gary from Sales and resolve a debate over prioritisation (certainly not in a way that would change anyone’s mind). LLMs can’t negotiate scope or get someone to disagree and commit. LLMs can’t empathise with users. LLMs can’t come up with truly novel “blue ocean” strategies. LLMs can’t make judgment calls. Of course, in many organisations, product managers also don’t get to do these things, but LLMs aren’t going to solve that either.
This leads me to the belief that it’s not AI that’s going to kill product management, nor even product managers using AI killing it for those who don’t. If anyone’s going to kill product management, it’s influencers (often trying to sell courses or AI tools) persuading cost-cutting leaders that it can be killed, helping to fulfil their own prophecies.
Falling in Love with Generative AI
I want to be crystal clear that I love using generative AI, I use these tools every day and I think all PMs should be using them too. I use Gen AI as a brainstorming partner, an idea generator, and a research buddy, as well as for automating some routine tasks. I already can’t imagine going back to the way things were.
But, I still have to use my brain. The best product managers use AI to complement their abilities, not to replace their judgment, creativity, or decision-making. If we can use LLMs to buy us more time for the things we should really be spending our time on, then that’s a beautiful thing. That said, human oversight is more crucial than ever in an LLM world.
PS - here are all the podcast episodes you’ve missed since my last newsletter!