There’s a thought experiment you’ve probably seen shared as a meme. Five monkeys in a room. A ladder. Bananas at the top. Every time a monkey climbs the ladder, all five get sprayed with cold water. You swap one monkey out for a new one. The new monkey tries to climb. The others pull it down. You keep swapping until it’s a completely new group, and not a single one of them has ever been sprayed. They still won’t climb. None of them knows why. They just… don’t. (the original meme ends with “and that’s how corporate culture works” and we can all take a moment to acknowledge that’s sad, true and funny all at once.)

I’ve been thinking about this in the context of a piece I read this week (originally published in mid-April) about AI eroding the talent pipeline. And I think we have a version of this problem quietly building in product marketing that will eventually become evident. Consider this my canary-in-the-coal-mine statement.

The Version Nobody’s Talking About … Yet

The conversation about AI replacing PMMs has been loud. I’ve written about it myself. And I still believe what I said: AI can’t replace a skilled PMM, but it will make one much better.

But there’s a slower, less visible version of this problem that doesn’t show up in a headline about robots taking jobs. It shows up five years from now, when a company looks around and realizes its most experienced PMMs have retired or moved on, the early-career team came up in an era where AI did the reps for them, new hires have only ever seen AI and agentic workflows, and nobody in the building actually knows why things are the way they are. They just know that they are.

38% of CHROs at large enterprises say they’re already worried that early-career talent isn’t building the long-term skills it needs (from an SAP/Wakefield Research survey of 100 US CHROs conducted in early 2026). Not the AI skills. The judgment skills. Critical thinking. Emotional intelligence. The ability to sit with a messy market signal and turn it into a clear narrative. And nearly 90% of 2026 graduates are worried AI will eliminate their entry-level role entirely, up from 64% just a year ago.

They’re not wrong to be worried. The entry-level PMM role is where you learn the craft. You write the battlecard that a senior PMM tears apart and tells you why. You run the positioning workshop and watch it fall flat in the room. You interview a customer and come back with notes that don’t quite capture what the customer was actually saying. You iterate. You develop instincts. You fail small.

That’s not inefficiency. That’s the talent pipeline learning.

The Org That Forgets Why

Here’s the scenario that worries me. A mature company with a functioning PMM team decides, quite reasonably and appropriately, to use AI to make the function more efficient. They automate research synthesis. They use AI to draft battlecards, first-pass messaging docs, launch briefs. The experienced PMMs review and refine. Output doubles. Headcount stays flat.

Good decision? On the spreadsheet, absolutely. This is the discipline changing and being redefined in real time, history being written as we read.

Let’s jump forward five years. The experienced PMMs who knew why the positioning was built the way it was, why the instructional Markdown files say what they do, the competitive context, the failed experiments, the customer insight that changed everything in 2022, have all moved on. The team that remains inherited the outputs but not the reasoning. They’ve been reviewing and refining AI drafts for years. They’re fast. They’re proficient. They just don’t know what they don’t know. We have ceded decisions to the machines; Skynet has won.

And when the market shifts (it always shifts), the instinct to re-examine the fundamentals isn’t there. Because it was never developed. The monkey is still not climbing the ladder, and nobody remembers the cold water.

This is the talent debt that Jeff Raikes, a former Microsoft executive, Gates Foundation CEO, and a man I know to be crazy smart, described recently as “a cost that isn’t showing up on any balance sheet yet.” You optimize for short-term output while quietly eliminating the conditions that produce future expertise. The bill comes due later, all at once, when you need the senior judgment most.

The Founder-in-2026 Version

New startups have a different version of the same problem.

I wrote recently about founder-as-PMM: using AI to do the execution work so a two-person team can launch like a much bigger one. I stand by that. At zero headcount, AI is the right answer. Get the product to market. Get customers. Figure out what’s working.

But here’s the question that post didn’t fully answer: who trains the AI?

Not technically. The robots are good at finding stuff, and sometimes making shit up. I mean contextually. The AI that drafts your positioning needs a brief. The AI that writes your battlecard needs a source of truth. The AI that generates your launch messaging needs someone who has been in enough rooms, had enough customer conversations, and absorbed enough market signal to give it the right inputs. Garbage in, garbage out. Always.

A startup that decides to build a PMM function entirely on AI automation from day one has skipped a step. And rightly or wrongly, by design or in ignorance, they’ve got the machine but not the operator. The positioning might look great. The messaging might be clean. But without a human who has developed the instinct to know when the AI is confidently wrong, and AI is often confidently very wrong, you’re flying on vibes that a language model generated, not signal your market actually gave you.

The Humans + AI model I keep coming back to isn’t just a philosophy. It’s a dependency. The AI is only as good as the human context it’s operating in. For a new startup, that context doesn’t exist yet. You have to build it. That means doing some of the messy work yourself before you automate it. The PRFAQ exercise I described in the last post isn’t just a documentation step. It’s you developing the judgment you’ll need to direct whatever AI you eventually point at the problem.

What This Means in Practice

For established PMM teams: document the reasoning, not just the outputs. The battlecard matters less than the decision that shaped it. Your CLAUDE.md, your MSD/MPF, your ICP docs: these aren’t just AI briefing tools. They’re institutional knowledge transfer mechanisms. The goal is that a new PMM in year five has access to the why, not just the what.

And be deliberate about where earlier-in-career PMMs do the reps. Not everything should be AI-first. Some things should be human-first and AI-assisted later, specifically because the struggle of the first draft is where the instinct develops. You can argue that the blank-page problem is solved, because it is. But sometimes knowing how to get from blank page to something is more about the journey than the destination.

For founders: use AI to move fast, but use the work to develop your own context. The interview you do for the case study, the PRFAQ you argue with yourself to write, the customer call you sit on before you hand synthesis to a model, these are the inputs that make everything downstream better. Don’t skip them in the name of efficiency.

The five monkeys don’t climb because they are behaviorally trained not to; none of them ever experienced anything that would inform their behavior. In a world where AI handles more and more of the execution, the humans who know which ladders are worth climbing, and why, are going to matter more, not less.

Tend the PMM talent pipeline. No one else will.

Adam