I have always held to contrarian thinking, and it has served me well. There is nothing wrong with optimism, but when everyone is optimistic, it's best that one person take a moment and think the opposite. Why? Because TPMs always prepare for all eventualities.
Let me just say what's on my mind: AI will replace TPMs.
Not in the next year, maybe not even in five, but the trajectory is clear. AI agents are getting smarter. They're not just answering questions anymore; they're coordinating actions, pulling signals from across tools, even escalating decisions. Better tooling (Windsurf, Glean Agents), better frameworks (MCP), better models (Qwen and many others): everything is getting better.
My belief is that the future isn’t “AI helping us do our jobs better.” It’s us managing a small army of purpose-built AI agents that do the job.
And if we’re honest… that means we’ll need fewer of us. A lot fewer.
Today still belongs to TPMs who know how to use AI well. By "well" I mean those who can weave it into their workflows to scale themselves, automate the boring stuff, and create leverage. If you are just doing odds and ends with ChatGPT, you are not really using AI.
But I'm starting to believe the future may not belong even to them.1
If agents become the workflow, what’s left for the people managing it?
That thought sits with me. Uncomfortably. So this post is me starting to think in public about what THAT future could look like, and what it would take to get there.
We’re Asking the Wrong Questions
“Can AI run a standup?”
“Can AI take meeting notes?”
“Can AI write tickets or track a roadmap?”
Sure. Cool. But that’s not the point.
The real question is: Can an AI agent replace a TPM?
To answer that question, we need new ways to measure the role.
Because what TPMs actually do isn’t just run ceremonies or follow agile rituals.
- We process scattered inputs.
- We synthesize direction.
- We distribute narrative.
- We hold accountability.
- We troubleshoot complexity, both technical and emotional.
If we keep measuring AI’s potential by whether it can follow a process doc or prompt template, or which task it can automate, we’re missing the bigger shift.
We need a better framework for evaluating AI as a replacement, one that goes beyond the task-assistant lens.
So… What Would That Framework Look Like?
Lately, I've been sketching a rough rubric to help me think this through: not just "can AI do the task?" but "can AI play the role?"
It’s built around what I think are the core functions of TPM work:
- Processing: making sense of messy, fragmented inputs
- Synthesis: connecting dots into clear direction
- Distribution: shaping and sharing the narrative in a way that aligns and motivates
- Accountability: driving follow-through and ownership when things slip
- Troubleshooting: figuring out why things are stuck (and it's rarely just a task issue)
Some of these, AI is already doing surprisingly well. Others… still feel like they require a pulse, a gut, and maybe a little therapy background.
What About the Human Element of the Job?
We don't always realize it, but AI is getting better at pattern recognition, and its lack of ego and emotion might make it better than humans at handling some tense moments.
But so much of this job isn't about reacting to data. It's about reading the room, reframing direction, and persuading humans with competing goals to move together.
That’s where I think the edges still belong to us. For now.
I don’t have a neat conclusion for this. No punchline.
Just a hunch that we need to stop measuring AI by yesterday’s definition of our jobs.
This is one possible future. I might be wrong.
But if I’m right, the most valuable thing we can do today as TPMs is start designing our own obsolescence and figuring out what comes next.
In my next post, I'll share the rubric I've been working on, how I'm thinking about AI capabilities across each category, and what it tells us about which parts of the TPM role are truly safe (and which already aren't).
The goal is not to freak my fellow TPMs out but to say that the world is changing too fast around us for us to sit still.
There is a great quote from Kevin Weil, CPO of OpenAI, from his appearance on Lenny's Podcast:
The thing I try and remind myself is, the AI models that you’re using today is the worst AI model you will ever use for the rest of your life. And when you actually get that in your head, it’s kind of wild.2
At this point, the only security I can see… is automating myself out of a job, finding what is left for me to do, and focusing on that.
Until next time!
-Aadil