The Job Has Changed

Notes on RecSys Product – Edition 1

After a few years away, I’m restarting a newsletter focused on building technology products. I’m hoping to share a post every month.


It’s been years since I last wrote a product newsletter.

My first one – titled Notes on Product Management – was where I tried to make sense of my early years as a product manager. I wrote about roadmaps, storytelling, problem validation – all the messy lessons that come from learning the craft from the many mistakes and helpful counsel along the way.

I thought about writing again when the work expanded to managing PMs and PM leaders. But I never quite did, even though I came close a couple of times. Either the work was too consuming, or I just didn’t feel I had something new to say.

But that changed this year when I realized that the job has changed.

Not my title – I’m still a product manager – but the work itself feels entirely different. Especially when I compare how I spend my time now vs. 18 months ago.

Writing long-form has always been my way of thinking. I’ve loved that the word essay comes from the French essayer, meaning “to try.” Essays are how I try to figure things out. This series will be my way of figuring things out.

Also, while I realize that long-form writing feels a bit out of fashion in the era of podcasts and 30-second videos, it still feels like the right medium for reflecting on how the work is changing.


Why Write Now – A Counterpoint to AI FOMO

There’s another reason I wanted to write now. Some of the conversation around AI and product management has started to feel increasingly distorted. There’s an entire ecosystem emerging around AI FOMO and AI fear-mongering.

Depending on who you read, AI has already killed the PM role. Or saved it. Or will turn PMs into part-time engineers, designers, hackers, founders – and of course, prompt and eval savants.

Then there are the stories of the AI-native PM – the one who moves at superhuman speed, writing evaluations (“evals, evals, evals…”) before breakfast, shipping beautiful Cursor-powered prototypes and code on GitHub during the day, building incredible vibe-coded side projects in the evening, and operating on a different plane of existence. And, of course, they’re doing all of this at the hot AI-native startup that will save their career while the rest of the PM population goes extinct.

I’d like to offer a counterpoint.

This series will be decidedly anti-AI FOMO – and designed to get fewer clicks. These takes on the PM craft sound impressive, but in my opinion they are often performative.

That’s because they are often disconnected from the real, slow, iterative work of building products people actually rely on.

And because the work itself has changed, I want to focus on that real, iterative practice – not the performative mythology around it.


The Fundamental Shift – Products Architected Around a Model

There are many takes on how products will be built in the future. I often find myself smiling whenever I see the confidence and certainty behind some of them. Because from where I sit, we’re still at the very beginning of something as fundamental as the Internet – and there’s very little that is certain.

That said, one thing does feel clear: we are moving from products that were architected around deterministic flows – predictable logic and carefully hand-crafted states – to products that are architected around a probabilistic model.

The model is no longer just a feature. Increasingly, the model is becoming the heart of the product. That is a fundamental shift in how we build, ship, and manage products.


My Thesis: Why LLM RecSys (Recommender Systems) Are the Core Primitive

At the center of this shift is a single primitive: a model or system that observes context, predicts intent, and recommends what should happen next.

That brings us to my thesis: Most AI-native products share the same underlying loop – and that loop is, fundamentally, a recommender system.

Once we strip away the hype, the interfaces, and the new tools, AI-native products need to: understand context, infer what the user is trying to accomplish, recommend the next step or action, and learn from every interaction.

That is the recommender-system loop. Except that, with LLMs (large language models), recommender systems are vastly more flexible and capable. A traditional ranking model could only choose from a fixed index. An LLM-powered system can reason, generate, act, and orchestrate tools.
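The loop above – observe context, infer intent, recommend, learn – can be sketched in a few lines of code. This is a toy illustration, not a real system: every class and function name here is hypothetical, and a production version would call a model where the placeholders sit.

```python
from dataclasses import dataclass, field

@dataclass
class Interaction:
    context: dict    # what the system observed about the user and situation
    action: str      # what the system recommended
    feedback: float  # how the user responded (e.g., accept = 1.0, dismiss = 0.0)

@dataclass
class RecommenderLoop:
    """The loop shared by AI-native products: observe -> infer -> recommend -> learn."""
    history: list = field(default_factory=list)

    def infer_intent(self, context: dict) -> str:
        # Placeholder: a real system might prompt an LLM to infer intent here.
        return context.get("goal", "unknown")

    def recommend(self, context: dict) -> str:
        intent = self.infer_intent(context)
        # Placeholder policy: map the inferred intent to a next step.
        next_steps = {"write_email": "draft_reply"}
        return next_steps.get(intent, "ask_clarifying_question")

    def learn(self, context: dict, action: str, feedback: float) -> None:
        # Every interaction becomes signal for evaluation and retraining.
        self.history.append(Interaction(context, action, feedback))

loop = RecommenderLoop()
action = loop.recommend({"goal": "write_email"})
loop.learn({"goal": "write_email"}, action, feedback=1.0)
```

The point of the sketch is the shape, not the contents: whatever the surface, the product keeps cycling through these four steps, and each step is somewhere a model can be swapped in.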

What about agents then? An agent is simply the most expressive version of a recommender system: a system that not only recommends what should happen next, but can take that action, explain it, refine it, or even ask questions to clarify intent.

The surface doesn’t matter – it could be a feed, a copilot, a customer service agent, an orchestrator that calls tools, LLM-powered search, or an inbox assistant. They all rely on the same underlying loop.

Even chat interfaces aren’t really “just generation.” Useful ones are making a choice: Should I summarize? Recommend? Rewrite? Plan? Ask a question? Trigger a tool? Take an action? That choice comes from a recommendation policy.
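To make that concrete, here is a deliberately crude sketch of a chat turn routed through a recommendation policy. The handlers and the keyword rules are made up for illustration; in a real product the choice would come from a model, but the structure – choose an action first, then generate – is the point.

```python
from typing import Callable

# Hypothetical action handlers -- illustrative stubs, not a real framework.
ACTIONS: dict[str, Callable[[str], str]] = {
    "summarize": lambda msg: f"summary of: {msg[:40]}...",
    "trigger_tool": lambda msg: "calling calendar tool...",
    "ask_clarifying_question": lambda msg: "What outcome are you looking for?",
}

def choose_action(message: str) -> str:
    """A toy recommendation policy. In production this decision would be
    made by a model, not keyword rules."""
    if "schedule" in message:
        return "trigger_tool"
    if len(message.split()) > 50:
        return "summarize"
    return "ask_clarifying_question"

def handle_turn(message: str) -> str:
    action = choose_action(message)   # the recommendation
    return ACTIONS[action](message)   # generation follows the choice

print(handle_turn("Can you schedule a review next week?"))
# -> calling calendar tool...
```

Seen this way, the "just chat" interface is a thin surface over an action-selection policy – which is exactly what makes it a recommender system.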


And so, as products shift from deterministic flows to these recommender loops, product teams shift with them. Our job becomes designing the loops – the feedback, guardrails, policies, and signals – that shape how the LLM-powered recommender system behaves.


A Note on Product vs. Product Management

A quick note on scope – I’m intentionally framing this series around product, not just product management.

Model-centric products blur the traditional boundaries between PM, design, engineering, data science, and AI/ML. When the core of a product is a learning loop – not a static workflow – every function shapes how the product behaves and improves:

  • Designers shape the interaction surfaces and conversation design
  • Data scientists architect the measurement and diagnosis of improvement opportunities
  • Engineers shape the orchestration and safety constraints
  • AI/ML teams ensure the model’s capabilities keep getting better
  • PMs ensure the policy and evaluation reflect the ideal end-to-end user experience

Everyone touches the loop. Everyone shapes how the product learns. While the disciplines remain deep and distinct, we see a lot of cross-functional flex that, when managed well, results in a better product.

That’s why this series is about LLM Recommender Systems as a product primitive, not a PM technique.


What This Series Will Explore

Once you see AI-native products as learning loops rather than deterministic workflows, a number of implications follow:

  • Agents aren’t a new paradigm – they’re recommender systems that can act
  • Product policy becomes as important as the PRD – a key step to designing a great human-in-the-loop system
  • Teams evolve around capabilities (“atoms”) and surfaces (“molecules”)
  • Evaluations shift from one-time tests to continuous diagnostics
  • Velocity depends on how well the system learns, not how fast teams ship

These are the ideas I’ll explore in upcoming editions – one concept at a time, likely once or twice a month.

This isn’t a guide or a manifesto from someone who’s figured it out. It’s a set of reflections written in motion – from someone who’s building, operating, and trying to understand this shift in real time.

There’s a lot changing, and a lot we don’t yet know. My goal is simply to ground in first principles, learn publicly, and share the process along the way.

If any of this resonates, I’d love to hear from you. The best part of writing is the dialogue that follows.
