What an applied machine learning product manager actually does

Applied ML PMs live in the space between innovation and application. They leverage machine learning capabilities such as ranking and recommendation.

Making models useful: The PM’s role

Working with ML can seem like it’s mostly about building models. I’ve found the most important role is deciding what those models should optimize for, and ensuring that optimization aligns with both business goals and user experience. Here’s what I’ve found matters most in practice:

- Be clear about the goal: Models can optimize for clicks, conversions, or retention – but they can’t decide which outcome matters. That’s where product judgment makes all the difference.
- Learn enough to ask good questions: You don’t have to write code, but understanding what signals the model uses (and why) helps you challenge assumptions early.
- Balance fairness and performance: Left unchecked, models often reinforce what they already know. I’ve seen cases where optimizing for “relevance” accidentally meant “popularity,” creating echo chambers that hurt discovery. Fairness sometimes means trading off some accuracy to preserve trust.
- Turn feedback into measurable levers: Users rarely say, “The model is biased.” They say, “This doesn’t feel right.” The PM’s job is to translate that sentiment into constraints, rules, or additional signals that keep the model honest.
- Build transparency: Whether for users, sellers, or internal teams, clarity builds trust. Even a simple “Why am I seeing this?” explanation can turn skepticism into confidence.

The more PMs understand how models behave, the better they can shape them into tools that serve users – not the other way around.

Working with researchers, not around them

Some of the most productive collaborations I’ve had were with applied researchers.
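To make the earlier “relevance accidentally meant popularity” point concrete, here is a toy sketch of the kind of constraint a PM might ask for: a re-ranking step that subtracts a popularity penalty from the model’s relevance score. The field names, items, and penalty weight are invented for illustration – this is not any team’s real ranking code.

```python
# Toy sketch: re-rank model output with a popularity penalty so that
# "relevant" doesn't silently collapse into "popular".
# The item fields and the 0.3 weight are illustrative assumptions.

def rerank(items, popularity_weight=0.3):
    """Re-score items: model relevance minus a penalty for raw popularity."""
    def adjusted_score(item):
        return item["relevance"] - popularity_weight * item["popularity"]
    return sorted(items, key=adjusted_score, reverse=True)

items = [
    {"id": "a", "relevance": 0.90, "popularity": 0.95},  # popular favorite
    {"id": "b", "relevance": 0.85, "popularity": 0.10},  # niche but relevant
]
print([i["id"] for i in rerank(items)])  # → ['b', 'a']
```

With the penalty in place, the slightly less “relevant” but far less popular item wins – exactly the kind of measurable lever that turns “this doesn’t feel right” feedback into something a team can tune and A/B test.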
They think in edge cases, live in data, and care deeply about model integrity – traits that make PM partnerships powerful when done right.

Early in my career, I approached research discussions like negotiations: balancing priorities, pushing timelines. Now, I see them as explorations. When I stop asking “When can we ship it?” and start asking “Why does the model behave this way?”, the quality of insights changes completely. Here’s what helps:

- Ask why a model behaves the way it does, not just how to improve it.
- Use prototypes or user studies to link model behavior to real-world impact.
- Treat experiments as stories, not just data – what story does this result tell about your users?

In the best teams, research and product are two halves of the same decision-making loop.

How PMs can use systems thinking

Even if you’re not managing AI products directly, you can adopt this mindset. Every product has systems that make decisions – about relevance, priority, or visibility. Understanding how those systems “think” is a new kind of product literacy. Getting started can feel daunting, so here are some small first steps:

- Sit in on one data science or ML review – just listen to how success is defined.
- Find one automated decision in your product that feels like a black box. Learn what it optimizes for.
- Replace one vanity metric with a value-based one – trust, satisfaction, or retention over pure engagement.
- Notice when your intuition disagrees with the data; that’s where understanding deepens.

Because in the end, every PM is already managing invisible systems that decide what users see, feel, and trust. Applied ML PMs just do it with a little more math behind the curtain.

Final thoughts

Applied ML PMs don’t just manage models – they manage meaning. They turn research into reliable experiences and models into moments of clarity for users. The more invisible your work feels, the better the system likely is.
When everything “just works”, when results make sense, and when users feel understood – that’s the real sign of an effective Applied ML PM.

So, if you’re curious about this space, don’t start with the math. Start with the meaning. The rest will follow.