Product ramps, launches, and an 8-point checklist

A note for new subscribers: This post is part of a series of notes on technology product management (this is what I do for a living). You might notice that these posts often link to older posts in the series on LinkedIn even though they are all available on this blog. That is intended for folks who only want to follow future product management-related posts. Finally, for those of you who don’t build tech products for a living, I believe many of these notes have broader applicability. And, I hope you find that to be the case as well…

A quick overview of what we’ve covered on “Notes on Product Management” so far – 


We’ve been on a 3-part journey exploring solving for feasibility and PM <> Eng collaboration. The final part of this journey is all about releases – i.e., ramps and launches. For the purposes of this post, I’ll use the word “ramp” for ongoing improvements and optimizations and “launch” to describe a new/revamped product experience.


(a) Ramps – building a drumbeat

While it is natural to think of large product launches as important markers of the team’s success, I believe building a steady drumbeat of ramps is a better leading indicator of a high-functioning team. This is because of our tendency to overestimate the impact of a large product launch on our users. Launches get a product into the hands of users. A steady drumbeat of ramps ensures the product is successful.

I think there are 3 things an IC PM does every week that help build this steady drumbeat.


(1) Role model behavior that encourages high velocity experimentation: This sounds like a trivial step. What else would you build a culture around?

Many product teams unconsciously build a culture around trying to ship products that will not receive any criticism from executives or cross-functional partners. In such cultures, teams are often stuck in perennial iteration mode and are sensitive to every piece of feedback and criticism.

Most things we ship as an IC PM will have some detractors. Nothing we ship will be perfect. And, we will regularly have to field questions from seemingly disgruntled executives. We move past this by:

  • Developing a clear bar for when a product is ready to be released to members/customers. For example, releasing a product with a known dead-end is a bad experience and should be unacceptable.
  • Aligning on and communicating a clear set of metric guardrails that will help us measure success at a small ramp (e.g. 1% or 5%) – a minimal sketch of what this could look like follows below
  • ALWAYS taking responsibility when something goes wrong – our team should know that we’ll support them 100% if things go sideways

Assuming we do this, we should be able to free the team to experiment and learn.
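
To make the guardrails point above concrete, here is a minimal sketch of how an aligned set of guardrails might be checked at a small ramp. The metric names and thresholds are hypothetical – in practice, they would come out of the alignment conversation with our data science and engineering partners.

```python
# Hypothetical metric guardrails for a 1% or 5% ramp.
# Each value is the maximum acceptable relative drop vs. control.
GUARDRAILS = {
    "weekly_active_users": -0.005,  # no more than a 0.5% relative drop
    "sessions_per_user": -0.01,
    "revenue_per_user": -0.01,
}

def ramp_is_healthy(relative_deltas):
    """relative_deltas maps metric name -> (treatment - control) / control."""
    breaches = {
        metric: delta
        for metric, delta in relative_deltas.items()
        if delta < GUARDRAILS[metric]
    }
    if breaches:
        print(f"Guardrail breaches - hold the ramp: {breaches}")
        return False
    return True

# Example readout from a 1% ramp: engagement flat, revenue dips slightly but within bounds.
print(ramp_is_healthy({
    "weekly_active_users": 0.001,
    "sessions_per_user": -0.002,
    "revenue_per_user": -0.004,
}))  # -> True
```

The point isn’t the code – it is that the thresholds are written down and agreed on before the ramp, so a noisy readout doesn’t turn into a debate on launch day.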

(2) Build in a regular cadence of reviewing ramps and the pipeline: The release cadence for the team will vary a lot depending on the product. Enterprise products ramp on a more spread-out cadence than consumer products. API/backend-only products can release faster than products that require changes across mobile and desktop platforms.

But, regardless, it helps to build a regular cadence of reviewing ongoing ramps and the pipeline, and sharing notes with the team. If, over time, every member of the team begins to understand which metrics matter, things are going well. That context will help them design and execute better experiments.

(3) Communicate ramps and lessons learnt with the organization: Many product organizations have systems for communicating ramps – e.g. a shared Slack channel, email distribution lists, or a weekly meeting. If you don’t have one where you are, create one. Shared context helps. And, if you have one, make sure you use the forum.

It takes time and effort to craft the right message every time you ramp something. But, it is always worth it. It helps us thank team members, spotlight their good work, and share what we’re learning and doing with the rest of the organization.  


(b) Launches – Driving a rigorous process with an 8-point checklist

(1) Run effective “leads” standups with a good “Plan spreadsheet.”

Important launches will often involve 2-3 product teams working tightly together. I recommend bringing team leads together – at minimum, the product + eng + design leads – as often as daily close to launch day. If you don’t have a lot to cover, you can always cancel it. Frequent communication goes a long way in preventing problems before they happen. 

To make these productive, it helps to have a central organizing document. And, the one I’d recommend is a “Plan spreadsheet.” This spreadsheet will have, at minimum, the following:

  • Timelines/key dates
  • Experimentation setup
  • All post-spec decisions on business logic
  • Key dependencies
  • Bugs

When we write our product spec, we don’t yet have a full picture of some of the usability and feasibility constraints. Some of these become evident as we build design prototypes, and others only show up as we build the product. There will be many small decisions and business logic updates we make through the process. The spreadsheet keeps everyone on the same page.

Here is a sample plan spreadsheet.

(2) Document your Go-to-Market/GTM plan and ensure your GTM pod is meeting regularly.

While it is okay to use your plan spreadsheet for your GTM plan, my recommendation would be to keep this separate. That is partly because this part of the process will typically be driven by a Product Marketing counterpart who will help align the GTM pod – typically a mix of folks from Marketing, Communications, Product support/Operations, Customer Success, and Sales. 

The frequency of meeting with the GTM pod will depend on the product. If we are shipping a sales-driven enterprise product, we would need a GTM team sync at least once a week. This might go up as we get closer to launch. 

The GTM process for an enterprise product is often intense and it helps to carve out time to share input on all the materials created by the team. This will include internal and external comms (blog posts, press, influencer education), marketing and sales training materials and FAQs, as well as help center articles and videos for the product support/customer success teams. 


(3) Run through legal, security, and safety checks

While these are required before a major launch (most companies likely have an approval process), it helps to build good relationships with folks in these functions and understand what they’re solving for. 

For example –

  • It is helpful to consult legal as soon as we have designs and placeholder copy so they can give us a heads-up if there’s anything to be changed in our flows.
  • Similarly, it helps to loop in our security counterparts as soon as we have a tech design doc. That will help them give us a heads-up on any areas where they expect information security concerns.

Problems in these areas can often block a launch. So, it helps to get ahead of them early. This also doesn’t mean we’ll always agree with, say, the recommendation from the legal team. It just ensures we have the discussion and debate well before launch day.

(4) Be proactive about global language requirements and accessibility

These two checks help us accomplish two things at once. First, they help us ensure our products work for all our users. And, second, they are the right thing to do.

Two notes on these checks –

  • Being proactive about global language requirements isn’t just about making sure content is translated. It also means being sensitive to how our translations show up globally. A phrase that works well in English may show up very poorly in a different language. An English-only name or video may result in global users not feeling included – both lessons learnt from painful experiences. :-)
  • Accessibility issues are ideally solved at the level of the design system the organization uses. Ideally, there are a series of standard design components that have been created with accessibility in mind. That ensures we keep our focus on issues with any custom components built for this launch.

(5) Tracking spec

I’ll start with an admission here. I do not like building tracking specs. Building a tracking spec gets my vote for the least favorite part of an IC PM’s job.

But, good tracking does a lot of good all at once. It gives us insight into actual user behavior that helps us iterate on the product. It helps every engineer on the team understand what matters to the success of the product. And, good tracking also helps the engineering and DevOps/SRE (site reliability engineering) teams build the right ongoing monitoring that can help us catch issues when they happen.

Here’s an example barebones tracking spec we might use if we just launched a new “Experiences” module on Airbnb. With this tracking, we would be able to measure how often users click the “hearts” in the image. We could also add tracking around hovers if we wanted to understand user behavior further.

[Image: example tracking spec]
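
For readers who prefer text to screenshots, the same kind of barebones spec could be captured as a simple list of event definitions, as in the sketch below. The event and property names here are illustrative, not the actual spec from the image.

```python
# Illustrative tracking spec for the hypothetical "Experiences" module.
# Event and property names are made up for this sketch.
TRACKING_SPEC = [
    {
        "event": "experiences_module_impression",
        "fires_when": "the Experiences module is rendered in the viewport",
        "properties": ["viewer_id", "page", "module_position", "experiment_variant"],
    },
    {
        "event": "experience_card_heart_click",
        "fires_when": "a user clicks the heart on an experience card",
        "properties": ["viewer_id", "experience_id", "card_position", "experiment_variant"],
    },
    {
        # Optional, if we want to understand intent further
        "event": "experience_card_hover",
        "fires_when": "a user hovers over an experience card for 500ms or more",
        "properties": ["viewer_id", "experience_id", "card_position"],
    },
]
```

Whatever the format, the useful part is that every engineer can see, in one place, which user actions we care about and what context each event needs to carry.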

(6) Experimentation and measurement plan 

Complex products almost always need a dedicated experimentation plan. The absence of this plan – one that has been aligned across product, engineering, and data science – can often be existential for the product.

If we don’t set up the experiment well, or if we don’t have the right metrics to measure performance, our product may be labeled as a failure. Product launches that fail because of a lack of rigor in experimentation and measurement are often the hardest to stomach. The product may have worked well. But, sadly, we’ll never know.

I have multiple horror stories to share here. In one, a suboptimal metric definition resulted in a series of improvements over two quarters being labeled unimpressive by a key stakeholder. We realized later that defining the metric right would have changed the perception of this body of work. But, by then, it was too late to shift that perception meaningfully.

In another, we had a near miss. In the absence of the right metrics, there were initially questions about whether the launch was a failure. It turned out to be a high-impact launch once we’d gotten measurement sorted.
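
One concrete piece of that rigor is confirming, before the ramp starts, that the planned exposure can even detect the effect we care about. Here is a back-of-the-envelope version of that check for a simple conversion metric, assuming a two-sided test at a 5% significance level and 80% power – all the numbers below are illustrative.

```python
from math import ceil, sqrt

def users_per_variant(baseline_rate, relative_lift, z_alpha=1.96, z_power=0.84):
    """Rough sample size per variant to detect a relative lift on a conversion
    metric with a two-sided test at alpha = 0.05 and 80% power."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_power * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p2 - p1) ** 2)

# e.g. a 5% baseline conversion rate and a 3% relative lift we care about
print(users_per_variant(0.05, 0.03))  # -> roughly 336,000 users per variant
```

If a 1% ramp can’t reach that kind of number in a reasonable window, that is something to learn before launch day, not after – which is exactly the conversation the measurement plan is meant to force.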

(7) Testing/bug bashing  

A rule of thumb for bug bashes/testing – divide the number of screens we are shipping by two. For every 10 screens shipped across 2 or 3 platforms (iOS, Android, etc.), we will probably need 5 bug bashes.

Every team develops its own bug bash best practices. Here are a few notes on what has worked for me:

  • Create a separate spreadsheet with all the flows and test criteria – this is best coordinated by a tech lead/engineering manager
  • Ensure every member of the team is assigned roles (e.g. someone takes Chrome/Edge on Windows and Android, another takes Safari on iOS and MacOS, etc.)
  • Report all bugs in the bug sheet on your “Plan” spreadsheet – this is especially important if there are multiple teams involved
  • Triage these bugs together to align on P0s (ramp blockers), P1s (blockers for ramping to 100%), and P2s (follow-up items) – triaging together ensures the team is aligned on the decisions, as these will often impact launch dates

But, and this is where things get interesting, large group bug bashes aren’t as important if we’ve built a quality conscious culture within the team. That can be done by facilitating and encouraging really close collaboration between our design counterparts and the engineering team. When this works well, our design counterpart has already seen and shared feedback on screens before they’re deployed. They are in lock-step with the engineering team on small decisions and are seen as the “go to” for any and all questions on the details.

When this happens, it is magic. 

Finally, when we deal with very large releases (10+ screens across multiple platforms), it helps to use a readiness scorecard to ensure the team is aligned. Below is an example of a readiness scorecard – this is also in the Plan spreadsheet.

[Image: example readiness scorecard]

(8) Beta and ramp

If we’ve worked through the product development process – starting from the problem statement – the launch is the smoothest and best part of the experience. But, if we have taken any shortcuts along the way, this is the time we pay our debts.

There are three truths I’ve learnt repeatedly over time. First, every shortcut we take during the process is a debt we will have to repay – often before we ramp. Complex business logic shows up in the form of bugs. A weak experimentation plan hurts our ability to measure success. And, poorly crafted problem statements may even block an initial ramp of the product.

So, don’t take shortcuts. Sweat the details ahead of that beta/ramp. It will show up in the form of a smooth ramp. 

Second, a good product development process nearly always results in meaningful business outcomes. I added a “nearly always” caveat because there are times when things don’t go our way. We always need that dose of luck in the final analysis. But, we also increase the probability of good luck finding us when we run a rigorous process.

Finally, launches are just the beginning of the process. It is great to celebrate launches. But, it is helpful to focus on what lies ahead just as quickly. In the long run, no one will care about our excellent product launch if we lose sight of the importance of driving adoption and evolution. 

When we do pay attention to that, we are left with that most beautiful of things – an enduring, high-quality product.


All this brings us back to the question that got us started on this mini-series – what is the best response to the “how can we move faster” question?

As I said when we started, resourcing is considered a magic bullet far too often. It is tempting to look outward and complain about resourcing. I’ve been there.

But, our highest point of leverage often tends to be running a rigorous product development process and enabling our existing team to be at its best. Doing so means collaborating well with our product development team by getting better at our 4 core skills – problem finding, problem solving, selling, and building effective teams.

Somewhat ironically, doing so puts us in the best possible position to be trusted with more resources.  
