Validating our Problem Statement and Hypothesis

A note for new subscribers: This post is part of a series on my notes on technology product management (this is what I do for a living). You might notice that these posts often link to older posts in the series on LinkedIn even though they are all available on this blog. That is intended for folks who only want to follow future product management related posts. Finally, for all those of you who don’t build tech products for a living, I believe many of these notes have broader applicability. And, I hope you find that to be the case as well…


If there’s a recurring theme in “Notes on Product Management” so far, it is the importance of starting from the problem statement and getting it right. That’s because failed experiments are a very expensive way to learn, and we save a lot of that cost when we front-load the risk in the product development process.


Thus, we avoid wasting a lot of team and organizational time when we get our problem statement right. And we create winning products when we get the hypothesis right.

Given how important these are, the question that should follow is – how do we ensure our problem statements and hypotheses are the right ones?

The first part of the answer to this question is straightforward – we will never have 100% certainty. All our product initiatives are experiments. Experiments, by definition, can succeed or fail.

But, as with probabilistic outcomes in life, there are things we can do to greatly improve the probability of success. In this case, we can do that by running a good validation process.

I. Validating the problem space

When we first looked at problem statements, we broke them down into 3 parts:

  1. User: Who is it for?
  2. Need: Is the need real?
  3. Business value: Why should we prioritize doing it right now?

Asking these 3 questions alone helps us weed out poor problem statements. In some cases, the target user is poorly defined. In others, the business value is not clear. And then there are cases where we see clear business value but only a vague user need. Working through these questions ensures we don’t create problem statements out of an anecdotal comment from a user interview or a piece of email feedback from an executive/founder.

That said, asking the 3 questions alone doesn’t suffice. The most challenging part of this is evaluating the question – “Is the need real?”

Over time, I’ve come to believe that there are only 2 categories of needs worth solving for.

(1) Are your target group of users getting visibly frustrated with the status quo? (bonus points if this frustration is causing abandonment)

(2) Are your users hacking existing solutions to get their need fulfilled? (bonus points for the most creative hacks)


These become easier to grasp with examples:

(1) The original Amazon Prime problem statement is a great example of frustration leading to abandonment. Shipping costs frustrated users and resulted in abandoned shopping carts. Ergo “free shipping.”

(2) For an example of creative workarounds, I’ll share one from my own recent experience. Our team recently built a product after we observed widespread “hacking” of existing features on LinkedIn. As the COVID-19 lockdowns began in March, multiple companies shared news of layoffs. We then began seeing posts on the feed from members sharing that they’d been laid off and asking for support.

Many of these posts were heartfelt as these layoffs impacted financial security, immigration, and/or insurance. And these members were using their Profile headlines and feed posts to say they were “urgently looking” and needed support.

This behavior definitely went against the grain of conventional wisdom on job seeking (don’t let hirers know you’re urgently looking). But we were seeing tens of thousands of members share this urgency with all of LinkedIn. And, most importantly, we were seeing hundreds of thousands of members support them in the comments.

So, we debated how we might be able to help these members easily amplify this request for help and get support from the community. That debate led to the creation of “OpenToWork photoframes,” which have been adopted by 3M+ members in the 3 months since launch – speaking to the intensity of the need.


In sum, take the time to lay out the a) user, b) need, and c) business value to validate your problem statement. The “need” is the hardest to validate. Visible frustration or creative workarounds/”hacks” from your target group of users are strong indicators that it is a problem worth solving. Powerful problems often have elements of both.

Beware solving problems where both of those are absent.
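
To make that checklist concrete, here is a minimal sketch (my own illustration, assuming nothing about how any real team records this) of capturing the a) user, b) need, and c) business value for a problem statement and flagging the gaps described above. The field names and checks are assumptions for illustration only.

```python
from dataclasses import dataclass

# Illustrative only: a lightweight way to capture the 3 parts of a
# problem statement and flag the common gaps described in this post.
@dataclass
class ProblemStatement:
    user: str                            # Who is it for?
    need: str                            # Is the need real?
    business_value: str                  # Why should we prioritize it right now?
    visible_frustration: bool = False    # Are users frustrated with the status quo?
    creative_workarounds: bool = False   # Are users hacking existing solutions?

    def weak_spots(self) -> list[str]:
        """Return the gaps that typically signal a poor problem statement."""
        issues = []
        if not self.user.strip():
            issues.append("target user is poorly defined")
        if not self.business_value.strip():
            issues.append("business value is not clear")
        if not (self.visible_frustration or self.creative_workarounds):
            issues.append("need looks vague: no visible frustration or workarounds observed")
        return issues


# Example: clear business value, but the need has not been validated yet.
draft = ProblemStatement(
    user="Frequent online shoppers",
    need="Shipping costs make checkout feel too expensive",
    business_value="Reduce cart abandonment",
)
print(draft.weak_spots())
```

None of this replaces the judgment calls above; it simply makes the “beware” cases harder to miss.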

II. Validating the solution space

Relative to validating the problem space, validating the solution space involves considerably more art than science. That’s because inspiration for your solution can come from 4 sources (the first two will sound familiar):

(1) User complaints/frustration: We hear from our users via support calls/emails and when we conduct user interviews or research. The general rule with user feedback is to listen for the problem and ignore suggestions for the solution.

While that is often true with consumer products, it is bad advice if we are working on B2B products. Great B2B products are shaped by customer requests because B2B products have to solve for requirements beyond usability – legal, compliance, etc.

Thus, B2B product teams often have a long list of great problems + solution ideas from their customers on their backlog. The onus is on the product team to listen for these ideas, prioritize, and build them in the best way possible.

(2) Existing user behavior: We get to understand existing user behavior via analysis or experiments. The story of the hashtag is a great example of the power of following emergent user behavior.

In 2007, early Twitter user Chris Messina proposed using “#” followed by the name of a group to help users find related messages. Over the next 2 years, Twitter users increasingly began using these “hashtags” to let others search for tweets related to a particular topic. Twitter finally built hashtag support into the product in 2009 and the rest, as they say, is history.

(3) Competitor solutions: We learn about competitor solutions from various sources: i) our users/customers, ii) the news, iii) testing competitor products ourselves, and iv) via intelligence from internal teams (sales, business development, corporate development).

Few good competitor ideas go unnoticed. Great ones typically get copied. On occasion, we see exceptional ideas – those that can’t be copied by competitors despite their best efforts because they build on what is unique to the company.

(4) The team’s gut/insight, or specialized knowledge: These typically come from two places: i) pattern recognition from having solved similar problems before, and ii) knowledge of technology that enables a better solution.

Many great companies – think of the likes of Intel, Zoom, Salesforce – have been formed by former employees who went on to solve customers’ problems better. Great products are built every day thanks to an insight from a team member or an executive about a better way to solve a customer or user problem.


I’ve observed that it is a good sign if you’re arriving at a solution from more than one source. So, it helps to not get overly attached to any one source for solutions and, instead, make sure we’re constantly collecting information, triangulating inputs, and synthesizing them.

Beyond that, the best we can do is to make our experiments as lightweight as possible so we can consistently learn from them and iterate.


A friend of ours is a poker enthusiast who loves teaching beginners how to play poker. The key tenet of his beginner’s poker lesson is to learn to separate the decision making process from the outcomes.

Sometimes, even the best decisions within a hand can lead to poor outcomes. That’s just a fact of life in a probabilistic game. But, in the long run, the probabilistic edge we gain from good decisions stands us in good stead.

Validation of problem statements and hypotheses works the same way. The reason to approach this phase of product development with an emphasis on a rigorous process isn’t because every product that emerges will be a hit. It is because the process helps us maximize the probability of hits in the long run.
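
To put a rough number on that intuition, here is a tiny, purely illustrative simulation (the hit rates and experiment counts are assumptions, not data from any real team). Even a modest edge in per-experiment success rate compounds into noticeably more hits over many experiments, even though any single experiment, like any single poker hand, can still fail.

```python
import random

# Purely illustrative numbers, not data from any real team: assume a rigorous
# validation process nudges each experiment's chance of success from 20% to 30%.
BASELINE_HIT_RATE = 0.20
VALIDATED_HIT_RATE = 0.30
EXPERIMENTS_PER_YEAR = 20
SIMULATED_YEARS = 10_000

def hits_in_a_year(hit_rate: float) -> int:
    """Simulate one year of experiments and count how many succeed."""
    return sum(random.random() < hit_rate for _ in range(EXPERIMENTS_PER_YEAR))

# Any single year (like any single poker hand) can still disappoint...
print("One simulated year with validation:", hits_in_a_year(VALIDATED_HIT_RATE), "hits")

# ...but over many simulated years, the edge from better decisions shows up.
avg_baseline = sum(hits_in_a_year(BASELINE_HIT_RATE) for _ in range(SIMULATED_YEARS)) / SIMULATED_YEARS
avg_validated = sum(hits_in_a_year(VALIDATED_HIT_RATE) for _ in range(SIMULATED_YEARS)) / SIMULATED_YEARS
print(f"Average hits per year without validation: {avg_baseline:.1f}")
print(f"Average hits per year with validation:    {avg_validated:.1f}")
```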

And, besides, the core ability in the validation process is learning from every input – existing experiments, user and customer data/feedback, competitive data, insights from members of the team, and so on. That ability to learn stands us in good stead even when there are poor outcomes.

After all, it sometimes takes a Fire Phone-esque failure to inspire the creation of an Alexa/Echo.
