Product Hypothesis Testing

Product development is an iterative process, and I view it as hypothesis testing. Others before me popularized this paradigm—Eric Ries (MVP) and Steve Blank (Customer Development)—and I'd be remiss if I didn't provide attribution. Nonetheless, I have a specific set of hypotheses in mind that I set out to test before committing resources to building a new product.

The goal is always to test these fundamental hypotheses as cheaply and quickly as possible. I want to eliminate bad ideas quickly, or surface information that points to a higher-potential pivot on the initial concept, and then re-test the same hypotheses. Often, these hypotheses can be tested without having to build anything—via market research, user interviews, and test-marketing.

I organize these hypotheses into three phases: Problem Validation, Solution Validation, and Business Validation. You should test them roughly in this order, because there's no point validating unit economics for a problem that doesn't exist.


Phase 1: Problem Validation

Before you write a single line of code or sketch a wireframe, you need to confirm the problem is real, painful, and underserved.

1. Persona {persona_name}, in market {market_name}, has an unmet need or pain point.

How to test: Conduct discovery interviews with 10-20 people who fit your hypothesized persona. Don't pitch—just ask about their workflows, frustrations, and how they currently solve the problem. Look for patterns and emotional language ("I hate this," "it takes forever," "we lose money when...").

2. That unmet need (or pain point) is {unmet_need}.

How to test: This is where you get specific. Your initial hypothesis might be wrong about which pain point matters most. During interviews, ask respondents to rank their problems. Use surveys to validate at scale. The pain point you assumed might be a symptom of a deeper issue—or not painful enough to pay to fix.

3. Existing products or workarounds don't sufficiently resolve their unmet need.

How to test: Competitive analysis and interview questions about current solutions. Ask: "How do you solve this today?" and "What's frustrating about that approach?" If they're satisfied with Excel, a competitor, or doing nothing—you have a problem. Also: search for the problem on Google, Reddit, LinkedIn. If no one's complaining, that's a red flag.


Phase 2: Solution Validation

Now you know the problem exists. But can you actually reach these people, and will they pay?

4. We can effectively target and market to this persona/ICP.

How to test: Run small paid ad campaigns (Google, LinkedIn, Meta) targeting your hypothesized ICP. Measure click-through rate (CTR) and cost per lead (CPL). Try cold outreach to see if you can even get meetings. If you can't find them or they won't engage, your go-to-market is dead on arrival—regardless of how good your product is.
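To make the go/no-go math concrete, here's a minimal sketch of the channel-test arithmetic in Python. Every figure in it is a hypothetical placeholder, not a benchmark; plug in your own campaign data.

```python
# Quick channel-test math. All numbers are made-up placeholders.
spend = 2_000.00           # test budget for one channel ($)
impressions = 80_000       # ad impressions served
clicks = 960               # clicks on the ad
leads = 24                 # form fills / booked meetings

ctr = clicks / impressions           # click-through rate
cpl = spend / leads                  # cost per lead

print(f"CTR: {ctr:.2%}")             # 1.20%
print(f"CPL: ${cpl:,.2f}")           # $83.33

# If only 1 in 10 leads ever becomes a customer, the channel's
# implied CAC is 10x the CPL. That number feeds hypothesis #7.
lead_to_customer = 0.10              # assumed conversion rate
implied_cac = cpl / lead_to_customer
print(f"Implied CAC: ${implied_cac:,.2f}")   # $833.33
```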

5. This persona is willing to pay for a product that resolves their unmet need or pain point.

How to test: The best signal is a pre-order, letter of intent, or deposit. Short of that, ask directly in interviews: "If a product solved X, what would you expect to pay?" Run pricing surveys (Van Westendorp, Gabor-Granger). Create a landing page with pricing and measure conversion intent. Talk is cheap—look for commitment signals.
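The Van Westendorp analysis in particular is simple enough to script yourself. Here's a minimal sketch, assuming you've collected the four standard price answers per respondent; the data below is fabricated purely for illustration.

```python
import numpy as np

# Each row: one respondent's answers, in dollars, to the four
# Van Westendorp questions. Fabricated data for illustration.
#            too_cheap  bargain  expensive  too_expensive
responses = np.array([
    [20, 30, 38, 45],
    [35, 45, 55, 60],
    [25, 32, 36, 40],
    [40, 50, 65, 70],
    [30, 40, 48, 55],
])

too_cheap, too_expensive = responses[:, 0], responses[:, 3]
prices = np.arange(0, 101)

# At each candidate price: what share of respondents would call
# it "too cheap" (quality-suspect) vs. "too expensive"?
pct_too_cheap = np.array([(too_cheap >= p).mean() for p in prices])
pct_too_exp = np.array([(too_expensive <= p).mean() for p in prices])

# The "optimal price point" is where the two curves cross.
opp = prices[np.abs(pct_too_cheap - pct_too_exp).argmin()]
print(f"Optimal price point: ~${opp}")   # ~$40 on this toy data
```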

6. We can build a product that resolves this persona's unmet need and that they are willing to pay us for.

How to test: Prototype, mockup, or Wizard-of-Oz your solution. Put it in front of users and gauge reactions. Can you actually deliver the value proposition with your team's capabilities and within reasonable constraints? This is where technical feasibility meets user desirability.


Phase 3: Business Validation

The problem is real, the solution works, and people will pay. Now: does the math work?

7. Our persona's willingness to pay (WTP) exceeds the customer acquisition cost (CAC) and cost of goods sold (COGS) of marketing, selling, and delivering our product—i.e., the unit economics work out.

How to test: Use your Phase 2 data. Calculate CAC from your test campaigns. Estimate COGS based on your prototype. Model out LTV using realistic assumptions about churn and expansion. If you need a 12-month payback period and recovering your CAC takes 18 months of gross profit, stop and revisit pricing, ICP, or distribution strategy.
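A back-of-envelope version of that math, as I'd sketch it. Every input below is a hypothetical placeholder, not a recommendation; swap in your own CAC from test campaigns and COGS from your prototype.

```python
# Back-of-envelope unit economics from Phase 2 test data.
# Every input is a hypothetical placeholder.
cac = 1_200.00           # blended customer acquisition cost ($)
arpu_monthly = 150.00    # average revenue per customer per month
gross_margin = 0.75      # (revenue - COGS) / revenue
monthly_churn = 0.03     # share of customers lost each month

# Months of gross profit needed to recover CAC.
payback_months = cac / (arpu_monthly * gross_margin)

# Simple LTV: monthly gross profit x expected lifetime (1 / churn).
ltv = (arpu_monthly * gross_margin) / monthly_churn

print(f"Payback: {payback_months:.1f} months")  # 10.7
print(f"LTV: ${ltv:,.0f}")                      # $3,750
print(f"LTV/CAC: {ltv / cac:.1f}x")             # 3.1x
```

A common rule of thumb is payback under roughly 12 months and LTV/CAC of 3x or better; if your test data misses both, that's the signal to revisit pricing, ICP, or distribution.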

8. We can grow ARR for this product enough to more than compensate us (or our investors) for the upfront capital required to build it.

How to test: Build a bottom-up revenue model. How many customers can you realistically acquire per month? At what price point? What's your conversion rate from trial to paid? Stress-test your assumptions. If you need 10,000 customers to break even and your TAM is 15,000, that's not a business—it's a lottery ticket.
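Here's a minimal sketch of what such a model can look like. The funnel numbers are invented; the point is that once they're written down, they're trivial to stress-test. (With these particular assumptions, the model never breaks even, which is exactly the kind of thing you want to learn on paper rather than in production.)

```python
# Bottom-up revenue model, small enough to stress-test by hand.
# All funnel assumptions below are invented for illustration.
trials_per_month = 200    # signups you can realistically drive
trial_to_paid = 0.05      # trial -> paid conversion rate
price_monthly = 150.00    # monthly price per customer ($)
monthly_churn = 0.03      # share of paying customers lost monthly
fixed_burn = 40_000.00    # monthly burn: team, infra, etc.

customers, cumulative = 0.0, 0.0
for month in range(1, 37):                 # three-year horizon
    customers = (customers + trials_per_month * trial_to_paid) \
                * (1 - monthly_churn)
    mrr = customers * price_monthly
    cumulative += mrr - fixed_burn
    if cumulative >= 0:
        print(f"Cumulative break-even in month {month}: "
              f"{customers:.0f} customers, ARR ${mrr * 12:,.0f}")
        break
else:
    # With these inputs, MRR never outruns the burn in 36 months.
    print("No break-even within 36 months; revisit the assumptions.")
```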

9. The serviceable addressable market (SAM) is large enough to generate meaningful revenue and an internal rate of return (IRR) above our (or our investors') hurdle rate.

How to test: Top-down TAM/SAM/SOM analysis, validated by bottom-up customer counts. Talk to investors early—they'll pressure-test your market sizing. Be honest with yourself: a $50M SAM might be fine for a bootstrapped business, but it won't attract venture capital.
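The sizing arithmetic itself is trivial; what matters is that the top-down and bottom-up numbers land in the same ballpark. A sketch with invented figures:

```python
# Top-down sizing with a bottom-up cross-check. All figures are
# hypothetical placeholders for illustration.
total_accounts = 120_000     # every company with the problem
addressable_share = 0.25     # segment you can actually serve
realistic_share = 0.05       # share you could plausibly win
acv = 5_000.00               # annual contract value ($)

tam = total_accounts * acv                 # $600M
sam = tam * addressable_share              # $150M
som = sam * realistic_share                # $7.5M

print(f"TAM ${tam/1e6:.0f}M | SAM ${sam/1e6:.0f}M | SOM ${som/1e6:.1f}M")

# Bottom-up cross-check: customers you can actually win x ACV.
# If this lands nowhere near SOM, one of your models is wrong.
bottom_up = 1_200 * acv                    # e.g. 1,200 customers
print(f"Bottom-up revenue check: ${bottom_up/1e6:.1f}M")   # $6.0M
```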

10. The product's ROI exceeds the opportunity cost of forgoing alternative product development initiatives.

How to test: This is company-specific and often overlooked. Compare this opportunity against other bets you could make with the same resources. Use a simple scoring rubric: market size, confidence level, strategic fit, time to revenue. The best product idea in isolation might still be the wrong one for your company right now.
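The rubric doesn't need to be fancier than a weighted sum. A sketch, with made-up criteria weights and scores:

```python
# A simple weighted scoring rubric for comparing product bets.
# Criteria, weights, and scores (0-10) are all illustrative.
weights = {
    "market_size": 0.30,
    "confidence": 0.25,       # strength of validation signal so far
    "strategic_fit": 0.25,
    "time_to_revenue": 0.20,  # higher score = faster to revenue
}

candidates = {
    "Product A": {"market_size": 8, "confidence": 4,
                  "strategic_fit": 9, "time_to_revenue": 3},
    "Product B": {"market_size": 5, "confidence": 8,
                  "strategic_fit": 7, "time_to_revenue": 8},
}

for name, scores in candidates.items():
    total = sum(weights[k] * scores[k] for k in weights)
    print(f"{name}: {total:.2f} / 10")
# Product A: 6.25 / 10
# Product B: 6.85 / 10  <- smaller market, but it wins the rubric
```

Note what the toy output shows: the bigger-market idea loses to the one with stronger validation and faster time to revenue, which is the whole point of scoring bets in context rather than in isolation.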


Where Teams Go Wrong

Most product failures aren't caused by bad execution—they're caused by skipping steps or testing hypotheses out of order.

Jumping straight to #6. Engineers and product people love building. It's the fun part. But if you haven't validated #1-5, you're building a solution in search of a problem. I've seen teams burn 6-12 months on products that failed because no one confirmed the pain point was real—or that anyone would pay to fix it.

Confusing "interesting" with "painful." People will happily tell you an idea is interesting. That means nothing. You need evidence of pain—time wasted, money lost, frustration expressed. Interesting doesn't open wallets.

Over-investing in Phase 1. Analysis paralysis is real. You don't need 100 interviews to validate a problem. Ten to twenty high-quality conversations will surface patterns. If you're not seeing convergence by then, either your hypothesis is wrong or your persona is too broad.

Ignoring #4. Founders often assume distribution will figure itself out. It won't. If you can't reach your ICP affordably, your product doesn't matter. Test this early—ideally before you build anything.

Skipping #10. Every product you build has an opportunity cost. Early-stage companies especially can't afford to chase every idea. The discipline to say "not now" to a good idea is what separates successful companies from distracted ones.


Conclusion

Product development isn't about having the best idea—it's about systematically eliminating bad ones before they consume your resources. These ten hypotheses give you a framework to do exactly that.

Test them in order. Test them cheaply. And be honest with yourself about the results. The goal isn't to confirm your assumptions—it's to break them as fast as possible so you can move on to something better or double down with confidence.

If you get through all ten with strong signal, you're not guaranteed success—but you've dramatically improved your odds. And in early-stage product work, that's the whole game.
