Laura Klein
Review
It's clear why this book receives great reviews—it's a well-structured, jargon-free, practical guide to product discovery. For product managers looking to enhance their discovery skills, this is one of the better resources available. While it doesn't offer many groundbreaking revelations, it provides a solid foundation in product discovery.
Key Takeaways
The 20% that gave me 80% of the value.
Products fail or succeed based on how well they solve real problems and change user behaviour, not just by shipping features. Success requires a laser focus on one primary business goal, whether that's increasing revenue, user growth, or another measurable metric. Vague goals like "get more users" don't work - teams need specific, achievable targets they can influence directly.
Understanding users means going beyond basic demographics into their actual behaviours and needs. Interview users in small batches of five until you start predicting what you'll hear. Create provisional personas but validate them with real user data. Don't rely on what users say they want - observe what they actually do. When researching, match methods to specific questions: use quantitative data to understand what's happening, qualitative research to understand why.
Product development should start with observed user problems rather than assumed solutions. Many successful products like Slack and Flickr emerged from solving real problems their creators encountered. When brainstorming, only include participants familiar with recent user research, and focus on specific outcomes. Use techniques like dot voting to identify resonant ideas without requiring full consensus.
Prioritisation means saying no to most ideas while ensuring you release value quickly. Evaluate features on two axes: expected value created for users/company and effort required. Don't just chase new features - make time for technical debt, stability improvements, and iterations on existing functionality. Maintain a short-horizon roadmap plus a backlog of ideas and experiments, rather than planning years ahead.
Design should focus first on user flows and context before diving into screen layouts. Consider where and when people will use features, what interruptions might occur, and how tasks naturally sequence. Create consistent interfaces through style guides and design patterns, but only use patterns that genuinely support user needs. Get ideas out of your head through sketching and wireflows.
Features alone don't guarantee results - focus on changing user behaviour in ways that benefit both users and the business. Guide users to required tasks first, offer helpful nudges toward secondary tasks, and let them discover advanced features over time. Avoid dark patterns that trick users into unintended actions - they destroy long-term trust.
Test assumptions early to avoid wasting resources. Categorise assumptions into problem (is this needed?), solution (will this work?), and implementation (can we build it?) types. Focus first on assumptions that are both likely to fail and would have severe impact. Create falsifiable hypotheses with clear success/failure criteria.
Match validation methods to assumption types: use landing pages or pre-orders to test demand, concierge or Wizard of Oz approaches to validate solutions before building, and technical prototypes to verify implementation feasibility. Track experiments systematically, recording predictions and actual outcomes.
Build measurement into features from the start. Focus on metrics that drive decisions - if a metric changes, what action would you take? Track business outcomes (revenue, churn), user experience (speed, bugs), and product health (ensuring improvements in one area don't harm others). Avoid vanity metrics that always go up but don't inform choices.
Use appropriate measurement methods: A/B testing for comparing versions, cohort analysis for understanding retention patterns, funnel analysis for identifying drop-off points. Ensure proper data collection and stay focused on metrics that matter for your current stage.
Team structure significantly impacts product quality. Avoid common dysfunctions like siloed departments, design-by-committee, dictatorial product managers, or complete chaos. Instead, build "heist teams" where specialists unite around shared goals while leveraging their unique expertise. Product managers should provide clear vision and context while trusting team members to lead in their domains.
Success requires constant iteration across all these areas. Keep improving your understanding of users, refining solutions, testing assumptions, and measuring impact. No product is ever "done" - there's always room to make it better through continued learning and refinement.
This approach helps teams build products that genuinely improve users' lives while driving business success. It replaces guesswork and feature-chasing with validated learning and measured improvement. While the process takes discipline, it dramatically increases the odds of creating something people actually want and will pay for.
Deep Summary
Longer form notes, typically condensed, reworded and de-duplicated.
Foreword and Introduction
No one makes bad products on purpose, and yet we have so many of them in our lives. Scott Berkun
For something to be better than it was, you need to know what better means.
Better products improve the lives of the people who use them in a way that also improves the company that produces them. In other words, better products make companies more money by making their customers more satisfied.
To build better products, focus on these six things:
- Goal: Defining the business need
- Empathy: Understanding user behaviour and needs
- Creation: Designing a user behaviour change that meets both the business and user need
- Validation: Identifying and testing assumptions
- Measurement: Measuring the real impact of changes on user behaviour
- Iteration: Doing it again (and again)
Part 1: Goal
Chapter 1: Defining a Better Business Need
Having one primary goal—at the company, product, or team level—simplifies decisions and aligns efforts. If a higher-level goal is assigned, focus on how your product or team can contribute directly to it. If no goals exist yet, create a single, specific objective that is both measurable and achievable. A broad aim such as “make more money” or “get more users” is too vague; it must be refined into something your team can actually influence.
A measurable, achievable goal clarifies what success looks like. It also reveals what data you need in order to track progress. When teams set overly broad or unreachable goals, they either cannot prove they contributed to any success, or they end up stuck chasing something outside their control.
Remember: We make better products when we find ways to improve things for our business by improving things for our customers. It’s wonderful to make our customers happy, but customers should be happy to pay for great products.
Exercise to Define your Goal:
- Ask everyone to finish one of these sentences (on sticky notes):
- If you don’t have a higher-level goal: “If I could wave a magic wand, our company/product would have more/fewer ____.”
- If you do have a higher-level goal: “In order to reach our company target, our team/product should have more/fewer ____.”
- Read each idea aloud, group similar ones together, and push for specificity (e.g. “Increase conversion from paid ads by 10%” rather than “get more users”).
- Agree on a final version that is clear, measurable, and within the team’s power to achieve.
To decide how to improve that primary goal, look at the user lifecycle funnel. It outlines the ideal journey from a person first hearing about your product (Awareness) to becoming a long-term, paying, and recurring user. By measuring how many users drop off at each step, you identify where friction is highest. If your product serves multiple user types, you may have multiple funnels. Understanding each funnel helps you see how your customer acquisition cost (CAC) compares to customer lifetime value (LTV). Profitability hinges on LTV > CAC.
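The funnel logic above can be sketched in a few lines of code. The stage names and counts here are hypothetical, purely to show the two checks the text describes: where step-to-step drop-off is highest, and whether LTV exceeds CAC.

```python
# A minimal user lifecycle funnel with made-up stages and counts.
funnel = [
    ("Awareness", 10_000),
    ("Signup", 1_200),
    ("First 'aha' moment", 600),
    ("Paying", 150),
    ("Retained (90 days)", 90),
]


def conversion_rates(stages):
    """Return (from_stage, to_stage, rate) for each adjacent pair."""
    return [(a[0], b[0], b[1] / a[1]) for a, b in zip(stages, stages[1:])]


def biggest_drop(stages):
    """The transition losing the largest share of users (highest friction)."""
    return min(conversion_rates(stages), key=lambda step: step[2])


def is_profitable(ltv, cac):
    """Profitability hinges on lifetime value exceeding acquisition cost."""
    return ltv > cac


worst = biggest_drop(funnel)
print(f"Highest friction: {worst[0]} -> {worst[1]} ({worst[2]:.0%} convert)")
print("Profitable:", is_profitable(ltv=180.0, cac=120.0))
```

With these numbers the worst drop is Awareness to Signup (12% convert), which is where you would investigate first.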
Once you find the most important friction points in your funnel, investigate why users leave and fix those issues.
Exercise to Define your User Lifecycle Funnel:
- Get the team to answer each of the following on post-its, then group and shape them on a board into a funnel:
- How will you get people to hear about your product or service?
- How will you help people learn enough about your product and service to know they want to purchase or use it?
- What is the 'aha' moment in someone's first use of the product?
- How will you get permission and ability to contact users?
- How will you make money from this user?
- What will happen within the first couple of weeks to turn early users into long-term, retained users?
Companies commonly fail to reach their goals for five reasons:
- They fail to prioritise one primary goal.
- They do not communicate that goal clearly and repeatedly.
- They have no concrete plan to achieve the goal.
- They never make time to work on what matters.
- They give up prematurely rather than iterate on what’s not working.
One method for keeping teams focused is setting OKRs (Objectives and Key Results). OKRs encourage you to pick a single objective, define numeric key results that show progress toward that objective, and then track them carefully.
Exercise to Define Team OKRs:
- Set the Objective: Keep asking “why?” until you find the core business reason (e.g., “Expand our market share in X region”).
- Generate Key Results: Everyone writes down possible metrics (e.g., revenue, acquisition, retention) that would prove you’re achieving the objective.
- Prioritise: Pick the few key results that matter most. Also define “health metrics” you refuse to let slide while pursuing your main goal.
- Track Progress: Hold weekly reviews to see if key results improve and adjust your approach. Keep going until the objective is met or you need a new one.
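The weekly-review step above amounts to comparing numeric key results against targets while keeping an eye on health metrics. A minimal sketch, with an invented objective and made-up numbers:

```python
# Hypothetical OKR snapshot: numeric key results with targets, plus
# "health metrics" that must not slide while pursuing the objective.
objective = "Expand market share in region X"

key_results = {
    # metric: (current, target)
    "new_customers_per_week": (42, 60),
    "regional_revenue_monthly": (81_000, 100_000),
}

health_metrics = {
    # metric: (current, worst acceptable) -- here, lower churn is better
    "churn_rate": (0.04, 0.05),
}


def okr_progress(key_results):
    """Fraction of each target reached, for the weekly review."""
    return {metric: current / target for metric, (current, target) in key_results.items()}


print(okr_progress(key_results))
```

Reviewing these fractions weekly shows whether the approach is working or needs adjusting, which is exactly the tracking loop the exercise describes.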
By narrowing your focus to a single goal, clarifying it with specific metrics, and mapping the user’s journey in a lifecycle funnel, you can better identify what to build or improve. Then, by using an OKR framework, you keep the team aligned and accountable for reaching that single, measurable target.
Part 2: Empathy
Product managers can fall into the trap of being too business-orientated. You can't care only about the business: you won't be able to hit your business goals without understanding and caring about your customers.
Chapter 2: Understand Your User Better
Focusing on a small, specific user group makes it easier to gain traction. Even large businesses should avoid aiming for “everybody” in the early stages. By targeting one clearly defined audience with shared problems, you stand a better chance of delivering real value. You can always branch out to adjacent markets once you’ve satisfied your initial niche.
Provisional personas help your team visualise an ideal user when designing or improving a product. These are based on assumptions about facts, problems, behaviours, needs, and goals. They’re only hypotheses. After brainstorming, you must validate them with evidence based on interactions and observations of real users. Ultimately, you want your personas to become predictive: if someone matches the persona’s criteria, they’re highly likely to adopt and keep using your product.
Exercise: Provisional Persona Creation:
- Divide a sheet into four quadrants: label them Facts, Problems, Behaviours, Needs/Goals.
- Write sticky notes for each category (two minutes per category, one item per note).
- Pick the top four in each quadrant.
- Consolidate everyone’s results into personas, then vote on the persona that best represents your target user.
Moving from a provisional to a predictive persona requires testing: recruit people matching your persona, try to sell them the product or have them sign up and see if they stick. If your presumed perfect match won’t adopt your product, you either picked the wrong persona or built the wrong solution. Keep iterating until you find a profile that consistently converts.
Until you can identify the specific things that make a person want to be a customer, you don’t have an accurate predictive persona. And that means your product and design decisions will be based on a lie.
Exercise: Identifying Problem Patterns:
- Find five people who fit your persona criteria; interview them about their problems and daily tasks, avoiding any sales pitch.
- Debrief after each session, writing observations and separating “problems” from “behaviours.”
- Look for recurring themes—interview another five people. If the same problems surface, you have a genuine pattern.
This process uncovers shared pain points and clarifies whether your market is really homogeneous. You repeat these small-batch interviews to refine both your product focus and user assumptions.
I often get asked, “How many people do I need to interview before I’ll know the answer to the question I’m asking?” The answer is “Five. And then another five. Until you start to be able to predict what you’re going to hear.”
If you can’t find five customers who will talk to you for 30 minutes about your product, maybe it’s not a great market for you.
Exercise: Making the User Map
- Pick one user/customer you want to map.
- Answer key questions in five categories (use sticky notes for each):
- Channels & Influencers: where they hear about products, who influences decisions.
- Goals & Purchase Intent: what needs they have, how much time/money they spend, whether they’re actively looking for a solution.
- User/Product Fit: specific behaviours/goals that predict usage, how your product helps, how it makes them better.
- Context of Use: where, when, and with whom they use the product.
- Future Use: why they’ll keep using it, how usage might change as they grow or circumstances shift.
- Mark what’s validated vs. guesses, then decide which research you still need.
Pitfalls of Provisional Personas
- Merely sketching a provisional persona but never verifying it can result in your building a product for someone who doesn’t exist.
- Confirmation bias can trick you into hearing only what you want from interviews.
- Rushing into solutions without fully understanding the real problems can waste effort on features nobody needs.
You should constantly be going back to your personas and updating them as you learn new things and build an understanding of the people who are going to buy your product.
Cindy Alvarez stresses two crucial questions:
- Who is getting the most value from our product?
- How can we learn more about them so we can retain and monetise them, and acquire more like them?
To do this she recommends three steps:
- Identify power users by analysing who uses the product most.
- Ask one straightforward question (“Does this product make your life easier/better?”) via email or a quick in-app poll.
- Interview those who said “Extremely” (they love the product) and those who said “No” but still use it. Understanding both extremes reveals why your best users are happy—and why the reluctant ones stick around despite dissatisfaction.
Chapter 3: Do Better Research
The art of research is clarifying what you want to learn and matching it to the right method. Asking people the wrong question or using the wrong methodology often leads to misleading data. Instead of letting users propose solutions or predict the future, focus on problems, goals, and behaviours. This shifts our enquiry from “What should we build next?” to “What’s preventing people from accomplishing what they need?”
Exercise: Picking a Research Topic
- Write down a question you want answered in your next research session.
- Check if it’s specific, answerable, and aimed at discovering user problems or product issues rather than asking users to design or guess.
- If it’s vague (“Will you use my product?” “How should I redesign my UX?”), refine it until you have a clear, actionable question.
Exercise: Picking a Research Methodology
- List each research question on a sticky note.
- Decide if it’s about users (e.g., their problems, unmet needs) or product (e.g., usability, usage data). Mark it “U” or “P.”
- Decide if you’re generating ideas (G) or validating something (V).
- Identify if it’s “What” (quantitative) or “Why” (qualitative).
- Decide if it needs long-term (L) or single-session (S) observation.
- Use these labels to choose the right research approach.
Generating Ideas (Generative) vs. Validating Ideas (Evaluative):
Generative research creates new insights. Observing user behaviours or open-ended interviews help discover problems or opportunities. Evaluative research tests existing hypotheses or prototypes to see if something works or doesn’t.
Quantitative (What) vs. Qualitative (Why):
- "What" research is quantitative. It reveals facts and metrics (e.g., how many people abandon checkout).
- "Why" research is qualitative. It uncovers motivations, frustrations, and how to fix problems.
Long Term vs. One Off:
- Single-session studies capture immediate tasks or first impressions.
- Longitudinal or diary studies track user behaviour and attitudes over time.
Common Methods:
User Methodologies (to learn about users):
- Observational Research: Watching users in context or remotely.
- Contextual Inquiry: Deep immersion in the user’s environment.
- Customer Development: Early-stage interviews focusing on discoverable user problems and willingness to pay.
- Wizard of Oz / Concierge Tests: Manually simulating features to learn if users value them before full build.
- Diary Studies: Tracking behaviours or attitudes over days or weeks.
- Surveys (used carefully): Can help confirm patterns but are risky for open-ended idea generation.
Product Methodologies (to learn about the product):
- Usability Tests (task-based): Observing people as they attempt key tasks in a prototype or product.
- Five-Second Tests: Quickly testing if landing pages or designs communicate value.
- Funnel / Cohort Analysis: Tracking where users drop off or convert over time.
- A/B Tests: Comparing variations in a live environment to see which performs better.
- Observational Research: Watching existing usage flows.
- Solution Interviews: Testing a proposed solution with users to gauge acceptance.
- Diary Studies: Monitoring usage patterns or performance over an extended period.
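Of the product methodologies above, cohort analysis is the most mechanical, so here is a toy version with invented signup cohorts: for each cohort, what fraction is still active N months after signing up? Reading down a "month N" column compares cohorts at the same age.

```python
# Toy cohort analysis with made-up data.
cohorts = {
    # signup month: active-user counts at month 0, 1, 2, ...
    "2024-01": [100, 60, 45, 40],
    "2024-02": [120, 80, 66],
    "2024-03": [150, 105],
}


def retention_table(cohorts):
    """Fraction of each cohort's original size still active each month."""
    return {
        month: [active / counts[0] for active in counts]
        for month, counts in cohorts.items()
    }


for month, rates in retention_table(cohorts).items():
    print(month, " ".join(f"{rate:.0%}" for rate in rates))
```

If newer cohorts retain better at the same age (here, month-1 retention climbs from 60% to 70%), recent product changes are probably helping.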
Pitfalls in Choosing Methods
Some methods (like surveys or focus groups) can give misleading data if poorly designed or executed. Surveys rarely generate new insights unless you’re an expert writer of survey questions. Focus groups often combine multiple users’ opinions into an unhelpful blend.
Eye tracking and other advanced methods can be overkill without clear objectives.
Steve Krug’s Advice
A low-risk way to start usability testing is to test competitors’ products. This reveals design patterns or missteps without facing the fear of hearing your own product’s flaws. By watching users tackle tasks in rival products, you learn what works well and what doesn’t—then apply those lessons to improve your own offering. This simple approach builds empathy for real user struggles and avoids internal objections about time or criticism of “our baby.”
Research methods must align with the questions you want answered. Start by defining the user or product issue you need to explore, choose an appropriate generative or evaluative approach, and decide whether to measure “what” is happening or explore “why.” Combine multiple methods over time, and always stay focused on uncovering real problems and opportunities that drive better product decisions.
Chapter 4: Listen Better
Empathy involves understanding people’s situations and motivations within their own context. Instead of simply feeling sorry for them, it means walking in their shoes and learning the reasons behind their behaviours. Focused listening reveals these insights. Listening just to gather random data can create overwhelming amounts of information, so it’s important to start with a clear goal in mind: pick a specific question or topic you want to investigate, and keep that focus as you interview or observe users.
Certain questions lead to unhelpful answers:
- Asking “Will you buy this product?” encourages polite, optimistic responses rather than truth.
- Asking “What would we need to build to get you to buy?” outsources product design to users, who can’t usually envision good solutions.
A better approach is to discover the real problems and context in people’s lives. Rather than relying on users’ feature requests or guesses about the future, dig into the specific frustrations they face and how they’ve tried solving them. Explore their past and current behaviour for genuine intent to fix the problem.
When interviewing, it’s best to ask open-ended questions that invite storytelling. Start with easy background questions to build rapport. Move into the user’s workflow or relevant context, spotting friction points. Ask how they’ve tried to solve those problems or whether they’ve paid for a solution before. Encourage stories of actual events or daily routines. Avoid giving answers or steering the conversation toward what you want to hear. Let participants show you what they really do or have done. Follow up on interesting remarks until you understand why they feel as they do.
Below is a simple way to improve your interviewing skills:
Exercise: Interviewing Better
- Prepare a set of open-ended questions about a product or problem.
- Choose a real participant to interview and an observer to watch you, not the participant.
- Conduct a conversational interview, covering your prepared questions in any order. Don’t sell your product; don’t lead or help the participant; and listen more than you talk.
- Have the observer note if you do any of these: selling, using yes/no questions, leading, making the participant imagine features, helping them too soon, failing to follow up, or talking too much.
- Discuss feedback with the observer afterward and adjust your technique.
Listening can be risky if you interpret everything too literally. People sometimes misremember or exaggerate, so watch for patterns and observe behaviours whenever possible. Consider the context in which they use products, and look for differences between what they say and do. This combination of watchful observation and careful questioning yields deeper insights than simple verbal feedback alone.
The best interviewers practice a lot, record themselves, and reflect on what to improve. Studying how professionals (or skilled interviewers like Terry Gross) handle conversations can also provide new techniques. It’s important to build rapport gradually, using easier, objective questions at first before moving into deeper territory. A good interviewer stays alert to unspoken cues, noticing when participants hint at bigger problems or vulnerabilities. Observing and inferring from these signals is a key to uncovering the truths that help shape better products.
Part 3: Creation
Chapter 5: Have Better Ideas
Great ideas often emerge from observed user problems. Flickr started as a photo-sharing feature within an online game, and Slack grew out of a gaming team’s internal communication tool. Airbnb arose from the founders’ own housing challenges. These products succeeded by addressing real needs, not by guessing what people wanted. In existing products, many successful features also come from watching how people actually use them—like hashtags, originally a user hack in Twitter.
While any source of ideas can be valid, prioritising user research and understanding the “why” behind user behaviour leads to far more valuable concepts. How to generate better ideas:
Better Brainstorming
- Only Invite Informed Participants: Everyone present should have some stake in the product and be familiar with recent user research.
- State a Clear Goal: Focus on a specific outcome, like increasing conversion or identifying new markets.
- Free Listing: Each person silently writes ideas on sticky notes, one idea per note, before discussing. This fosters diversity of ideas.
- Dot Vote: Everyone gets a small number of stickers to vote on their favourite ideas. The goal is not to reach consensus; it’s to understand which ideas resonate with the team.
User-Defined Tasks
Users don’t always perform the neat, predefined tasks we imagine. During research or usability tests, let people show how they naturally use a product, or describe their most recent or most common tasks. Observing their own workflows may reveal unconventional usage patterns or new problems to solve.
Exercise: Mapping the Customer Journey
- Do the Research. Gather qualitative or quantitative data about how people encounter and use the product.
- Generate Touchpoints. Everyone silently writes down every interaction a user might have.
- Arrange in a Timeline. Cluster related touchpoints and order them from first encounter to deeper engagement or cancellation.
- Categorise. Label the major phases (e.g., Awareness, Education, Engagement).
- Annotate. Add notes on user feelings, goals, or stories at each step to highlight friction or opportunities.
- Save It. Photograph or redesign as a readable artefact so the team can reference it when generating ideas.
Beware of the pitfalls of endless ideation. Ideas are plentiful but can be useless if they’re disconnected from user needs. Some bad ideas may evolve into good ones over time, and vice versa, so keep revisiting assumptions as the market, technology, or users change.
Tips:
- Create Design Principles from Insights. Gather data from multiple sources (quant, stakeholder input, interviews), converge on insights, then convert those insights into actionable design principles (e.g., “Learn while doing”).
- Write More Effective Principles: They should be memorable, understandable, and easy to apply (e.g., “It should feel like a conversation”).
- Select the Right Collection. Choose five to eight principles that mix tactical constraints (like size requirements) and higher-level guidance (like tone or approach). They help ensure consistency across various product touch-points and teams.
Better ideas happen when teams understand their users’ contexts, constraints, and motivations. By combining good research with collaborative, user-focused exercises, you generate features and products people genuinely value.
Chapter 6: Prioritise Better
Good prioritisation involves deciding what to build and in what order, saying "no" to most ideas, and ensuring you release value sooner while avoiding bloat. Poor prioritisation leads to cluttered products, missed opportunities, and wasted effort. Two key questions to ask when evaluating any feature are:
- Does it create value for the company and the user?
- Is the return worth the effort?
You'll never perfectly predict return on investment or effort, but you can learn to make better estimates by involving the team and tracking results. Keep in mind that new and exciting features often overshadow vital tasks such as fixing technical debt, maintaining speed and stability, improving usability, or iterating on existing features.
Exercise: The Quick Estimate:
- Generate feature ideas: Everyone writes potential features on sticky notes silently for five minutes.
- Combine and explain: Post duplicates together and clarify each idea.
- Make a 2x2: Label one axis Easy → Hard, the other Low Return → High Return.
- Sort and discuss: Place sticky notes where they belong relative to each other.
- Get rid of half: Eliminate low-value ideas; focus on high-return features.
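The book's version of this exercise is sticky notes on a whiteboard, not code, but the same return-versus-effort tradeoff can be made explicit with rough scores. The ideas and 1–5 scores below are invented for illustration.

```python
# A sketch of the quick-estimate 2x2 as data: each idea gets rough
# (expected return, effort) scores, and we keep the best-ratio half.
ideas = {
    "One-click reorder": (5, 2),
    "Dark mode": (2, 3),
    "Bulk import": (4, 4),
    "Animated mascot": (1, 5),
}


def prioritise(ideas, keep_fraction=0.5):
    """Rank by return-to-effort ratio and keep the top fraction."""
    ranked = sorted(ideas, key=lambda name: ideas[name][0] / ideas[name][1],
                    reverse=True)
    return ranked[: max(1, int(len(ranked) * keep_fraction))]


print(prioritise(ideas))  # the high-return, low-effort half survives
```

The scores are guesses either way; the value of writing them down is that, as Teresa Torres suggests later in the chapter, you can compare predictions to actual results and get better at estimating.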
Exercise: Finding the Core
- Review research: Make sure everyone understands what problem you're solving.
- Generate product requirements: Five minutes of silent sticky-note writing about what the feature needs to include.
- Combine and explain: Remove duplicates. Expect many notes.
- Identify the core: Draw a circle labelled "core." Move only the absolutely essential notes inside it.
- Then identify what's next: Add a second ring labelled "Next (Maybe)." Place remaining "nice to have" or secondary ideas outside the core.
It's easy to confuse a roadmap with solid priorities: a roadmap that stretches years into the future is risky. Instead, maintain a short-horizon plan plus a backlog of ideas and experiments. Reevaluate consistently, but don't change directions so often that your team never finishes anything. Give proper attention to hidden must-dos such as bug fixes, technical debt, stability, or design improvements.
Tips from Teresa Torres
- Don’t prioritise in a vacuum. Instead involve stakeholders and customers so decisions aren't based on guesswork alone.
- Don’t overestimate potential impact. Instead record predictions and compare them to actual results to improve your estimations.
- Don’t keep it in your head. Use visual "maps" of customer needs, business models, and user stories so the entire team can see what's missing and what matters most.
Chapter 7: Design Better
When designing a product, focus first on how people will use it, where and when they’ll use it, and in what order tasks happen. Too many teams start with a screen layout and miss the overall context and flow. By identifying possible user paths, potential interruptions, and each user’s intention, you build experiences that match real user behaviours. This also surfaces the full complexity of features - so you can prioritise properly before building.
Exercise: What Happens Next?
- Make a table with columns for User, Intent, Success, and Series of Actions.
- For each type of user and their goal, write how they define "success."
- Ask "What happens next?" until you've covered all actions needed.
- Ensure no dead ends: each user's intent should be fully satisfied.
Teams often create designs that show data without addressing how that data will be input or updated. A quick way to spot these gaps:
Exercise: Matching Inputs and Outputs
- Print mockups or screenshots of each screen.
- Circle every piece of dynamic data (an "output").
- For each output, note who can change it, how, and any constraints or error states.
- Confirm that every output has a corresponding "input" method or management interface.
Consistent and simple interfaces speed implementation and reduce user confusion. A style guide ensures consistency:
Exercise: Make a Style Guide
- Print screens of your product and label each visual element or interaction (buttons, fonts, colours).
- Collect details (sizes, hex codes, spacing) in a spreadsheet. Note any conflicts.
- Resolve conflicts by standardising on one approach.
- Publish the guide (with code samples, if possible) so everyone on the team can apply it.
Design patterns (like infinite scroll or Tinder-style swiping) help when they match user tasks but can ruin usability if blindly copied. Always ask which pattern best supports the behaviour you need. A recognised approach in the wrong context leads to confusion.
Get ideas out of your head and onto paper. Sketching interfaces and flows clarifies complex interactions. "Wireflows" illustrate a user's path step-by-step. Draw simple "Star People / stick people" to show human context. Group-sketching sessions—like drawing six scenes of how your product improves a user's life—can spark new insights. Visualising ideas keeps meetings grounded and fosters understanding faster than words alone.
Chapter 8: Create Better User Behaviour
Most companies are inordinately attached to the idea of features. They celebrate when features ship. They write marketing emails sharing all the exciting new features they’ve just released. They give product managers promotions and reviews based on how many new features have been added to products. This is exactly wrong. Features don’t matter. They don’t matter at all. All that matters is customer behaviour.
Shipping is not the goal.
Instead of thinking backward from the feature—“If we add music, it might increase user engagement!” they needed to think forward from the goal—“If we want to increase user engagement, we could try several different product changes.” The great thing about this process is that you’re not limiting your options too early. Any given user behaviour might be affected by a number of product changes. User engagement can be improved in hundreds of ways, many of them difficult and expensive to build, but others quite simple. The hardest thing to build isn’t the most likely to cause the biggest change in user behaviour.
Products succeed by changing user behaviour, not just by shipping features. A feature alone doesn't guarantee results. Define what users need to do to find value and how their actions serve the business metrics you want to improve (e.g., engagement, conversion). Then, design each step so the user naturally moves toward that goal. Avoid overwhelming people with exploration; guide them to required tasks, offer helpful nudges toward secondary tasks, and let them discover advanced features over time.
Encouraging Behaviour Change
- Focus on changing behaviour that benefits both the user and the company (e.g., getting users to buy more items if it genuinely helps them find what they need).
- Avoid dark patterns: never trick users into unintended commitments. Deceptive tactics harm long-term loyalty and trust.
Exercise: Designing backward
- Draw the goal: Sketch or list what a fully engaged user looks like (e.g., profile filled, basic actions completed).
- Mark required items: Identify tasks absolutely needed for users to see initial value.
- Mark encouraged items: Call out tasks that help but aren't critical for first use.
- Mark eventual/advanced items: Let advanced features emerge naturally; don't force them too early.
- Onboard users: Build a guided path for required tasks. Lightly suggest encouraged tasks. Let advanced features reveal themselves later.
Exercise: Identifying user intent
- Define desired metric (e.g., "increase revenue").
- Link to user behaviour (e.g., "buy more items per order").
- Match a real user need (e.g., "stock up conveniently").
- Determine a trigger (e.g., "offer a discount for larger quantities").
- Treat these as hypotheses: If the trigger doesn't produce the behaviour change, refine or try a different approach.
| Desired Metric | User Behaviour | User Need | Trigger |
| --- | --- | --- | --- |
| x | x | x | x |
| x | x | x | x |
Amy Jo Kim's Game Thinking Framework:
- Sketch the Customer Journey in four stages: Discovery, Onboarding, Habit Building, Mastery.
- Write Job Stories ("When I ___, I want to ___ so I can ___.") for each stage.
- Design the Core Loop First: Figure out the daily/weekly habit that drives ongoing engagement.
- Build in Order: Perfect the core loop, then refine onboarding, then discovery, and finally mastery features for power users.
By focusing on the behaviours you want to see—rather than on a big feature checklist—you ensure that each new release makes sense for users from the moment they sign up, guides them toward immediate value, and builds healthy, long-term engagement.
Part 4: Validation
Chapter 9: Identify Assumptions Better
Product teams often base plans on assumptions that remain unexamined until they cause a product to fail in the market. Some assumptions are safe; others are highly risky. You can't rigorously test every decision, but you can identify which assumptions are most dangerous and mitigate them before committing too many resources.
Unexamined assumptions
Waterfall processes tend to push assumptions into the earliest stages, then never revisit them. Teams may discover too late that their core problem, solution, or implementation assumptions were invalid. By recognising assumptions early and revisiting them, you avoid wasting major investments of time and money on the wrong path.
Types of assumptions:
- Problem assumptions: Beliefs about the user's need (e.g., a market of people who want to share files across devices).
- Solution assumptions: Beliefs about whether the proposed solution satisfies that need (e.g., that users will trust an unfamiliar startup with their data).
- Implementation assumptions: Beliefs about whether you can successfully build and deliver the solution (e.g., synchronising files reliably across multiple operating systems).
Exercise: Finding assumptions
- Generate (3 minutes): Complete the sentence "This product will fail unless ___," writing each assumption on a separate sticky note.
- Share: Combine duplicates and clarify confusing points.
- Categorise: Label each assumption as Problem, Solution, or Implementation. If an assumption covers multiple points, split it into smaller ones.
- Refine: If an assumption is unclear, break it down further.
This reveals hidden assumptions and flags those that might be especially risky or costly if proven wrong.
The riskiest assumption: Focus on assumptions that are both likely to fail and would have a severe impact if they did. Plot them on a 2×2 grid (Likelihood vs. Impact). Items in the top-right corner need immediate attention. As you resolve one risk, a new top risk often emerges. The goal is not to eliminate risk entirely but to reduce it enough to move forward confidently.
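The likelihood-versus-impact ranking above can be sketched as a simple scoring pass. This is a minimal illustration, not a method from the book; the 1-5 scales and the example assumptions are made up:

```python
# Rank assumptions by risk: likelihood of being wrong x impact if wrong.
# Scales (1-5) and the example assumptions are illustrative only.
assumptions = [
    {"name": "Users will share files across devices", "likelihood": 4, "impact": 5},
    {"name": "Users trust a new startup with data", "likelihood": 3, "impact": 4},
    {"name": "Sync works across operating systems", "likelihood": 2, "impact": 5},
]

# Highest combined score = the top-right corner of the 2x2 grid.
ranked = sorted(assumptions, key=lambda a: a["likelihood"] * a["impact"], reverse=True)

for a in ranked:
    print(f'{a["likelihood"] * a["impact"]:>2}  {a["name"]}')
```

As the chapter notes, resolving the top item usually promotes a new top risk, so re-running the ranking after each experiment is the whole point.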
Exercise: Creating a falsifiable statement:
- Pick the riskiest assumption: Decide what you most need to verify.
- Formulate a testable hypothesis: Write what you believe (e.g., "Most of our target users have two devices") and define what outcome proves it (e.g., "80 out of 100 people interviewed confirm they use two devices for work").
- Decide criteria for success/failure: Specify the exact metrics or thresholds you need.
A clear hypothesis prevents ambiguous outcomes. Even if an assumption is difficult to test, articulating success/failure criteria forces you to clarify what being wrong would look like.
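The success/failure criteria above can be made explicit before the test runs. A minimal sketch, using the "80 out of 100" example from the exercise; the field names and the `evaluate` helper are hypothetical:

```python
# A falsifiable hypothesis as data: belief, metric, and an explicit threshold.
hypothesis = {
    "belief": "Most of our target users have two devices",
    "metric": "interviewees who use two devices for work",
    "observed": 0,     # filled in after the interviews
    "sample": 100,
    "threshold": 80,   # success criterion decided *before* the test
}

def evaluate(h):
    """Return True only if the pre-registered success criterion is met."""
    return h["observed"] >= h["threshold"]

hypothesis["observed"] = 83
print("validated" if evaluate(hypothesis) else "invalidated")
```

Writing the threshold down first is what makes the statement falsifiable: 79 confirmations is a failure, not a judgment call.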
The dangers of identifying assumptions
- Assumption stack: Teams may keep adding assumptions without revalidating old ones. If even one assumption fails, everything built on top is compromised.
- Changing truth: Invalid assumptions sometimes become valid over time (e.g., online grocery shopping was unviable in the 1990s, widely adopted later). Keep revisiting assumptions when markets or technologies shift.
Tips from Learie Hercules:
- Keep cycle times short: Aim for 12-week projects that deliver something valuable and learnable. Sustaining energy and stakeholder interest is easier in small cycles.
- Identify what users will forgive: Figure out the few things that must work perfectly and which can be imperfect but easily corrected by the user. Invest resources where they matter most.
- Identify assumptions early: Ask team members questions from a fresh perspective to uncover biases. Challenge whether old knowledge still applies.
- Create a safe space for experimentation: Run small tests so any single failure is not catastrophic. Encourage learning by framing dead ends as lessons rather than full failures.
- Don't try to get everything right the first time: Build, measure, and iterate even within short cycles. At mid-point, prepare for the next iteration based on feedback.
- Understand the different types of risk:
- Value proposition risk: Does anyone truly need this?
- Stakeholder risk: Is funding or support at risk if decision-makers change or lose interest?
- Market risk: Could timing or external factors make the product irrelevant?
- Team composition risk: Can your specific team build, launch, and maintain this solution?
- Feedback loop risk: Do you have a system of user input and metrics to learn from success or failure?
By systematically identifying and testing your assumptions, you reduce the chance of building the wrong product and increase your odds of delivering real value.
Chapter 10: Validate Assumptions Better
Teams often uncover hidden assumptions but still need to test them. Validating assumptions means admitting your best ideas might fail and then figuring out how to learn cheaply from experiments. Different testing methods apply to different types of assumptions (problem, solution, implementation). By picking the right method, you can glean insights before investing too much time and effort.
Several common validation methods help reduce risk:
- Landing Pages: Quickly gauge interest and messaging for a proposed product, though they won't prove people will buy.
- Audience Building: Create a blog, newsletter, or community around the problem space to learn what resonates. This provides early research participants and potential first users.
- Concierge: Solve a user's problem manually before automating anything. Great for learning exactly what users need, but not suited for large-scale social or hardware products.
- Wizard of Oz: Build a front-end as if features are automated, then secretly do the heavy lifting by hand. Ideal for testing ideas that can later be automated if they prove valuable.
- Fake Door: Insert a button or link for a yet-to-be-built feature. Measure clicks to see if users show enough interest. It won't reveal why they clicked, just how many did.
- Pre-Orders: See if people pay in advance. This best shows genuine demand, though it risks over-committing to certain product promises.
- Usability Testing: Watch real users perform tasks on prototypes or released features. It reveals how easily (or not) they accomplish tasks; it won't validate whether users truly need the product.
- Analytics and Metrics: Quantitative methods (A/B tests, funnel analysis) measure real user actions in the wild. They show "what" but not "why," and need sufficient sample sizes.
Exercise: Pick a Validation Method
- List Your Assumptions: From previous chapters, pick a problem, solution, and implementation assumption.
- Check Test Options: Match each assumption to a method in the validation table (e.g., problem assumptions might need a landing page, concierge, or pre-orders).
| Method | Problem (verify the problem exists) | Solution (verify the solution works) | Implementation (verify technical feasibility) |
| --- | --- | --- | --- |
| Landing Page | x | | |
| Audience Building | x | | |
| Concierge | x | x | |
| Wizard of Oz | x | x | |
| Fake Door | | x | |
| Pre-Orders | x | x | |
| Usability Testing | | x | |
| Analytics / Metrics | | x | x |
- Rewrite Hypothesis: Update your falsifiable statements using an appropriate test (e.g., if you pick pre-orders, define how many pre-sales count as success).
Exercise: The Hypothesis Tracker
Keep a clear record of each experiment so you can revisit outcomes:
- Experiment Name
- Owner
- Description
- Start/End Dates
- Predicted Cost
- Audience
- Metrics to Change & Amount
- Danger Metrics (what shouldn't get worse)
- Method
- Reasoning
Each time you test an idea, record what you expect to happen, why, and by when you'll check results. Compare real outcomes to your prediction. Note the suspected reason for success or failure and adapt accordingly.
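The tracker above maps naturally onto a small record type. A minimal sketch; the `Experiment` dataclass and all the example values are illustrative, not the book's template:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Experiment:
    """One row in the hypothesis tracker; fields mirror the list above."""
    name: str
    owner: str
    description: str
    start_date: str
    end_date: str
    predicted_cost: str
    audience: str
    metrics_to_change: dict        # metric -> expected change
    danger_metrics: list           # metrics that must not get worse
    method: str                    # e.g. "fake door", "pre-orders"
    reasoning: str
    outcome: Optional[str] = None  # filled in after the end date

# Hypothetical example entry.
fake_door = Experiment(
    name="Bulk-discount fake door",
    owner="PM",
    description="Button offering a discount on larger orders",
    start_date="2024-03-01", end_date="2024-03-14",
    predicted_cost="2 dev-days",
    audience="10% of logged-in shoppers",
    metrics_to_change={"clicks on offer": "5% of viewers"},
    danger_metrics=["checkout conversion"],
    method="fake door",
    reasoning="Cheapest way to gauge interest before building pricing logic",
)
print(fake_door.name, "->", fake_door.method)
```

Keeping `outcome` empty until the end date forces the prediction-before-result discipline the chapter recommends.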
The Dangers of Validating Assumptions
- Ignoring the why: Seeing a metric improve doesn't explain user motives. Dig deeper to understand behaviour.
- Failing to learn from failures: If metrics don't move as predicted, treat it as a genuine failure and investigate why.
- Choosing irrelevant metrics: Pick goals that actually matter to you (e.g., revenue, retention, team morale). You decide which results define success.
Janice Fraser's Advice:
- Start with the "leap of faith" assumption.
- Decide what evidence would prove or disprove it.
- Pick a method (landing page, wizard, etc.) to gather that evidence.
- Avoid unfalsifiable, vague assumptions like "people will like this."
- Once you've identified all assumptions, stack-rank them to find the biggest risks with the least evidence.
- You don't need the "perfect" assumption to test—just pick one that meaningfully reduces risk.
By systematically testing assumptions with the right method, you save time, build what customers actually need, and learn quickly from both successes and failures.
Part 5: Measurement
Chapter 11: Measure Better
Teams are often rewarded for shipping features on time and on budget, because that is easy to track, when they should really be measuring user and business impact. Good metrics clarify which changes genuinely make the product better. Without metrics, teams rely on guesswork, and user experience or business goals can suffer.
Why you should build in metrics early: We don’t want to discover critical problems too late. Integrating analytics and testing hooks from the start ensures you can measure the right user behaviours and quickly validate (or invalidate) decisions. Even adding one key metric to each new feature's user story can prevent major blind spots.
Definitions of Different Metric Types
- Business Metrics: Track core outcomes like revenue, churn, acquisition cost, or donations. They reveal whether you're creating real value (or profit), even if you're not strictly a for-profit organisation.
- User Experience & Engineering Metrics: Speed, downtime, bugs. These directly affect user satisfaction. Often overlooked because no single feature team "owns" them, yet poor performance can drive users away.
- Health Metrics: Safety checks so you don't improve one area at the cost of others. For example, you might increase initial engagement but harm long-term retention. Monitor "downstream" effects to ensure overall product health.
- Leading Metrics: Intermediate behaviours that (you hope) predict key outcomes. For instance, "number of daily runs" might lead to weight loss. Beware "gaming" them: if you only chase a leading metric, you can break the link to the real goal.
- Feature-Specific Metrics: Gauge usage of one feature (e.g., how many people use "save for later"). They help decide whether to iterate on or kill a feature, but shouldn't be the only metric.
- Vanity Metrics: Figures that climb no matter what, like "registered users". They look impressive but don't drive decisions or measure real success.
Measure what matters. Focus on metrics that guide choices. Ask: “What would I do if this metric changes?” If there’s no plausible action, it’s likely not worth tracking. For each new feature, define the specific business or user behaviour change you expect and measure exactly that.
Exercise: Pick a Metric
- Pick a new feature (e.g., “save this apartment” button).
- Ask yourself: “Why did we build this? Which user or business behaviour do we hope to change?”
- List possible metrics (e.g., number of clicks, short-term retention, etc.).
- Select the one that tells if the feature meets its main goal (e.g., does ‘saving’ increase revisit rate?).
You’ll rarely learn much from simple click counts alone. Connect feature usage to the broader behaviour you intended to influence.
Measurement Methodologies:
- A/B and Multivariate Testing: Shows different feature versions to separate user groups to determine which performs better. Requires significant traffic for statistical significance. Cannot reveal why versions win or lose. Can test entire user flows, not just small changes.
- Cohort Analysis: Groups users by traits like signup date or acquisition channel. Compares retention and other metrics to understand how changes affect different groups over time and which acquisition sources perform best.
- Funnel Analysis: Visualises sequential steps like onboarding or checkout to identify where users drop off. Requires tracking individual users through each step.
- DAU/MAU: Measures product "stickiness" as the ratio of daily to monthly active users. Monitor the trend over time, bearing in mind that usage frequency varies by product type.
- X over X: Compares performance across time periods (month over month, year over year). Highlights growth or decline patterns while accounting for seasonal changes. Rolling windows can reduce data noise.
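Two of the methodologies above reduce to simple arithmetic. A minimal sketch of funnel drop-off and DAU/MAU stickiness, using made-up step names and counts:

```python
# Funnel analysis: where do users drop off between sequential steps?
# Step names and counts are invented for illustration.
funnel = [
    ("visited", 1000),
    ("signed up", 400),
    ("completed profile", 240),
    ("made purchase", 60),
]

for (step, count), (next_step, next_count) in zip(funnel, funnel[1:]):
    drop = 1 - next_count / count
    print(f"{step} -> {next_step}: {drop:.0%} drop-off")

# DAU/MAU "stickiness": fraction of monthly actives who show up on a given day.
dau, mau = 18_000, 60_000
print(f"stickiness: {dau / mau:.0%}")  # 30%
```

The biggest percentage drop (here, profile to purchase) is where qualitative research into the "why" pays off most.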
The Dangers of Metrics
- Gaming the Metric: Fixating on a leading metric (like forcing people to add 7 friends) might break the true correlation to long-term health. Keep context in mind.
- Reporting Instead of Acting: A weekly dashboard is pointless if no decisions stem from it. Good data prompts questions like “Why did this happen?” “What will we do next?”
- Bad Data Collection: Sloppy methods or broken analytics lead to flawed conclusions. Valid tests require careful setup and consistent definitions.
- Giving Up: It takes effort to add metrics. Don’t abandon measurement if it’s hard or reveals unpleasant truths. Without data, you’re guessing in the dark.
Avinash Kaushik’s Advice:
- See-think-do-care: Segment users by intent. "See" means no purchase intent, "think" means mild curiosity, "do" means ready to buy, "care" are loyal customers.
- Pick metrics for each segment: Marketing to "do" requires measuring conversion. Reaching "see" calls for mindshare metrics (e.g., social shares), since they won't buy immediately.
- Use the right metric for the right audience: Don't judge content marketing by immediate sales if it's intended to educate early-stage prospects.
- Focus on one cluster at a time: Startups without product-market fit should focus on the "do" segment. Larger, mature companies can pursue bigger "see" or "think" audiences.
By selecting relevant metrics, building them in early, and applying the correct measurement methodology, you track real user value instead of vanity figures. This fosters faster, more confident product decisions and consistent improvement over time.
Chapter 12: Build a Better Team
A great product team shares a clear goal, trusts each member's unique skills, and collaborates fluidly without silos or anarchy. It's easy to structure organisations poorly—like isolating disciplines (silos), forcing every decision to be unanimous (communes), having one person dictating to everyone (dictatorships), or letting chaos reign (anarchy). Each extreme leads to communication breakdowns and slow, low-quality work. The most effective approach is akin to a "heist team," where specialists contribute uniquely yet align on a shared mission. Everyone understands user and business needs well enough to make independent choices, but they coordinate seamlessly to meet their target.
Types of Teams and Their Issues:
- Silos: Design, engineering, and other functions sit in separate teams, pass deliverables around, and rarely share deeper user or business insights.
- Communes: Everyone tries to do everything together, leading to design-by-committee and massive inefficiency.
- Dictators: One leader unilaterally decides all details. Talented team members either leave or stagnate, and the PM becomes a bottleneck.
- Anarchies: No shared goals or trust. Everyone acts alone, possibly competing rather than collaborating.
The "Heist Team" Solution:
- Specialists (PM, design, engineering, research, marketing, etc.) unite around a single, clearly defined goal.
- They trust each other and coordinate who does what.
- They work in small groups or individually when appropriate, then come together to align or solve problems.
- The PM ensures everyone shares the vision and keeps the team focused on user value and business impact.
Exercise: Forming a Heist Team
- Define the Shared Goal: Clarify what success looks like (e.g., a metric or problem solved).
- List Skills Needed: Identify essential roles (design, engineering, PM, etc.) and each member's unique strengths.
- Plan Together: Outline tasks and responsibilities. Let experts lead their parts but ensure everyone grasps the broader context.
- Execute in Small Groups: Individuals or pairs work efficiently on specialised tasks but stay accessible for quick feedback or pivots.
- Regroup and Iterate: Come back for reviews, testing, and to tackle emerging problems as a team.
A product manager must unify business goals, user needs, and technical feasibility. They require:
- Empathy: Understand and represent user problems.
- Prioritisation: Sort through competing ideas and constraints to find the highest-value work first.
- Vision: Provide a clear, overarching direction so others see how each feature and decision fits.
- Communication: Persuade stakeholders, rally teams around strategy, and ensure everyone's aligned on goals.
- Collaboration: Partner with designers and engineers, trusting them to lead in their areas while ensuring cross-functional synergy.
Tips from Irene Au:
- Different designers fit different needs (interaction, visual, research). Figure out the product's core challenges before hiring.
- PMs must guide teams with vision—merely creating wireframes or spec sheets isn't enough if nobody knows the bigger picture.
- Foster a culture where PMs, designers, and engineers all learn from users. Avoid territorial behaviour; collaboration is key.
Tips from Dan Olsen:
- The best teams feature a strong triad: product, design, and engineering each leading in their domain.
- Product managers handle market/business goals, designers create user flows and experiences, and engineers solve technical challenges.
- Everyone stays involved through the development cycle, but at different times one function may step forward to lead.
- PMs must ensure alignment and a clear backlog. Use short syncs (e.g., stand-ups) and one-pagers describing user problems.
- Good PMs are decisive yet open to feedback, adjusting priorities based on data and team insights.
A well-structured, trust-filled team aligned around real user problems, guided by a clear product vision, and supported by strong PM leadership is essential for building better products faster.
Part 6: Iteration
The exercises in this book aren’t meant to be done once and then forgotten. They all get better with iteration. Your products get better with iteration. Your team gets better with iteration. You get better with iteration. So iterate. And build better products.