David J. Bland & Alexander Osterwalder
Review
This book offers a step-by-step approach to testing business ideas, helping teams avoid pouring time, money, and energy into concepts that customers won’t adopt or that can’t be delivered profitably. It takes you beyond theoretical frameworks like the Business Model Canvas or Value Proposition Canvas and shows how to put those tools into action through experiments and evidence-driven decision-making.
This is a great primer on important product discovery and customer validation concepts.
Key Takeaways
The 20% that gave me 80% of the value.
There are three core risks that can derail any new venture: desirability (does anyone want this?), feasibility (can it be built and delivered effectively?), and viability (does it make financial sense?). By systematically tackling these risks in an iterative manner, teams gain clarity on whether to move forward, pivot, or stop altogether.
Much of the book focuses on designing the right environment for experimentation, starting with how to assemble a diverse and cross-functional team. Make sure you have representation from product, engineering and design, but also look to include marketers, legal experts, finance specialists, and more. Psychological safety, entrepreneurial thinking, and dedicated time for testing can all help the team thrive.
Shaping the initial idea is equally important. The authors recommend beginning with intuition to form a preliminary business model, then rapidly refining it through quick loops of design, testing, and learning. This ensures that each iteration is grounded in real-world insights, rather than guesswork or siloed thinking.
A core practice is to identify and prioritise hypotheses. Each hypothesis expresses an assumption—“We believe that…”—about your product, market, or customers. They should be clear, testable, and narrow in scope, making it simpler to find the right evidence to confirm or refute them.
Before you invest heavily in building your product, the authors show how to design experiments that are cheap, fast, and reliable enough to highlight flaws or opportunities early. By measuring what people do, rather than just what they say, you get stronger evidence of real demand, or lack thereof.
Through these experiments, teams gather data of varying quality: facts from real-world settings yield stronger evidence than mere opinions or lab simulations. Each experiment’s output then feeds into a learning phase, where you interpret results to see if they support or contradict your hypothesis. Insights gained here inform the next pivot, step, or refinement.
After analysing the evidence, the team decides whether to persevere with the current plan, pivot to a new approach, or kill the idea entirely. This flexible stance helps organizations avoid excessive “sunk cost” thinking, preventing them from clinging to an unworkable idea for too long.
The book dedicates an entire section to experiment selection, outlining different types that fit various stages of innovation. Early “discovery experiments” like customer interviews or simple landing pages produce relatively quick, low-cost insights. Later “validation experiments” such as pre-sales, crowdfunding campaigns, or fully functional prototypes demand more resources but yield stronger confirmation.
The authors provide a wealth of practical examples and experiment “menus,” covering both B2B and B2C contexts. For instance, hardware startups might begin with a basic 3D print, while software ventures might launch clickable prototypes. All of these experiments share a common goal: to reduce uncertainty before committing real capital.
Readers learn that each experiment can and should evolve as more evidence emerges. Early-stage experiments might merely check if the broad concept resonates with target customers. Subsequent tests become increasingly sophisticated, aiming to validate whether customers will pay, or whether production is feasible at scale.
Another key element is avoiding common pitfalls. The authors warn against rushing to build without testing, or relying solely on what customers say they want or say they would do. They also highlight the dangers of confirmation bias, insufficient sampling, and running just one experiment per critical hypothesis. Multiple angles of evidence typically paint a far more accurate picture.
Leaders play a crucial role in creating a culture that values experimentation. Instead of dictating top-down decisions, they should provide an environment where teams can safely test, fail fast, and learn without fearing punishment. This involves removing roadblocks—whether in budget, access to customers, or decision-making processes—so that experimentation can happen swiftly.
Facilitation skills become indispensable. Leaders must ask questions rather than provide answers, helping teams think critically about their experiments and the resulting data. They must also encourage “strong opinions, weakly held,” so that entrenched viewpoints don’t stand in the way of new evidence.
With this supportive climate, accountability shifts from “Did you ship the feature?” to “Did you achieve the business outcome?” Teams are empowered to solve real problems for customers and gather evidence to show they’re on the right track. Traditional structures and siloed departments often need reshaping to make this happen effectively.
Cross-functional collaboration is repeatedly emphasised, since successful experiments often require quick feedback loops between design, tech, marketing, and other disciplines. Small, nimble teams that combine multiple skill sets can iterate more effectively in uncertain environments, beating out larger but more rigid organisational silos.
As an ongoing process, the book encourages an experimental mindset even after an idea gains traction. Products, markets, and customers evolve, so continuous testing and refinement help avoid stagnation. The guidance on how to manage scaling tests—moving from quick trials to higher fidelity experiments—supports that long-term perspective.
From shaping hypotheses to deciding the final fate of a concept, every step is grounded in “learning.” The learning cards, test cards, and other documented methods ensure that insights don’t stay locked in someone’s head. Instead, the whole team or organisation can share and build on what was discovered.
Ultimately, the message is that business success isn’t about flashy pitches or unwavering self-belief; it’s about gathering and acting on evidence. The authors equip readers with both the mindset and practical tools to become more confident in choosing which ideas to pursue and which to leave behind.
By reframing experiments as a crucial source of business intelligence, the book offers a fresh take on risk management. It pushes entrepreneurs and intrapreneurs to confront uncertainty head-on, which in turn leads to stronger, more validated innovation.
In the end, this approach not only reduces the likelihood of failure but also uncovers new opportunities. When you replace assumptions with data at each step, you spot untapped market segments, overlooked customer pains, and unexpected monetisation routes—all of which can lead to a stronger, more resilient business model.
Deep Summary
Longer form notes, typically condensed, reworded and de-duplicated.
Introduction
The book aims to help you: start testing business ideas, boost your testing skills, and scale testing in an organisation.
Sits alongside: Business Model Generation and Value Proposition Design.
Test ideas thoroughly before executing, to avoid wasting time, energy and resources on ideas that won’t work.
An entrepreneur's number-one task is to reduce risk and uncertainty.
- A1. Decide on the business model: take an idea; define and assess your Business Model Canvas; define and assess your Value Proposition Canvas.
- A2. Test the business model: identify and prioritise key hypotheses; identify the best way to validate them with experiments; run experiments; document key insights; adapt the business model / value proposition.
- B. Execute on the business model: execute once the key hypotheses have been validated.
Three Types of Risk
- Desirability risk: focuses on customer interest. This involves evaluating whether your target market is large enough, if your value proposition resonates with enough potential customers, and if you can effectively reach and acquire those customers.
- Viability risk: examines the financial aspects of the business. Here you need to determine if you can earn sufficient revenue, identify sustainable revenue streams, and ensure that costs remain manageable for the business to be profitable.
- Feasibility risk: addresses your ability to build and deliver the product or service. This involves securing key resources like technology, intellectual property, and brand assets, as well as establishing the necessary capabilities and partnerships to operate successfully.
1.1 Design the team
- Make it cross-functional:
- Design, product, tech, legal, data, sales, finance, marketing, research
- Make it diverse:
- race, ethnicity, gender, age, experience and thought
- About experience…
- entrepreneurial experience is a plus, but many testing tools are usable without it
- Team Behaviour
- Psychological Safety
- Data influenced
- Experiment driven
- Customer centric
- Entrepreneurial
- Iterative process
- Question assumptions
- Environment
- Dedicated - 100% time allocation
- Small dedicated teams outperform teams with split responsibilities
- Funded
- Autonomous: ownership of problem, freedom
- Company to provide:
- Support: leadership and coaching
- Access: customers and resources
- Direction: strategy, guidance and KPIs
- Team Alignment:
- Mission Statement
- Period of operation
- Joint objectives: what do we intend to achieve together
- Joint commitments: who does what
- Joint resources: what do we need to achieve our objectives
- Joint risks: what can prevent us from succeeding
1.2 Shape the Idea
- Design Loop
- Shape and reshape the idea until you find the best business model and value proposition.
- The first iteration is based on intuition and your starting point.
- Subsequent iterations are based on evidence and insights from the testing loop.
- Step 1. Ideate - come up with lots of alternatives
- Step 2. Business Prototype - narrow down, canvases, tests
- Step 3. Assess - is this the best way to address customer pains? The best way to monetise?
2.1 Hypothesise
- In short…
- Identify the hypotheses underlying your idea
- Prioritise the most important hypotheses
- Intro to hypothesis
- From the Greek for “to suppose”. In this context: an assumption that your value proposition, business model, or strategy builds on; what you need to learn about to understand whether your business idea might work.
- A good hypothesis:
- Starts with ‘we believe that…’
- Use it in the negative to avoid bias, e.g. ‘we believe that millennials won’t subscribe to x’
- Testable: Validated or invalidated with an experiment
- Precise: Who, what and when of your assumption
- Discrete: Only one distinct testable and precise thing to investigate
- Risk types:
- Desirable (explore first)
- Customer profile - jobs, pains, gains
- Value map - products and services, gain creators, pain relievers
- Customer segments: big enough, right ones, they exist
- Value proposition: right ones for segments, unique enough
- Channels: reach and acquire customers, master channels & deliver value
- Customer relationships: build with right customers, switching, retaining
- Feasible (explore second)
- Key activities: perform them at scale, at the right quality
- Key partners: create key partnerships
- Key Resources: secure & manage key resources (tech, IP, human, financial, etc)
- Viable (explore third)
- Revenue streams: customers’ willingness to pay, sufficient revenue generation
- Cost structure: manage infrastructure costs, keep them under control
- Profit: generate more revenue than cost
- Mapping and prioritisation:
- Intro:
- Invite core and supporting team
- Pin up the Business Model Canvas
- Pin up the Value Proposition Canvas
- Map Assumptions:
- Invite folks to identify hypotheses
- Write 1 per sticky note and stick next to relevant area
- Refine with team (testable, precise, discrete, short)
- Prioritise
- Prioritise along two axes: importance and existing evidence
- Tackle desirability risks first, then feasibility, then viability
- Pick the important hypotheses with little supporting evidence to experiment on first (a minimal prioritisation sketch follows below)
Desirability: do they want this? Feasibility: can we do this? Viability: should we do this?
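A minimal sketch of this prioritisation step; the Hypothesis shape, the 1-to-5 scales, and the sort order are my own illustration of the two-axis idea, not the book's notation:

```typescript
// Hypothetical shape for a mapped hypothesis sticky note.
interface Hypothesis {
  statement: string;   // "We believe that…"
  risk: "desirability" | "feasibility" | "viability";
  importance: number;  // 1 (peripheral) to 5 (the idea dies without it)
  evidence: number;    // 1 (no evidence yet) to 5 (strong existing evidence)
}

// Desirability before feasibility before viability.
const riskOrder = { desirability: 0, feasibility: 1, viability: 2 };

// Important hypotheses with little evidence float to the top.
function prioritise(hypotheses: Hypothesis[]): Hypothesis[] {
  return [...hypotheses].sort(
    (a, b) =>
      riskOrder[a.risk] - riskOrder[b.risk] ||
      (b.importance - b.evidence) - (a.importance - a.evidence)
  );
}

const next = prioritise([
  { statement: "We believe that freelancers struggle with invoicing",
    risk: "desirability", importance: 5, evidence: 1 },
  { statement: "We believe that we can automate invoicing at scale",
    risk: "feasibility", importance: 4, evidence: 3 },
]);
console.log(next[0].statement); // the first hypothesis to test
```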
2.2 Experiment
- Introduction
- Design experiments: turn hypotheses into experiments; start with cheap, fast ones
- Run experiments: run them like a scientist and collect clean evidence
- Experiment to reduce risk and uncertainty
- Based on the scientific method. Evidence produced can be weak or strong. Can be fast/slow or cheap/expensive.
- A good experiment
- Precise enough to be replicated by team members to generate usable comparable data.
- Defines the following precisely:
- Who = test subject
- Where = context
- What = test elements
- Components: Hypothesis, Experiment, Metrics, Criteria
- Can use multiple experiments for the same hypothesis
- Experiment Card
- Test Name & Test Deadline
- Assigned to & Duration
- Hypothesis (We believe that…)
- Test (To verify that we will…)
- Metric (and measure…)
- Criteria (we are right if…)
- Hypothesis Criticality
- Test Cost
- Data Reliability
- Time required
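As a rough illustration, the Experiment (Test) Card could be modelled as a record; a minimal TypeScript sketch where the field names, scales, and example content are my own paraphrase, not an official schema:

```typescript
// Hypothetical model of the book's Test Card (field names are paraphrased).
interface TestCard {
  testName: string;
  deadline: string;            // e.g. an ISO date
  assignedTo: string;
  durationDays: number;
  hypothesis: string;          // "We believe that…"
  test: string;                // "To verify that, we will…"
  metric: string;              // "…and measure…"
  criteria: string;            // "We are right if…"
  hypothesisCriticality: "low" | "medium" | "high";
  testCost: "low" | "medium" | "high";
  dataReliability: "weak" | "medium" | "strong";
  timeRequired: "short" | "medium" | "long";
}

// Example instance (illustrative content only).
const card: TestCard = {
  testName: "Landing page smoke test",
  deadline: "2024-06-01",
  assignedTo: "Sam",
  durationDays: 10,
  hypothesis: "We believe that freelancers will sign up for automated invoicing",
  test: "To verify that, we will run a simple landing page with a sign-up CTA",
  metric: "…and measure the email sign-up conversion rate",
  criteria: "We are right if at least 10% of visitors sign up",
  hypothesisCriticality: "high",
  testCost: "low",
  dataReliability: "medium",
  timeRequired: "short",
};
```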
2.3 Learn
- Introduction
- Strength of evidence
- Different Experiments create different evidence
- Customer interviews (transcripts and quotes)
- Search trend analysis (search data)
- Small production run (time to create, cost to create, customer sat)
- Insights
- What you learn from studying the evidence
- Learning related to the validity of a hypothesis and the potential discovery of new directions
- The foundation to make informed business decisions and take action
- Insights learning card
- Date of learning
- Person responsible
- Hypothesis (we believed that…)
- Observation (we observed…)
- Insights (from that we learned that…)
- Decisions and actions (therefore, we will…)
- Data reliability
- Action required
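The Learning Card pairs with the Test Card; a similar hedged sketch, again with paraphrased field names and illustrative content:

```typescript
// Hypothetical model of the book's Learning Card (field names are paraphrased).
interface LearningCard {
  dateOfLearning: string;      // e.g. an ISO date
  personResponsible: string;
  hypothesis: string;          // "We believed that…"
  observation: string;         // "We observed…"
  insight: string;             // "From that we learned that…"
  decisionAndAction: string;   // "Therefore, we will…"
  dataReliability: "weak" | "medium" | "strong";
  actionRequired: "low" | "medium" | "high";
}

const learning: LearningCard = {
  dateOfLearning: "2024-06-12",
  personResponsible: "Sam",
  hypothesis: "We believed that freelancers would sign up for automated invoicing",
  observation: "We observed a 3% sign-up rate against our 10% success criterion",
  insight: "From that we learned that the value proposition isn't landing as phrased",
  decisionAndAction: "Therefore, we will reword the proposition and re-run the test",
  dataReliability: "medium",
  actionRequired: "high",
};
```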
1. Analyse the evidence (distinguish between strong and weak).
2. Gain insights (the key learnings from analysing the data; they support or refute the hypothesis and build your understanding).
| Strong evidence | Weak evidence |
| --- | --- |
| Facts and evidence | Opinions and beliefs |
| What people do | What people say |
| Real-world settings | Lab settings |
| Large investments | Small investments |
2.4 Decide What to do next
- Your Options
- Persevere (continue testing idea based on evidence and insights)
- Pivot (make a significant change to elements of the value proposition or business model)
- Kill (stop investing time in validating idea, likely it won’t work)
- Evidence refutes hypothesis?
- Kill
- Pivot
- Test a different way
- Evidence supports hypothesis?
- Test the next critical hypothesis
- Same hypothesis, next experiment, higher fidelity
- New insight?
- Kill, Pivot, Persevere
- Unclear insight?
- Continue testing
3. Experiments
3.1 Selecting an Experiment
- Experiment Selection Questions
- Hypothesis Type
- What is the major learning objective?
- Some produce better evidence for desirability, some for viability and some for feasibility
- Level of uncertainty
- How much evidence do you already have?
- The less you know, the less time, energy, and money you should risk
- When you know little, you only need weak evidence to guide your direction, so pick cheap, quick experiments
- The more you know, the stronger the evidence you need, so pick longer, more expensive experiments
- Urgency
- How much time do you have until the next major decision point?
- Selection may depend on time and money constraints
- Rules of thumb:
- Go quick and cheap at the beginning
- Increase the strength of evidence with multiple experiments for the same hypothesis
- Always pick the experiment that gives the strongest evidence given constraints
- Reduce uncertainty as much as you can before you build anything
- Truth curve: as you invest more and run more experiments over time, uncertainty drops and the evidence gets closer to the truth
- Discovery vs Validation
- Discovery
- Weak evidence is sufficient to discover whether your general direction is right
- You get first insights into your most important hypotheses
- Validation
- Strong evidence is required to validate the direction you’ve taken
- You aim to confirm the insights gained from your most important hypotheses
- Discovery experiments are typically faster and cheaper, but produce weaker evidence.
- Validation experiments produce stronger evidence, but take longer on average.
- Experiment Characteristics
- Each experiment can be profiled on characteristics that help when choosing, for example:
- Time to set up (e.g. 2 days)
- Time to run (e.g. 10 days)
- Capabilities needed to set up and run (e.g. design, data, tech)
- Cost (e.g. £0)
- Evidence strength (weak to strong)
- Good for: desirability, feasibility, or viability
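A minimal sketch of how these characteristics might drive selection under the rules of thumb above; the ExperimentProfile shape and the numeric scales are my own assumptions:

```typescript
// Hypothetical profile of an experiment, mirroring the characteristics above.
interface ExperimentProfile {
  name: string;
  setupDays: number;
  runDays: number;
  costGbp: number;
  evidenceStrength: 1 | 2 | 3 | 4 | 5; // 5 = strongest
}

// Rule of thumb: pick the strongest evidence that fits your time and budget.
function selectExperiment(
  options: ExperimentProfile[],
  daysUntilDecision: number,
  budgetGbp: number
): ExperimentProfile | undefined {
  return options
    .filter(e => e.setupDays + e.runDays <= daysUntilDecision && e.costGbp <= budgetGbp)
    .sort((a, b) => b.evidenceStrength - a.evidenceStrength)[0];
}

const pick = selectExperiment(
  [
    { name: "Customer interviews", setupDays: 2, runDays: 10, costGbp: 0, evidenceStrength: 2 },
    { name: "Presale", setupDays: 5, runDays: 20, costGbp: 500, evidenceStrength: 5 },
  ],
  14,  // two weeks until the next decision point
  100  // small budget
);
console.log(pick?.name); // "Customer interviews": strongest evidence within constraints
```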
- Experiment sequences: each experiment guides and informs the next; as the evidence stacks up, the experiments get larger and carry more weight
| Context | Example sequence |
| --- | --- |
| B2B hardware | Customer interview > paper prototype > 3D print > data sheet > mash-up MVP > letter of intent > crowdfunding |
| B2B software | Customer interview > discussion forums > boomerang > clickable prototype > presale > single-feature MVP |
| B2B services | Expert stakeholder interviews > customer support analysis > brochure > presale > concierge |
| B2C hardware | Customer interview > search trend analysis > paper prototype > 3D print > explainer video > crowdfunding > pop-up store |
| B2C software | Customer interview > online ad > simple landing page > email campaign > clickable prototype > mock sale > wizard of oz |
| B2C services | Customer interview > search trend analysis > online ad > simple landing page > email campaign > presale > concierge |
| B2B2C (with B2C) | Customer interview > online ad > simple landing page > explainer video > presale > concierge > buy a feature > data sheet > partner & supplier interview > letter of intent > pop-up store |
| Highly regulated | A day in the life > validation survey > customer support analysis > sales force feedback > storyboard > explainer video > brochure > partner & supplier interview > data sheet > presale |
Discovery Experiments
- Exploration: Customer Interview, Expert Stakeholder Interviews, Partner & Supplier Interviews, A Day in the Life, Discovery Survey
- Data Analysis: Search Trend Analysis, Web Traffic Analysis, Discussion Forums, Sales Force Feedback, Customer Support Analysis
- Interest Discovery: Online Ad, Link Tracking, 404 Test (link to nowhere), Feature Stub (button to popup), Email Campaign, Social Media Campaign, Referral Program (see the feature-stub sketch after this list)
- Discussion Prototypes: 3D Print, Paper Prototype, Storyboard, Data Sheet, Brochure, Explainer Video, Boomerang (customer test on a competitor’s product), Pretend to Own (non-functioning low-fidelity prototype)
- Preference & Prioritisation Discovery: Product Box (a mock cereal box with features, benefits, etc.), Speed Boat (identify the anchors slowing down progress), Card Sorting, Buy a Feature
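To make the 404 test / feature stub concrete, here is a minimal browser-side sketch; the element ID and the /api/interest endpoint are hypothetical placeholders, not from the book:

```typescript
// Minimal feature-stub sketch: the button exists, the feature doesn't.
// Clicks are counted as evidence of interest before anything is built.
const stub = document.getElementById("export-to-pdf"); // hypothetical feature button

stub?.addEventListener("click", () => {
  // Record the click as a data point (placeholder endpoint).
  void fetch("/api/interest", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ feature: "export-to-pdf", ts: Date.now() }),
  });
  // Be honest with the visitor: the feature is not available yet.
  alert("Coming soon! Thanks for letting us know you're interested.");
});
```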
Validation Experiments
- Interaction Prototypes: Clickable Prototype, Single-Feature MVP, Mash-Up (functioning MVP combining multiple existing services to deliver value), Concierge (deliver the automated experience manually first), Life-Sized Prototype
- Call to Action: Simple Landing Page (includes a call to action), Crowdfunding, Split Test (A/B test), Presale (make the sale before the product is available), Validation Survey (see the split-test sketch after this list)
- Simulation: Wizard of Oz (deliver the service manually, hiding the people, to see if it’s worth automating), Mock Sale (high-fidelity prototype in store; customers believe they can buy), Letter of Intent, Pop-Up Store, Extreme Programming Spike (feasibility prototype)
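And a minimal split-test (A/B) sketch; the deterministic bucketing by visitor ID and the sign-up metric are illustrative choices, not the book's prescription:

```typescript
// Deterministically bucket each visitor into variant A or B, then compare
// conversion rates once enough traffic has passed through.
type Variant = "A" | "B";

function assignVariant(visitorId: string): Variant {
  // Simple string hash so a returning visitor always sees the same variant.
  let hash = 0;
  for (const ch of visitorId) hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  return hash % 2 === 0 ? "A" : "B";
}

const results = { A: { views: 0, signups: 0 }, B: { views: 0, signups: 0 } };

function recordView(visitorId: string): void {
  results[assignVariant(visitorId)].views++;
}

function recordSignup(visitorId: string): void {
  results[assignVariant(visitorId)].signups++;
}

// Conversion rate per variant; compare only after a sensible sample size.
const conversionRate = (v: Variant) =>
  results[v].views === 0 ? 0 : results[v].signups / results[v].views;
```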
Mindset
Avoid experiment pitfalls
- Time trap: not dedicating enough time. Dedicate time, set weekly goals, visualise the work.
- Analysis paralysis: overthinking. Time-box analysis, differentiate between reversible and irreversible decisions, avoid debates of opinion.
- Incomparable data/evidence: messy data that isn’t comparable. Use test cards; make the test context, subject, and metrics explicit; involve stakeholders.
- Weak data/evidence: only measuring what people say, not what they do. Don’t believe what people say; run call-to-action experiments; generate evidence that gets people as close as possible to real-world situations.
- Confirmation bias: only believing evidence that agrees with your hypothesis. Involve others in synthesis, write compelling hypotheses that challenge beliefs, conduct multiple experiments.
- Too few experiments: conducting only one experiment for a key hypothesis. Conduct multiple experiments, differentiate between weak and strong signals, and increase the strength of evidence as your uncertainty decreases.
- Failure to learn and adapt: not taking time to analyse evidence and generate insights and actions. Make time to synthesise, identify the patterns that matter, and create rituals to keep eyes on the prize.
- Outsourced testing: outsourcing what you should do yourself. Shift resources to internal teams and build up a team of testers.
Lead through experimentation
Language: don’t accidentally disempower teams by being too forceful with your language. It’s important that they feel they have autonomy; otherwise they’ll wait for you to define the work.
Accountability: don’t hold teams accountable for shipping features; instead, give them enough freedom to be accountable for business outcomes.
Facilitation: as a leader, facilitation becomes more important than giving prescriptive direction. Encourage teams to pursue multiple directions and run multiple experiments.
Strong opinions, weakly held: start with a hypothesis, but be open to being wrong.
Steps Leaders can take:
- Create an enabling environment: Create space and adopt a culture of testing ideas. This often means different processes and success metrics.
- Remove obstacles and open doors: access to customers, brand, IP and other resources.
- Make sure evidence trumps opinion: change decision-making to emphasise the importance of gathering evidence.
- Ask questions rather than provide answers: relentlessly enquire about evidence, insights and patterns that validate business ideas.
Silos vs Cross-Functional Teams: when there’s ambiguity over the solution, you need to work as a cross-functional team. Things move fast, and being cross-functional makes you more agile; small, dedicated cross-functional teams can outperform much larger teams in an ambiguous space.