Evidence Guided

Evidence Guided

Author

Itamar Gilad

Year
2024
image

Review

Directionally I think the advice in this book is spot on - we need more evidence-guided product management. The author lays down one of the cleanest arguments for why an evidence-guided approach is needed, and why opinion-driven product management is hopeless. The author also does a good job of hooking this approach into more traditional product development ways of working.

Even though I fully agree with the theory in the book, I think the practical approach is a little too opinionated and clunky (for example, the author overweights the importance of numerical ICE scores IMHO), but it’s not a terrible place for an immature product team to start.

Key Takeaways

The 20% that gave me 80% of the value.

Product Managers in evidence-guided companies can share research, test results, and learnings in support of their ideas. Product managers in opinion-based companies will tell you smart people conceived, reviewed, and approved the plan, and that it’s backed by irrefutable logic.

Evidence-guided companies understand how difficult it is to change outcomes and do everything they can to increase their odds. An evidence-guided approach improves resource efficiency, reduces planning time, suppresses politics, builds trust, empowers people and teams, and delivers value to the business and to customers faster.

An Evidence-Guided Scorecard: how evidence-guided are you?

We need a system that acknowledges uncertainty and works to improve the ratio of success versus failure. The GIST Model is a meta-framework that combines tried-and-tested product methodologies:

  • Goals define the outcomes we wish to achieve (measurable benefits for customers and for the business).
  • Ideas are hypothetical ways to achieve the goals.
  • Steps are short projects or activities that help us develop and test an idea and gather supporting evidence (often without coding)
  • Tasks are work items that go into doing the work (managed by your existing Scrum or Kanban process)

Output goals are harmful because most ideas yield little or no value, and betting big on unproven ideas is likely to produce waste.

A key component of this evaluation is Confidence, which measures the strength of the evidence in support of an idea.

image

If we don’t seek to gather evidence and instead follow the plan-and-execute approach, we’re forced to commit when levels of uncertainty are high. Given that only a minority of ideas create measurable impact, and a high ratio of launched features and products get little or no use, this is a bad bet.

Rely on opinions, sparse data, consensus, and experience and your track record will be abysmal.

Evidence-guided companies improve the odds of success by supercharging human judgment with evidence. They continuously evaluate and test ideas and update the plans based on what they learn.

The GIST model (Goals, Ideas, Steps, Tasks) helps you follow best practice for goal-setting, idea evaluation, experimentation, and execution.

Goals

Don’t debate ideas without first agreeing on the goals (what you’re trying to achieve). Instead define firm measurable goals that state the positive change you wish to create (outcomes) first. This approach is much more robust in the face of uncertainty.

Evidence-Guided Goals are best expressed as OKRs:

  • Objective: describes the desired end state or the direction of change. They are aspirational and inspiring, but not necessarily measurable or even fully feasible.
  • Key results: 2–5 measurable targets that define what success is in the current goal cycle. Where possible, define both the current value and the target (e.g. grow the ratio of content creators with more than 10,000 followers from 8.5% to 10%).
  • Context: a short explanation as to why you think this goal is important, what relevant evidence you’ve found, and any other relevant info.

Product ideas aren’t included in OKRs - keeping teams focused on the outcome and giving them freedom to change how they get there.

Express the value exchange loop between your customers and your business with two clear measurable metrics:

  • The North Star Metric measures how much value we deliver
  • The Top Business Metric measures how much value we capture

To identify your North Star Metric ask “Why are customers using my product? What is the core value they seek?”

Replace mission statements with yearly company objectives by combining them with the two main value exchange metrics as KRs. For example:

  • Objective: Help enterprise employees express themselves better with smart docs (Mission)
  • KR1: Grow number of docs created per month from 100K to 250K (+150%) (North Star)
  • KR2 : Grow yearly revenue from $1.1M to $2M (+81%) (Top Business Metric)

Metrics Trees

The North Star and Top Business Metric are typically lagging indicators. Break them into sub-metrics which you can influence more readily.

image

The metrics tree (or graph) shows the connections between the most important metrics at the top and the most actionable metrics at the bottom. Some teams will define one key metric (called the Local North Star) which they continuously try to improve on.

Three Step OKR Alignment Process:

  1. Set yearly top-down goals: Leadership publishes draft OKRs (3–6 objectives, each with 2–5 key results) - the fewer the better. The OKRs explain where the company should be by the end of the year. The choice of goals is influenced by the company’s strategy, values, and the opportunities and threats ahead. If leaders don’t have enough information to set KR values, they leave the values blank. Leaders should solicit feedback on their OKRs.
  2. Set bottom-up goals quarterly: Leaders ask reports to propose OKRs of their own. A company of 500 people should have just two levels of goals (company level and team level); larger orgs may need more tiers. Product team goals should be set by the product manager, tech lead and design lead. Each team should adapt the OKRs to its area of responsibility (if it makes sense to do so). Pick key results that are leading sub-metrics (if unsure, conduct research). Set target values for key results based on baselines and trends, aiming for ambitious-yet-achievable. Don’t punish the team for missing its targets or reward it for exceeding them. You can improve the odds of success by setting fewer key results (never more than 4).
  3. Finalise the goals: Team leads share draft goals and review them with their managers, relevant stakeholders and leadership. Adjust goals through discussion and negotiation - but always through mutual agreement. The final review is with company leadership. The process is therefore both top-down and bottom-up: at least 60% of KRs should be invented bottom-up.

Ideas

Most ideas fail to improve key metrics and are simply not worth doing. Don’t make the classic mistake of letting management pick a handful of ideas. You’re placing bets too early, and creating a culture of internal lobbying and salespeople. Experience, expertise, and rank don’t improve the odds of success.

When most ideas are bad and when no one can tell which are the good ones, withholding judgment and putting multiple ideas to the test significantly raises the odds of success.

  • Generate more and better ideas through research.
  • Use evidence to pick ideas in an objective and consistent way

Using evidence to pick the best ideas:

  1. Evaluate multiple ideas quickly and objectively using ICE scoring (Impact, Confidence, Ease).
  2. Pick some ideas to test. Use judgment and evidence to make a prioritisation call.
  3. Validate the chosen ideas. Test the assumptions embedded in each idea (using Steps).
  4. Re-evaluate your ideas in light of the evidence you found. Park weak ones, develop and further test promising ones.
  5. Build and launch. Switch from product discovery to product delivery when you have confidence in an idea.

When defining quarterly goals, pick a working-set of 3-5 ideas to test per key result from the candidate list or generate them on-demand through research and ideation. Ideas in your working-set will be tested and launched (if they show good evidence).

Spend just a few minutes evaluating each new idea during the initial triage and go deeper when evaluating your candidate ideas (for example by collecting data, conducting reviews, and developing models).

ICE Scores: Assign a numeric score from 0–10 for each of the following: Impact, Confidence, and Ease. Confidence should stem from supporting evidence. The Confidence Meter lists common types of evidence you may find and the level of confidence they provide.

Individuals and teams tend to be overly optimistic about future projects and tasks, underestimating time, costs, and risks, and at the same time overestimating the benefits:

image
  • Thematic support: aligns with vision / strategy, outside research, market trends
  • Estimates & plans: back of the envelope calculations, feasibility estimation etc
  • Anecdotal Evidence: a few product data points, sales requests, interested customers
  • Market data: supported by surveys, smoke tests, competitor offerings
  • User/Customer Evidence: lots of product data, 20+ user interviews, usability study, MVP
  • Test Results: Longitudinal user studies, large-scale MVP, alpha/beta, A/B experiments

Steps

By testing ideas and analysing the results, we can learn whether our assumptions are true and make evidence-guided decisions. To get the best chance of success we must test early and often.

image

Steps are activities or mini-projects designed to test ideas. Steps can be as simple as generating projections in a spreadsheet, or as complex as running a full beta.

Each step gives you supporting evidence, directions for improvement, and a more complete version of the feature. Step by step you’ll gain more confidence in the idea and be willing to invest more.

Steps help you discover your mistakes early when it’s still easy and cheap to fix them.

Make steps as short and minimal as possible. As we gain confidence in the idea we’ll be willing to invest more, so late steps are typically longer. Recalculating ICE scores after each step can help you communicate how your confidence is changing over time.

Validated Learning is the combination of testing, evidence, and judgment that helps you home in on the right solution.

  • Smaller and less risky ideas can be processed and tested quickly.
  • Stagger investment into bigger more costly ideas. Start with cheap tests and invest more only if you find supporting evidence.
  • Follow the rule: Evidence → Confidence → Investment.

Choosing Steps

It’s common to discover in hindsight that an important assumption isn’t true.

Validation methods can be grouped into five categories: from the cheapest and least accurate, to the most costly and rigorous:

  • Assessment: Goals Alignment, Business Modelling, ICE Analysis, Assumption Mapping, Stakeholder Reviews
  • Fact-Finding: Data Analysis, Surveys, Competitive Analysis, User Interviews, Field Research
  • Tests: Smoke Tests, Wizard of Oz, Concierge Test, Usability Test
  • Experiments: Multivariate, A/B/n Test, A/B Test, Longitudinal
  • Release: Post-launch, Holdback, % Launch, Early Adopters, Alpha, Fishfood, Labs, Beta, Preview, Dogfood

You’ll save so much time. Fewer project plans and requirements. The team always has a motivating short-term launch goal. You’ll learn faster and you’ll reduce the time you spend on bad ideas.

When your idea creates no measurable improvement (as happens most of the time), or the results are unclear, assume that the idea is not working.

Progress feels super fast when you’re traversing the steps with multiple ideas.

Combining execution and learning isn’t easy but you’ll be able to innovate at a much faster rate if you do.

Tasks

Build a shared view of the world (Goals → Ideas → Steps → Tasks). Tasks are the day-to-day activities, managed through Scrum or Kanban. Connect each task through the GIST stack to the goals of the team and the company. This helps empower the team with context for decision making. There should be no hidden projects - make sure all work is represented in tasks.

Create a GIST Board to show the relationship between Goals, Ideas, Steps and Tasks.

image

Review the board weekly with the trio (product, design, engineering). Assess progress on goals, review the ideas you’re pursuing, and discuss whether they’re still the most promising. Review the status of each step. What did we learn? Discuss and agree any changes to the plan.

Choosing Ideas

  • Ask: how do we best achieve the quarterly goals? Everyone should be able to explain clearly (ideally with metrics) why these ideas are the most important. Optimise for achieving the goals, not for launching specific ideas.
  • The team is choosing which ideas to test first, not which ideas to build and launch.
  • Most ideas will fail, so expect to explore many of them.
  • The team uses evidence, not opinions, to make decisions
  • Say “no” to ideas that fall outside the goals.

Choosing Steps

  • The job of steps is to both move the development of the idea forward and to validate the core assumptions that underlie it.
  • Steps can be executed in parallel - sometimes we will use one step to test multiple ideas.

Context, Not Requirements

Shift the definition of team success from pushing code to achieving the team goals.

If you make sure team members understand the context (users and their needs, business rationale, competitive situation etc) then it eliminates the need to spoon-feed them with bite-sized, detailed requirements.

Regular GIST board reviews foster shared understanding. Teams explain goal and idea choices, discuss hypotheses, assumptions, and evidence, and identify validation needs. This process ensures everyone grasps the rationale behind decisions.

Planning and Executing Steps

  • Test assumptions: Define key hypotheses to validate.
  • Target audience: Specify who will participate in the test.
  • Test method: Outline the approach and tools for conducting the test.
  • Metrics: Be clear on what you need to measure
  • Success criteria: Set clear, measurable targets for the test outcomes.
  • Agree on a hypothesis statement if it helps:
    • ‘We believe that [doing this], for [this target group], will achieve [this benefit]. We’ll have reason to believe we are right when we see [this measurable result].’

GIST can help you get everyone on the same page and reach agreement with managers, stakeholders, and the team.

  • Developers no longer focus on tasks; they aim to accomplish steps that test ideas that help achieve goals - all of which they helped define.
  • Managers and stakeholders benefit from understanding what the product team is trying to accomplish through goals and have visibility into the list of ideas the team is considering.
image

Deep Summary

Longer form notes, typically condensed, reworded and de-duplicated.

Introduction

As a Product Manager, how do you know you’re working on the right things?

  • PdMs in evidence-guided companies can share research, test results, and learnings in support of their ideas
  • PdMs in opinion-based companies will tell you the plan was conceived, reviewed, and approved by smart and experienced people and derived from some bigger plan, and is backed by irrefutable logic.

Evidence-guided companies understand how difficult it is to change outcomes and do everything they can to increase their odds.

An evidence-guided approach improves resource efficiency, reduces planning time, suppresses politics, builds trust, empowers people and teams, and delivers value to the business and to customers faster.

An Evidence-Guided Scorecard: how evidence-guided are you?

Goals

  • We identify and measure impact (value delivered and value captured) using a very small set of top-level metrics.
  • We map out corresponding submetrics and know how they are interconnected.
  • We express goals as outcomes (measurable improvements) rather than outputs.
  • Teams define their own team level goals.
  • Goals are aligned top-down, bottom-up and across our company.

Ideas

  • We consistently collect and evaluate ideas no matter where they come from.
  • Each team transparently manages a list of ideas.
  • Ideas are selected based on impact, ease, and supporting evidence (confidence).

Steps

  • Ideas are validated by tests, experiments, or releases prior to fully launching.
  • We re-evaluate ideas based on test results.
  • Ideas that don’t produce supporting evidence are modified or parked.

Tasks

  • The team are involved in defining goals, ideas, and validation steps.
  • Teams regularly review the status of goals, ideas, and steps.
  • Tasks are associated with one or more discovery or delivery steps.

Chapter 1: From Opinions to Evidence

  • We need a system that acknowledges uncertainty and works to improve the ratio of success versus failure.

The GIST Model is a meta-framework that combines tried-and-tested product methodologies:

  • Goals define the outcomes we wish to achieve (measurable benefits for customers and for the business).
  • Ideas are hypothetical ways to achieve the goals.
  • Steps are short projects or activities that help us develop and test an idea and gather supporting evidence (often without coding)
  • Tasks are work items that go into doing the work (managed by your existing Scrum or Kanban process)

Output goals are harmful because most ideas yield little or no value, and betting big on unproven ideas is likely to produce waste.

A key component of this evaluation is Confidence, which measures the strength of the evidence in support of an idea.

image
  • If we don’t seek to gather evidence and instead follow the plan-and-execute approach, we’re forced to commit when levels of uncertainty are high. Given that only a minority of ideas create measurable impact, and a high ratio of launched features and products get little or no use, this is a bad bet.
  • Rely on opinions, sparse data, consensus, and experience and your track record will be abysmal.
  • Evidence-guided companies improve the odds of success by supercharging human judgment with evidence. They continuously evaluate and test ideas and update the plans based on what they learn.
  • The GIST model (Goals, Ideas, Steps, Tasks) helps you follow best practice for goal-setting, idea evaluation, experimentation, and execution.

Chapter 2: Goals

Don’t debate ideas without first agreeing on the goals (what you’re trying to achieve). Instead define firm measurable goals that state the positive change you wish to create (outcomes) first. Empower teams to discover the best way to achieve the goals and to say no to requests that pull away from them. This approach is much more robust in the face of uncertainty.

Evidence-Guided Goals are best expressed as OKRs:

  • Objective: describes the desired end state or the direction of change. They are aspirational and inspiring, but not necessarily measurable or even fully feasible.
  • Key results: 2–5 measurable targets that define what success is in the current goal cycle. Where possible, define both the current value and the target (e.g. grow the ratio of content creators with more than 10,000 followers from 8.5% to 10%).
  • Context: a short explanation as to why you think this goal is important, what relevant evidence you’ve found, and any other relevant info.

The OKR paints a clear picture of what the team is trying to achieve.

Example OKR:

  • Objective: All customers onboard quickly and successfully
  • KR1: Reduce average onboarding time from 30 days to 4 days // Why: Many potential customers end the trial period before they fully onboard
  • KR2: Increase onboarding completion rate from 72% to 80%
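
As a rough illustration of how structured this format is, the example above could be captured as data. The sketch below is my own and purely hypothetical (class and field names are not from the book):

```python
from dataclasses import dataclass, field

@dataclass
class KeyResult:
    metric: str          # what we measure
    baseline: float      # current value
    target: float        # value we aim for this goal cycle
    why: str = ""        # optional context / supporting evidence

@dataclass
class OKR:
    objective: str                                        # desired end state or direction of change
    key_results: list[KeyResult] = field(default_factory=list)
    context: str = ""                                     # why this goal matters

onboarding_okr = OKR(
    objective="All customers onboard quickly and successfully",
    key_results=[
        KeyResult("Average onboarding time (days)", baseline=30, target=4,
                  why="Many potential customers end the trial before they fully onboard"),
        KeyResult("Onboarding completion rate (%)", baseline=72, target=80),
    ],
)
```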

Product ideas aren’t included in OKRs - keeping teams focused on the outcome and giving them freedom to change how they get there.

Evidence-guided companies derive outcomes and priorities from models (funnels, flywheels, and user journeys).

Express the value exchange loop between your customers and your business with two clear measurable metrics:

  • The North Star Metric measures how much value we deliver
  • The Top Business Metric measures how much value we capture

Examples of North Star Metrics:

  • WhatsApp—Messages sent per month
  • YouTube—Minutes watched per month
  • Airbnb—Nights booked per month
  • HubSpot—Weekly active teams

To identify your North Star Metric ask “Why are customers using my product? What is the core value they seek?”

A North Star Metric should:

  • Be as close as possible to the core value experience
  • Be an aggregate number, not a rate or ratio (it should sum up the value across the entire market)
  • Be simple and memorable

Mission Statements are important, but are often too abstract and high level to guide people’s daily work. To get everyone to understand what top management considers success try taking a slice of the mission as a yearly company objective and adding the two main value exchange metrics as KRs. For Example:

  • Objective: Help enterprise employees express themselves better with smart docs (Mission)
  • KR1: Grow number of docs created per month from 100K to 250K (+150%) (North Star)
  • KR2 : Grow yearly revenue from $1.1M to $2M (+81%) (Top Business Metric)

Product folk tend to focus on value delivery, business people tend to focus on value capture, but the collective mission is to achieve both.

Metrics Trees

The North Star and Top Business Metric are typically affected by many factors and are slow to reflect changes - they are lagging indicators. It’s helpful to break them into submetrics which you can influence more readily. The overlap between the metrics trees shows the connection between value creation and value capture.

image

Decomposing metrics can help provide useful insights. Start at the top of the tree and create a general model before you go deep. Ask at each level: “What changes in human or system behaviour can cause the desired outcome?”

The metrics tree (or graph) shows the connections between the most important metrics at the top and the most actionable metrics at the bottom. Ideally each product team “owns” a consistent set of metrics at the bottom of the tree, while metrics higher up the tree are the responsibility of mid-level and senior management teams.

Some teams will define one key metric (called the Local North Star) which they continuously try to improve on.
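
To make the tree idea concrete, here is a minimal sketch of a metrics tree as a parent-child data structure. The metric names are hypothetical and the code is illustrative, not something from the book:

```python
from dataclasses import dataclass, field

@dataclass
class Metric:
    name: str
    children: list["Metric"] = field(default_factory=list)  # sub-metrics that drive this metric

# Hypothetical tree: the lagging North Star at the top,
# more actionable sub-metrics (candidates for team-level KRs) at the bottom.
north_star = Metric("Docs created per month", children=[
    Metric("Active doc creators", children=[
        Metric("New creators activated per week"),
        Metric("Creator 4-week retention"),
    ]),
    Metric("Docs created per active creator", children=[
        Metric("Template usage rate"),
        Metric("Docs started from mobile"),
    ]),
])

def leaves(metric: Metric) -> list[str]:
    """Return the most actionable metrics (the leaves of the tree)."""
    if not metric.children:
        return [metric.name]
    return [name for child in metric.children for name in leaves(child)]

print(leaves(north_star))
```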

Not everything can or needs to be captured in your metrics trees. Add supplementary goals to keep your company and products healthy - addressing technical debt, user privacy, culture, security, etc.

Targets for the North Star Metric and the Top Business Metric already exist - but OKRs are more specific and explain how the company will achieve them.

Three Step Alignment Process:

  1. Set yearly top-down goals: Leadership publishes draft OKRs (3–6 objectives, each with 2–5 key results) - the fewer the better. The OKRs explain where the company should be by the end of the year. The choice of goals is influenced by the company’s strategy, values, and the opportunities and threats ahead. If leaders don’t have enough information to set KR values, they leave the values blank. Leaders should solicit feedback on their OKRs.
  2. Set bottom-up goals quarterly: Leaders ask reports to propose OKRs of their own. A company of 500 people should have just two levels of goals (company level and team level); larger orgs may need more tiers. Product team goals should be set by the product manager, tech lead and design lead. Each team should adapt the OKRs to its area of responsibility (if it makes sense to do so). Pick key results that are leading sub-metrics (if unsure, conduct research). Set target values for key results based on baselines and trends, aiming for ambitious-yet-achievable. Don’t punish the team for missing its targets or reward it for exceeding them. You can improve the odds of success by setting fewer key results (never more than 4).
  3. Finalise the goals: Team leads share draft goals and review them with their managers, relevant stakeholders and leadership. Adjust goals through discussion and negotiation - but always through mutual agreement. The final review is with company leadership. The process is therefore both top-down and bottom-up: at least 60% of KRs should be invented bottom-up.
Education OKR Example

If given an OKR with output goals from your manager - interview them to get to the real goal. What are we trying to achieve? What metrics would improve if we’re successful? Why is this a top-priority goal?

Outcomes-Based leadership allows organisations to achieve more, with clarity of mission, strategy, goals, and measurement of progress.

Chapter 3: Ideas

Choosing which ideas to build is one of the trickiest and interesting parts of product management.

Most ideas fail to improve key metrics (only about 1/3 succeed at Microsoft, 1/10 at Google, 3/10 at Slack, and 3/10 at Netflix) and are simply not worth doing.

Don’t make the classic mistake of letting management pick a handful of ideas. You’re placing bets too early, and creating a culture of internal lobbying and salespeople. Experience, expertise, and rank don’t improve the odds of success.

When most ideas are bad and when no one can tell which are the good ones, withholding judgment and putting multiple ideas to the test significantly raises the odds of success.

  • Generate more and better ideas through research.
  • Use evidence to pick ideas in an objective and consistent way

Using evidence to pick the best ideas:

  1. Evaluate multiple ideas quickly and objectively using ICE scoring (Impact, Confidence, Ease).
  2. Pick some ideas to test. Use judgment and evidence to make a prioritisation call.
  3. Validate the chosen ideas. Test the assumptions embedded in each idea (using Steps).
  4. Re-evaluate your ideas in light of the evidence you found. Park weak ones, develop and further test promising ones.
  5. Build and launch. Switch from product discovery to product delivery when you have confidence in an idea.

Product teams should manage their own idea bank transparently. All ideas should be considered - most should be parked. Keep a smaller list (< 40) of candidates: the ideas you wish to investigate further.

When defining quarterly goals, pick a working-set of 3-5 ideas to test per key result from the candidate list or generate them on-demand through research and ideation. Ideas in your working-set will be tested and launched (if they show good evidence).

Spend just a few minutes evaluating each new idea during the initial triage and go deeper when evaluating your candidate ideas (for example by collecting data, conducting reviews, and developing models).

ICE Scores: Each idea is assigned three numeric attributes: Impact, Confidence, and Ease, each in the range 0–10. A fourth value holds the ICE score, which is simply the three values multiplied together: ICE Score = Impact * Confidence * Ease.

  • ICE facilitates a structured discussion focused on how much the idea is going to contribute to the goals (Impact), how easy it is going to be to implement (Ease), and how sure we are that these projections will come true (Confidence).
  • Impact is often the most uncertain. Use guesstimates (based on intuition, experience, and logic), comparisons with past ideas, fact-finding, back-of-the-envelope calculations, simulations, or tests and experiments to assign an impact value.
  • Ease is an estimation of how hard or easy it is going to be to implement the idea in full. Ease is typically the inverse of effort (person-weeks). Engineering and design work is often the scarcest and most expensive resource, so that’s what we refer to. Consider how long it would take to build the whole idea.
  • Confidence asks “How sure are we that the idea will have the expected Impact and the projected Ease?” Lower confidence scores indicate lower certainty; 10 would be absolute conviction. Confidence stems from supporting evidence, and evidence types aren’t all created equal. The Confidence Meter lists common types of evidence you may find and the level of confidence they provide (see the sketches below).
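
A minimal sketch of ICE scoring and ranking, assuming the 0–10 scales described above. The idea names and scores are made up:

```python
from dataclasses import dataclass

@dataclass
class Idea:
    name: str
    impact: float      # 0-10: expected contribution to the goal
    confidence: float  # 0-10: strength of supporting evidence
    ease: float        # 0-10: inverse of estimated effort

    @property
    def ice(self) -> float:
        # ICE Score = Impact * Confidence * Ease
        return self.impact * self.confidence * self.ease

ideas = [
    Idea("Onboarding checklist", impact=6, confidence=4, ease=7),
    Idea("AI-assisted setup wizard", impact=9, confidence=2, ease=3),
    Idea("Email nudges for stalled trials", impact=4, confidence=6, ease=8),
]

# Rank the working set by ICE score; re-score as new evidence arrives.
for idea in sorted(ideas, key=lambda i: i.ice, reverse=True):
    print(f"{idea.name}: ICE = {idea.ice:.0f}")
```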

Individuals and teams tend to be overly optimistic about future projects and tasks, underestimating time, costs, and risks, and at the same time overestimating the benefits.

image
  • Thematic support: aligns with vision / strategy, outside research, market trends
  • Estimates & plans: back of the envelope calculations, feasibility estimation etc
  • Anecdotal Evidence: a few product data points, sales requests, interested customers
  • Market data: supported by surveys, smoke tests, competitor offerings
  • User/Customer Evidence: lots of product data, 20+ user interviews, usability study, MVP
  • Test Results: Longitudinal user studies, large-scale MVP, alpha/beta, A/B experiments
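
One way to keep Confidence scoring consistent across a team is a shared lookup from evidence tier to score. The sketch below is an assumption: the tier names follow the list above, but the numeric values are illustrative placeholders, not the author’s calibrated Confidence Meter values:

```python
# Illustrative mapping from evidence tier to a Confidence score (0-10).
# Tier ordering follows the list above; the numeric values are assumptions.
CONFIDENCE_METER = {
    "thematic_support": 0.5,       # aligns with vision/strategy, market trends
    "estimates_and_plans": 1,      # back-of-the-envelope calculations, feasibility
    "anecdotal_evidence": 2,       # a few data points, sales requests
    "market_data": 4,              # surveys, smoke tests, competitor offerings
    "user_customer_evidence": 6,   # lots of product data, 20+ interviews, MVP
    "test_results": 9,             # longitudinal studies, alpha/beta, A/B experiments
}

def confidence_for(evidence_tiers: list[str]) -> float:
    """Score Confidence from the strongest tier of evidence gathered so far."""
    return max((CONFIDENCE_METER[tier] for tier in evidence_tiers), default=0)

print(confidence_for(["estimates_and_plans", "market_data"]))  # -> 4
```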

Chapter 4: Steps

  • Relying on opinions and intuition leaves us exposed to false positives and false negatives. The antidote is learning. By testing ideas and analysing the results, we can learn whether our assumptions are true and make evidence-guided decisions.
  • The ‘launch-and-iterate’ approach pushes testing to the end. Once we’ve committed, invested, and launched, our ability to abandon the idea or make major corrections is limited.
  • To get the best chance of success we must test early and often.

Steps: The Discovery Engine of GIST

image
  • Steps are activities or mini-projects designed to test ideas. Steps can be as simple as generating projections in a spreadsheet, or as complex as running a full beta.
  • Each step gives you supporting evidence, directions for improvement, and a more complete version of the feature.
  • Step by step you’ll gain more confidence in the idea and be willing to invest more.
  • Progression isn’t linear - you might have to take a step backwards.
  • Steps help you discover your mistakes early when it’s still easy and cheap to fix them.
  • Make steps as short and minimal as possible. As we gain confidence in the idea we’ll be willing to invest more, so late steps are typically longer.
  • Recalculating ICE scores after each step can help you communicate how your confidence is changing over time.

Validated Learning is the combination of testing, evidence, and judgment that helps you home in on the right solution.

  • Smaller and less risky ideas can be processed and tested quickly.
  • Stagger investment into bigger more costly ideas. Start with cheap tests and invest more only if you find supporting evidence.
  • Follow the rule: Evidence → Confidence → Investment.

Choosing Steps

Steps are there to validate assumptions or refute the risks. Marty Cagan lists four areas of assumptions/risks:

  • Value—Is it something that the customers need? Would it justify the cost?
  • Usability—Can they learn how to use it? Does it fit in their lives?
  • Feasibility—Can we build it within reasonable time and cost? Is the technology ready?
  • Viability—Does the idea make business sense? Is it congruent with our existing business?

It’s quite common to discover in hindsight that an important assumption (for example that people need another social network) isn’t true.

Use David J. Bland’s Assumption Mapping technique for larger ideas.

Validation methods can be grouped into five categories: from the cheapest and least accurate, to the most costly and rigorous:

  • Assessment: Goals Alignment, Business Modelling, ICE Analysis, Assumption Mapping, Stakeholder Reviews
  • Fact-Finding: Data Analysis, Surveys, Competitive Analysis, User Interviews, Field Research
  • Tests: Smoke Tests, Wizard of Oz, Concierge Test, Usability Test
  • Experiments: Multivariate, A/B/n Test, A/B Test, Longitudinal
  • Release: Post-launch, Holdback, % Launch, Early Adopters, Alpha, Fishfood, Labs, Beta, Preview, Dogfood
  • Every product idea in any company can be tested

Notes:

  • You’ll save so much time. Fewer project plans and requirements. The team always has a motivating short-term launch goal. You’ll learn faster and you’ll reduce the time you spend on bad ideas.
  • When your idea creates no measurable improvement (as happens most of the time), or the results are unclear, assume that the idea is not working. Involve more people in this discussion to reduce your bias.
  • Progress feels super fast when you’re traversing the steps with multiple ideas.
  • Combining execution and learning isn’t easy but you’ll be able to innovate at a much faster rate if you do.

Chapter 5: Tasks

  • Many engineers and designers find themselves disconnected from the business and focused on delivering outputs according to a schedule.
  • Product management can become sandwiched between waterfall planning and agile execution, having to make the two systems work together.
  • Instead build a shared view of the world (Goals → Ideas → Steps → Tasks).
  • Tasks are the day-to-day activities, managed through Scrum or Kanban.
  • Connect each task through the GIST stack to the goals of the team and the company. This helps empower the team with context for decision making.
  • There should be no hidden projects - make sure all work is represented in tasks.
  • Step backlogs get the team thinking in days-long or weeks-long mini-projects with their own “launch” to a well-defined set of users.
  • Create a GIST Board to show the relationship between Goals, Ideas, Steps and Tasks.
image
  • Goals on the left, ideas in the middle and steps on the right.
  • Create a new board at the beginning of each goal cycle.
  • There are usually too many tasks to show.
  • Have each team create their own GIST board.
  • Review the board weekly with the trio (product, design, engineering). Assess progress on goals, review the ideas you’re pursuing, and discuss whether they’re still the most promising. Review the status of each step. What did we learn? Discuss and agree any changes to the plan.
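
A minimal sketch of the structure behind a GIST board as described above: goals link to ideas, ideas to steps, steps to tasks. All names and values are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    title: str
    done: bool = False

@dataclass
class Step:
    name: str                                   # e.g. "Smoke test", "Usability test"
    learning: str = ""                          # what the step taught us
    tasks: list[Task] = field(default_factory=list)

@dataclass
class Idea:
    name: str
    ice: float = 0.0                            # latest ICE score
    steps: list[Step] = field(default_factory=list)

@dataclass
class Goal:
    key_result: str                             # the outcome this column of the board serves
    ideas: list[Idea] = field(default_factory=list)

board = [
    Goal("Increase onboarding completion rate from 72% to 80%", ideas=[
        Idea("Onboarding checklist", ice=168, steps=[
            Step("Usability test", tasks=[
                Task("Recruit 5 trial users"),
                Task("Build clickable prototype"),
            ]),
        ]),
    ]),
]
```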

Choosing Ideas

  • Ask: how do we best achieve the quarterly goals? Everyone should be able to explain clearly (ideally with metrics) why these ideas are the most important.
  • The team should optimise for achieving the goals, not for launching specific ideas.
  • The team is choosing which ideas to test first, not which ideas to build and launch.
  • Most ideas will fail, so expect to explore many of them.
  • The team uses evidence, not opinions, to make decisions
  • Say “no” to ideas that fall outside the goals.

Choosing Steps

  • The job of steps is to both move the development of the idea forward and to validate the core assumptions that underlie it.
  • Steps can be executed in parallel - sometimes we will use one step to test multiple ideas.

Context, Not Requirements

  • Get developers contributing to product discovery and product delivery if you can.
  • Shift the definition of team success from pushing code to achieving the team goals.
  • If you make sure team members understand the context (users and their needs, business rationale, competitive situation etc) then it eliminates the need to spoon-feed them with bite-sized, detailed requirements.
  • Shared documents aren’t shared understanding → write something, but use it to have a productive conversation.
  • Regular GIST board reviews foster shared understanding. Teams explain goal and idea choices, discuss hypotheses, assumptions, and evidence, and identify validation needs. This process ensures everyone grasps the rationale behind decisions.

Planning and Executing Steps

  • Test assumptions: Define key hypotheses to validate.
  • Target audience: Specify who will participate in the test.
  • Test method: Outline the approach and tools for conducting the test.
  • Metrics: Be clear on what you need to measure
  • Success criteria: Set clear, measurable targets for the test outcomes.
  • Agree on a hypothesis statement if it helps (see the sketch below):
    • ‘We believe that [doing this], for [this target group], will achieve [this benefit]. We’ll have reason to believe we are right when we see [this measurable result].’
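
The hypothesis statement is essentially a fill-in-the-blanks template; a trivial helper like this hypothetical one can keep step write-ups consistent:

```python
# Hypothetical helper: fills the hypothesis statement template from the book.
HYPOTHESIS_TEMPLATE = (
    "We believe that {action}, for {target_group}, will achieve {benefit}. "
    "We'll have reason to believe we are right when we see {measurable_result}."
)

print(HYPOTHESIS_TEMPLATE.format(
    action="adding an onboarding checklist",
    target_group="new trial accounts",
    benefit="faster time-to-first-document",
    measurable_result="onboarding completion rising from 72% to at least 76% in the test group",
))
```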

GIST can help you get everyone on the same page and reach agreement with managers, stakeholders, and the team.

  • Developers no longer focus on tasks; they aim to accomplish steps that test ideas that help achieve goals - all of which they helped define.
  • Managers and stakeholders benefit from understanding what the product team is trying to accomplish through goals and have visibility into the list of ideas the team is considering.

Chapter 6: The Evidence-Guided Company

How it all fits into organisational strategy:

  • Actively seek strategic opportunities—market segments with clear, strong needs, where the company can potentially step in and create high customer value and a viable business.
  • Opportunities identified through research are vetted against the company’s business strategy. Promising opportunities are assigned to a cross-functional strategy squad who quickly size up and validate the opportunity. The findings are reported back within weeks to the executive team with recommendations.
  • Most opportunities turn out to be less meaningful than they first seem - but some yield clear supporting evidence. In that case a small product team may be created with a charter of discovering product ideas with strong product/market fit potential and high business upside.
  • The team uses the GIST model to define measurable goals, generate and prioritise ideas, do further research, and validate them through steps.

Chapter 7: Scaling GIST

  • Scaling GIST at Startups:
    • Goals focus the team on the most important achievements: first finding product/market fit and then establishing a scalable business model.
    • Start with a small set of ideas, but let the list grow as you learn more through research and testing. Keep just a single idea bank for the entire startup until a late stage.
    • Learn fast and at very low cost. Start by doing research, assessment, and fact-finding, then move to early-stage tests, before committing resources to build and test their ideas.
  • Scaling GIST at Scale-ups:
    • Use goals to help align and focus the company - it becomes more important once you’re big enough to pursue multiple things.
    • The North Star Metric, Top Business Metric, and their metrics trees are important tools at this stage for alignment and setting of priorities.
    • Establish product teams with clear missions and areas of responsibility. Get the trio to set team goals and create a GIST board.
    • Operate as strategically aligned but loosely coupled teams, and practice thinking big but starting small.
  • Scaling GIST at large companies:
    • Have clear business units or product areas with their own dedicated product and business teams.
    • OKRs help steer large companies - keep them as small as possible because there are so many of them across the org.
    • Avoid saying yes to big bets without detecting and validating opportunities and ideas first.
    • Dependencies and legacy code will slow the rate of development and limit the number of ideas you can go after. Set goals to mitigate these challenges.
    • Good, transparent idea prioritisation that factors in evidence becomes crucial.

Chapter 8: GIST Patterns

  • To benefit from evidence-guided development companies need to create product teams, rather than feature teams or delivery teams.
  • Customer requests should be evaluated using ICE.
  • B2B product teams can test assumptions and product ideas using interviews, early adopter programs, data analysis, concierge tests and pilots.
  • GIST is easier for B2C companies to adopt, but some fall into common traps:
    • Skipping testing for cheap ideas
    • Not conducting qualitative research
    • Testing too slowly
  • Internal platform and services teams can easily adopt evidence-guided approaches - they face fewer demands and have in-house users they can observe and test with. Work as service providers and focus on the goals and metrics of your internal customers, rather than the company's customers.

Chapter 9: Adopting GIST

  • Transparency, discussing evidence, and clear rules of operation can help product teams gradually earn trust from managers and stakeholders.
  • If you face resistance to discovery show the waste in the current process.
  • Use the scorecard if leaders think they’re already working in an evidence-guided mode.
  • Aim for gradual adoption rather than a big-bang.
  • Start with the layer where the biggest pain currently lives: Goals, Ideas, Steps, or Tasks.
  • Get an executive sponsor who will make adoption of evidence-guided thinking a priority with the executive teams, and will create the space and allocate the resources needed for adoption.
  • Create a steering group that will facilitate and support the change, measure progress, and report back to management.