Jez Humble, Barry O'Reilly, Joanne Molesky
Review
This was a super ambitious book, and it suffers from taking on too much. There's no denying that it's a collection of great ideas, and the approach is coherent, but it doesn't feel like a single cohesive methodology.
Key Takeaways
The 20% that gave me 80% of the value.
This book explains how to build organisations that innovate quickly. The aim is to respond fast to shifting markets, customer needs, and new technology. Software and data now sit at the centre of most products and services. Old, project‑heavy management models don’t fit this pace. A product‑centric, experiment‑driven model does.
An enterprise is a complex, adaptive system of people with a shared purpose. Executives own that purpose. They align strategy and culture to achieve it, and they keep both evolving with the environment. Shareholder value is an outcome, not a strategy. Long‑term success comes from focusing on employees, customers, and products.
Culture is the engine. High-trust, generative cultures share information, surface problems early, and learn from failure. Mission Command replaces command-and-control: leaders state intent and constraints; teams decide how. This closes three gaps that slow large organisations (the knowledge, alignment, and effects gaps) by using clear goals and fast feedback instead of detail and oversight.
Manage your portfolio as two different games. Exploration searches for new business models. It uses small teams, rapid experiments, and a high tolerance for failure. Exploitation scales proven models with quality, reliability, and incremental improvement. Balance across three horizons: today’s core, tomorrow’s growth engines, and future options. Place many small bets, protect the fragile middle, and prioritise with simple principles like cost of delay.
Exploration combines design thinking with the Lean Startup approach. Define the problem and the outcomes you seek. Make assumptions explicit. Build the smallest viable test. Measure with real users. Learn and then persevere, pivot, or stop. Use the ‘One Metric That Matters’ to keep focus. Apply innovation accounting, cohort analysis, and funnel metrics to replace opinion and vanity numbers. Early on, do things that don’t scale to learn fast.
Execution at scale demands flow. Use the Improvement Kata to set short‑horizon target conditions and run rapid PDCA cycles. Map value streams to see where work waits and where quality fails upstream. Limit work‑in‑process, create pull, and shorten feedback loops. Prioritise with Cost of Delay and favour small batches that finish. Measure lead time, release frequency, time to restore service, and change fail rates.
Continuous delivery is the engineering backbone. Put everything in version control, build a single deployment pipeline, and keep the trunk releasable. Integrate daily, revert fast, and automate tests that give quick, reliable feedback. Decouple deployment from release with feature flags, blue‑green and canary techniques, and dark launches. Prove impact with A/B tests; most ideas won’t help, so make it cheap and safe to learn that early.
Structure teams and systems for autonomy and speed. Make small, cross‑functional teams the unit of delivery, each accountable for a clear outcome. Align architecture to teams so most changes touch one service. “You build it, you run it” ties autonomy to responsibility. Provide a self‑service platform so teams can create environments and deploy on demand. Evolve legacy systems with the strangler pattern instead of big‑bang rewrites.
Transformation must reach beyond product teams. Model and measure culture; shift managers from Theory X to Theory Y; hire and grow for learning ability and mindset. Make it safe to fail with blameless postmortems and concrete follow‑ups. Reduce hidden bias by auditing pay and promotion, setting diverse candidate targets, and tracking satisfaction and advancement by demographic. Invest in people: simple development plans, frequent feedback, easy access to training funds, and protected time for exploration.
Treat governance, risk, and compliance as part of the value stream. Start from intent, not rituals. Move approvals and decisions to the lowest sensible level within clear guardrails. Prefer “trust and verify” over blanket prevention; embed detective controls, monitoring, and automated evidence in everyday tools and pipelines. Contain regulatory blast radius through architecture. Use compensating controls when rules clash with flow, and prioritise risk work with economics, not fear.
Modernise financial management. Unbundle the annual budget into targets, rolling forecasts, dynamic resource allocation, and performance evaluation. Fund work event-by-event, not once per year. In exploration, give small teams fixed time and spend boundaries; continue based on evidence. In exploitation, scale funding as outcomes improve and let efficient teams reinvest. Run the business on products, not projects, with simple P&Ls tied to stable teams. Use activity-based thinking for "good enough" visibility. Let CapEx/OpEx classification follow decisions, not drive them. Update procurement to short, outcome-based contracts, incremental delivery, and real competition.
Reframe technology as a competitive advantage, not a cost centre. Data from high performers shows speed and stability rise together when teams version‑control everything, integrate small changes frequently, design for observability, and operate what they build. Change advisory boards that sit outside teams slow throughput with little stability gain; lightweight peer review works better. Cloud matters because self‑service kills ticket queues. If you build your own platform, treat it like a product and prove it improves flow. Practise disaster recovery with real failure injection and blameless learning.
Real case studies show these ideas can work even in government. Start small, deliver value early, automate relentlessly, and govern by principles that don’t slow delivery. Grow by evidence, not by decree.
Begin where you are. Set a clear, inspiring direction. Limit initial scope. Choose growth‑mindset people. Define target outcomes and let teams discover the path through experiments. Show results quickly, share learning openly, and use strategy deployment to align across levels through a collaborative “catchball” process. Keep reviewing and adjusting based on evidence.
The core message is simple. Build a system that learns faster than competitors and turns that learning into customer value. Align on purpose and outcomes. Push authority to the edge with strong guardrails. Design for flow and small batches. Measure what matters. Invest in people. Treat every function as part of the same learning system. Do this in small, disciplined steps, every day.
Deep Summary
Longer form notes, typically condensed, reworded and de-duplicated.
Preface
Software is eating the world. Marc Andreessen
This book is about how to build organisations that innovate quickly. The goal is to respond fast to shifting markets, customer needs, and new technology.
Survival depends on finding new businesses and creating value for customers. Competition is rising. The life span of big companies is shrinking. Long-term success requires mastering the cultural and technical forces that speed up innovation.
Software has empowered customers, but it also empowers businesses to learn from users. Companies that invest in UX and design thinking grow faster. New tools and practices cut the cost of building products. Small teams can ship prototypes in days. They scale what works and discard what doesn't.
Software makes this possible. It's cheap to prototype. You can test ideas with real users early. You learn fast and fold those insights back into the product.
Hardware keeps shrinking while computing power grows. Software now sits at the centre of most products and services. Many firms have brought software back in-house. IT is shifting from a utility to a source of advantage.
Old project models do not fit this pace. Yet they still shape budgeting, governance, and operations. A better, product-centric model has emerged. This book connects its parts and ties them to the culture that enables high performance.
The methods are not new, but adoption is uneven. Piecemeal change creates local wins and system-wide friction. Too many products still miss the mark at high cost.
Used well, these practices deliver faster releases, higher quality, and happier teams. Change stalls when only one part of the organisation shifts. So we also address finance, governance, risk, architecture, and portfolio management.
We provide patterns and principles, not rigid rules. Every organisation is different. Experiment, measure, and adapt. This approach takes discipline but reduces risk and shows value sooner.
Don't say, "It can't work here." Expect obstacles. Treat them as experiments. Improvement never ends. Opportunities lie in your products and in how you work and think.
Part 1. Orient
The purpose of an organisation is to enable ordinary human beings to do extraordinary things. Peter Drucker
Shareholder value is the dumbest idea in the world...[it is] a result, not a strategy...Your main constituencies are your employees, your customers, and your products. Jack Welch
We define an enterprise as a complex, adaptive system of people with a common purpose. That includes corporations, nonprofits, and the public sector. A shared purpose, known by everyone, is essential.
Purpose is not vision or mission. Vision states what you aspire to be. Mission states the business you are in. Purpose says what you do for someone else. It puts managers in customers' shoes. Examples: Kellogg aims to nourish families. IAG helps people manage risk and recover from loss. SpaceX exists to make life multiplanetary.
Executives own the purpose. They set strategy to achieve it and grow the culture to support it. Both strategy and culture must evolve with the environment. Leaders guide that evolution and keep them aligned. Done well, the organisation adapts, finds changing customer needs, and stays resilient. That is good governance.
Pure shareholder value is a poor north star. It drives short-term thinking and cuts into long-term capabilities. It discourages bold bets and R&D. It also ignores intangibles and externalities. Research shows that chasing profit alone can even lower returns.
Long-term success comes from building innovation capacity and focusing on employees, customers, and products. Part I shows how to do that.
Chapter 1. Introduction
Systems shape behaviour. In bad systems, good people can do harm without realising. Fix the system, not the people.
NUMMI shows that quality and productivity come from a high-trust system. Frontline workers are empowered to stop the line, surface problems immediately, and collaborate with managers to fix root causes. Continuous improvement is everyone's job.
Taylorism decomposes work into narrow tasks and concentrates authority in planners. The Toyota Production System (TPS) instead optimises the whole, builds quality in, and relies on cross-functional teamwork, rapid feedback, and learning.
Intrinsic motivators (mastery, purpose, autonomy) drive high performance in creative, non-routine work. Extrinsic carrots and sticks reduce performance in such work. Pride in craftsmanship beats bonuses and ratings.
Copying tools without the culture fails. Andon cords don't help if managers are rewarded for output over quality. Status symbols and rigid job hierarchies block teamwork and rotation. Supplier, engineering, and operations silos prevent fixing systemic defects. Urgency matters; without it, transformation stalls.
Lean is a human system. Practices are context-specific countermeasures, not rituals. The core is enabling people closest to the work to solve customer problems in line with strategy, powered by fast, honest information flow.
Information culture predicts safety and performance. Pathological cultures hoard information and punish messengers. Bureaucratic cultures follow the book and protect turf. Generative cultures share risks, train messengers, encourage bridging, and treat failure as a trigger for inquiry. Generative cultures correlate with job satisfaction and superior organisational outcomes.
High trust beats "command and control." Modern militaries use Mission Command (Auftragstaktik): leaders communicate intent and constraints; teams decide how to act locally. Orders are short and general. Freedom within bounds enables speed, adaptation, and coordinated initiative.
Enterprises are complex adaptive systems that suffer from "friction" creating three fundamental gaps. The knowledge gap means we never know enough about the situation. The alignment gap occurs when people don't execute exactly what was planned. The effects gap emerges when outcomes differ from expectations.
Traditional responses of more detailed plans, rules, and reports counterintuitively worsen these gaps by slowing feedback and crushing initiative.
The solution is to close gaps with intent, not detail. Organisations should replace command-and-control with mission-based approaches that limit direction to purpose and desired end-state while giving each level freedom to choose actions within constraints. Higher-level intent must be shared widely across the organisation, and teams should use back-briefs to explain how they'll achieve intent while leaders confirm alignment. This approach follows the Principle of Mission, which sets clear target conditions with time horizons that shrink as uncertainty grows. Teams decide the means while integrating continuously and adjusting based on real-time feedback.

In practice, this transforms key organisational processes. Budgeting shifts from annual locks to regular reviews of high-level objectives with dynamic resource allocation. Program management defines measurable iteration objectives rather than detailed task lists and enables team self-coordination. Process improvement runs iterative experiments owned by the people who actually operate the processes.

The fundamental contrast is clear: where traditional approaches respond to the knowledge gap with more detailed information, intent-based leadership limits direction to defining and communicating intent. Where conventional methods address alignment gaps with more detailed instructions, the new approach communicates higher intent and allows teams to define their approach. Where standard practices counter effects gaps with more detailed controls, directed opportunism gives individuals freedom to adjust actions within bounds. Rather than fighting complexity with more complexity, organisations should embrace directed opportunism that combines clear intent with execution flexibility.
Mission Command requires a supportive culture. Recruit and train for judgement. Expect people to accept responsibility. Leaders must back decisions made in good faith and tolerate honest mistakes. Build a network of trust up, down, and across the hierarchy.
People are the competitive advantage. The durable asset is the capability to learn faster than rivals, innovate continuously, and translate learning into customer value. Software, used well, compresses the learn-build-measure cycle and enables rapid, compounding improvement.
Big companies can be high-performing. Many are, but fewer than among smaller firms. The main barrier is not size, regulation, or legacy tech; it is culture, leadership, and strategy that prioritise control over learning and outcomes.
Shortcuts rarely work. Innovation labs, acquisitions, new methodologies, and reorganisations succeed only when paired with whole-company cultural change, aligned governance, and supplier integration. With a generative culture and mission-based management, the shortcuts aren't needed.
This book proceeds by separating two modes of work:
- Explore (search for value)
- Exploit (scale validated value)
The book shows how to run both while transforming culture, finance, governance, risk, and compliance to support continuous improvement at scale.
Chapter 2. Manage the Dynamics of the Enterprise Portfolio
The purpose of a business is to create a customer. Peter Drucker
Successful ideas diffuse from scarce advantage to commodity. Early adopters gain an edge; later the idea becomes standard and a base for new innovations. Many ideas stall at the "chasm" between early adopters and the early majority.
Product categories pass from rapid growth to maturity and consolidation. Management, funding, and marketing must change by stage. Mature categories emphasise efficiency and incremental improvement; new categories demand discovery and risk.
Exploration and exploitation are different games. Exploration searches for disruptive business models. It uses small, cross-functional teams, fast experiments, high tolerance for failure, and learning as the goal. Exploitation optimises a proven model. It uses coordinated teams, quality and customer satisfaction, incremental innovation, and performance against plan.
- Progress in explore = approaching product/market fit
- Progress in exploit = hitting forecasts and outcomes.
Lean Startup fits the explore domain. Form value and growth hypotheses. Build the smallest test (MVP). Measure with real customers. Learn and decide to persevere, pivot, or stop. Conserve runway by running many cheap experiments quickly.
Use optionality. Place many small bets with capped downside and potential large upside. Treat each experiment as buying information. Prefer resource scarcity and constraints to force simplicity, speed, and focus.
Apply the same approach to internal IT. Avoid big-bang projects with large teams, long roadmaps, and delayed integration. Ship small increments to internal customers, get feedback, and adjust. This reduces complexity, prevents "shadow IT," and improves morale and adoption.
Exploitation often fails under the project paradigm. Most "good" feature ideas don't move the needle; controlled experiments commonly show neutral or negative impact. Big upfront plans plus planning fallacy and scope creep create the large-batch death spiral. Complexity rises, operations suffer, and future work slows. Output is mistaken for outcomes. Hero culture and high utilisation make collaboration slower and block improvement work.
Manage exploit work with lean flow. Define and measure outcomes, not output. Work iteratively toward programme-level objectives. Visualise work, limit Work-In-Progress, and finish before starting more. Reduce lead time with small batches and continuous delivery. Reward system-level collaboration, simplicity, and learning from failures in safe-to-fail systems.
Balance the enterprise portfolio economically. Use a growth/materiality view and a three-horizons model:
- Horizon 1: The current core business. Generates today's cashflow.
- Horizon 2: High-growth businesses. Emerging units that'll form the core business of the future. Today's revenue growth, tomorrow's cashflow.
- Horizon 3: Options on future high-growth businesses. Run lean startup style experiments to find product/market fit.
Each horizon needs different governance, incentives, metrics, and leadership attention.
Horizon 2 is fragile. It demands investment without immediate returns, threatens the core, and is often managed with Horizon-1 rules. Protect it with appropriate metrics and autonomy. If culture blocks this, create independent units with separate capital, incentives, and decision rights.
You cannot buy innovation if your culture and governance remain unchanged. Acquisitions and "innovation labs" fail when Horizon 2/3 teams are forced into Horizon-1 processes and targets. Transform the culture and leadership practices; then outside talent can thrive.
Allocate investments intentionally across horizons (e.g., majority to H1, meaningful to H2, a steady small % to H3). Fund H3 quarterly based on validated learning, not plans. Track horizon-specific metrics:
- H1: revenue and share
- H2: Growth rate and target accounts
- H3: Engagement and word-of-mouth or lighthouse customers
Survive and grow by continually exploring new models while efficiently exploiting validated ones, and by managing the transitions between horizons with intent, fit-for-purpose governance, and cultural support.
Reflection prompts
- What explicit framework shows your current balance across explore, exploit, and core?
- Can leaders see the whole portfolio at a glance?
- Which metrics signal health in each horizon?
- What are your intentional H1/H2/H3 investment ratios? What should they be?
- How is leadership protecting H2/H3 and managing transitions without applying H1 rules?
Part 2. Explore
The best lack all conviction, while the worst / Are full of passionate intensity. W. B. Yeats
When faced with a new opportunity or a problem to be solved, our human instinct is to jump straight to a solution without adequately exploring the problem space, testing the assumptions inherent in the proposed solution, or challenging ourselves to validate the solution with real users.
We can see this instinct at work when we design new products, add new features to existing products, address process and organisational problems, begin projects, or replace existing systems. It is the force that leads us towards buying expensive tools that purport to solve all of our use cases, rolling out a new methodology or organisational refresh across the whole company, or investing in "bet the company" programmes of work.
Worse, we often fall in love with our own solutions, and then fall prey to the sunk cost fallacy when we ignore evidence that should cause us to question whether we should continue to pursue them. When combined with a position of power, these forces can have catastrophic consequences - one of our colleagues was nearly fired by a client for having the temerity to ask about the business case behind a particular project.
If we had one superpower, it would be to magically appear whenever a problem or new opportunity was under discussion. Our mission would be to prevent anybody from commencing a major programme to solve the problem or pursue the opportunity until they do the following:
- Define the measurable business outcome to be achieved
- Build the smallest possible prototype capable of demonstrating measurable progress towards that outcome
- Demonstrate that the proposed solution actually provides value to the audience it is designed for
Since we are only mortal, we trust that you will keep a copy of this book to hand to wield at the appropriate moment.
In this part, we discuss how to explore opportunities and problem spaces by taking a scientific and systematic approach to problem solving. By taking an experimental approach, we can effectively manage the risks and enable teams to make better decisions and judgements under the uncertainty that is inherent in innovation.
Chapter 3. Model and Measure Investment Risk
Doubt is not a pleasant condition, but certainty is absurd. Voltaire
ROI hinges on adoption and survival. The two most important predictors are whether the initiative will be cancelled and whether anyone will use it. Costs matter far less than cancellation risk and demand/distribution.
Long development cycles destroy value. They reduce potential ROI by delaying time-to-market. They delay feedback on whether users value what you are building. Market research is poor at predicting product/market fit in new categories. In the absence of data, pet projects get funded; large systems replacements in regulated sectors waste vast sums. 30%-50% of time-to-market is often spent in low-value "fuzzy front-end" planning.
Model risk before you commit. In any business plan, focus on (1) the sensitivity of your key outcome metric to each input variable and (2) the uncertainty of the sensitive variables. In technology programmes, ROI uncertainty is high and grows with programme duration.
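As an illustration of sensitivity and uncertainty modelling, here is a minimal Monte Carlo sketch; the variables, ranges, and distributions are invented for the example, not taken from the book.

```python
import random

def simulate_roi(n=10_000):
    """Monte Carlo sketch: how uncertain inputs spread the ROI outcome."""
    results = []
    for _ in range(n):
        # Illustrative assumptions: adoption is highly uncertain,
        # build cost is comparatively well understood.
        adoption_rate = random.uniform(0.01, 0.30)   # wide range = high uncertainty
        revenue_per_user = random.gauss(50, 10)      # moderately uncertain
        build_cost = random.gauss(500_000, 50_000)   # narrow range = low uncertainty
        users_reachable = 100_000
        revenue = adoption_rate * users_reachable * revenue_per_user
        results.append((revenue - build_cost) / build_cost)
    results.sort()
    # Report the spread of outcomes rather than a single-point estimate.
    print(f"ROI 10th pct: {results[int(n * 0.1)]:+.2f}")
    print(f"ROI median:   {results[n // 2]:+.2f}")
    print(f"ROI 90th pct: {results[int(n * 0.9)]:+.2f}")

simulate_roi()
```

Running this shows a wide ROI band driven almost entirely by the adoption variable, which is exactly the signal that tells you which assumption to test first.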
Apply a scientific method to product development. Treat "requirements" as untested beliefs, not facts. Steve Bell and Mike Orzen comment that "users are often unable to articulate exactly what they need, yet they often seem insistent about what they don't want...once they see it." What we actually have are hypotheses about value and growth that must be tested with experiments, not justified with plans.
The 'Lean Startup' is the operating model for extreme uncertainty. Capture assumptions in a simple canvas. Validate problem/solution fit, then product/market fit, through fast experiments. Build the smallest test (MVP), measure with real users, learn, then persevere, pivot, or stop.
A common objection to these principles is that such experiments cannot possibly be representative of a complete product. This objection is based on a false understanding of measurement. The purpose of measurement is not to gain certainty but to reduce uncertainty.
Measurement: A quantitatively expressed reduction of uncertainty based on one or more observations.
Use MVPs as measurements. Success means users choose to use the MVP and the predefined customer outcome is met. If not, pivot and retest. Design experiments that target the riskiest assumptions first, the variables with the highest information value.
Prefer discovery over detailed plans. Traditional project planning commits on untested assumptions and defers truth until after launch. The discovery process produces evidence early and cheaply, making the next investment decision with real usage data instead of narratives.
Spend on learning when the Expected Value of Information (EVI) justifies it. Roughly: value of information ≈ chance of being wrong × cost of being wrong. If an experiment costs far less than its EVI, run it: especially on big, risky bets.
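A back-of-the-envelope sketch of that rule of thumb, with invented numbers:

```python
def expected_value_of_information(p_wrong: float, cost_of_being_wrong: float) -> float:
    """Rough EVI: chance of being wrong x cost of being wrong."""
    return p_wrong * cost_of_being_wrong

# Illustrative numbers: a big bet with a 30% chance the core assumption
# is false, which would sink a $2M investment.
evi = expected_value_of_information(p_wrong=0.3, cost_of_being_wrong=2_000_000)
experiment_cost = 25_000  # e.g. a two-week concierge MVP

if experiment_cost < evi:
    print(f"Run it: ${experiment_cost:,} buys up to ${evi:,.0f} of information")
else:
    print("The experiment costs more than the information is worth")
```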
Apply the same approach to internal technology products. State a measurable downstream customer outcome. Find a pilot team; never mandate adoption. Ship an MVP in days or weeks, not months. Iterate based on voluntary use and outcome achievement.
Use OODA to speed learning. Observe, Orient, Decide, Act are concurrent activities linked by feedback and feed-forward loops; decisions should be delayed until the last responsible moment. Orientation (culture, experience, current information) shapes what we see and do. Organisations act through implicit guidance and control (shared intent, decentralised command) and explicit feed-forward (policies, compliance). "Operating inside" a rival's OODA loop means aligning with their expectations, then surprising them.
Scientific management vs. the scientific method. Taylorism centralises analysis and treats workers as executors. The experimental approach designs a system where teams run their own experiments, share evidence, and adapt. It demands skills in experiment design, measurement, analysis, and cross-functional collaboration. Plan-driven methods fit repeatable, well-understood work; they fail under uncertainty.
Questions to take away:
- How do you model investment risk today? What real data do you use?
- Which assumptions have the highest information value? How are you measuring them now?
- What evidence shows users will value your current work?
- When did real users last try your product? What changed because of it?
- Where can you replace planning with an MVP to reduce uncertainty this quarter?
Chapter 4. Explore Uncertainty to Detect Opportunities
Discovery is a rapid, time-boxed, iterative set of activities that combines design thinking and Lean Startup to reduce uncertainty early.
Design thinking takes a solution-focused approach to problem solving, working collaboratively to iterate along an endless, shifting path toward perfection. It works towards product goals via specific ideation, prototyping, implementation, and learning steps to bring the appropriate solution to light.
The primary objective of a new business initiative is to validate its business model hypotheses (and iterate and pivot until it does). Search versus execution is what differentiates a new venture from an existing business unit.
- Form a small, cross-functional, empowered team; include decision-makers.
- Engage customers and users from day one; treat them as co-creators, not recipients.
- Create a shared understanding: articulate the problem, vision, constraints, success metrics.
- Structure exploration: start wide (divergent), then narrow (convergent) to a testable idea.
- Externalise assumptions on canvases; identify the riskiest assumptions.
- Define an MVP; answer 'should we build it?' before 'can we build it?'.
- Set the One Metric That Matters (OMTM).
- Run the experiment; measure; decide to pivot, persevere, or stop; socialise learning.
Engage both customers and users. Distinguish payers/customers from users; in enterprises, mandated users still need usable, valuable tools. Use interviews, shadowing, and observable behaviour; never rely on mandates for adoption. Make feedback continuous and visible; let voluntary use guide decisions.
Creating shared understanding and problem statements. Use visual artefacts, canvases, and information radiators to depersonalise debate and align on facts. Good problem statements name the user, context, observable pain, and desired outcome (not a pre-chosen solution).
Problem Statement Canvas: capture the triggering insight, evidence that the problem exists, the proposed experiment, and perspectives from customer, market, organisation, and notable "you-won't-believe" facts to test.
Structured exploration often follows loops of divergent thinking (generating many options rapidly) and convergent thinking (prioritising toward one starting hypothesis with a cheap, falsifiable test).
If starting a new venture, consider using the business model canvas to get a rough map of the business plan. Canvas blocks include: Customer Segments, Value Proposition, Channels, Customer Relationships, Revenue Streams, Key Activities, Key Resources, Key Partnerships, Cost Structure. Treat every block as a hypothesis to validate. Four levels of strategic mastery:
- Product focus: optimise the value proposition but ignore the business model.
- Beginner: use the Business Model as a checklist.
- Master: build reinforcing blocks that outcompete (tight business‑model fit).
- Invincible: continuously self‑disrupt while the current model is still winning.
There are plenty of other canvases; here's when to use them:
- Lean Canvas: use when product/market fit is the riskiest bet; sharper on problem, solution, unfair advantage.
- Opportunity Canvas: use to align “what and why” with company strategy before committing to delivery.
- Value Proposition Canvas: use to map pains, gains, and jobs to offers; tighten problem‑solution fit.
An important part of discovery is understanding customers and users. Key components include:
- Customer Personas: quick first pass to align; iterate with evidence; anchor discussions in user goals and contexts.
- Customer Empathy: practise deliberate listening; balance feeling the experience with analysing it.
- Jobs-to-be-Done: focus on the progress users hire a product to make, across contexts and constraints.
- Get out of the building / genchi genbutsu: observe in the real environment; prefer direct evidence over reports.
Your goal is to turn insights and data into unfair advantage. Combine qualitative empathy with quantitative analysis; data is a tool, not a substitute for understanding. Leverage existing customer data to ask sharper questions such as "Why are customers cancelling their memberships?" or "How are customers related to one another?"; prototype answers with quick experiments. Use analytics to surface weak signals, then design tests that confirm or refute hypotheses.
Accelerate experimentation with MVPs. Focus on 'should we build it?' before 'can we build it?'. An MVP tests assumptions with minimum effort. Define success as voluntary use by target users and movement on the OMTM. Build a slice across value (delightful, usable, valuable, feasible), not a single technical layer.
Common MVP types / mediums (pick the cheapest that answers the question):
- Paper: sketches/wireframes to create shared understanding fast; weak on usability proof.
- Interactive prototype: clickable mockups to test flow and comprehension; tech not validated.
- Concierge: manual service mimicking the product; rich learning, not scalable.
- Wizard of Oz: real front-end, manual back-end; tests demand and value perception.
- Micro-niche / landing test: tiny feature set and traffic to gauge interest/willingness to pay.
- Working software: instrumented feature in production; strongest signal, highest cost.
The MVP mindset and experiment evaluation loop:
- Define learning goal and OMTM (the one metric that matters).
- Involve customers/users.
- Design the cheapest experiment.
- Ship, measure, and interpret.
- Decide: pivot, persevere, or stop.
- Share evidence; update canvases; repeat quickly.
The One Metric That Matters (OMTM) answers the most pressing question by tying directly to the riskiest assumption. It creates focus and productive debate, provides transparency and shared understanding, and supports a culture of experimentation. Prefer rates/ratios over totals/averages. Avoid lagging metrics (ROI, churn) early; choose leading indicators that move quickly. The OMTM evolves by stage and problem area. It clarifies: Are we making progress? (what) What caused the change? (why) How do we improve? (how) Early on, bias to "love metrics" (engagement, repeat use, delight) to prove resonance before scaling.
A3 Thinking (plan-do-check-act on one page)
- Background: why this matters; scope and context.
- Current condition & problem statement: observable gap, not a solution request.
- Goal statement: target condition and success metric.
- Root-cause analysis: hypotheses and evidence.
- Countermeasures: experiments you will run.
- Check / confirmation: how you will know effects occurred.
- Follow-up: next steps and shared learning.
Questions to take away:
- What is your current business hypothesis, and what MVP will you use to test it?
- Have you engaged both customers and users directly? What did you learn that changed your plan?
- Which canvas best fits your stage (BMC, Lean, Opportunity, Value Proposition), and what are the riskiest assumptions on it?
- What is your OMTM right now, and why? Is it a leading indicator tied to your hypothesis?
- How will you structure this week's divergent → convergent workshop, and what problem statement will it produce?
- Where can you apply A3 Thinking to make your learning loop explicit and shareable?
Chapter 5. Evaluate the Product/Market Fit
Innovation accounting is a disciplined way to define hypotheses, run experiments, measure learning, and communicate progress for early, uncertain bets. It replaces opinion and lagging financials with leading indicators of value and traction.
"If you can define the outcome you really want, give examples of it, and identify how those consequences are observable, then you can design measurements that will measure the outcomes that matter. The problem is that, if anything, manager: were simply measuring what seemed simplest to measure (i.e., just what they currently knew how to measure), not what mattered most." Douglass Hubbard
The goals of measurement in innovation are to create accountability, manage uncertainty-driven risk, surface opportunities and errors early, inform investment choices with the right precision, act despite imperfect information, and improve the organisation's innovation capability over time. Metrics must be actionable, not vanity. Favour cohort analysis, funnels, activations, and experiment outcomes over totals like visits or downloads. A good metric changes behaviour; if you can't act on it, it's a vanity metric.
AARRR (Pirate Metrics) provide a simple model:
- Acquisition (who shows up)
- Activation (who has a good first experience)
- Retention (who comes back)
- Revenue (who pays or creates value)
- Referral (who brings others)
Always measure by cohort to see whether changes improved each stage. Horizon-specific focus varies:
- Horizon 1 (optimise): revenue vs plan, margin, market share, stability, and quality.
- Horizon 2 (scale): revenue growth, sales cycle, unit economics, retention/expansion.
- Horizon 3 (explore): CAC, viral coefficient, CLV, burn/runway, and engagement.
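To make cohort measurement concrete, here is a minimal sketch of AARRR funnel conversion by cohort; the event counts and stage names are invented for illustration.

```python
# Hypothetical event counts per monthly signup cohort.
cohorts = {
    "2024-01": {"acquired": 1000, "activated": 400, "retained": 180, "paying": 60},
    "2024-02": {"acquired": 1200, "activated": 540, "retained": 250, "paying": 95},
}

STAGES = ["acquired", "activated", "retained", "paying"]

for name, counts in cohorts.items():
    # Conversion rate of each stage relative to the previous one, so
    # improvements show up per cohort instead of being averaged away.
    rates = [counts[curr] / counts[prev] for prev, curr in zip(STAGES, STAGES[1:])]
    print(name, " -> ".join(f"{r:.0%}" for r in rates))
```

Comparing stage-by-stage rates across cohorts shows whether a change actually moved the funnel, which a single total (visits, downloads) can never tell you.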
Dashboards should show only customer-focused metrics that trigger action, with clear targets. Governance reviews, weekly or fortnightly, should include product, engineering, and key stakeholders to decide persevere/pivot/stop and update the One Metric That Matters (OMTM) as assumptions change. Internal advocates are essential: find change agents who want evidence, safety, and context, then give them small wins and transparent data so they sponsor adoption without alienating others.
Do things that don't scale: manually onboard and support customers, narrow the market to maximise learning, and delay automation until demand forces it. Build a runway of questions, not requirements. Develop empathy by treating customers and users as co-creators, observing them directly ("go look, see"), and using in-house expertise where available before expanding outward. Leverage frugal innovation to prove ideas cheaply, using concierge or Wizard-of-Oz approaches to normalise experimentation.
Engines of growth include:
- Viral (growth as a side-effect of use; watch K and activation)
- Sticky (low churn and compounding retention)
- Paid (profitable acquisition; LTV > CAC).
Enterprise extensions include:
- Expand (new geographies/adjacencies)
- Platform (ecosystem with third-party complements). Once a core product has traction, expose interfaces and create incentives for complements to multiply value and defensibility.
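For the paid engine above, a minimal unit-economics sketch (all figures are invented assumptions):

```python
def lifetime_value(arpu_per_month: float, monthly_churn: float, gross_margin: float) -> float:
    """Simple LTV: margin-adjusted monthly revenue x average lifetime (1/churn)."""
    return arpu_per_month * gross_margin / monthly_churn

ltv = lifetime_value(arpu_per_month=30.0, monthly_churn=0.05, gross_margin=0.8)
cac = 180.0  # hypothetical blended cost to acquire one customer

# The paid engine is only viable if each customer is worth more than
# they cost to acquire (many teams want a healthy multiple, not just >1).
print(f"LTV ${ltv:.0f} vs CAC ${cac:.0f}: {'viable' if ltv > cac else 'not viable'}")
```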
In early exploration, deliberately incur technical debt to learn fast, keeping just CI and a few smoke tests. Once validated, pay debt down aggressively, modularise, add user-journey tests, and adopt TDD on new code.
"Choosing at what point in the lifecycle of our product or feature to pay down our technical debt is an art. If you find (as many do) that you've gone too far down the path of accumulating technical debt, consider the alternatives to the Big Rewrite"
Key drivers of growth when moving from explore to exploit: target bigger lookalike markets, choose the monetisation model early, manage adoption without distorting the product, avoid big-bang launches (use staged alphas/betas), and keep team culture intact.
The OMTM answers the riskiest question now, ties to a specific hypothesis, and evolves as the stage changes. It should be a rate or ratio, not a total, and avoid lagging indicators early. It clarifies what moved, why, and how to improve. A3 thinking provides a one-page, structured approach: background, current condition & problem statement, goal, root-cause analysis, countermeasures, check/confirmation, and follow-up.
Questions for readers:
- What's on your innovation scorecard?
- What is your current OMTM and which assumption does it test?
- What vanity metric will you stop measuring?
- How often do you run governance reviews?
- Who are your internal advocates and what evidence will you give them this month?
- Which frugal, unscalable test will you run next?
- What is your engine of growth and how will you validate it?
Part 3: Exploit
Exploitation, effectively managing and delivering validated ideas at scale, requires a different mindset and execution model.
Many enterprises still rely on a traditional, centralised, project-based "phase-gate" process, even when adopting agile practices. This "water-scrum-fall" approach, rooted in post-WWII military-industrial software engineering, batches work into large programs, slows delivery, and limits responsiveness to new information. In today's software-driven world, where needs change quickly and user value is uncertain, this model creates waste and hinders adaptability.
The authors propose a lean-agile paradigm for large-scale programs that emphasises adaptability over prediction. Unlike common scaling frameworks that layer coordination on top of old processes, this approach centres on continuous improvement at the senior leadership level, enabling each organisation to evolve its own processes in alignment with its goals.
Core principles include:
- Iterative, outcome-driven alignment at leadership level.
- Scientific, goal-oriented work to remove non-value activities.
- Continuous delivery to reduce risk and cycle times.
- Architectures enabling autonomous, loosely coupled teams.
- Smaller batch sizes and experimental approaches.
- Strong, frequent feedback loops to guide decisions and maximise customer value.
Chapter 6. Deploy Continuous Improvement
Focusing on quality and capability is a more reliable path to productivity than trying to "optimise productivity" directly. To become a high-performing software organisation, improve execution first - build reliable systems, simplify, and reduce complexity - then worry about tight alignment with business priorities.
Use 'Improvement Kata' as your operating routine for progress under uncertainty. Start by clarifying an inspiring direction and translating it into concrete outcomes at the value-stream level. Grasp the current condition with facts about how work actually flows. Set a short-horizon target condition - one to twelve weeks - that specifies how the process should operate in measurable terms (for example, WIP limits, continuous integration cadence, number of "good builds" per day, deployability). Then iterate toward that condition with rapid PDCA experiments. Keep a tight daily cadence by revisiting five prompts: what's the target condition; what's the actual condition now; which obstacle matters most right now; what's the next step and what do we expect; when will we check what we learned.
Leaders must adopt and teach the Kata, coach routinely, and enable teams to solve their own problems. Treat it as a meta-method: you evolve your current playbook rather than conform to a prescriptive framework. Real agility is the habit of continuous experimentation, not a set of rituals.
Aim for double-loop learning. When results disappoint, don't just adjust actions; also question policies, norms, and goals. Expect to miss some target conditions and treat misses as useful data that expose obstacles for the next iteration.
Plan outcomes, not tasks. Don't predefine the "how" - discover it through experiments. Demonstrate progress in small, tangible slices every few weeks, integrate frequently, and delay irreversible decisions to keep options open.
Manage feature demand and improvement work with the same mechanism: target conditions. In generative cultures, you set business outcomes, let teams hypothesise solutions, and measure impact. In more traditional setups, integrate with a program backlog but enforce WIP limits and a strong definition of done: integrated, fully tested with automation, and demonstrably deployable before accepting new work.
Avoid managing by team velocity outside a team's context. Instead, use activity accounting and value stream mapping to understand flow: cycle time, WIP, rework, and failure demand versus value demand. Use real-time metrics to trigger conversations and support, not punishment, and shorten iteration length if you need more visibility.
Treat lean as investment, not austerity. Fund automation, CI/CD, toolchains, and refactoring to remove waste and reduce failure demand. Empower engineering to make these investments without permission theatre.
In practice, make sure everyone knows the long-term direction and near-term outcomes, each team runs short target-condition cycles with daily PDCA and coaching, you limit WIP and protect capacity for improvement, and you measure real flow rather than comparing velocities. Make failure demand and rework visible and drive them down.
Finally, pressure-test your environment with a few questions: how much time goes to no-value-add work and failure demand; can teams invest in automation and simplification without hurdles; does everyone understand the outcomes they're aiming for and how those are set and reviewed; and do teams run regular, fast-feedback experiments on their process so you learn what actually works.
Chapter 7. Identify Value and Increase Flow
Most organisations are overloaded with work that doesn't matter. The Lean alternative is to define value precisely, map how that value actually flows from idea to outcome, and then remove the queues, rework, and interruptions that slow everything down.

Start by choosing a product or service and mapping its value stream from request to delivery. Involve the smallest set of people who truly represent each step and can authorise change. Record what really happens today, not the best case: who does what, how much work is waiting where, and three core metrics:

- Lead time: from accepting work to handing it off.
- Process time: focused work time without interruption.
- Percent complete-and-accurate (%C/A): the rate at which work arrives usable without rework.

Expect to discover that waste is concentrated in handoffs and rework; %C/A will tell you where upstream quality is failing. Then sketch a bold future-state value stream that radically shortens lead time and improves rolled %C/A. Don't plan the how yet - turn the gaps into target conditions and use the Improvement Kata to move toward them.
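A small sketch of computing these three metrics from work-item records; the data shape and field names are illustrative assumptions, not a specific tool's export format.

```python
from datetime import datetime

# Hypothetical work-item records from a tracking tool.
items = [
    {"accepted": "2024-03-01", "delivered": "2024-03-18", "touch_days": 3, "rework": False},
    {"accepted": "2024-03-02", "delivered": "2024-03-29", "touch_days": 5, "rework": True},
    {"accepted": "2024-03-04", "delivered": "2024-03-20", "touch_days": 2, "rework": False},
]

def days(start: str, end: str) -> int:
    return (datetime.fromisoformat(end) - datetime.fromisoformat(start)).days

lead_times = [days(i["accepted"], i["delivered"]) for i in items]
process_times = [i["touch_days"] for i in items]
pct_ca = sum(not i["rework"] for i in items) / len(items)  # %C/A

print(f"avg lead time:    {sum(lead_times) / len(lead_times):.1f} days")
print(f"avg process time: {sum(process_times) / len(process_times):.1f} days")
print(f"%C/A:             {pct_ca:.0%}")
# A large gap between lead time and process time means work spends most
# of its life waiting in queues, not being worked on.
```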
Make the flow visible and manageable. Translate your current-state map into a board with explicit process steps and queues, one card per item, and a cumulative flow diagram so you can see WIP and lead times over time. Impose work-in-process limits per step and queue, and let them bite; the pain reveals the obstacles you must remove. Create a pull system so work enters a step only when capacity is free. Define classes of service that encode urgency profiles so time-sensitive work is handled appropriately. Hold short, regular operational reviews to adjust WIP limits, classes of service, and policies based on actual performance.
Manage WIP at the enterprise level by protecting slack and stopping the practice of assigning people to multiple projects; context switching destroys flow and quality. Keep a small pool of specialists unassigned and available on demand, and keep their utilisation deliberately low. Above all, reduce batch size - smaller items move faster, vary less, and build trust.
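One way to see why limiting WIP shortens lead times is Little's Law (average lead time = average WIP ÷ average throughput), illustrated here with made-up numbers:

```python
# Little's Law: average lead time = average WIP / average throughput.
def avg_lead_time(wip: float, throughput_per_week: float) -> float:
    return wip / throughput_per_week

# Same team, same throughput; only the amount of work in flight changes.
for wip in (30, 10, 4):
    print(f"WIP {wip:>2} -> lead time {avg_lead_time(wip, 2.0):.1f} weeks")
# Halving WIP halves lead time without anyone working faster.
```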
Prioritise economically with Cost of Delay. Treat prioritisation as choosing what to delay given limited capacity. Estimate the economic impact per unit time of not having each item, accept accuracy over false precision, and make your assumptions explicit so you can validate them. Use CD3 (Cost of Delay ÷ Duration) to favour smaller, high-impact slices and to encourage teams to split work into thinner, more valuable increments. Standardise a handful of urgency profiles and reflect them in your classes of service. Let teams schedule autonomously using current Cost of Delay information, while portfolio functions shift from making priority calls to maintaining the decision framework, data, and feedback loops. Don't bolt Cost of Delay onto old governance; use it to simplify and replace heavyweight queues and HiPPO-driven choices. It pays off most when queues are long; if you don't have large backlogs, keep it lightweight.
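A minimal sketch of CD3 scheduling, using invented backlog items and rough estimates:

```python
# Hypothetical backlog items with rough estimates.
backlog = [
    {"name": "checkout revamp", "cost_of_delay_per_week": 50_000, "duration_weeks": 10},
    {"name": "pricing tweak",   "cost_of_delay_per_week": 20_000, "duration_weeks": 1},
    {"name": "new dashboard",   "cost_of_delay_per_week": 8_000,  "duration_weeks": 4},
]

for item in backlog:
    # CD3 = Cost of Delay / Duration: value lost per unit of capacity consumed.
    item["cd3"] = item["cost_of_delay_per_week"] / item["duration_weeks"]

# Highest CD3 first: small, urgent items beat big ones with higher total value.
for item in sorted(backlog, key=lambda i: i["cd3"], reverse=True):
    print(f'{item["name"]:<16} CD3 = {item["cd3"]:>8,.0f}')
```

Note how the one-week pricing tweak outranks the revamp despite a lower cost of delay; that is the mechanism that nudges teams toward thinner, more valuable slices.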
Chapter 8. Adopt Lean Engineering Practices
Continuous delivery is the engineering counterpart that makes fast, safe flow real. Its goal isn’t deploying ten times a day; it’s making it cheap and low-risk to work in small batches. Build a deployment pipeline that turns every change into a deployable package, runs fast automated checks, and pushes with one button through test and staging toward production. The pipeline’s job is to detect and reject risky changes quickly, provide a clear audit trail, and make environments reproducible. Put everything in version control—code, tests, schemas, migrations, infrastructure, deployment and provisioning scripts, and documentation—and promote only through the pipeline.
Adopt two non-negotiables. First, “done” means merged to trunk and releasable (for hosted services, deployed); for nontrivial features, validate impact on users, not just code completeness. Second, keeping the system releasable beats starting new work; if trunk isn’t green, stop and fix or revert immediately. Practice trunk-based development and integrate at least daily. Break large changes into safe, incremental steps behind guards so trunk stays releasable. When a build fails, revert fast. Optimize for overall lead time to users, not “dev complete” on a branch.
Treat test automation as the foundation, not an afterthought. Collaborate tightly between developers and testers to design maintainable, parallelizable suites that give rapid, reliable feedback; curate ruthlessly to avoid flaky, ignored tests. Use automated tests to gate the pipeline, and save exploration, usability, security, and performance investigations for targeted manual and automated checks once builds are stable. Provision production-like environments on demand; if you can’t spin up a clean test environment from scripts, fix that first. Only invest heavily in automation for validated products and features; for experiments, keep automation lean.
Decouple deployment from release. Deploy new versions safely whenever you want; release features to users when it makes business sense. Use patterns like blue-green or canary deployments to shift traffic gradually and roll back instantly. Wrap unfinished or risky functionality in feature flags so you can dark-launch to staff, small cohorts, or A/B tests, and disable quickly if needed. For mobile, consider soft-launching under a separate brand to validate before a wide release.
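As a sketch of how a percentage-based feature flag can decouple deployment from release (the flag names and rollout mechanics here are illustrative, not a specific library's API):

```python
import hashlib

# Flag configuration lives outside the code path, so the feature can be
# dark-launched, ramped up, or disabled without redeploying.
FLAGS = {"new_checkout": {"enabled": True, "rollout_percent": 5, "staff_only": False}}

def is_enabled(flag_name: str, user_id: str, is_staff: bool = False) -> bool:
    flag = FLAGS.get(flag_name)
    if flag is None or not flag["enabled"]:
        return False
    if flag["staff_only"] and not is_staff:
        return False
    # Hash the user id so each user lands in a stable bucket (no flipping
    # between old and new behaviour across requests).
    bucket = int(hashlib.sha256(f"{flag_name}:{user_id}".encode()).hexdigest(), 16) % 100
    return bucket < flag["rollout_percent"]

if is_enabled("new_checkout", user_id="u-42"):
    print("serve new checkout")  # deployed code, released to 5% of users
else:
    print("serve old checkout")
```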
Expect culture change. High performance correlates with a generative culture where development, operations, and security collaborate, metrics trigger conversations rather than punishment, and leaders remove obstacles. Start with configuration management, trunk-based CI, and on-demand environment provisioning; improvements in releases and operations won’t stick without them. Use the pipeline’s telemetry to find bottlenecks, shorten feedback loops, and feed new target conditions back into your Improvement Kata.
Put together, these practices let you do the right work and get it out safely, quickly, and repeatedly. Map and redesign value streams, limit WIP, and manage by flow. Prioritize with the real economics of delay and duration. Engineer for small batches with CI, a rigorous pipeline, and decoupled release. Measure what matters—lead time, WIP, failure demand, %C/A—and keep iterating toward shorter, safer, more valuable delivery.
Chapter 9. Take an Experimental Approach to Product Development
Speed of delivery matters, but building the right things requires aligning teams around measurable business and customer outcomes, not pre-defined features. Instead of maintaining a programme-level backlog, leaders set target conditions (goals with measurable acceptance criteria) that teams own, discover solutions for, and test. The aim is to minimise output while maximising outcomes, empowering teams to use their creativity and deep knowledge to achieve the goals.
A key tool for this approach is impact mapping, which links target conditions to stakeholders, desired behavioural impacts, and potential solutions with software as a last resort. This creates a shared understanding across business and technical roles, treats solutions as hypotheses, and avoids prematurely locking into specific features.
The chapter promotes hypothesis-driven development:
- Formulate a clear hypothesis tied to a target condition.
- Design the smallest, cheapest experiment to test it.
- Use user research and, where applicable, A/B testing to validate or falsify assumptions.
A/B testing is highlighted as a powerful way to measure causal impact, but it reveals a sobering truth: 60–90% of ideas fail to improve outcomes. Running safe-to-fail experiments early prevents wasted investment, complexity, and opportunity cost. This requires cultural change: valuing data over opinion (even from senior leaders), working in small batches, and accepting that information gained, not shipped features, is the real output.
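As an illustration of how such validation might look, here is a minimal two-proportion z-test for an A/B experiment; the counts are invented, and a real programme would also fix sample sizes in advance.

```python
from math import sqrt, erf

def two_proportion_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for a difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Normal CDF via erf; two-sided tail probability.
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Illustrative counts: control vs variant conversions.
p = two_proportion_p_value(conv_a=200, n_a=5000, conv_b=250, n_b=5000)
print(f"p-value: {p:.3f}")  # only act when the evidence is strong enough
```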
An experimental culture demands trust, cross-functional collaboration, and continuous delivery capabilities. It should coexist with product vision and design thinking, not replace them. The goal is rapid learning, adaptability, and the freedom for any team member to test bold ideas.
Core principles:
- Define goals as measurable outcomes, not solutions.
- Empower teams to choose and test their own approaches.
- Use impact mapping to turn goals into testable hypotheses.
- Minimise waste through small, cheap, safe-to-fail experiments.
- Build a culture where measurement rules, not position or tradition.
Chapter 10: Implement Mission Command
As organisations grow, process control that optimises for predictability becomes a drag on innovation. In product development, over-prescriptive rules suppress continuous improvement and push out the tinkerers who run safe-to-fail experiments. Complexity then accumulates both organisationally and in systems, until change slows to a crawl. The antidote is Mission Command: high alignment on outcomes paired with high autonomy in execution.
Make the team your atomic unit. Keep teams small (about ten people or fewer), cross-functional, and accountable for a clear outcome metric (a "fitness function"). Apply subsidiarity so decisions default to the people closest to the work. Use programme-level target conditions (via the Improvement Kata) to align teams without prescribing methods. Leaders' job is to simplify processes, remove friction, grow new leaders, and continuously expand the capability and autonomy of these teams.
Match organisation to architecture. Decompose systems so that adding a feature typically changes one service at a time, with stable, non-chatty APIs and independent deployability. Align service boundaries with team boundaries (Conway's Law) and make teams own their services over the full lifecycle ("you build it, you run it"). Avoid functional silos that force every change to traverse multiple teams; they create rework, handoff delays, and brittle coupling.
Enable true autonomy in practice. Let teams push changes without heavyweight external approvals; treat well-tested, dark-launched changes as standard changes. Give teams the skills and authority to form hypotheses, ship A/B tests, and read results. Allow teams to choose their toolchains, while providing a platform (PaaS/IaaS) for self-service environments and deployments; if a centralised platform can't meet needs, teams may choose their own stack and own its operations. Don't require funding approval for small experiments; set sensible guardrails instead. Co-locate people who work on the same product (or create strong virtual co-location) rather than shuffling org charts.
Align rewards with outcomes, not outputs. Don't pay for "dev complete," bug counts, or velocity comparisons; these drive the wrong behaviours. Recognise teams for customer outcomes, reliability, and learning speed. When metrics reveal issues, use them to start conversations and offer help, not to punish.
Expect and design for compounding benefits. Autonomy plus independent deployability compresses change lead time, accelerates learning, improves customer service, and raises motivation. Owning services end-to-end also simplifies P&L: the cost of a service is the team plus its resources, making margin signals clearer.
Evolve your architecture continuously using the strangler pattern, not "big-bang" rewrites. Deliver new value first; resist porting legacy features unless a business process is actually changing. Ship a tiny vertical slice fast, then iterate. Build everything with testability and deployability in mind (TDD, CI, modular design), and target a platform that supports automated, low-risk releases. Incremental strangling may take longer in theory than a total replacement, but it delivers value early, adapts as needs change, and avoids risky cutovers.
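A minimal sketch of the strangler pattern as a routing facade; the paths and handlers are hypothetical, standing in for whatever proxy or gateway you actually use.

```python
# Routes migrated to the new service so far; everything else still hits legacy.
MIGRATED_PREFIXES = ["/orders", "/invoices"]

def handle_with_new_service(path: str) -> str:
    return f"new service: {path}"

def handle_with_legacy(path: str) -> str:
    return f"legacy monolith: {path}"

def route(path: str) -> str:
    # The facade lets you move one vertical slice at a time; retiring a
    # legacy feature is just adding its prefix to the migrated list.
    if any(path.startswith(prefix) for prefix in MIGRATED_PREFIXES):
        return handle_with_new_service(path)
    return handle_with_legacy(path)

print(route("/orders/123"))   # served by the new service
print(route("/reports/q1"))   # still served by the legacy system
```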
Steer enterprise architecture with target conditions, not standardisation edicts or giant architectural epics. Specify desired qualities (performance, availability, security, deployability, testability) and let teams experiment their way there. Measure the "surface area" of legacy systems you intend to retire, make progress visible, and keep reducing complexity as a never-ending activity.
Moving to Mission Command at scale is hard and incremental. It demands parallel shifts in budgeting, procurement, risk, and release management so teams can truly manage cost and risk locally while central groups set outcomes, increase transparency, and provide support. Start with the smallest changes that enable one team to run experiments or deploy independently, prove the outcome, and keep going.
Part IV. Transform
Transformation is never finished. To realise the full payoff of lean, the mindset and practices must extend beyond product teams to every function: governance, risk and compliance, finance, procurement, vendor management, and HR. The biggest obstacles appear where these legacy processes conflict with fast feedback, small batches, and customer focus. Choose suppliers who will iterate, listen, and experiment with you, not just deliver to a static contract.
Change starts with leaders who model the culture: set a clear purpose and simple constraints, create trust, and make the current state transparent. Replace command-and-control with context-and-empowerment so decisions sit with those closest to the work, while maintaining visibility and alignment on outcomes. Encourage prudent risk-taking and daily reflection; close the gap between espoused values and actual behaviour.
Expect resistance. Command-and-control feels safer because it deflects accountability. A genuine lean transformation will surface failures and setbacks; that is evidence of learning, not grounds for blame. If you aren't occasionally getting worse before you get better, you're measuring vanity metrics instead of real outcomes. Empower people to act in service of customers within clear guardrails, keep adjusting the system, and let lean thinking propagate through every part of the enterprise.
Everyone thinks of changing the world, but no one thinks of changing himself. Leo Tolstoy
Chapter 11. Grow an Innovation Culture
Culture is the engine of adaptability. Treat it as something you can model, measure, and deliberately evolve. Make it visible with lightweight, anonymous surveys run regularly, publish aggregate results, and discuss them openly. Use a simple model to interpret what you see: artefacts (what's visible), espoused values (what's said), and underlying assumptions (what's believed). Watch where behaviour contradicts slogans; reward patterns reveal real values. Shifting from Theory X (control, extrinsic carrots/sticks) to Theory Y (autonomy, mastery, purpose) is foundational in knowledge work.
Change thinking by first changing behaviour. Define the few concrete behaviours you want (how meetings run, how decisions get made, how work is improved), train for them, and reinforce them consistently. Create the "disconfirmation" that triggers movement without resorting to fear: set ambitious, measurable outcomes and expose gaps kindly but clearly. Reduce learning anxiety by giving people safety, time, and coaching to acquire new skills. Use the Improvement Kata to practice small, daily experiments toward target conditions so learning is continuous and low risk.
Make it safe to fail. Run blameless postmortems after incidents, opening with the retrospective prime directive (assume everyone acted reasonably given what they knew at the time); focus on timelines and contributing factors, and avoid single "root causes" in complex systems. Produce tangible follow-ups (runbook updates, tests, automations), and verify they worked with drills. The goal is decisions that are better informed next time and systems that limit blast radius when things go wrong.
Stop chasing mythical "10x" individuals; in companies, the system beats the hero. Hire and grow for learning ability, emergent leadership, and mindset. Look for people who argue a position hard and then change their mind quickly when presented with new facts. In recruiting and promotion, value evidence of rapid skill acquisition over perfect résumé matches; then invest to help people learn on the job.
Build the environment that manufactures talent. Help everyone maintain a simple personal development plan; decouple performance feedback from pay decisions; normalise frequent, permission-based feedback; give easy access to training funds; and reserve time for self-directed projects. Retrain rather than replace where possible. Reduce learning anxiety explicitly by promising support for reskilling, no punishment for honest mistakes, and fair severance for those who choose to opt out.
Eliminate hidden bias so you don't leak talent. Audit and correct pay by role across demographics. Set target conditions for diverse candidate slates and promotion pools. Monitor tenure, advancement rates, and satisfaction by demographic to spot inequities. Regularly review policies and HR processes with external expertise; set clear norms, model them from the top, and act on violations.
Leaders make all of this real. Model the behaviours you ask for, especially under stress. Replace command-and-control with context-and-empowerment, keep work and progress transparent, and align on outcomes over outputs. Treat culture change as a daily habit, not a one-off program: keep measuring, keep practising, and keep improving how people learn together in service of customers.
Chapter 12. Embrace Lean Thinking for Governance, Risk, and Compliance
Governance isn't the same as risk and compliance paperwork. Governance is steering: clear direction, visible outcomes, and authority pushed to the people closest to the work. Treat GRC processes as part of the value stream, not sacred artefacts. They must evolve continuously so teams can explore, deliver, and learn while still meeting laws and contracts.
Start with shared intent. Make responsibility, authority, visibility, and empowerment explicit. Risk is about trade-offs, not elimination; every control shifts risk elsewhere. Compliance is non-optional, but the means of demonstrating it should be lightweight and responsive. Challenge rules that are "required" by frameworks (most prescribe outcomes, not specific rituals). Replace command-and-control with context-and-empowerment, then measure against real outcomes instead of box-ticking.
Apply lean to GRC like you would to delivery. Map the end-to-end value stream and overlay GRC activities to see where controls interrupt flow, add queues, or create rework. Ask of each control: does it achieve its intent, and does it improve overall effectiveness? Move approvals and decisions to the lowest sensible level with clear guardrails and escalation paths. Minimise documentation to what's actually used, keep it accessible, and automate its creation where possible.
Prefer "trust, then verify" over blanket prevention. Grant teams the access they need and instrument systems so actions are attributable, monitored, and reviewed frequently. Use detective controls (continuous monitoring, automated tests, small-batch reviews) to shrink feedback loops; reserve preventive gates for the few, high-impact moments where they truly reduce risk. Embed GRC evidence in everyday tools and pipelines so auditors can pull it any time without disrupting flow.
Bring GRC into the team. Treat security, risk, and audit partners as contributors from day one: co-design privacy and security, pair to prevent common flaws, add automated security checks to the deployment pipeline, and test patches and configurations continuously. This shifts discovery left, reduces last-minute surprises, and raises the whole team's capability.
Prioritise with economics, not fear. Replace "wouldn't-it-be-horrible-if" stories with quantified impact and likelihood, then schedule mitigation work alongside features using Cost of Delay (and CD3). No free passes: risk work competes transparently with everything else, guided by the same mission and metrics.
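CD3 is Cost of Delay Divided by Duration: do the work with the highest cost of delay per unit of effort first. A worked sketch with invented figures shows how risk mitigation competes transparently with features:

```python
# CD3 = cost of delay (per week) / duration (weeks). Figures invented.
backlog = [
    {"name": "fraud-rule mitigation", "cost_of_delay": 40_000, "weeks": 2},
    {"name": "checkout redesign",     "cost_of_delay": 90_000, "weeks": 9},
    {"name": "audit-evidence export", "cost_of_delay": 15_000, "weeks": 1},
]

for item in backlog:
    item["cd3"] = item["cost_of_delay"] / item["weeks"]

# Highest CD3 first: here the small risk items outrank the big redesign.
for item in sorted(backlog, key=lambda i: i["cd3"], reverse=True):
    print(f"{item['name']}: CD3 = {item['cd3']:,.0f}")
```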
Contain regulatory blast radius. Architect so stricter regimes apply only where required; segregate sensitive environments and teams, and keep the rest of the system fast and flexible. Where prescribed controls clash with flow, propose compensating controls that achieve the same outcome (for example, strong pipelines, audit trails, approvals localised within a team) and validate them with your assessors.
Make change inspection-ready by default. Put code, infra, policies, migrations, and pipeline configs in version control; ensure every change is traceable from commit to production with automated tests and approvals. Replace spreadsheet theatre with living evidence: logs, build artefacts, deployment histories, alerts, and dashboards tied to your controls.
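One way to read "living evidence" (a sketch under assumed record shapes, not a prescribed tool) is a continuous check that every production deployment traces back to a commit with a passing, reviewed build:

```python
# Hypothetical deployment and build records pulled from the pipeline.
deployments = [
    {"service": "payments", "commit": "a1b2c3d", "at": "2015-03-02T10:15Z"},
    {"service": "payments", "commit": "feedc0d", "at": "2015-03-02T18:40Z"},
]
builds = {"a1b2c3d": {"tests_passed": True, "reviewed_by": "carol"}}

def evidence_gaps(deploys, build_index):
    """Deployments with no passing build on record: auditors (and teams)
    see these continuously, not once a year."""
    return [d for d in deploys
            if d["commit"] not in build_index
            or not build_index[d["commit"]]["tests_passed"]]

for d in evidence_gaps(deployments, builds):
    print(f"evidence gap: {d['service']} @ {d['commit']} ({d['at']})")
```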
In practice, this means GRC and product teams sharing goals, language, and measures; controls designed for flow; automated, continuous evidence; and risk prioritised with the business. Done well, you get better governance and faster delivery: fewer bottlenecks, smaller batch sizes, shorter feedback cycles, and clearer accountability without compromising compliance.
Chapter 13. Evolve Financial Management to Drive Product Innovation
Traditional, centralised, project-oriented financial management slows innovation. Annual budgets bundle four different jobs into one rigid ritual: targets, forecasts, resource allocation, and performance evaluation. That encourages gaming the numbers, optimises for "hitting budget", and times decisions to the finance calendar instead of customer opportunity. It also distorts choices through CapEx/OpEx rules and treats product teams as cost centres rather than value creators.
Unbundle the budget. Set ambitious, relative targets tied to outcomes; maintain unbiased rolling forecasts that are always current; and allocate resources dynamically based on evidence, not calendar. Use strategy deployment to translate goals into clear decision rights and guardrails, then let teams adjust plans continuously as they learn.
Make funding event-driven. Replace big, annual asks with smaller, frequent checkpoints. In the explore domain, give small teams fixed time and spend boundaries to test hypotheses; continue or stop based on measured impact. In the exploit domain, scale funding as evidence accumulates, and don't "punish" efficiency by cutting the team: let them reinvest saved capacity into new experiments so momentum compounds. Manage a portfolio with optionality: many small bets, rapid culling, a few scaled winners.
Run the business on products, not projects. Give each product or service an owner, a stable cross-functional team, and a simple P&L anchored in the team's operating costs. As value and costs shift over the lifecycle, change the team's shape or retire the product; use Cost of Delay to decide cross-cutting investments and to justify decommissioning work.
Improve financial visibility with activity-based thinking. Attribute spend to the activities and products that drive it so leaders see true cost of ownership and trade-offs. Aim for "accurate enough" over false precision; the goal is faster, better decisions, not perfect models.
Stop using budget adherence as a performance proxy. Reward outcomes and learning, not calendar compliance. Share upside broadly so everyone feels and acts like an owner. Separate performance conversations from compensation mechanics; measure teams on customer and business results within their guardrails.
Decouple CapEx/OpEx from product decisions. Let classification follow the work, not drive it. Treat most exploration as OpEx. In exploitation, use simple, transparent rules (for example, a fixed percentage of time for enduring assets) rather than heavy timesheets, and be honest about software lifespans versus depreciation schedules. Make the business call first; let accounting classify afterwards.
Modernise procurement to reduce total cost and increase adaptability. Favour short, outcome-based engagements, incremental delivery, and competitive access for smaller suppliers. Pay for working software and measurable results, not promises and headcount. Avoid long, monolithic contracts, automatic renewals, and awards based solely on unit price; collaborate closely and test vendor fit through real work.
The shift is cultural as much as procedural: push financial responsibility to the edge under clear constraints; compress feedback loops between spend and learning; and make money flow match product flow. When targets, forecasts, and funding become continuous and evidence-based, product teams can innovate quickly and responsibly and finance gets better governance, not less.
Chapter 14. Turn IT into a Competitive Advantage
Treating IT as a cost centre traps organisations in a loop: ageing, interdependent systems get harder to change, unplanned work crowds out planned work, and the only lever left is "efficiency", which often means cutting the very capacity needed to simplify. Breaking that loop starts by reframing IT as a product organisation whose work creates advantage, not just support.
The old IT mindset (projects thrown over a wall to operations, change slowed by heavyweight approvals, and tool choice constrained by central standards) promises stability but rarely delivers it. Data from high-performing organisations shows you can have both speed and safety: shorter lead times and more frequent releases go hand in hand with faster recovery and lower change fail rates when teams version-control everything (code, config, infra), design for observability, integrate small changes into trunk daily, and interact in a high-trust, win-win way. Conversely, external change boards tend to throttle throughput with little stability upside; lightweight peer review works better because it keeps ownership with the people doing the work.
"You build it, you run it" is the simplest, most powerful prescription. Teams that design and ship a service also operate it: on call, monitoring, configuration, architectural choices, and launch strategy (often dark-launched and guarded by feature flags). Autonomy to release must be matched by responsibility to support. Mature organisations add two gates: a production-readiness review before first launch and a handover-readiness review before any transfer to a central SRE group. If a handed-over service falters, it goes back to its product team until it's ready again. This isn't "no-ops"; it raises demand for operational skill across engineering. Make the transition humane: invest in training, rotate people into product teams, and offer fair exits to those who don't want the new role.
With product teams owning what they ship, central IT can focus on leverage: reducing system complexity and building platforms that make safe speed the default. Cloud is the decisive enabler here, not because it's fashionable, but because instant, API-driven self-service removes ticket queues and calendar bottlenecks. The only acceptable definition of a successful "private cloud" is one that lets engineers provision environments and deploy on demand via APIs and measurably improves lead time, deployment frequency, recovery time, and change fail rate. Everything else is ceremony. Public cloud risks (lock-in, data sovereignty) are real but manageable with sensible architecture and strong encryption with disciplined key management. Security theatre about "the firewall" is not.
If you build your own service delivery platform, do it like any other product: small cross-functional team, open-source foundations where possible, internal customers from day one, and success measured by teams' flow and reliability, not by infrastructure vanity metrics. Treat disaster recovery as a practice field, not a binder on a shelf. Run failure-injection exercises, game days, and full DiRT-style events. Make postmortems blameless and actionable, then verify fixes with follow-up drills. If you won't test failure for real, don't run mission-critical infrastructure yourself.
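In the spirit of failure-injection exercises (a sketch, not any specific tool's API), a game-day harness can wrap a dependency and fail a configurable fraction of calls so the team rehearses detection and recovery:

```python
import random

class FailureInjector:
    """Game-day helper: inject failures into a fraction of calls.
    The rate and seed here are illustrative."""
    def __init__(self, failure_rate=0.1, seed=None):
        self.failure_rate = failure_rate
        self.rng = random.Random(seed)

    def call(self, fn, *args, **kwargs):
        if self.rng.random() < self.failure_rate:
            raise ConnectionError("injected failure (game day)")
        return fn(*args, **kwargs)

injector = FailureInjector(failure_rate=0.1, seed=42)
# During a drill, route dependency calls through the injector, e.g.:
# result = injector.call(payment_gateway.charge, order)  # hypothetical call
```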
Standardisation still matters, but not as a veto. Let product teams choose the tools and components that best fit their outcomes and insist that choice comes with ownership of the risks and operating costs. Optimise for the lean definition of performance: faster flow, higher quality, lower total cost, and compliance by design. Any policy that blocks this is a candidate for redesign.
Legacy complexity won't yield to a platform alone. In the short term, create radical transparency about priorities and coupling: publish a simple, organisation-wide, regularly refreshed priority list, and have leaders of dependent systems meet often to align delivery and negotiate trade-offs. In the medium term, reduce integration friction by introducing abstraction layers over hard-to-change systems and by using virtualised services or test doubles so you can exercise end-to-end flows daily without a perfect staging clone. In the long term, rearchitect for independent deployment using the strangler pattern: map your value chains, decide which capabilities are strategic (build and evolve those) and which are utilities (buy as SaaS or vanilla COTS), and avoid customising packages. Change your processes to fit the package, not the other way around; upgrades and agility are far cheaper than bespoke code that petrifies. The payoff looks like fewer systems, unified codebases for shared experiences, nightly regression safety nets, and measurable savings you can reinvest in customer touchpoints.
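An abstraction layer plus a test double, sketched here with hypothetical names, is what lets end-to-end flows run daily without a perfect staging clone of the legacy estate:

```python
from typing import Protocol

class CustomerRecords(Protocol):
    """Abstraction layer over a hard-to-change system of record."""
    def lookup(self, customer_id: str) -> dict: ...

class MainframeCustomerRecords:
    """Real adapter; would call the legacy system in production."""
    def lookup(self, customer_id: str) -> dict:
        raise NotImplementedError("talks to the legacy system")

class FakeCustomerRecords:
    """Test double with canned fixtures for daily end-to-end tests."""
    def __init__(self, fixtures: dict):
        self.fixtures = fixtures
    def lookup(self, customer_id: str) -> dict:
        return self.fixtures[customer_id]

def greeting(records: CustomerRecords, customer_id: str) -> str:
    return f"Hello, {records.lookup(customer_id)['name']}"

assert greeting(FakeCustomerRecords({"42": {"name": "Ada"}}), "42") == "Hello, Ada"
```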
None of this works if IT remains a ticket-taking service desk. Make product teams accountable for the costs and SLAs of what they run; give them the freedom to ship safely, quickly, and often; and measure the system, not the silo: lead time for change, release frequency, time to restore service, and change fail rate. With that clarity, use the Improvement Kata to chip away at constraints week by week. The result is not just cheaper IT, it's an organisation that learns faster than competitors and turns technology into advantage.
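Measuring the system rather than the silo can start very simply; this sketch computes the four measures from hypothetical deployment and incident records:

```python
from datetime import datetime

# Invented records; real ones would come from the pipeline and on-call tooling.
deploys = [
    {"at": datetime(2015, 3, 2), "lead_time_hours": 18, "failed": False},
    {"at": datetime(2015, 3, 3), "lead_time_hours": 6,  "failed": True},
    {"at": datetime(2015, 3, 4), "lead_time_hours": 9,  "failed": False},
]
incidents = [{"opened": datetime(2015, 3, 3, 10), "restored": datetime(2015, 3, 3, 11)}]

span_days = (deploys[-1]["at"] - deploys[0]["at"]).days or 1
metrics = {
    "median_lead_time_h": sorted(d["lead_time_hours"] for d in deploys)[len(deploys) // 2],
    "deploys_per_day": len(deploys) / span_days,
    "change_fail_rate": sum(d["failed"] for d in deploys) / len(deploys),
    "mean_time_to_restore_h": sum(
        (i["restored"] - i["opened"]).total_seconds() / 3600 for i in incidents
    ) / len(incidents),
}
print(metrics)  # trend these over time; the point is the direction, not the snapshot
```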
Chapter 15. Start Where You Are
If you do something and it turns out pretty good, then you should go do something else wonderful, not dwell on it for too long. Just figure out what's next. Steve Jobs
A year from now you will wish you had started today. Karen Lamb
Lasting change in large organisations isn't about a one-off transformation programme; it's about embedding continuous improvement into daily work. Event-based change, triggered by crises or leadership shifts, creates short bursts of activity followed by a return to the old normal. To survive and thrive in a volatile environment, organisations need a permanent sense of curiosity and urgency, balanced with a culture that reduces the anxiety of learning new skills.
The Improvement Kata provides a repeatable pattern for setting direction, understanding current conditions, defining target conditions, and experimenting toward them in rapid cycles. To make it stick, leaders must also practise the Coaching Kata, helping teams internalise the approach, avoid jumping to solutions, and make time for improvement work alongside delivery. This requires limiting work in process so there's capacity for experimentation and accepting that early progress will feel bumpy as habits change.
At the strategic level, the same pattern becomes strategy deployment (Hoshin Kanri, Ambition to Action): agree on purpose and "true north," assess the current situation, choose a small number of high-priority problems, set measurable target conditions, and cascade them through the organisation. The cascade isn't a one-way push; it's a collaborative "catchball" process where each level interprets and translates higher-level objectives into its own context and feeds insights back up. Regular reviews adjust plans based on evidence, and cross-functional conversations along value streams keep alignment strong.
The UK Government Digital Service case shows how to apply these principles: start small with a cross-functional team, deliver visible value quickly, grow iteratively, and replace systems incrementally using guiding principles instead of prescriptive rules. Multidisciplinary teams, continuous delivery, and automation enabled GOV.UK to release frequently, cut costs, and improve the citizen experience, even in a complex, regulated environment.
To begin your own journey:
- Define a clear, inspiring direction in measurable terms, even if it's an ambitious stretch.
- Limit the initial scope to a motivated slice of the organisation, with support from both leadership and the front line.
- Set target objectives without overplanning the path; equip teams to experiment and learn.
- Start with growth-mindset people who are open to trying new ways of working; spread success through early adopters to the majority.
- Show real results early to build credibility and momentum.
- Share learning openly through showcases and retrospectives; refine the vision as you go.
The destination is a resilient, lean enterprise that continually understands its purpose, assesses current conditions, and experiments toward better outcomes for customers, employees, and the business. Whether you're iterating on a product, refining a process, or shifting culture, the underlying discipline is the same: small, scientific steps in the face of uncertainty. Leaders' most important work is to cultivate the high-performance culture described throughout this book so that the organisation can adapt rapidly and prosper amid constant technological, social, and economic change.