Author
Philip Tetlock, Dan Gardner
Year
2015
Review
A great deep dive into the world of forecasting with some practical advice scattered throughout. Using forecasting as a lens to explore team management best practices (psychological safety, diversity, etc.) was illuminating. It's an excellent, evidence-based read, particularly helpful for product managers seeking to enhance their product sense.
Key Takeaways
The 20% that gave me 80% of the value.
- Predictability has its limits, but we shouldn’t dismiss all prediction as futile.
- You can learn to be a superforecaster if you adopt their techniques. Commitment to self-improvement might be the strongest predictor of performance.
- System 1 thinking is designed to jump to conclusions from little evidence. A defining feature of intuitive judgment is its insensitivity to the quality of the evidence on which the judgment is based. It is designed to deliver strong conclusions at lightning speed. If we want to forecast accurately, we need to slow down and engage System 2. Chess champion Magnus Carlsen respects his intuition, but he also does a lot of “double-checking” because he knows that sometimes intuition can let him down and conscious thought can improve his judgment.
- Keeping a track record is the key to assessing forecasters, but also a helpful learning tool for forecasters. If we are serious about measuring and improving forecasts: terms must be precise, timelines must be stated, probabilities must be expressed in numbers and we must have lots of forecasts. Outside prediction tournaments, predictions are rarely apples to apples. So it’s hard to compare forecasters.
- A large tabulated set of probabilistic forecasts enables us to determine the track record of a forecaster. The Brier score is a way to measure how good your predictions are. It captures both calibration (how closely your stated probabilities match observed frequencies) and resolution (how specific and decisive your predictions are). A perfect score is 0, which means all your predictions were spot on. If you always predict a 50/50 chance for everything, or if you just guess randomly, your Brier score will be around 0.5. The worst score you can get for a single prediction is 2.0, which happens if you say something is 100% certain to happen but you're wrong.
- The best predictions are the ones that are both accurate and decisive. Try to be as accurate and specific as possible.
- Compare yourself to benchmarks: random guessing, assuming no change, and other forecasters.
- Hedgehogs hold firm beliefs and use more information to reinforce them, while foxes are pragmatic, versatile, discuss probabilities, and are open to changing their minds. Foxes outperformed hedgehogs in predictions, exhibiting better foresight, calibration, and resolution.
- The Wisdom of Crowds: Aggregating the judgment of many consistently beats the accuracy of the average member of the group. This is true when information is dispersed widely. All the valid information points in one direction, and all the errors cancel themselves out.
- Foxes approach forecasting by doing a kind of aggregation, by seeking out information from many sources and then synthesising it all into a single conclusion. They benefit from a kind of wisdom of the crowds by integrating different perspectives and the information contained within them.
- Enrico Fermi understood that by breaking down a question, we can better separate the knowable and the unknowable. Doing so brings our guessing process out into the light of day where we can inspect it. The net result is a more accurate estimate.
- Starting a forecast with the base rate (the outside view, i.e. how common something is within a broader class) will reduce the anchoring effect.
- Thesis → Antithesis → Synthesis. You now need to merge the outside view and the inside view: how does one affect the other? You can train yourself to generate different perspectives. Write down your judgments and scrutinise them. Seek evidence that you're wrong. Beliefs are hypotheses to be tested, not treasures to be guarded.
- Dragonfly forecasting: superforecasters pursue point-counterpoint discussions routinely. Constantly encountering different perspectives, they are actively open-minded.
- Superforecasters tend to be probabilistic thinkers.
- When a question is loaded with irreducible uncertainty, be cautious: keep estimates inside the maybe zone between 35% and 65%, and move out of it only tentatively.
- The best forecasters are precise. They sometimes debate differences that most of us see as inconsequential: 3% vs 4%, or 1% vs 0.5%. Granularity was a predictor of accuracy.
- A common method emerged among Superforecasters:
- Unpack the question into components.
- Distinguish between the known and unknown and leave no assumptions unscrutinised.
- Adopt the outside view and put the problem into a comparative perspective that downplays its uniqueness and treats it as a special case of a wider class of phenomena.
- Then adopt the inside view that plays up the uniqueness of the problem.
- Explore the similarities and differences between your views and those of others and from the wisdom from crowds.
- Synthesise all these different views into a single vision as acute as that of a dragonfly.
- Express your judgment as precisely as you can, using a finely grained scale of probability.
- Update to reflect the latest available information.
- Superforecasters update forecasts more regularly, but they make smaller changes (e.g. 3.5%). Train your brain to think in smaller units of doubt.
- The Bayesian belief updating equation: your new belief should depend on your prior belief (and all the knowledge that informed it) multiplied by the “diagnostic value” of the new information. Bayes’ core insight is to gradually get closer to the truth by updating in proportion to the weight of the evidence.
- Grit is passionate perseverance of long-term goals, even in the face of frustration and failure. Married with a growth mindset, it is a potent force for personal progress. Superforecasters are in perpetual beta, always learning.
- Superforecasters have a ‘growth mindset’: they believe their abilities are largely the product of effort. Failure is an opportunity to learn: to identify mistakes, spot new alternatives, and try again.
- We learn new skills by doing. Informed practice will accelerate your progress (knowing what mistakes to look out for and what best practice looks like).
- Typically meteorologists and bridge players don’t suffer from overconfidence, as they both get clear, prompt feedback.
- Put as much effort into postmortems with teammates as you put into initial forecasts.
- Superforecasters are cautious, humble and nondeterministic. They tend to be actively open-minded, intellectually curious, introspective and self-critical. They aren’t wedded to ideas. They’re capable of stepping back. They value and synthesise diverse views. They think in small units of doubt, update forecasts thoughtfully, and are aware of their cognitive biases.
- Groupthink: Members of any small cohesive group tend to unconsciously develop a number of shared illusions and related norms that interfere with critical thinking and reality testing. Groups that get along too well don’t question assumptions or confront uncomfortable facts.
- Aggregation can only do its magic when people form judgments independently.
- Precision questioning (from Dennis Matthies and Monica Worline) can help you tactfully dissect the vague claims people often make.
- Do a team pre-mortem: assume a course of action has failed and to explain why. It helps team members feel safe and express doubts.
- Aim for a group of opinionated people who engage one another in pursuit of the truth. Foster a culture of sharing.
- Diversity trumps ability: the aggregation of different perspectives is a potent way to improve judgment. The more diverse the team, the greater the chance that some will possess scraps of information that others don’t.
- The principle of "Auftragstaktik" or "mission command" emphasises that decision-making power should be decentralised. Commanders should provide the goal but not dictate the methods, allowing those on the ground to adapt quickly to changing circumstances. This strategy blends strategic coherence with decentralised decision making.
- No plan survives contact with the enemy. Two cases never will be exactly the same.
- Improvisation is essential.
- Decisive action is required, so draw a line between deliberation and implementation. Once a decision has been made, forget uncertainty and complexity. Act!
- Mission Command: Let your people know what you want them to accomplish, but don’t tell them how to achieve those goals.
- Smart people are always tempted by a simple cognitive shortcut: I know the answer, I don’t need to think long and hard about it. Don’t fall for it.
- What makes Superforecasters good is what they do: the hard work of research, the careful thought and self-criticism, the gathering and synthesising of other perspectives, the granular judgments and relentless updating.
- Our training guidelines urge forecasters to mentally tinker with “the question asked” (e.g. explore how answers to a timing question might change if the cutoff date were six months out instead of twelve). Such thought experiments can stress-test the adequacy of your mental model.
- The ‘black swan’ is an event literally inconceivable before it happens. But Taleb also offers a more modest definition of a black swan as a highly improbable consequential event. To the extent that such forecasts can anticipate the consequences of events like 9/11, and these consequences make a black swan what it is, we can forecast black swans.
- The limits on predictability are themselves the predictable results of the butterfly dynamics of nonlinear systems.
- Humility should not obscure the fact that people can, with considerable effort, make accurate forecasts about at least some developments that really do matter.
Ten Commandments for Superforecasters
- Triage. Focus on questions where work can pay off. Don’t waste time either on easy “clocklike” questions (where simple rules of thumb can get you close) or on impenetrable “cloud-like” questions (where fancy models won’t help). Concentrate on questions in the Goldilocks zone of difficulty, where effort pays off the most.
- Break seemingly intractable problems into tractable sub-problems. Channel the playful but disciplined spirit of Enrico Fermi. Decompose the problem into its knowable and unknowable parts. Flush ignorance into the open. Expose and examine your assumptions. Dare to be wrong by making your best guesses. Better to discover errors quickly than to hide them behind vague verbiage.
- Strike the right balance between inside and outside views. Nothing is 100% unique. Look for comparison classes even for seemingly unique events. Ask: How often do things of this sort happen in situations of this sort?
- Strike the right balance between under- and overreacting to new evidence. Belief updating pays off in the long term. Skilful updating requires spotting non-obvious lead indicators: what would have to happen before X could occur.
- Look for the clashing causal forces at work in each problem. Acknowledge counterarguments. List in advance the signs that would nudge you toward one side or the other. Synthesis is an art that requires reconciling irreducibly subjective judgments. Create a nuanced view.
- Strive to distinguish as many degrees of doubt as the problem permits but no more. Nuance matters. The more degrees of uncertainty you can distinguish, the better a forecaster you are likely to be. In poker you need to know a 55/45 from 45/55.
- Strike the right balance between under- and overconfidence, between prudence and decisiveness. Long-term accuracy requires getting good scores on both calibration and resolution. Know your track record, and find creative ways to tamp down both types of forecasting errors (misses and false alarms).
- Look for the errors behind your mistakes but beware of rearview-mirror hindsight biases. Don’t try to justify or excuse your failures. Own them! Conduct unflinching postmortems. Ask: Where exactly did I go wrong? Don’t forget to do postmortems on your successes too.
- Bring out the best in others and let others bring out the best in you. Master perspective taking (understanding the arguments of the other side), precision questioning (helping others to clarify their arguments so they are not misunderstood), and constructive confrontation (learning to disagree without being disagreeable).
- Master the error-balancing bicycle. Implementing each commandment requires balancing opposing errors. Learning requires doing, with good feedback that leaves no ambiguity about whether you are succeeding.
- Don’t treat commandments as commandments. Guidelines are the best we can do in a world where nothing is certain or exactly repeatable. Superforecasting requires constant mindfulness, even when dutifully trying to follow these commandments.
Deep Summary
Longer form notes, typically condensed, reworded and de-duplicated.
Chapter 1: An Optimistic Skeptic
- Predictability has its limits, but we shouldn’t dismiss all prediction as futile.
- Asking whether the future is predictable or not is a false dichotomy. Unpredictability and predictability coexist uneasily. How predictable something is depends on what we are trying to predict, how far into the future, and under what circumstances.
- The Good Judgment Project was sponsored by IARPA (the Intelligence Advanced Research Projects Activity), an agency within the intelligence community that aims to improve American intelligence.
- The project demonstrated that some people are great at forecasting, and that it’s not who they are but what they do that makes them great.
- Foresight isn’t something you’re born with, it’s more akin to a process that you follow. A way of thinking, of gathering information, of updating beliefs. Habits that can be learned and cultivated by any intelligent, thoughtful, determined people.
- A basic tutorial that takes about 60 minutes to read improved accuracy by 10%.
“The heavyweights know the difference between a 60/40 bet and a 40/60 bet.” (Annie Duke)
- Super-forecasting demands thinking that is open-minded, careful, curious, self-critical and focused.
- Commitment to self-improvement was the strongest predictor of performance.
Chapter 2: Illusions of Knowledge
- If we don’t examine how we make mistakes, we will keep making them. This stagnation can go on for a lifetime.
- Austin Bradford Hill laid out the template for modern medical investigation: the randomised controlled trial.
- Randomly assigning people to one group or the other would mean whatever differences there are among them should balance out if enough people participated in the experiment.
- Snap judgments are sometimes essential. System 1 is designed to jump to conclusions from little evidence. A defining feature of intuitive judgment is its insensitivity to the quality of the evidence on which the judgment is based. System 1 is designed to deliver strong conclusions at lightning speed.
- The problem is that we move too fast from confusion and uncertainty to a clear and confident conclusion without spending any time in between.
- Scientists must be able to answer the question “What would convince me I am wrong?” If they can’t, it’s a sign they have grown too attached to their beliefs.
- Attribute substitution is when we replace a complex question with a simpler one. For example, if it's difficult to assess the risk of a shadow in the grass, we may instead ask if we can recall a similar dangerous situation.
- Magnus Carlsen respects his intuition, but he also does a lot of “double-checking” because he knows that sometimes intuition can let him down and conscious thought can improve his judgment.
Chapter 3: Keeping Score
- A track record can help you become a better forecaster, but it requires judgment. Ambiguous language can make judging forecasts difficult or impossible. For example, a forecast without a time frame is absurd.
- Forecasts rely on implicit understandings of key terms. Vague verbiage is more the rule than the exception. And it too renders forecasts untestable.
- If you forecast a 70% likelihood that Trump wins the next election and he doesn’t win, that doesn’t mean you made a bad forecast (judging it so is the wrong-side-of-maybe fallacy). It’s hard to know how accurate you were without rerunning history hundreds of times.
- Forecasting is all about estimating the likelihood of something happening.
- Describing a forecast with vague language is a bad idea: people will interpret phrases like “there’s a serious possibility that…” to mean very different things. Intelligence communities have become aware that using vague language can unexpectedly mislead politicians.
- Sherman Kent suggested that terms should have assigned numerical meanings inside the intelligence community (see the table below), but this was never adopted. The safe thing to do was to stick with elastic language.
- If we are serious about measuring and improving forecasts: terms must be precise, timelines must be stated, probabilities must be expressed in numbers and we must have lots of forecasts.
- Everything changes when we have many probabilistic forecasts. Forecasts can be tabulated and a track record determined. This enables calibration.
- The Brier score is a way to measure how good your predictions are. It looks at two things:
- Calibration: how closely your stated probabilities match reality overall (things you call 70% likely should happen about 70% of the time).
- Resolution: how specific and decisive your predictions are (moving away from a safe 50% toward 0% or 100% when warranted).
- A perfect score is 0, which means all your predictions were spot on. If you always predict a 50/50 chance for everything, or if you just guess randomly, your Brier score will be around 0.5. The worst score you can get is 2.0. That happens if you say something is 100% certain to happen, but then it doesn't happen at all. (A toy calculation is sketched after the table below.)
- When making predictions, try to be as accurate and specific as possible. Don't just hedge your bets with 50/50 chances, but don't be overconfident either. The best predictions are the ones that are both accurate and decisive.
- To compare track records you need both benchmarks and comparability.
- Do you do better than random?
- Do you do better than assuming ‘no change’? (as with state outcomes in US elections)
- Do you do better than other forecasters?
- Are you forecasting things that are equally challenging? Equally variable? You’d expect a worse (higher) Brier score when forecasting things that are highly variable or far out into the future.
- Outside prediction tournaments, predictions are rarely apples to apples.
- The result from the ‘Expert Political Judgment’ paper was that the average expert was roughly as accurate as a dart-throwing chimpanzee.
- The study found two groups "hedgehogs" and "foxes":
- Hedgehogs held rigid beliefs and focused their thinking around big ideas. More information increased their confidence but not their accuracy, because they used it to confirm what they already believed.
- Foxes were pragmatic, used a variety of tools and sources, discussed possibilities and probabilities instead of certainties. Foxes were willing to admit mistakes and change their minds.
- The foxes outperformed the hedgehogs, showing greater foresight, calibration, and resolution in their predictions.
- The Wisdom of Crowds: Aggregating the judgment of many consistently beats the accuracy of the average member of the group. In any group there are likely to be individuals who beat the group (but typically they’re lucky, and if you repeat the exercise the people change). It’s hard to consistently beat the crowd.
- This is true when information is dispersed widely: one person has a scrap, another holds a more important piece, a third has a few bits, and so on.
- All the valid information points in one direction, and all the errors cancel themselves out.
- So aggregating the judgments of many people who know nothing produces a lot of nothing. The more the group knows, the stronger the effect.
- Foxes approach forecasting by doing a kind of aggregation, seeking out information from many sources and then synthesising it all into a single conclusion. They benefit from a kind of wisdom of the crowds by integrating different perspectives and the information contained within them.
Sherman Kent’s proposed mapping of terms to numerical odds:

| Certainty | General Area of Possibility |
| --- | --- |
| 100% | Certain |
| 93% (give or take about 6%) | Almost certain |
| 75% (give or take about 12%) | Probable |
| 50% (give or take about 10%) | Chances about even |
| 30% (give or take about 10%) | Probably not |
| 7% (give or take about 5%) | Almost certainly not |
| 0% | Impossible |
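To make the scoring concrete, here is a minimal sketch of the Brier score as described above (the original two-outcome form, so scores run from 0 to 2). The forecasts and outcomes are invented for illustration:

```python
def brier_score(forecast: float, outcome: int) -> float:
    """Original (1950) Brier score for a binary question: squared error
    summed over both outcomes, so 0.0 is perfect and 2.0 is the worst."""
    return (forecast - outcome) ** 2 + ((1 - forecast) - (1 - outcome)) ** 2

print(brier_score(1.0, 0))  # 2.0  -> said "certain", was wrong: worst case
print(brier_score(0.5, 1))  # 0.5  -> permanent 50/50 hedging
print(brier_score(0.9, 1))  # 0.02 -> accurate and decisive

# A track record is just the average over many (forecast, outcome) pairs.
track_record = [(0.8, 1), (0.3, 0), (0.6, 0), (0.95, 1)]
print(sum(brier_score(p, o) for p, o in track_record) / len(track_record))  # ~0.25
```

Note how hedging everything at 50/50 pins you at 0.5: only forecasts that are both accurate and decisive approach 0.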
Chapter 4: Superforecasters
- The WMD debacle prompted postmortems that revealed the intelligence community had never seriously explored the idea that it could be wrong.
- The CIA gives analysts a manual written by Richards Heuer that lays out relevant insights from psychology, including biases that can trip up an analyst’s thinking. But we don’t know if it improves people’s judgment; it has never been tested.
- Organisations routinely buy forecasts without checking for accuracy.
- IARPA sponsored a tournament to see who could invent the best methods for making the kinds of forecasts that intelligence analysts make every day. Research teams would compete against one another and an independent control group.
- Taking the top forecasters, aggregating their forecasts and making them a little more extreme (by pushing them closer to 100% or 0%) beats every control group, and even Intelligence Community analysts with access to secret information. (A toy version of this extremising step is sketched at the end of this chapter’s notes.)
- Doug Lorch’s accuracy was just as impressive; here’s how his Brier score compared:
- 0 = Perfect
- 0.14 = Doug (with a Superforecaster team)
- 0.22 = Doug
- 0.5 = Random guessing
- 2.0 = the perfect opposite of reality
- But were they just lucky? No, they saw the opposite of regression to the mean: the superforecasters as a whole, including Doug Lorch, actually increased their lead over all other forecasters.
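The book doesn’t spell out the exact extremising formula the winning aggregation used, but a common transform from the forecast-aggregation literature illustrates the idea; the exponent `a` below is a made-up tuning parameter, not a value from the source:

```python
def extremize(p: float, a: float = 2.5) -> float:
    """Push an aggregate probability away from 0.5.
    a > 1 extremises; a = 1 leaves the forecast unchanged."""
    return p ** a / (p ** a + (1 - p) ** a)

# Invented example: five forecasters, each holding part of the evidence.
forecasts = [0.70, 0.65, 0.80, 0.60, 0.75]
mean = sum(forecasts) / len(forecasts)
print(mean)             # 0.70
print(extremize(mean))  # ~0.89
```

The rationale: if each forecaster could see all the dispersed information the others hold, each would become more confident, so the aggregate is nudged toward certainty.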
Chapter 5: Supersmart?
- Enrico Fermi was renowned for being the master of back-of-the-envelope estimates. He understood that by breaking down a question, we can better separate the knowable and the unknowable. Doing so brings our guessing process out into the light of day where we can inspect it. The net result is a more accurate estimate.
- You can nail the famous ‘how many piano tuners are there in Chicago?’ question by guessing just four facts (a back-of-the-envelope version is sketched at the end of this chapter’s notes):
- The number of pianos in Chicago
- How often pianos are tuned each year
- How long it takes to tune a piano
- How many hours a year the average piano tuner works
- Fermi would advise setting a confidence interval at a range that you are 90% sure contains the right answer.
- The first thing you should do is find the ‘outside view’: the base rate, how common something is within a broader class (the ‘inside view’ is the specifics of the particular case). Then look at the situation today and adjust that number up or down. The outside view should come first to lessen the effect of anchoring. Let the outside view be the anchor; a better anchor is a distinct advantage.
- A good exploration of the inside view is targeted and purposeful: it is an investigation. What would it take for the hypothesis to be true? What would it take for each of those supporting elements to be true? Do any research that can help.
- Thesis → Antithesis → Synthesis
- You now need to merge the outside view and the inside view: how does one affect the other?
- Always be looking for other views you can synthesise into your own. You can even train yourself to generate different perspectives. Writing down your judgments is a way of distancing yourself from them, allowing you to step back and scrutinise your view. Do I agree with this? Are there holes in this? Should I be looking for something else to fill this in? Would I be convinced by this if I were somebody else?
- Merely asking people to seriously consider why their forecast might be wrong, and to make a second judgment, is almost as powerful as getting another forecast from a second person. This is ‘the crowd within’: step back, judge your own thinking, and offer a different perspective.
- Sophisticated forecasters know about confirmation bias and seek out evidence that they’re wrong.
- Dragonfly forecasting: superforecasters pursue point-counterpoint discussions routinely. Constantly encountering different perspectives, they are actively open-minded.
- For superforecasters, beliefs are hypotheses to be tested, not treasures to be guarded.
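Here is the piano-tuner decomposition as a runnable back-of-the-envelope calculation. Every number is a guess (mine, not the book’s); the point is that each guess is explicit, inspectable, and improvable on its own:

```python
# Fermi-style decomposition of "how many piano tuners are in Chicago?"
pianos_in_chicago  = 50_000  # guess: ~2.5M people, roughly 1 piano per 50
tunings_per_year   = 1       # guess: a piano is tuned about once a year
hours_per_tuning   = 2       # guess: including travel time
tuner_hours_per_yr = 1_600   # guess: 40 hours/week for 40 weeks

tunings_needed = pianos_in_chicago * tunings_per_year   # 50,000 jobs/year
jobs_per_tuner = tuner_hours_per_yr / hours_per_tuning  # 800 jobs per tuner
print(round(tunings_needed / jobs_per_tuner))           # ~62 tuners
```

Each input can now be attacked separately: if the piano count feels shaky, refine that guess without touching the rest.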
Chapter 6: Superquants?
- Encourage people around you to tell you not what they think you want to hear, but what they believe.
- People typically have three predictive mindsets: this is going to happen, this won’t happen, and maybe. However, since nothing is guaranteed, the only viable mindset is 'maybe'.
- A parent may pay significantly more to lower her child's disease risk from 5% to 0% than from 10% to 5%. This is because a drop to 0% provides certainty, which we value more than mere percentage reductions.
- Many people translate high probability forecasts (80%) into ‘this will happen.’
- Superforecasters tend to be probabilistic thinkers. An awareness of irreducible uncertainty is the core of probabilistic thinking. There are two types of uncertainty.
- Epistemic uncertainty: something you don’t know but is, at least in theory, knowable.
- Aleatory uncertainty: something you not only don’t know but that is unknowable, no matter how much you want to know it, like whether it will rain in London on this day next year.
- When a question is loaded with irreducible uncertainty, be cautious: keep estimates inside the maybe zone between 35% and 65%, and move out of it only tentatively.
- The best forecasters are precise. They sometimes debate differences that most of us see as inconsequential: 3% vs 4%, or 1% vs 0.5%. Granularity was a predictor of accuracy.
- Probabilistic thinkers are less distracted by ‘why’ questions and focus on ‘how’. They reject the notion of fate.
Chapter 7: Supernewsjunkies?
- A common method emerged among Superforecasters:
- Unpack the question into components.
- Distinguish between the known and unknown and leave no assumptions unscrutinised.
- Adopt the outside view and put the problem into a comparative perspective that downplays its uniqueness and treats it as a special case of a wider class of phenomena.
- Then adopt the inside view that plays up the uniqueness of the problem.
- Explore the similarities and differences between your views and those of others and from the wisdom from crowds.
- Synthesise all these different views into a single vision as acute as that of a dragonfly.
- Express your judgment as precisely as you can, using a finely grained scale of probability.
- Update to reflect the latest available information.
- Superforecasters update much more frequently, on average, than regular forecasters.
- The challenge is to identify and respond to subtler information and zero in on the eventual outcome faster than others.
- You need to avoid both underreacting and overreacting.
- If people make a public commitment, they’re more likely to be resistant to changing it. Superforecasters aren’t experts or professionals, so having little ego invested in each forecast is an advantage.
- A typical day in the stock market suggests there’s a lot of overreaction to news, judging by the volume and volatility of trading.
- Superforecasters update forecasts more regularly, but they make smaller changes (e.g. 3.5%). Train your brain to think in smaller units of doubt.
- The Bayesian belief-updating equation: Posterior Odds = Likelihood Ratio * Prior Odds
- Your new belief should depend on your prior belief (and all the knowledge that informed it) multiplied by the “diagnostic value” of the new information. Bayes’ core insight is to gradually get closer to the truth by updating in proportion to the weight of the evidence. (A worked example follows below.)
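A minimal sketch of the odds form of the update rule from the bullets above; the 60% prior and the likelihood ratios are invented for illustration:

```python
def bayes_update(prior_prob: float, likelihood_ratio: float) -> float:
    """Posterior odds = likelihood ratio * prior odds,
    converted back to a probability."""
    prior_odds = prior_prob / (1 - prior_prob)
    posterior_odds = likelihood_ratio * prior_odds
    return posterior_odds / (1 + posterior_odds)

# Strong evidence: twice as likely if the event is true than if not.
print(bayes_update(0.60, 2.0))  # 0.75
# Weak evidence barely moves the needle: the small, frequent
# updates superforecasters favour.
print(bayes_update(0.60, 1.1))  # ~0.62
```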
Chapter 8: Perpetual Beta
- Superforecasters have a ‘growth mindset’: they believe their abilities are largely the product of effort. Pay attention to information that can stretch your knowledge, and make learning a priority.
- For Keynes, failure was an opportunity to learn: to identify mistakes, spot new alternatives, and try again.
- We learn new skills by doing. We improve those skills by doing more. Tacit knowledge like riding a bicycle is the sort we only get from bruising experience.
- Not all practice improves skill, it needs to be informed practice. Knowing what mistakes to look out for and what best practice looks like really helps. Fortune favours the prepared mind.
- Research on calibration routinely finds people are too confident. But meteorologists and bridge players don’t suffer from overconfidence, as they both get clear, prompt feedback. To learn from failure, we must know when we fail. (A simple self-calibration check is sketched at the end of this chapter’s notes.)
- Vague language and feedback delays stunt your forecasting feedback loop. Ambiguous language, and relying on flawed memories to retrieve old forecasts, make it impossible to learn from experience.
- Hindsight Bias: When knowing the outcome skews your perception of what you thought before you knew the outcome.
- Put as much effort into postmortems with teammates as you put into initial forecasts.
- Grit is passionate perseverance of long-term goals, even in the face of frustration and failure. Married with a growth mindset, it is a potent force for personal progress. Superforecasters are in perpetual beta, always learning.
- Superforecasters are cautious, humble and nondeterministic. They tend to be actively open-minded, intellectually curious, introspective and self critical.
- They aren’t wedded to ideas. They’re capable of stepping back. They value and synthesise diverse views. They think in small units of doubt, update forecasts thoughtfully, and are aware of their cognitive biases.
- Superforecasters tend to have a growth mindset and a considerable amount of grit.
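A sketch of the kind of prompt, unambiguous feedback meteorologists get by default: record (probability, outcome) pairs, bin them, and compare stated probability with observed frequency. The track record below is made up:

```python
from collections import defaultdict

def calibration_table(track_record):
    """Bin (probability, outcome) pairs to the nearest 10% and compare
    the average stated probability with the observed frequency."""
    bins = defaultdict(list)
    for prob, outcome in track_record:
        bins[round(prob, 1)].append((prob, outcome))
    for key in sorted(bins):
        pairs = bins[key]
        stated = sum(p for p, _ in pairs) / len(pairs)
        observed = sum(o for _, o in pairs) / len(pairs)
        print(f"stated ~{stated:.0%}, observed {observed:.0%} (n={len(pairs)})")

# Invented history: forecasts clustered around 30% and around 70%.
calibration_table([(0.29, 0), (0.30, 1), (0.33, 0),
                   (0.68, 1), (0.70, 1), (0.72, 0)])
```

If your 70% forecasts come true only half the time, you now know exactly where the overconfidence lives.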
Chapter 9: Superteams
- Members of any small cohesive group tend to unconsciously develop a number of shared illusions and related norms that interfere with critical thinking and reality testing. Groups that get along too well don’t question assumptions or confront uncomfortable facts.
- Teams can cause terrible mistakes through groupthink, or they can sharpen judgment and, by sharing information and perspectives, accomplish together what cannot be done alone.
- Aggregation can only do its magic when people form judgments independently. The independence of judgments helps keep errors more or less random, so they cancel each other out. (A toy simulation of this cancellation is sketched at the end of this chapter’s notes.)
- If you can keep questioning yourself and your teammates, and welcome debate, your group can become more than the sum of its parts.
- Precision questioning (from Dennis Matthies and Monica Worline) can help you tactfully dissect the vague claims people often make.
- Do a team premortem: assume a course of action has failed and to explain why. It helps team members feel safe and express doubts.
- There is no way any individual can cover as much ground (in terms of information collection) as a good team does. Even if you had unlimited hours, it would be less fruitful, given different research styles. Each team member brings something different.
- Markets make mistakes. Sometimes they lose their collective minds. Superteams beat prediction markets by 15% to 30%.
- How the group thinks collectively is an emergent property. Aim for a group of opinionated people who engage one another in pursuit of the truth.
- Foster a culture of sharing: Adam Grant categorises people as “givers,” “matchers,” and “takers.” A pro-social example of the giver can improve the behaviour of others, helping everyone.
- Diversity trumps ability: the aggregation of different perspectives is a potent way to improve judgment. Combining uniform perspectives only produces more of the same. The more diverse the team, the greater the chance that some will possess scraps of information that others don’t.
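A toy simulation of the error-cancellation argument, with all numbers invented: each forecaster sees the truth through independent noise, and the crowd’s average lands far closer to the truth than the typical individual does:

```python
import random

random.seed(0)
TRUTH, N = 0.70, 100  # true probability and crowd size (both invented)

def noisy_forecast():
    # Independent error around the truth, clamped to a valid probability.
    return min(0.99, max(0.01, random.gauss(TRUTH, 0.15)))

crowd = [noisy_forecast() for _ in range(N)]
avg_individual_error = sum(abs(f - TRUTH) for f in crowd) / N
crowd_error = abs(sum(crowd) / N - TRUTH)

print(f"average individual error: {avg_individual_error:.3f}")  # typically ~0.12
print(f"error of the crowd mean:  {crowd_error:.3f}")           # typically ~0.015
```

If the forecasters’ errors were correlated, as under groupthink, the cancellation would largely disappear, which is why independence and diversity matter.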
Chapter 10: The Leader’s Dilemma
- The superforecaster model can make good leaders superb, and make organisations smarter, more adaptable, and more effective.
- Helmuth von Moltke was a Prussian general whose victories helped unify Germany. His military tactics are applicable to business:
- No plan survives contact with the enemy. Two cases never will be exactly the same. Improvisation is essential.
- If necessary, discuss your orders. Even criticise them. And if you absolutely must (and you had better have a good reason), disobey them.
- Clarification of the enemy situation is an obvious necessity, but waiting for information in a tense situation is seldom the sign of strong leadership—more often of weakness.
- The first criterion in war remains decisive action. Draw a line between deliberation and implementation. Once a decision has been made, forget uncertainty and complexity. Act!
- You must possess determination to overcome obstacles and accomplish your goals while remaining open to the possibility that you might have to try something else. The art of leadership is a timely recognition of circumstances and of the moment when a new decision is required.
- The principle of "Auftragstaktik" or "mission command" emphasises that decision-making power should be decentralised. Commanders should provide the goal but not dictate the methods, allowing those on the ground to adapt quickly to changing circumstances. This strategy blends strategic coherence with decentralised decision making.
- Bosses everywhere feel the tension between control and innovation, which is why Moltke’s spirit can be found in organisations that have nothing to do with bullets and bombs.
- Mission Command: Let your people know what you want them to accomplish, but don’t tell them how to achieve those goals.
- Smart people are always tempted by a simple cognitive shortcut: I know the answer, I don’t need to think long and hard about it.
- If you’re not learning all the time, you will fail. You need intellectual humility, it compels the careful reflection necessary for good judgment.
Chapter 11: Are They Really So Super?
- Superforecasters tend to be more intelligent and open-minded than most. What makes them so good though is what they do: the hard work of research, the careful thought and self-criticism, the gathering and synthesising of other perspectives, the granular judgments and relentless updating.
- Our training guidelines urge forecasters to mentally tinker with “the question asked” (e.g. explore how answers to a timing question might change if the cutoff date were six months out instead of twelve). Such thought experiments can stress-test the adequacy of your mental model.
- The ‘black swan’ is an event literally inconceivable before it happens. But Taleb also offers a more modest definition of a black swan as a highly improbable consequential event.
- To the extent that such forecasts can anticipate the consequences of events like 9/11, and these consequences make a black swan what it is, we can forecast black swans.
- There are, though, limits on predictability, and they are the predictable results of the butterfly dynamics of nonlinear systems.
- Humility should not obscure the fact that people can, with considerable effort, make accurate forecasts about at least some developments that really do matter.
Chapter 12: What’s Next?
- Forecast, measure, revise: it is the surest path to seeing better.
- Evidence-based policy is a movement modelled on evidence-based medicine, with the goal of subjecting government policies to rigorous analysis so that legislators will actually know—not merely think they know—whether policies do what they are supposed to do.
- What would help is a sweeping commitment to evaluation: keep score, analyse results, learn what works and what doesn’t. Doing so requires numbers, and numbers leave forecasters vulnerable to the wrong-side-of-maybe fallacy, with no cover the next time they blow a big call.
- Numbers must be constantly scrutinised and improved, which can be an unnerving process because it is unending. Progressive improvement is attainable. Perfection is not.
- The tournament questions were narrow, but it might be desirable to answer a question like ‘How does this all turn out?’ Use Bayesian question clustering: under a big question like that, ask and forecast smaller questions. Patterns will emerge, and so will your confidence in answering the bigger question.
Ten Commandments for Superforecasters
- Triage. Focus on questions where work can pay off. Don’t waste time either on easy “clocklike” questions (where simple rules of thumb can get you close) or on impenetrable “cloud-like” questions (where fancy models won’t help). Concentrate on questions in the Goldilocks zone of difficulty, where effort pays off the most.
- Break seemingly intractable problems into tractable sub-problems. Channel the playful but disciplined spirit of Enrico Fermi. Decompose the problem into its knowable and unknowable parts. Flush ignorance into the open. Expose and examine your assumptions. Dare to be wrong by making your best guesses. Better to discover errors quickly than to hide them behind vague verbiage.
- Strike the right balance between inside and outside views. Nothing is 100% unique. Look for comparison classes even for seemingly unique events. Ask: How often do things of this sort happen in situations of this sort?
- Strike the right balance between under- and overreacting to new evidence. Belief updating pays off in the long term. Skilful updating requires spotting non-obvious lead indicators: what would have to happen before X could occur.
- Look for the clashing causal forces at work in each problem. Acknowledge counterarguments. List in advance the signs that would nudge you toward one side or the other. Synthesis is an art that requires reconciling irreducibly subjective judgments. Create a nuanced view.
- Strive to distinguish as many degrees of doubt as the problem permits but no more. Nuance matters. The more degrees of uncertainty you can distinguish, the better a forecaster you are likely to be. In poker you need to know a 55/45 from 45/55.
- Strike the right balance between under- and overconfidence, between prudence and decisiveness. Long-term accuracy requires getting good scores on both calibration and resolution. Know your track record, and find creative ways to tamp down both types of forecasting errors (misses and false alarms).
- Look for the errors behind your mistakes but beware of rearview-mirror hindsight biases. Don’t try to justify or excuse your failures. Own them! Conduct unflinching postmortems. Ask: Where exactly did I go wrong? Don’t forget to do postmortems on your successes too.
- Bring out the best in others and let others bring out the best in you. Master perspective taking (understanding the arguments of the other side), precision questioning (helping others to clarify their arguments so they are not misunderstood), and constructive confrontation (learning to disagree without being disagreeable).
- Master the error-balancing bicycle. Implementing each commandment requires balancing opposing errors. Learning requires doing, with good feedback that leaves no ambiguity about whether you are succeeding.
- Don’t treat commandments as commandments. Guidelines are the best we can do in a world where nothing is certain or exactly repeatable. Superforecasting requires constant mindfulness, even when dutifully trying to follow these commandments.