Super-Forecasting – not really a Dark Art

AKA “stating the bleeding obvious in 340 pages”

Front cover of Super-Forecasting: The Art and Science of Prediction, by Philip Tetlock and Dan Gardner, published by Penguin
What’s Super-Forecasting? The person who appears to be running Britain from behind the scenes – Dominic Cummings – has famously advertised for “weirdos” to join his team of Super-Forecasters, and has several times publicly referred to this book.

I don’t think of myself as a weirdo, but if that’s the methodology he’s using to direct the country’s leadership, I thought I had better read it. It’s been widely advertised too, so I thought it was new thinking – but no, it was published in 2015, so it turns out that I’m 5 years behind the curve. Couldn’t have super-forecasted that….

Like most business books, it becomes tediously repetitive, and never expresses anything in one sentence if a whole chapter will do. If you’re as behind the times as I am, haven’t read the book, and you care about how your government is planning the future, I’m sure you want to know what this is all about too.

So to save you ploughing through 340 pages (OK, 289 excluding cross-references and index), let’s see if I can explain – as I understand it – in around 340 words (OK, 940 words).

The book describes projects carried out over the last 30 years or so by the authors, Philip Tetlock and Dan Gardner, that proved that “ordinary people” could come up with more accurate forecasts of world political events than the “experts” working for US government agencies. How did they do that?

Firstly, they weren’t ordinary people. They all had a first-class education and were avid consumers of news media. They were also of the older generation, so had long life experience.

What did they do to beat the experts? They did what I would have expected the experts to do – the only difference being that they didn’t have access to classified intelligence. They answered relatively short-term questions like “will the FTSE100 index recover to 7000 by March 2021?” (but not that one, since the book was written in 2015).

They did a lot of research. They looked at all the component drivers of the situation as it currently appeared. They examined the nearest similar events in the past and how they had turned out. Having done that, they gave their best estimate (guess) of the outcome – as a percentage probability, never a “yes” or “no” (apparently much to the frustration of the politicians who were relying on their forecasts).

Translation: Google everything that might be relevant, add your gut feel and put down a number.

So my answer to my hypothetical question above is “I think it’s 60% certain to pass 7000 by March 2021”. (I’m allowed to pretend to be a super-forecaster for the purpose of this article, because I’m well educated, old-ish and read the newspaper every day).

But then, tomorrow, and the next day, and the next day, super-forecasters will go back to the last forecast they made and adjust it in the light of new information. So tomorrow I might be 65% certain or 55% certain, depending on what’s happened overnight. But that alone wouldn’t make me a super-forecaster – I’d need to be saying 64% or 56% and tweak it by another 1% either way the next day.

A super-forecast is always right – basically, because it never states certainty. If I say I’m 64% confident of something happening, that also means I think there’s a 36% probability that it won’t.

The book doesn’t say at what point the adjudicator said “stop” and subsequently tested the forecasts to compare the experts with the super-forecasters. Obviously if, right up to 28 February 2021, I’m still allowed to adjust my forecast for the FTSE 100 index in March, I’m likely to be 100% right!

Note that long-horizon events and things affected by force majeure are out of contention for super-forecasting. So the coronavirus pandemic couldn’t have been forecast, nor can personally useful questions like “will I have enough to retire on in 2040?”.

So, digesting the digested:
  1. Google everything you can think of that is relevant either currently or historically
  2. Add your gut feel
  3. Assign a percentage probability
  4. Set up a Google alert for every relevant driver (so you don’t have to search every day)
  5. As the situation evolves, refine your forecast, moving the percentage probability up or down a point or two
  6. Be prepared to admit to being massively wrong about some or other driver, in which case you can refine your forecast by a massive amount too
  7. In the unlikely event you have access to secret intelligence, ignore it
  8. Be calm, because you’ll never be wrong (unless you are misguided enough to forecast anything as 0% or 100%).
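The eight steps above amount to a simple update loop: start with a blended estimate, nudge it as news arrives, and never touch the extremes. Here is a minimal sketch in Python – the function names, the equal weighting of research and gut feel, and every number below are my own illustrative assumptions, not anything from the book:

```python
def clamp(p: float) -> float:
    """Step 8: never say 0% or 100%, so the forecast is never flatly wrong."""
    return min(0.99, max(0.01, p))

def initial_forecast(research_estimate: float, gut_feel: float) -> float:
    """Steps 1-3: blend the researched estimate with gut feel into one
    probability. The 50/50 weighting is an arbitrary assumption."""
    return clamp(0.5 * research_estimate + 0.5 * gut_feel)

def update_forecast(current: float, evidence_shift: float) -> float:
    """Steps 5-6: move the probability as new information arrives.
    The shift is usually tiny (a point or two), but can be massive when
    one of your drivers turns out to be completely wrong."""
    return clamp(current + evidence_shift)

# Day 0: research says 55%, gut says 65% -> opening forecast of 60%.
forecast = initial_forecast(research_estimate=0.55, gut_feel=0.65)
# Day 1: good news overnight, nudge up a few points.
forecast = update_forecast(forecast, +0.04)
# Day 2: bad news, nudge down rather more.
forecast = update_forecast(forecast, -0.08)
```

The `clamp` bounds (1% and 99%) are just one way of encoding step 8 in code; the point is that the loop structurally refuses to output a flat yes or no.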

Does it work? I asked myself the question of whether global geo-political events – especially those involving the USA, where super-forecasting originated, and where it has presumably become the established norm – have turned out better in recent years. Have governments taken actions they wouldn’t otherwise have taken? Did those make the world a better place?

The book references events like the Bay of Pigs and Cuban Missile crisis of the early 1960s, and the WMD theories about Iraq that led to the war there, and suggests these would never have happened had super-forecasting been around then (and, of course, if the politicians of the time had listened to the super-forecasters). But, just in the five years since the book was published, we have the Syrian migrant crisis, civil war in Yemen, massive deforestation in the Amazon, a trade war between the USA and China….

I’m not seeing happy outcomes and world peace; but then, I’m not a super-forecaster.

This article was first published on my LinkedIn feed on March 12, 2020