9 Comments
prof serious

This is an excellent and very informative post. Thank you.

Jim Maltby

I think there’s some great stuff in here, but a few things seem muddled, conflated and confused.

Firstly, many writers (Tetlock, Silver, Taleb, Popper, King) demonstrate clearly that prediction is only useful for a limited set of problems. Tetlock calls it the Goldilocks Zone. This is where probability works and/or works as a proxy. Beyond those conditions, treating a problem as predictable is (in their words) dangerous!

When you say ascribing numbers is essential, what I think you are conflating here (assuming this is adopted from Superforecasting practice) is that the act of doing so forces the person to unpack their reasoning (showing their mental working). So it is the act of making your reasoning explicit, not ascribing numbers, that is the value here. When things are truly uncertain (in the Knightian sense), i.e. beyond the Goldilocks Zone, the axioms of probability don’t hold, so you can’t use numbers, and doing so creates an “illusion of concreteness”.

The description of the use of “inductive probabilities” and “inductive premises” sounds like it is being confused with abductive reasoning. Abductive reasoning is the type of thinking used in thought experiments and speculation, and is essential in considering the future. As Fischer (2001) says, it’s illogical for the future to be based purely on the past (i.e. induction).

Having said that, I think you are right that governments are obsessed with predictions - in spite of forecasting tournaments like Cosmic Bazaar, etc. - but are not very good at forecasting (making predictions).

However, I am unconvinced by the argument for treating everything as predictable. It is unsurprising that the AI folks quoted claim that their tools are useful here, but they are not experts in psychology and their reasoning is flawed. In my opinion it is a cause for grave concern to do so under conditions of true (radical, deep, etc.) uncertainty, i.e. most policy, and governments are even worse at thinking about this than they are at forecasting.

Keith Dear

Thanks for commenting - I don’t think it’s muddled, but regret you found it so. The logic laid out is clear - and for me it stands up to the challenges you offer. Let our arguments sit side by side and others can judge.

One thing is worth correcting: I don’t think the Government is obsessed with prediction. I say quite the opposite, explicitly, in the article.

Jim Maltby

Just to correct myself, my comment about Government prediction was a little unclear.

My observations are that Government thinking lacks the depth and explicit reasoning required, which I think you expressed here too (e.g. starting with a ‘base rate’, etc.). But I disagree that these are all about predictions. The assertion I was making is that, although unable to do so effectively, Government would like everything to be predictable - that’s what I meant by “obsessed with prediction”.

Chris

Sam Freedman, in a Substack post published around the same time as yours, describes his father’s predictions in a similar manner, so you are in good company.

A league table for AI called Prophet Arena was recently launched to see which AI is best at prediction markets like Polymarket. OpenAI is currently edging the table.

Andy

Love the argument, Keith - but I think you unpick your own thesis in the section on Wellington and Friston. You argue there (and I agree wholeheartedly) that everything is prediction AND action. They are inseparable - the game is in closing the prediction gap (John Boyd, Andy Clark) - we are in an ongoing attempt to reduce prediction ERROR as we go about our lives.

Keith Dear

Glad you enjoyed it - and thank you for commenting! My view is that in this sense action is a prediction. Any given act predicts a given outcome. Wellington’s ‘to endeavour to find out what you don’t know by what you do’ forms the first prediction. Then when you act, the act itself is a prediction: if you act to, for example, recce an area - go see the other side of the hill - you’ve predicted this will have some value.

Having written that, I think we might be vehemently agreeing. But in any case I enjoyed the comment, the nods to Boyd and Clark, and thinking it through.

Ian Reeves

Smells like an all-source analysis approach: blending technical (quantifiable) and non-technical (professional assessment) sources to derive a qualified judgement - against which subsequent decisions/actions are made. Loop.

Keith Dear

Yes, I agree. Both in the broadest possible sense of 'all-source' - and still applicable in even the narrowest 'single source' decision cycle.