I don't claim to have a good answer, but will suggest ways of thinking about this question: in what real world contexts is it both practical and useful to attempt to estimate numerical probabilities? Some background first.

A qualitative sense of likelihood, for instance a conscious recognition of some future events as likely and some as unlikely, is part of the common sense that the human species is endowed with.

Numerical scales are widely used in contexts that are not genuinely quantitative:

- The NRS-11 pain scale rates **pain** from 0 to 10.
- Movie ratings are often given on a scale of 1 to 10, for instance on IMDb.
- For crimes, one can use the average length of prison sentences as an indicator of seriousness.

But no one would seriously assert that:

- a pain rated 6 is twice as painful as a pain rated 3;
- a movie rated 6 is twice as good as one rated 3;
- a crime for which a conviction typically brings 6 years in prison is twice as serious as one bringing 3 years.

Somewhat bizarrely, such ratings have even been used when asking for expert probability forecasts -- see this graphic from the 2016 Global Risks Landscape, in which participants were asked to assess likelihood on a scale of 1 to 7. Doing so precludes the retrospective analysis of accuracy that is possible in proper prediction tournaments.
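To see why genuine numerical probabilities permit retrospective accuracy analysis while a 1-to-7 rating does not, consider a proper scoring rule such as the Brier score, which is standard in prediction tournaments. (The scoring rule and the illustrative numbers below are my own addition, not from the source.)

```python
def brier_score(forecasts, outcomes):
    """Mean squared difference between forecast probabilities (in [0, 1])
    and actual outcomes (0 or 1). Lower is better; 0 is perfect."""
    assert len(forecasts) == len(outcomes)
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Two hypothetical forecasters assess the same four events.
confident = [0.9, 0.8, 0.9, 0.2]   # sharp numerical probabilities
vague     = [0.5, 0.5, 0.5, 0.5]   # hedging at 50% every time
happened  = [1, 1, 1, 0]           # what actually occurred

print(brier_score(confident, happened))  # 0.025 -- rewarded for sharp, accurate forecasts
print(brier_score(vague, happened))      # 0.25
```

Because each forecast is a probability, it can be compared after the fact with what happened; a "likelihood of 5 on a scale of 1 to 7" admits no such scoring.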

As a more substantial example, the Intergovernmental Panel on Climate Change (IPCC) issues periodic reports, widely regarded as the most authoritative analyses of scientific understanding of climate change caused by human activity. Future projections involve uncertainty, and the IPCC wants its many authors to be consistent in how they write about uncertainty, so it provides technical documents such as the Guidance Notes for Lead Authors of the IPCC Fourth Assessment Report on Addressing Uncertainties, from which I have extracted the table below, there labelled "A simple typology of uncertainties".

Type | Indicative examples of sources | Typical approaches or considerations
---|---|---
Unpredictability | Projections of human behaviour not easily amenable to prediction (e.g. evolution of political systems); chaotic components of complex systems. | Use of scenarios spanning a plausible range, clearly stating assumptions, limits considered, and subjective judgments. Ranges from ensembles of model runs.
Structural uncertainty | Inadequate models; incomplete or competing conceptual frameworks; lack of agreement on model structure; ambiguous system boundaries or definitions; significant processes or relationships wrongly specified or not considered. | Specify assumptions and system definitions clearly; compare models with observations for a range of conditions; assess maturity of the underlying science and the degree to which understanding is based on fundamental concepts tested in other areas.
Value uncertainty | Missing, inaccurate or non-representative data; inappropriate spatial or temporal resolution; poorly known or changing model parameters. | Analysis of statistical properties of sets of values (observations, model ensemble results, etc.); bootstrap and hierarchical statistical tests; comparison of models with observations.

This table addresses the issue of uncertainty in mathematical modeling. It makes the point that, within a complex setting (such as future climate change), any asserted numerical probability is at best an output of some complicated model in which all these different kinds of uncertainty are present. The point is obvious once you think about it, but it is quite different from what is said in textbooks on the mathematics or philosophy of probability.

So all this is background for what I regard as the fundamental conceptual question. Whenever we think about probabilities, we are consciously recognizing unpredictability or uncertainty. But not conversely: there are many settings where we recognize unpredictability but do not naturally think in terms of chance, and there are many settings where we do think in terms of likely/unlikely but do not care to attempt a quantitative assessment of probability.

**In what real world contexts is it both practical and useful to attempt to estimate numerical probabilities?**