Alternatives to the Kolmogorov axioms

Although the Kolmogorov axioms provide the widely accepted framework for mathematical probability, there are many ways in which they might be regarded as unsatisfactory for thinking about chance in the real world. Here are the three that strike me as most thought-provoking.

Making a model requires you to prespecify all the relevant events that might happen and assign a probability to every combination of happen/not happen. In practice, all but very simple models do this implicitly via structural assumptions such as independence, even though such assumptions are often unrealistic. Many axiomatic setups for thinking about uncertainty without such prespecifications have been considered -- see for instance Halpern's 2005 monograph Reasoning about Uncertainty -- but none have gained widespread acceptance.
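
To make this concrete, here is a minimal Python sketch (with made-up marginal probabilities for three hypothetical events) showing how an independence assumption silently fills in a probability for every one of the 2^3 happen/not-happen combinations, even though only three numbers were specified.

```python
from itertools import product

# Assumed marginal probabilities for three events -- illustrative numbers only.
marginals = {"A": 0.3, "B": 0.5, "C": 0.1}

# Under an independence assumption, the model implicitly assigns a probability
# to every happen/not-happen combination, even though we only wrote down
# three numbers.
joint = {}
for outcome in product([True, False], repeat=len(marginals)):
    p = 1.0
    for (event, prob), happened in zip(marginals.items(), outcome):
        p *= prob if happened else 1.0 - prob
    joint[outcome] = p

print(len(joint), sum(joint.values()))  # 8 combinations, probabilities summing to 1
```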

We are more confident in our ability to assess probabilities accurately in some contexts than in others. One can tackle this in an ad hoc way by repeating a calculation with varying probabilities, as a kind of scenario analysis. A more systematic approach, assigning an interval (lower and upper bounds) to a probability rather than a single number, goes under the name of the Dempster-Shafer theory of evidence. This methodology has gained some traction amongst engineers, but it involves rather arbitrary rules for combining numerical assessments of evidence, so mathematicians would call it heuristics.
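
To give the flavor of the interval idea, here is a minimal Python sketch of Dempster's rule of combination on a toy two-element frame of discernment; the mass numbers are invented for illustration, and the belief/plausibility pair is the lower and upper bound the theory attaches to a proposition.

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule of combination. Each mass function is a dict mapping a
    frozenset (a subset of the frame of discernment) to its mass; masses sum to 1."""
    combined, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y  # mass falling on the empty set
    # Renormalise by the non-conflicting mass -- the step often criticised as
    # arbitrary, since it silently discards the conflicting evidence.
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

def belief(m, a):        # lower bound: mass committed to subsets of a
    return sum(v for s, v in m.items() if s <= a)

def plausibility(m, a):  # upper bound: mass not contradicting a
    return sum(v for s, v in m.items() if s & a)

# Toy example with invented masses: two pieces of evidence about whether a
# process is "fair" or "biased".
frame = frozenset({"fair", "biased"})
m1 = {frozenset({"fair"}): 0.6, frame: 0.4}
m2 = {frozenset({"fair"}): 0.3, frozenset({"biased"}): 0.3, frame: 0.4}
m = combine(m1, m2)
a = frozenset({"fair"})
print(belief(m, a), plausibility(m, a))  # interval of support for "fair"
```

The renormalization step is exactly where the "rather arbitrary rules" enter: different ways of redistributing the conflicting mass give different answers.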

A probability model is a description of how data is produced, not a prescription for when observed data can be regarded as "random". This contrasts with our everyday perception of randomness, which is centered on actual observations -- coincidences and lucky outcomes. Here there is a partly relevant theory, for data expressed as a sequence of 0's and 1's (which in principle all digital data is). A sophisticated overview can be found in this 2019 Downey-Hirschfeldt article, which describes three approaches, as follows.
Martin-Löf randomness: the statistician's approach to defining algorithmic randomness, based on the intuition that random sequences should avoid having statistically rare properties.
The gambler's approach: random sequences should be unpredictable.
The coder's approach: random sequences should not have regularities that allow us to compress the information they contain.
But all these approaches are fundamentally asymptotic; defining any actual number to indicate the randomness of a given finite sequence 011011101000101 ... 100010110 involves rather arbitrary choices. (By contrast, measuring the "spread" of numerical data by the standard deviation is arbitrary only in the mild sense that there are other ways to measure spread; in the present context, producing an actual number implicitly involves some arbitrary choice and ordering of a sequence of tests.) Moreover these theories are set in the sequential framework, and it is not clear how to extend them naturally to data structured in some non-sequential way.
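
As a crude, finite-sequence illustration of the coder's approach, here is a Python sketch that uses a general-purpose compressor (zlib -- itself an arbitrary choice, which is precisely the kind of arbitrariness at issue) as a stand-in for incompressibility; only the comparison between the two sequences, not the absolute numbers, is meaningful.

```python
import random
import zlib

def compressed_fraction(bits: str) -> float:
    """Fraction of the original length retained after zlib compression --
    a crude finite-sequence proxy for the coder's notion of randomness."""
    raw = bits.encode("ascii")
    return len(zlib.compress(raw, 9)) / len(raw)

regular = "011011101000101" * 64                     # a short pattern repeated
coin_flips = "".join(random.choice("01") for _ in range(len(regular)))

# The repeated pattern compresses to a small fraction of its length, while the
# simulated coin flips retain a much larger fraction. The absolute values
# depend on the compressor and on encoding each bit as a whole character.
print(compressed_fraction(regular), compressed_fraction(coin_flips))
```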

-------------------------------------

Somewhat confusingly, there is a topic called uncertainty quantification with a rather different focus, aimed at analyzing errors in engineering-style models based on physical principles.