Implicit and explicit lists of representative examples of chance

Our Annotated list of contexts where we perceive chance has several purposes. On this page we take it as a descriptive reference point for comparing other writers' focus and breadth to our own. Elsewhere (xxx not yet written) we comment on using such lists as a basis for taxonomic analyses.

Lindley's list below is the only loosely comparable explicit list I know. Of course, by reading any author and seeing what examples they employ one can extract some implicit short list of what they appear to view as representative examples. Such lists comprise the remainder of this page.

Lindley's list

This list, discussed in the opening chapter of Understanding Uncertainty, gives 20 instances intended to illustrate the author's view of the range of uncertainty. As a Bayesian, he regards all instances of uncertainty as expressible in probability terms. 18 of his instances fit reasonably well with my own list, as follows (here and below numbers refer to my list, as of July 2010, and I have re-ordered items according to my list order). I would exclude from my list instances where it is hard to visualize both alternatives.

Analysis. At first sight this list has a pretty broad range. At second sight, the first half tends toward familiar textbook examples of probability/statistics, while the second half has a tight focus on the correctness of scientific/historical theories. Alas, these examples are not actually discussed seriously in the rest of the book.

Silver's chapters

In many ways, the book that best complements our list is Nate Silver's The Signal and the Noise. Even though it restricts itself to predictions about the future (rather than all of chance and uncertainty), the 13 chapters of this book mostly correspond closely to contexts on our list.

Analysis. The book gives an interesting chapter of discussion on each topic, with a nice combination of detail and overview. The author has deliberately chosen contexts where there is a lot of past data, and the central issue (his signal/noise analogy) is determining which aspects of the data are useful in predicting the future.

Life of Norm

Of the 27 short chapters in Michael Blastland and David Spiegelhalter's The Norm Chronicles: Stories and Numbers About Danger, 22 are on quite specific topics (rather than chance in general), and aside from the rather broad "life expectancy" chapter, they fit well into our categorization.

Hacking's list

In Hacking's An Introduction to Probability and Inductive Logic, the brief chapter on philosophical interpretations of probability concludes with the following comments (typical of many other discussions of the frequentist/Bayesian philosophies).

Our prototypical examples [of probability] are artificial randomizers. But as we start to think hard about more real-life examples, we get further and further away from the core examples. Then our examples tend to cluster into belief-type examples, and frequency-type examples, and in the end we develop ideas of two different kinds of probability.

The text is accompanied by a graphic, in which the following 13 "examples" are arranged in a circle around a central entry "artificial randomizers".

Analysis. xx

(xxx move to a taxonomy page). Short lists of examples are appropriate and indispensable for illustrating a distinction implied by a definition (e.g. qualitative vs quantitative variable) or a distinction that is uncontroversially substantive and useful (e.g. marine mammal vs fish). But if you wish to put forward an argument that some distinction is substantive and useful, then a short list of iconic examples on both sides is not at all convincing. You need to xxx show that "most" examples can be decisively put on one side or the other, and that xxx not too unbalanced. xxx need long list xxx need list not chosen by you!

von Mises examples

von Mises gives, in Probability, Statistics and Truth, a "summary of his theory in sixteen propositions", and here are the first three.
  1. The statements of the theory of probability cannot be understood correctly if the word "probability" is used in the meaning of everyday speech; they hold only for a definite, artificially limited rational concept of probability.
  2. This rational concept of probability acquires a precise meaning only if the collective to which it is applied is defined exactly in each case. A collective is a mass phenomenon or repetitive event that satisfies certain conditions; generally speaking, it consists of a sequence of observations which can be continued indefinitely.
  3. The probability of an attribute (a result of observations) within a collective is the limiting value of the relative frequency with which this attribute recurs in the indefinitely prolonged sequence of observations. This limiting value is not affected by any place selection applied to the sequence.
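In symbols, proposition 3 amounts to the following (a standard textbook rendering, not von Mises's own notation): if $n_A(n)$ denotes the number of occurrences of attribute $A$ among the first $n$ observations of the collective, then

$$P(A) \;=\; \lim_{n \to \infty} \frac{n_A(n)}{n},$$

and the same limit must hold along any subsequence obtained by an admissible place selection, that is, one chosen without knowledge of the outcomes.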
So this is the "dogmatic frequentist" position: he explicitly excludes "everyday" examples and, for instance, weather forecasts. Here is what remains, as his in-text examples.

Analysis. xxx

Eagle's list

An interesting implicit list from the philosopher Antony Eagle appears in his article Randomness Is Unpredictability (specifically section 1, Randomness in science). Many items are indicated by only brief in-sentence phrases, quoted below.

Analysis. This covers a broader range than many philosophers do. The article's premise -- that "Randomness Is Unpredictability" is a rather novel philosophical idea that needs justification -- seems bizarre to a statistician or scientist. To the question "what does it mean to say that the result of a die roll is random?", surely the most common answer is "it's random in the sense of unpredictable". Moreover, a very standard topic (our (54): Residuals (errors) in estimation) is that the errors in your best prediction must be random in a certain sense, otherwise you could improve the prediction.
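To make the residuals point concrete, here is a minimal sketch of the standard argument, in generic notation of our own (not Eagle's): write $\hat{Y} = E[Y \mid X]$ for the best mean-square prediction of a quantity $Y$ from data $X$, and $\epsilon = Y - \hat{Y}$ for the prediction error. Then

$$E[\epsilon \mid X] \;=\; E[Y \mid X] - \hat{Y} \;=\; 0,$$

so no function of the available data can predict the error on average. Conversely, if some predictor $f(X)$ left errors with $E[Y - f(X) \mid X] = g(X)$ not identically zero, then $f(X) + g(X)$ would be a strictly better mean-square predictor. In this sense the errors from a best prediction must be unpredictable.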

The examples in the four books above were rather easy to fit into our contexts. I suspect this is because the authors actually started by thinking of a "context" and then invented an example. When you take actual specific real-world examples (like the final one above), they become harder to fit into prespecified contexts. Unsurprisingly!

Mlodinow's examples

As a representative of "popular science" style books on Probability, let us take Leonard Mlodinow's The Drunkard's Walk: How Randomness Rules Our Lives.

Analysis. xxx

2015 New Scientist examples

The 12 March 2015 New Scientist cover emphasizes their special report (14 pages; 3 authors plus 6 interview columns) entitled Chance: how randomness rules our world. Their examples are as follows.

Analysis. This has a nice broad range, but the specific topics chosen seem very conventional and could mostly have been described similarly 30 years ago. The only imaginative topic here is avalanche prediction.

xxx Rosenthal lightning book.