When strong priors meet contradictory evidence
A question which is seldom discussed in the Bayesian context is how to assess
information that contradicts a strongly held belief.
Here are four examples.
I don't have any novel bottom line, but I suggest Devil's Advocate arguments for the first two.
How old is the shepherd?
This is a quite well-known example, originating in a
1986 paper of Kurt Reusser.
Young children were asked
There are 125 sheep and 5 dogs in a flock. How old is the shepherd?
3 out of 4 children gave some numerical answer, voicing their thoughts along the lines of
125 + 5 = 130 … this is too big, and 125 - 5 = 120 is still too big … while
125/5 = 25 … that works … I think the shepherd is 25 years old.
This experiment is generally regarded as demonstrating a failure of elementary mathematics education.
But regardless of the quality of their education,
I would argue that the children are being at least somewhat rational, given that (presumably) every arithmetic
problem they had ever seen before had a definite numerical answer.
They are familiar with not knowing how to do a problem and trying various methods and
just hoping one works without really understanding what's going on.
The alternative to "just hoping" would be to recognize this as some unfamiliar setting;
but there are a host of unfamiliar possibilities
("maybe I'm dreaming") and thinking of the specific unfamiliar possibility "the teacher has deliberately
devised a question which makes no sense"
requires some imagination.
The Milgram experiment
Quoting from Wikipedia's detailed account, the Milgram experiment measured
the willingness of study participants … to obey an authority figure who instructed them to perform acts
… even if apparently causing serious injury and distress [to others].
Milgram's own interpretation has been widely accepted:
Ordinary people, simply doing their jobs, …
can become agents in a terrible destructive process. Moreover, even when the destructive
effects of their work become patently clear, and they are asked to carry out actions
incompatible with fundamental standards of morality,
relatively few people have the resources needed to resist authority.
No doubt this is true to a certain extent.
But in the experiment the subjects undoubtedly started with a prior belief, that the investigators would act properly.
Commentators assert that the subjects should have changed their belief in the light of evidence,
but commentators are reluctant to acknowledge the key fact that the subjects were in fact correct in their belief: no one was actually harmed, because the shocks were fake and the "victim" was an actor.
The National UFO Reporting Center
shows 5,516 reported sightings in 2016.
I have a strongly held belief that UFOs -- in the common sense of alien spaceship-like entities --
are not around Earth right now.
In this context I personally have no problem simply ignoring the existence of all these reports and maintaining
the same strongly held belief at the end of 2016 that I had at the start of 2016.
Updating after an unlikely event
A general issue in Bayes methods is: if a very unlikely (according to your model) event
happens, do you continue to update probabilities by Bayes rule,
or do you decide your model might be wrong and start again?
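To make the issue concrete, here is a toy two-hypothesis sketch (my own numbers, purely illustrative): strict Bayes updating can recover from an unlikely event, but only if the "model might be wrong" alternative was given positive prior probability in the first place.

```python
# Toy illustration (arbitrary numbers, not from any real system).
# Hypothesis A: the coin is fair.  Hypothesis B: a trick coin with
# heads-probability 0.9.  We start almost certain of A.
fair_prior = 0.999
trick_prior = 1 - fair_prior

# An unlikely event: 10 heads in a row.
lik_fair = 0.5 ** 10    # ~0.001
lik_trick = 0.9 ** 10   # ~0.349

# Bayes rule moves even a very confident prior a long way ...
post_fair = (fair_prior * lik_fair) / (fair_prior * lik_fair
                                       + trick_prior * lik_trick)
print(round(post_fair, 3))  # 0.737

# ... but only because the trick-coin hypothesis was in the model at all.
# With trick_prior = 0, no amount of updating could ever leave "fair".
```

The point of the sketch: Bayes rule handles surprises gracefully only for alternatives you thought to include; "start again" corresponds to enlarging the hypothesis space itself.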
For instance, Xbox Live uses its
TrueSkill ranking system
which assigns to each player both a rating
\(\mu\) and an uncertainty \(\sigma\) in the rating.
In oversimplified terms,
if the difference \(\mu_j - \mu_i\) in skills between two players is large
and the \(\sigma\)'s are small, then
the system implicitly believes the chance of \(i\) beating \(j\) is very small.
If this small chance happens, then the effect of their pseudo-Bayes update scheme is that \(\mu_i\) is increased
and \(\mu_j\) is decreased (as one would wish), but also both \(\sigma\)'s can only decrease.
Here common sense suggests that increasing the uncertainties would be more natural.
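The qualitative point can be seen in a simplified two-player Gaussian update (a sketch using the standard truncated-Gaussian correction factors; not Microsoft's actual implementation, and the parameter values below are made up). The shrinkage factor \(w\) always lies strictly between 0 and 1, so both \(\sigma\)'s decrease whatever the outcome, even a huge upset.

```python
import math

# Simplified two-player Gaussian rating update (illustrative sketch only).
# Each player has rating mu and uncertainty sigma; beta is performance noise.

def toy_update(mu_w, sigma_w, mu_l, sigma_l, beta=4.0):
    """Winner (mu_w, sigma_w) beat loser (mu_l, sigma_l)."""
    c = math.sqrt(sigma_w**2 + sigma_l**2 + 2 * beta**2)
    t = (mu_w - mu_l) / c
    # Truncated-Gaussian correction factors: v = pdf(t)/cdf(t), w = v*(v+t).
    pdf = math.exp(-t * t / 2) / math.sqrt(2 * math.pi)
    cdf = 0.5 * (1 + math.erf(t / math.sqrt(2)))
    v = pdf / cdf
    w = v * (v + t)          # always 0 < w < 1, so variances below shrink
    new_mu_w = mu_w + (sigma_w**2 / c) * v
    new_mu_l = mu_l - (sigma_l**2 / c) * v
    new_sigma_w = sigma_w * math.sqrt(1 - (sigma_w**2 / c**2) * w)
    new_sigma_l = sigma_l * math.sqrt(1 - (sigma_l**2 / c**2) * w)
    return new_mu_w, new_sigma_w, new_mu_l, new_sigma_l

# A big upset: a 10-rated player beats a 40-rated player.
new_mu_w, new_s_w, new_mu_l, new_s_l = toy_update(10, 3, 40, 3)
# The ratings move the right way, yet both uncertainties are now below 3:
# the scheme becomes *more* confident after an event it deemed near-impossible.
```

Note the asymmetry the text complains about: the \(\mu\) update depends on who won, but the \(\sigma\) update (through \(w\)) only on how surprising the matchup was, and it can never increase.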