When strong priors meet contradictory evidence
A question that is seldom discussed in the Bayesian context is how to assess
information that contradicts a strongly held belief.
Here are three examples.
I don't have any novel bottom line, but I suggest Devil's Advocate arguments for the last two.
The National UFO Reporting Center
shows 5,516 reported sightings in 2016.
I have a strongly held belief that UFOs -- in the common sense of alien spaceship-like entities --
are not around Earth right now.
In this context I personally have no problem simply ignoring the existence of all these reports and maintaining
the same strongly held belief at the end of 2016 that I had at the start of 2016.
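This stance can be framed as routine Bayesian updating. The sketch below uses entirely illustrative numbers (an assumed tiny prior and assumed likelihoods, none taken from the text): if misperception and hoaxes explain the reports about as well as real alien craft would, the likelihood ratio is close to 1, and even thousands of reports barely move a very small prior.

```python
# A minimal sketch of Bayes' rule with a strong prior.
# All numbers are illustrative assumptions, not estimates.

prior = 1e-9  # assumed prior P(alien craft are present)

# Assumed likelihoods of observing a year's worth of sighting reports
# under each hypothesis. If ordinary explanations (misperception, hoaxes)
# predict the reports about as well as real craft would, these are
# nearly equal and the evidence is almost uninformative.
p_reports_given_aliens = 0.9
p_reports_given_no_aliens = 0.9

# Bayes' rule: P(aliens | reports)
posterior = (p_reports_given_aliens * prior) / (
    p_reports_given_aliens * prior
    + p_reports_given_no_aliens * (1 - prior)
)

print(posterior)  # essentially unchanged from the prior
```

The point of the sketch is that "ignoring" the reports is not a failure of updating: with a likelihood ratio near 1, the posterior is essentially the prior, so the belief at the end of 2016 rationally matches the belief at the start.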
How old is the shepherd?
This is a quite well-known example originating in a
1986 Kurt Reusser paper.
Young children were asked
There are 125 sheep and 5 dogs in a flock. How old is the shepherd?
3 out of 4 children gave some numerical answer, speaking their thoughts in some way like
125 + 5 = 130 … this is too big, and 125 - 5 = 120 is still too big … while
125/5 = 25 … that works … I think the shepherd is 25 years old.
This experiment is generally regarded as demonstrating a failure of elementary mathematics education.
But regardless of quality of education,
I would argue that the children are being at least somewhat rational, if we assume that all the arithmetic
problems they had ever seen before had definite numerical answers.
They are familiar with not knowing how to do a problem and trying various methods and
just hoping one works without really understanding what's going on.
The alternative to "just hoping" would be to recognize this as some unfamiliar setting;
but there are a host of unfamiliar possibilities
("maybe I'm dreaming") and thinking of the specific unfamiliar possibility "the teacher has deliberately
devised a question which makes no sense"
requires some imagination.
The Milgram experiment
Quoting from Wikipedia's detailed account, the Milgram experiment measured
the willingness of study participants … to obey an authority figure who instructed them to perform acts
… even if apparently causing serious injury and distress [to others].
Milgram's own interpretation has been widely accepted:
Ordinary people, simply doing their jobs, …
can become agents in a terrible destructive process. Moreover, even when the destructive
effects of their work become patently clear, and they are asked to carry out actions
incompatible with fundamental standards of morality,
relatively few people have the resources needed to resist authority.
No doubt this is true to a certain extent.
But in the experiment the subjects undoubtedly started with a prior belief, that the investigators would act properly.
Commentators assert that the subjects should have changed their belief in the light of the evidence,
but commentators are reluctant to acknowledge the key fact that the subjects' belief was in fact correct.