We consider the setting where the probabilities are not close to 0 or 1; that is, not the setting of very unlikely events with very large consequences if they happen. To enable us to do a little mathematics, we take the error \(p_{est} - p_{true}\) to be small.
The insight here is that, if we make a story fitting this setting, we will usually find that the cost of the error scales as \( (p_{est} - p_{true})^2\) rather than as \( |p_{est} - p_{true}|\). In other words, small errors are rather less costly than one might think. The mathematics is outlined below for two stories. A conceptual point is the contrast with a familiar fact from freshman statistics: the accuracy of an opinion poll (or other sampling exercise) involving \(N\) samples is (under ideal conditions) expressed by saying the error scales as \(N^{-1/2}\). This is relevant in an election context, where we are mainly interested in whether a population percentage is greater than 50%. But in most sampling contexts we are not focused on determining whether or not the percentage exceeds a given threshold. If instead we intend to make some decision based on a sample percentage, then stories like these suggest that the cost of sampling estimation error typically scales as \( N^{-1}\) rather than \(N^{-1/2}\).
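A quick simulation illustrates the mechanism behind the \(N^{-1}\) scaling: if decision cost is proportional to the squared estimation error, then what matters is the mean squared error of the sample proportion, which scales as \(1/N\). The sketch below (function name and parameter values are ours, for illustration) checks that quadrupling the sample size roughly quarters the mean squared error.

```python
import random

def mean_sq_error(p, n, trials=5_000, seed=1):
    """Mean squared error of the sample proportion from n Bernoulli(p)
    samples -- a proxy for decision cost, which (per the argument in
    the text) scales with the *squared* estimation error."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        hits = sum(rng.random() < p for _ in range(n))
        total += (hits / n - p) ** 2
    return total / trials

# Theory: E[(p_hat - p)^2] = p(1-p)/n, so quadrupling n
# should roughly quarter the mean squared error.
e1 = mean_sq_error(0.3, 100)   # about 0.21/100
e2 = mean_sq_error(0.3, 400)   # about 0.21/400
```

The exact values fluctuate with the random seed, but the ratio `e1 / e2` stays close to 4, in line with the \(N^{-1}\) cost scaling.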
What is the cost of not knowing \(p\)? If \(p_{est}\) and \(p\) are on the same side of \( p_{crit} \) then we take the optimal action and there is zero cost; if they are on opposite sides we take the sub-optimal action and the cost is \[ \mbox{ $|p - p_{crit} | z$ where $z = a- b - c + d > 0$.} \] So what happens over many different repeated decision problems of this type? Assume the utilities are all of order \(1\) and are independent (over problems) of the probabilities, so that \(p_{crit}\) is independent of \(p\) and \(p_{est}\). Then the proportion of times that \(p_{crit}\) happens to fall in the interval between \(p\) and \(p_{est}\) should be of order \(| p - p_{est} |\), assuming the latter is small; and when this occurs, \(p_{crit}\) is roughly uniform over that interval, so the mean cost \(|p - p_{crit}| z\) is also of order \(| p - p_{est} |\).
Combining these two factors, in this particular "decision under uncertainty" context the cost of errors is indeed of order \( ( p - p_{est} )^2 \).
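The two-factor argument can be checked by Monte Carlo. The sketch below assumes the simplest version of the story: \(p_{crit}\) uniform on \((0,1)\) and independent of \(p\) and \(p_{est}\) (function and variable names are ours). Under this model the mean cost works out to exactly \(z (p - p_{est})^2 / 2\), so halving the estimation error should roughly quarter the cost.

```python
import random

def mean_decision_cost(p_true, p_est, z=1.0, trials=100_000, seed=0):
    """Mean cost over repeated decision problems in which p_crit is
    uniform on (0, 1), independent of p_true and p_est (an assumed
    model for the story in the text).  Cost is |p_true - p_crit| * z
    when p_est and p_true fall on opposite sides of p_crit, else 0."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        p_crit = rng.random()
        wrong_side = (p_est - p_crit) * (p_true - p_crit) < 0
        if wrong_side:
            total += abs(p_true - p_crit) * z
    return total / trials

# Theory under this model: mean cost = z * (p_true - p_est)^2 / 2,
# so halving the error should roughly quarter the cost.
c1 = mean_decision_cost(0.40, 0.48)   # error 0.08
c2 = mean_decision_cost(0.40, 0.44)   # error 0.04
```

Running this, `c1 / c2` comes out close to 4, the quadratic scaling claimed above.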
Our analysis suggests an interesting, albeit rather abstracted, strategy for bookmakers. One can regard their business as first taking a percentage commission and then offering odds corresponding to probabilities adding up to one. What if they offered those odds based on a probability slightly different from their true assessment? According to our story above, this has only a second-order cost in payouts. But if it increases the amount of bets received, there is a first-order gain in commission earned. For a monopoly bookmaker, a small change in odds would likely cause only a small change in the amount bet, but competition with other bookmakers could cause a substantial change.
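To make the tradeoff concrete, here is a stylized numerical sketch (all parameter names and values are hypothetical, not a real bookmaking model): shifting the quoted probability away from the true assessment earns extra commission linearly in the shift but costs quadratically in payouts, so some small nonzero shift is optimal.

```python
def net_gain(delta, commission=0.05, demand_slope=2.0, cost_coef=1.0):
    """Stylized model (all parameters hypothetical): shifting the
    quoted probability by delta away from the true assessment raises
    the amount bet by demand_slope * delta (a first-order commission
    gain) but incurs an expected payout cost of cost_coef * delta**2
    (second order, per the analysis in the text).  Returns the net
    change in expected profit relative to quoting the truth."""
    extra_commission = commission * demand_slope * delta
    payout_cost = cost_coef * delta ** 2
    return extra_commission - payout_cost

# Scan shifts from 0 to 0.2; the quadratic has its maximum at
# delta* = commission * demand_slope / (2 * cost_coef) = 0.05 > 0,
# so a small deliberate mispricing beats quoting the true probability.
best = max(net_gain(d / 1000) for d in range(0, 201))
```

The point is only qualitative: because the payout cost is second order, any first-order gain in volume makes a slight, deliberate mispricing profitable in this model.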
The same principle applies when (as often happens) much more money is bet on the favorite than on the underdog. Offering worse odds on the favorite would decrease the amount bet, a first-order loss of commission, while any per-bet gain from the mispricing is only second order; so even if your offered odds are slightly wrong, this strategy will lower your profit in the long run.
Some papers loosely relevant to "setting the odds" are Levitt (2004) and Green-Lee-Rothschild (2019).