## Warren Buffett's billion dollar gamble (February 2014)

In a widely publicized announcement on January 21, 2014, Quicken Loans offered a billion (sic) dollar prize to any contestant who can fill out a perfect bracket for the March NCAA basketball tournament. Because it does not have a spare billion in the bank, Quicken Loans has insured against the possibility of a winner with Berkshire Hathaway (BH), paying an undisclosed premium believed to be around 10 million dollars. Is this a good deal for BH?

Some relevant data: to win, you must predict all 63 game winners correctly. The number of entries is limited to 10 million. The prize is actually 500 million cash (or 1 billion paid over 40 years). Presumably Warren Buffett asked his actuary "are you very confident that the chance of someone winning is considerably less than $$1/50$$?" How would you have answered?
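The break-even arithmetic behind that question is simple: with a premium of roughly 10 million dollars (the rumored figure, not a confirmed one) against a 500 million cash payout, BH comes out ahead so long as the chance of a winner stays below 1/50. A minimal sketch:

```python
premium = 10e6      # rumored premium paid to BH, in dollars (assumed figure)
payout = 500e6      # cash value of the prize

# BH breaks even when P(someone wins) * payout equals the premium.
break_even_chance = premium / payout
print(break_even_chance)   # 0.02, i.e. 1/50
```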

I put this forward as an interesting topic for open-ended classroom discussion. First emphasize that the naive model (each entry has chance 1 in $$2^{63}$$ of winning) is ridiculous. Then elicit the idea that a better model might involve some combination of

• modeling typical probabilities for individual games
• modeling the strategies used by contestants
• empirical data from similar past forecasting tournaments.
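For reference, the naive model's number is easy to compute: with 10 million independent entries, each winning with probability $$1/2^{63}$$, the chance of at least one winner is around $$10^{-12}$$. A quick check:

```python
from math import expm1, log1p

entries = 10_000_000          # the contest's cap on entries
p_perfect = 1 / 2**63         # naive model: every game is a fair coin flip

# P(at least one winner) = 1 - (1 - p)^n; computed via expm1/log1p
# because p is far below floating-point resolution of 1 - p.
p_any_winner = -expm1(entries * log1p(-p_perfect))
print(p_any_winner)           # about 1.1e-12
```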
Here are two of many possible lines of thought.

(1) The arithmetic $$\mbox{(5 million)} \times (3/4)^{63} \approx 1/14$$ suggests that if half the contestants were able to consistently predict game winners with chance 3/4, then it would be a bad deal for BH. Fortunately for BH, this scenario seems inconsistent with past data: the same calculation, applied to entries in a similar ESPN contest last year (with only a 1 million dollar prize), says that about $$\mbox{(4 million)} \times (3/4)^{32} \approx 400$$ entries should have predicted all 32 first-round games correctly. In fact none did (5 people got 30 out of 32 correct).
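Both numbers in (1) are quick to reproduce (figures as in the text: half of the 10 million entries, and roughly 4 million ESPN entries):

```python
# Hypothetical scenario: half the 10 million entrants each pick every
# game's winner correctly with probability 3/4, independently.
skilled_entries = 5_000_000
print(skilled_entries * (3/4)**63)   # ≈ 0.067, about 1/14

# The same calculation for last year's ESPN contest, first round only:
espn_entries = 4_000_000
print(espn_entries * (3/4)**32)      # ≈ 400 expected perfect first rounds
```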

(2) The optimal strategy, as intuition suggests, is to predict the winner of each game to be the team you think (from personal opinion or external authority) more likely to win. For various reasons, not every contestant does this. For instance, as an aspect of a general phenomenon psychologists call probability matching, a contestant might think that because some proportion of games are won by the underdog, they should bet on the underdog that proportion of the time. And there are other reasons (supporting a particular team; personal opinions about the abilities of a subset of teams) why a contestant might predict the higher-ranked team in most, but not all, games. So let us imagine, as a purely hypothetical scenario, that each contestant predicts the higher-ranked team to win in all except $$k$$ randomly-picked games. Then the chance that someone wins the prize is about $$\Pr(\mbox{in fact exactly } k \mbox{ games won by underdog}) \times \frac{\mbox{10 million}}{{63 \choose k}}$$ provided the second term is $$\ll 1$$. The second term is $$\approx 0.15$$ for $$k = 6$$ and $$\approx 0.02$$ for $$k = 7$$. The first term cannot be guessed -- as a student project one could get data from past tournaments to estimate it -- but it is surely quite small for $$k = 6$$ or $$7$$. This suggests a worst-case hypothetical scenario from BH's viewpoint: that an unusually small number of games are won by the underdog, and that a large proportion of contestants forecast that most games are won by the higher-ranked team. But even in this worst case it seems difficult to imagine the chance of a winner coming anywhere close to $$1/50$$.
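The second term of that formula is a one-liner to evaluate, using exact binomial coefficients:

```python
from math import comb

entries = 10_000_000   # the contest's cap on entries
for k in (6, 7):
    # Chance that some entry picks exactly the right k-subset of games
    # for the underdog, if each entrant picks a random k-subset.
    print(k, entries / comb(63, k))
# k=6 gives ≈ 0.15 and k=7 gives ≈ 0.02, as in the text
```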

#### Other estimates

A brief search turns up other estimates of the chance that an individual skilled forecaster could win the prize. Neither source explains how these chances were calculated, though coincidentally (?) $${63 \choose 10} \approx$$ 128 billion.
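That coincidence is easy to check exactly:

```python
from math import comb

# Number of ways to choose 10 games out of 63
print(comb(63, 10))   # 127,805,525,001 -- roughly 128 billion
```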

**Update (March 2014).** Since the original post, many more discussions have appeared; for instance, Nate Silver estimates that betting on the favorite in every game gives a 1 in 7.4 billion chance of winning.
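Silver's figure can be inverted to see what per-game accuracy it implies for a favorites-only strategy: solving $$p^{63} = 1/(7.4 \mbox{ billion})$$ gives $$p \approx 0.70$$. A quick check (this treats the 63 games as independent with a common probability, which the actual estimate surely did not assume):

```python
# Implied per-game win probability for the favorite, if all 63 games
# were independent with the same probability p.
p = (1 / 7.4e9) ** (1 / 63)
print(p)   # ≈ 0.70
```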