On utility in Decision Theory and Game Theory
Quantitative aspects of Decision Theory
and Game Theory
rely on some notion of expected utility.
Where all the consequences of an action can be expressed as (relatively small amounts of) money,
this utility aspect is conceptually straightforward
(though the probabilities involved may be difficult to estimate, as in
stock market investing).
Difficulties arise as soon as we leave this narrow context, even if only money is involved.
It is rational to be risk-averse, and to implement this via a concave utility function for wealth -- but does anyone
actually do so consistently?
Here is one of the many subtler issues described in the literature exemplified by Kahneman's
Thinking, Fast and Slow.
Suppose you have a ticket entitling you to a 99% probability of winning $30,000.
How much would you pay to increase this to 100%?
The "rational" (maximal expected utility) answer is "somewhat less than $300", less because of your concave utility function.
Most people (in this very hypothetical situation) say an amount more than $300.
One possible explanation is that they wrongly perceive this as the same as the "insurance" situation:
There is a 1% probability that you will incur a loss of $30,000 for reasons outside your control.
How much would you pay to insure against this loss?
In this situation the "rational" answer is indeed "somewhat more than $300", and in fact larger than in the first situation, because the potential loss takes you into a lower-wealth region where a concave utility function is steeper.
(To see the difference between the two situations, suppose your prior wealth is $50,000 and draw a concave utility function.)
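The parenthetical exercise can also be carried out numerically. A minimal sketch, assuming log utility and the $50,000 prior wealth (both purely illustrative -- no claim that anyone's actual utility looks like this), solves for the indifference price in each situation:

```python
import math

# Illustrative assumptions: log utility U(w) = ln(w), prior wealth
# $50,000, stakes of $30,000, and a 1% chance of the bad outcome.
U, U_inv = math.log, math.exp
WEALTH, STAKE, P_BAD = 50_000.0, 30_000.0, 0.01

def indifference_price(good: float, bad: float) -> float:
    """Largest payment x with U(good - x) = (1-p)*U(good) + p*U(bad)."""
    expected_utility = (1 - P_BAD) * U(good) + P_BAD * U(bad)
    return good - U_inv(expected_utility)

# Ticket situation: win -> wealth $80,000; lose -> stay at $50,000.
x_ticket = indifference_price(WEALTH + STAKE, WEALTH)
# Insurance situation: no loss -> $50,000; loss -> drop to $20,000.
x_insure = indifference_price(WEALTH, WEALTH - STAKE)

print(f"pay to make the ticket certain: ${x_ticket:,.0f}")  # ~ $375
print(f"pay to insure against the loss: ${x_insure:,.0f}")  # ~ $456
```

Both prices exceed the risk-neutral $300 (Jensen's inequality forces this for any strictly concave utility); that the insurance price comes out larger here reflects the steeper, lower-wealth part of the log curve, and would change with different curvature assumptions.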
However another possible explanation, along the lines of
all value is subjective,
is the "feeling foolish" effect -- somehow the loss of opportunity, like a soccer player missing an easy open goal,
seems more shameful than an accident outside your control.
Moving away from money, in the wedding example
we assumed one could assign utilities.
But actually doing so seems impossible, in that all the issues involved are subjective
and also involve other people.
Here is an example I have used in the Berkeley course.
Almost everyone who owns a house in California buys property insurance, to cover
fire, theft, etc. But it does not cover earthquake damage, so if you want earthquake insurance you must buy it separately.
Some people
do buy earthquake insurance, and some don't. Imagine you own a house in Berkeley.
Go to the
California Earthquake Authority premium calculator
and enter [hypothetical data, omitted here] to
obtain an estimate for the cost of earthquake insurance for your hypothetical house.
Question: How could you make a (somewhat rational) decision whether or not to buy
this earthquake insurance? What quantitative information went into your decision (cite online sources)?
Students, not to mention myself and my colleagues, find this hard to do!
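As a starting point (only a starting point), one can compare the premium against the expected annual payout. Every number below is a hypothetical placeholder, not a real quote or hazard estimate -- locating defensible values for these inputs is precisely the hard part of the exercise:

```python
# All numbers are hypothetical placeholders, not real CEA figures.
annual_premium = 1_500.0   # hypothetical premium from the calculator
deductible = 50_000.0      # hypothetical deductible
p_claim_quake = 1 / 200    # hypothetical annual chance of claim-level damage
mean_damage = 200_000.0    # hypothetical mean damage, given such a quake

expected_payout = p_claim_quake * max(mean_damage - deductible, 0.0)
print(f"expected annual payout ${expected_payout:,.0f}"
      f" vs premium ${annual_premium:,.0f}")

# A risk-neutral owner declines when the premium exceeds the expected
# payout; a risk-averse owner might still buy, because the uninsured
# loss could be ruinous -- the concave-utility point again.
```

The gap between this toy arithmetic and a defensible decision -- sourcing the probability, the damage distribution, and your own risk attitude -- is the point of the exercise.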
My bottom line
My own view of these decision theory matters is that, outside very narrow "only money" settings,
- Simple textbook formulations of utility are unrealistic.
- At a practical level, for making real world decisions,
assigning realistic utilities is too difficult -- no one actually does it.
- At a theory level, there are so many different aspects of reality which could be
put into a more complex utility function that one can likely reproduce any data
by some choice of utility. As remarked elsewhere,
a framework that can explain anything in fact explains nothing.
The same issues arise in game theory.
One conceptually interesting topic, outside the "only money" setting,
was popularized by
John Maynard Smith's classic book
Evolution and the Theory of Games.
Here the role of money is played by
fitness,
in the biology sense relating to mean number of offspring.
As with other toy models, this leads to plausible possible explanations of observed behavior.
But typically one can't observe numerical values for fitness, and models typically
account for only one type of interaction amongst many possible types,
so we are back at the "can explain anything" issue.
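To make the "fitness as payoff" idea concrete, here is the classic Hawk-Dove game from Maynard Smith's framework, with illustrative payoff values V (value of the resource) and C (cost of an escalated fight) -- the very quantities that cannot be observed in the field:

```python
# Hawk-Dove game with illustrative fitness payoffs: V = value of the
# contested resource, C = cost of injury in an escalated fight.
V, C = 2.0, 6.0  # C > V, so the evolutionarily stable strategy is a mixture

PAYOFF = {
    ("H", "H"): (V - C) / 2,  # fight: win or get injured, equally likely
    ("H", "D"): V,            # dove retreats, hawk takes everything
    ("D", "H"): 0.0,          # retreat empty-handed
    ("D", "D"): V / 2,        # share the resource
}

def fitness(strategy: str, p_hawk: float) -> float:
    """Expected payoff against a population playing Hawk with prob p_hawk."""
    return (p_hawk * PAYOFF[(strategy, "H")]
            + (1 - p_hawk) * PAYOFF[(strategy, "D")])

p_star = V / C  # candidate ESS: at this mix, Hawk and Dove do equally well
print(p_star, fitness("H", p_star), fitness("D", p_star))
```

With these numbers the model predicts a stable population of one-third Hawks; the catch, as above, is that V and C are fitted to the observed behavior rather than measured independently.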
Bottom line: be skeptical of game theory explanations of real world phenomena in cases where
you cannot observe the payoff matrix.
Selected practical advice for decisions in everyday life
One of the 100 non-technical books I have reviewed is
Dance with Chance: Making Luck Work for You
by Spyros Makridakis et al., and you can see
the entire review here.
Here is a summary of their conclusions -- one could find partly similar conclusions in
Tetlock's Superforecasting.
- The future is never exactly like the past.
- Complex statistical models fit past data well but don't necessarily predict the future.
- Simple models predict the future better.
- Both statistical models and people have been unable to capture the full extent of future
uncertainty, and have been surprised by large forecasting errors.
- Expert judgement is typically inferior to simple statistical models.
- Averaging (whether of models or of expert opinions) usually improves forecasting accuracy.
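The last claim, at least, is easy to illustrate. Assuming experts' errors are independent and unbiased (a strong assumption -- real forecasters share information and biases), averaging must help, as a toy simulation shows:

```python
import random

random.seed(2024)
TRUTH, N_EXPERTS, N_TRIALS = 10.0, 8, 20_000

single_err = avg_err = 0.0
for _ in range(N_TRIALS):
    # each "expert" forecast = truth + independent unbiased noise
    forecasts = [TRUTH + random.gauss(0.0, 2.0) for _ in range(N_EXPERTS)]
    single_err += abs(forecasts[0] - TRUTH)
    avg_err += abs(sum(forecasts) / N_EXPERTS - TRUTH)

print(f"single expert mean abs error: {single_err / N_TRIALS:.2f}")
print(f"average-of-8 mean abs error:  {avg_err / N_TRIALS:.2f}")
```

When errors are correlated or systematically biased the gain shrinks, which is why averaging usually improves, rather than guarantees, accuracy.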
I liked their categorization of 4 ways to
make a decision (they invented the word sminking): in the list below I have suggested how each applies to a student's
choice of Major.
- sminking: "using some simple explicit rule". For instance, basing
the choice of Major on one or two particular factors, such as
what you enjoy, or what will lead to a well-paid career.
- thinking: "trying to take everything into account".
Putting a lot of effort into the decision, considering many factors and comparing
with other Majors.
- blinking: "instant gut reaction".
"I didn't need to think about it, I already knew what was right for me."
- ask an expert: relying mostly on advice, from e.g. a Berkeley advisor or a parent.
A view from the other side
This course emphasizes quantifiable aspects of chance, but let's see an opposing perspective,
from Jason Puskar, author of
Accident Society: Fiction, Collectivity, and the Production of Chance.
Modern risk analysis turns a whole range of activities into gambles:
statistical prediction and analysis makes eating shellfish, driving small cars,
breathing urban air, or even exposing oneself to the sun seem like wagers in an uncertain game.
The point is not that these kinds of activities are gambles in any essential way,
but that modern society defines them as such, by estimating odds, publishing that
information widely, and then asking citizens both to choose wisely among the various
options and to bear responsibility for the results.
Purveyors of risk information usually claim that their rational and scientific assessments
help individuals choose safer or more beneficial courses of action. Perhaps so, but in the
process they also confront risk consumers with an ever proliferating array of private risk
situations. Attempting to mitigate one risk, such as the risk of breast cancer, forces an
encounter with a new risk, such as the risk of radiation from a mammogram. The island of
safety and security that risk analysis promises to deliver never comes into view because
each risk decision only delivers us to ever more numerous and vexing risk assessments still.