Modern probabilists regard the martingale optional stopping theorem as one of the most important results in their subject. As propaganda for martingales we give below four quick applications of that theorem, and a few more will appear later. All of these results could be proved in alternative, elementary ways. For the reader unfamiliar with martingales, Durrett [133] Chapter 4 contains much more than you need to know; Karlin and Taylor [208] Chapter 6 is a gentler introduction.
Given a non-empty subset $A$ of the state space and a function $f$ defined for $i \in A$, there exists a unique extension of $f$ to a function $h$ defined on all states, satisfying
$$h(i) = \sum_j p_{ij} h(j), \quad i \notin A.$$
Proof. If $h$ satisfies the equations above then for any initial distribution the process $h(X_{t \wedge T_A})$ is a martingale. So by the optional stopping theorem
$$h(i) = E_i f(X_{T_A}). \qquad (2.28)$$
This establishes uniqueness. Conversely, if we define $h$ by (2.28) then the desired equations hold by conditioning on the first step.
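For concreteness, here is a small numerical sketch in Python (the 5-state chain, the set $A$ and the boundary values $f$ are invented for illustration, not taken from the text): the interior equations are solved as a linear system, and the solution agrees with a Monte Carlo estimate of $E_i f(X_{T_A})$ as in (2.28).

```python
import numpy as np

# Hypothetical 5-state chain; transition matrix invented for illustration.
P = np.array([
    [0.2, 0.8, 0.0, 0.0, 0.0],
    [0.3, 0.0, 0.7, 0.0, 0.0],
    [0.0, 0.4, 0.0, 0.6, 0.0],
    [0.0, 0.0, 0.5, 0.0, 0.5],
    [0.0, 0.0, 0.0, 0.9, 0.1],
])
A = [0, 4]                    # boundary set A
B = [1, 2, 3]                 # states outside A
f_A = np.array([2.0, -1.0])   # prescribed values of f on A

# h(i) = sum_j p_ij h(j) for i outside A, with h = f on A;
# rearranged: (I - P_BB) h_B = P_BA f_A.
h = np.empty(5)
h[A] = f_A
h[B] = np.linalg.solve(np.eye(3) - P[np.ix_(B, B)], P[np.ix_(B, A)] @ f_A)

# Monte Carlo estimate of E_i f(X_{T_A}), the formula (2.28).
rng = np.random.default_rng(0)
est = []
for i in B:
    runs = []
    for _ in range(5000):
        x = i
        while x not in A:
            x = rng.choice(5, p=P[x])
        runs.append(h[x])          # h = f on A
    est.append(np.mean(runs))
print(h[B], est)
```

The linear solve is the uniqueness half of the lemma; the simulation is the existence half.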
If $h$ is harmonic, i.e. if
$$h(i) = \sum_j p_{ij} h(j) \mbox{ for all } i,$$
then $h$ is constant.
Proof. Clearly a constant function is harmonic. So the Corollary follows from the uniqueness assertion of Lemma 2.27, taking $A$ to be some singleton.
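Numerically, the corollary says the kernel of $I - P$ is one-dimensional, spanned by the constant vector. A quick sketch (random irreducible chain invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
P = rng.random((n, n)) + 0.05      # strictly positive, hence irreducible
P /= P.sum(axis=1, keepdims=True)  # normalize rows to get a stochastic matrix

# h harmonic means (I - P) h = 0.  For an irreducible chain this kernel
# is spanned by the constant vector, so every harmonic h is constant.
kernel_dim = n - np.linalg.matrix_rank(np.eye(n) - P)
print("kernel dimension:", kernel_dim)

# The constant vector is indeed harmonic.
assert np.allclose((np.eye(n) - P) @ np.ones(n), 0)
```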
The sum $\sum_j \pi_j E_i T_j$ does not depend on $i$.
Proof. This repeats Corollary 2.14 with a different argument. The first-step recurrence for $E_i T_j$ is
$$E_i T_j = 1 + \sum_k p_{ik} E_k T_j, \quad j \neq i.$$
By Corollary 2.28 it is enough to show that $h(i) := \sum_j \pi_j E_i T_j$ is a harmonic function. We calculate
$$\sum_k p_{ik} h(k) = \sum_j \pi_j \sum_k p_{ik} E_k T_j = \sum_{j \neq i} \pi_j (E_i T_j - 1) + \pi_i (E_i T_i^+ - 1) = h(i) - 1 + \pi_i E_i T_i^+ .$$
But $E_i T_i^+ = 1/\pi_i$, so $h$ is indeed harmonic.
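The invariance can be checked numerically; a sketch with a random 5-state chain (invented for illustration — `target[i]` below is $\sum_j \pi_j E_i T_j$, computed from exact linear solves):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
P = rng.random((n, n)) + 0.05
P /= P.sum(axis=1, keepdims=True)

# Stationary distribution: left eigenvector of P for eigenvalue 1.
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi /= pi.sum()

# Mean hitting times E_i T_j: for each target j, solve
# (I - Q) t = 1 on the states other than j, with E_j T_j = 0.
ET = np.zeros((n, n))
for j in range(n):
    idx = [i for i in range(n) if i != j]
    Q = P[np.ix_(idx, idx)]
    t = np.linalg.solve(np.eye(n - 1) - Q, np.ones(n - 1))
    for i, ti in zip(idx, t):
        ET[i, j] = ti

target = ET @ pi        # target[i] = sum_j pi_j E_i T_j
print(target)           # all entries coincide
```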
For any stopping time $T$ and any states $j, k$,
$$E_i (\mbox{number of transitions } j \to k \mbox{ before } T) = p_{jk} \, E_i (\mbox{number of visits to } j \mbox{ before } T).$$
Proof. Recall that “before” means strictly before. The assertion of the lemma is intuitively obvious, because each time the chain visits $j$ it has chance $p_{jk}$ to make a transition $j \to k$, and one can formalize this as in the proof of Proposition 2.4. A more sophisticated proof is to observe that $(M_t)$ is a martingale, where
$$M_t = N_{jk}(t) - p_{jk} N_j(t), \quad N_j(t) = |\{0 \le s < t : X_s = j\}|, \quad N_{jk}(t) = |\{0 \le s < t : X_s = j, X_{s+1} = k\}|,$$
and the assertion of the lemma is just the optional stopping theorem applied to the martingale $(M_t)$ and the stopping time $T$.
Let $A$ be a non-empty subset of the state space and let $h$ satisfy
(i) $h(i) \ge 1 + \sum_j p_{ij} h(j), \quad i \notin A$
(ii) $h(i) \ge 0$ for all $i$.
Then $E_i T_A \le h(i)$.
Proof. For arbitrary initial state $i$, define $g$ by
$$g(j) = h(j) - 1 - \sum_k p_{jk} h(k) \ (j \notin A), \qquad g(j) = 0 \ (j \in A),$$
and then define
$$M_t = h(X_{t \wedge T_A}) + (t \wedge T_A) + \sum_{0 \le s < t \wedge T_A} g(X_s).$$
Then $(M_t)$ is a martingale, so the optional sampling theorem says
$$h(i) = E_i M_0 = E_i M_t = E_i h(X_{t \wedge T_A}) + E_i (t \wedge T_A) + E_i \sum_{0 \le s < t \wedge T_A} g(X_s).$$
But the hypotheses on $h$ imply $h \ge 0$ and $g \ge 0$, so $h(i) \ge E_i (t \wedge T_A) \uparrow E_i T_A$ as $t \to \infty$.
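A sketch of the lemma in action (hypothetical chain, invented for illustration): for a downward-biased walk on $\{0,\ldots,n\}$ with drift at most $-\delta$ off $A = \{0\}$, the function $h(i) = i/\delta$ satisfies (i) and (ii), and the exact $E_i T_0$, computed by a linear solve, is bounded by $h$.

```python
import numpy as np

n, p = 10, 0.7                  # hypothetical biased walk on {0,...,n}
delta = 2 * p - 1               # interior downward drift is -(2p - 1)
P = np.zeros((n + 1, n + 1))
P[0, 0] = 1.0                   # behaviour at 0 is irrelevant for T_0
for i in range(1, n):
    P[i, i - 1], P[i, i + 1] = p, 1 - p
P[n, n - 1], P[n, n] = p, 1 - p

# h(i) = i / delta satisfies (i): h(i) >= 1 + sum_j p_ij h(j) off {0},
# and (ii): h >= 0, so the lemma gives E_i T_0 <= i / delta.
h = np.arange(n + 1) / delta
assert np.all(h[1:] >= 1 + (P @ h)[1:] - 1e-12)

# Exact E_i T_0 by solving (I - Q) t = 1 on states 1..n.
Q = P[1:, 1:]
t = np.linalg.solve(np.eye(n) - Q, np.ones(n))
print(t, h[1:])                 # exact hitting times vs Lyapunov bound
```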
A theme running throughout the book is the idea of getting inequalities for a “hard” chain by making a comparison with some “easier” chain for which we can do exact calculations. Here is a simple example.
Let $(X_t)$ be a discrete-time chain on states $\{0, 1, \ldots, n\}$ such that $E(X_{t+1} \mid X_t = i) \le i - a_i$ whenever $i \ge 1$, for constants $a_i > 0$. Write $h(i) = \sum_{j=1}^{i} 1/a_j$, and suppose $a_1 \le a_2 \le \cdots \le a_n$. Then $E_i T_0 \le h(i)$.
Proof. The proof implicitly compares the given chain to the continuous-time chain with transition rates $q_{i,i-1} = a_i$. Write $h(i) = \sum_{j=1}^{i} 1/a_j$, and extend $h$ by linear interpolation to real $0 \le x \le n$. Then $h$ is concave and for $i \ge 1$
$$E(h(X_{t+1}) \mid X_t = i) \le h(E(X_{t+1} \mid X_t = i)) \le h(i - a_i) \le h(i) - a_i h'(i) = h(i) - 1,$$
where $h'(i) = 1/a_i$ is the left derivative. The result now follows from Lemma 2.31.
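To illustrate the comparison (hypothetical chain, invented for illustration): a lazy walk on $\{0,\ldots,n\}$ stepping down with probability $1/2$ and up with probability $1/2 - i/(2n)$ has drift $-a_i$ with $a_i = i/(2n)$ nondecreasing, so $E_i T_0 \le \sum_{j \le i} 2n/j$.

```python
import numpy as np

n = 12
a = np.arange(n + 1) / (2 * n)     # a_i = i/(2n), nondecreasing
P = np.zeros((n + 1, n + 1))
P[0, 0] = 1.0
for i in range(1, n + 1):
    up = 0.5 - a[i]                # equals 0 at i = n, so X stays in range
    P[i, i - 1] = 0.5
    P[i, min(i + 1, n)] += up
    P[i, i] += 0.5 - up

# Drift check: E(X_{t+1} | X_t = i) <= i - a_i for i >= 1.
x = np.arange(n + 1)
assert np.all((P @ x)[1:] <= x[1:] - a[1:] + 1e-12)

# Comparison bound h(i) = sum_{j<=i} 1/a_j versus exact E_i T_0.
h = np.concatenate([[0.0], np.cumsum(1 / a[1:])])
t = np.linalg.solve(np.eye(n) - P[1:, 1:], np.ones(n))
print(t[-1], h[-1])                # exact E_n T_0 vs the bound
```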
As mentioned previously, the results above don’t really require martingales. Next we record a genuine martingale result, not directly involving Markov chains but ultimately useful in their analysis. Part (c) is Wald’s equation and part (b) is Wald’s equation for martingales. The result is a standard consequence of the optional sampling theorem: see [133] (3.1.6) for (c) and [133] Theorem 4.7.5 for (a).
(a) Let $(M_t)$ be a supermartingale such that
$$E(|M_{t+1} - M_t| \mid \mathcal{F}_t) \le c, \quad t \ge 0$$
for a constant $c$. Then for any stopping time $T$ with $ET < \infty$,
$$E M_T \le E M_0.$$
(b) If in the hypothesis we replace “supermartingale” by “martingale”,
then $E M_T = E M_0$.
(c) In particular, if $S_t = \sum_{u=1}^{t} \xi_u$ for
i.i.d. nonnegative $(\xi_u)$ with $E\xi_1 < \infty$, then
$E S_T = ET \cdot E\xi_1$.
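A quick numerical sketch of (c) (hypothetical setup, invented for illustration): take $\xi_u$ i.i.d. Bernoulli($p$) and $T$ the first time the sum reaches $m$. Then $S_T = m$ exactly, so Wald's equation forces $ET = m/p$.

```python
import random

random.seed(0)
p, m, trials = 0.3, 5, 20000

# T = first t with S_t = m is a stopping time; the increments are i.i.d.
# Bernoulli(p), so Wald's equation E S_T = E T * E xi_1 gives E T = m / p.
total_T = 0
for _ in range(trials):
    s = t = 0
    while s < m:
        s += random.random() < p   # Bernoulli(p) increment
        t += 1
    total_T += t

mean_T = total_T / trials
print(mean_T, m / p)               # the two should agree closely
```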