Our convention is that a graph has finite vertex-set $V$ and edge-set $\mathcal{E}$, is connected and undirected, has no multiple edges, and has no self-loops. In a weighted graph, each edge $(v,x)$ also has a weight $w_{vx} = w_{xv} > 0$, and we allow a weighted graph to have self-loops.
Given a weighted graph, there is a natural definition of a Markov chain on the vertices. This requires an arbitrary choice of convention: do we want to regard an absent edge as having weight $0$ or weight $\infty$? In terms of electrical networks (Section 3.3) the question is whether to regard weights as conductances or as resistances of wires. Conceptually one can make good arguments for either choice, but formulas look simpler with the conductance convention (absent edges have weight $0$), so we’ll adopt that convention. Define discrete-time random walk on a weighted graph to be the Markov chain with transition matrix
$$p_{vx} = \frac{w_{vx}}{w_v} \qquad (3.13)$$
where
$$w_v := \sum_x w_{vx}, \qquad w := \sum_v w_v.$$
Note that $w$ is the total edge-weight, when each edge is counted twice, i.e., once in each direction. The fundamental fact is that this chain is automatically reversible with stationary distribution
$$\pi_v = \frac{w_v}{w} \qquad (3.14)$$
because (3.1) is obviously satisfied by this $\pi$: both sides of the detailed balance equation equal $w_{vx}/w$. Our standing convention that graphs be connected implies that the chain is irreducible. Conversely, with our standing convention that chains be irreducible, any reversible chain can be regarded as random walk on the weighted graph with edge-weights $w_{vx} := \pi_v p_{vx}$. Note also that the “aperiodic” condition for a Markov chain (occurring in the convergence theorem, Chapter 2 Theorem 2) is just the condition that the graph not be bipartite.
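As a numerical sanity check of (3.13) and (3.14), the following sketch (not from the text; the vertex names and weights are made up for illustration) builds the transition matrix of the walk on a small weighted graph and verifies detailed balance and stationarity:

```python
# Symmetric edge-weights w[(v,x)] = w[(x,v)] > 0; absent edges have
# weight 0 (the "conductance" convention adopted in the text).
w = {
    ("a", "b"): 2.0,
    ("b", "c"): 1.0,
    ("a", "c"): 3.0,
    ("c", "d"): 1.0,
}

vertices = sorted({v for e in w for v in e})

def weight(v, x):
    return w.get((v, x), w.get((x, v), 0.0))

w_v = {v: sum(weight(v, x) for x in vertices) for v in vertices}  # w_v
w_tot = sum(w_v.values())                                         # w

# Transition matrix (3.13): p(v,x) = w_vx / w_v
p = {(v, x): weight(v, x) / w_v[v] for v in vertices for x in vertices}

# Stationary distribution (3.14): pi_v = w_v / w
pi = {v: w_v[v] / w_tot for v in vertices}

# Detailed balance (3.1): pi_v p(v,x) = pi_x p(x,v); both equal w_vx / w
for v in vertices:
    for x in vertices:
        assert abs(pi[v] * p[(v, x)] - pi[x] * p[(x, v)]) < 1e-12

# Hence pi is stationary: sum_v pi_v p(v,x) = pi_x
for x in vertices:
    assert abs(sum(pi[v] * p[(v, x)] for v in vertices) - pi[x]) < 1e-12
print("detailed balance and stationarity verified")
```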
An unweighted graph can be fitted into this setup by simply assigning weight $1$ to each edge. Since we’ll be talking a lot about this case, let’s write out the specialization explicitly. The transition matrix becomes
$$p_{vx} = \begin{cases} 1/d_v & \text{if } (v,x) \text{ is an edge} \\ 0 & \text{otherwise} \end{cases}$$
where $d_v$ is the degree of vertex $v$. The stationary distribution becomes
$$\pi_v = \frac{d_v}{2|\mathcal{E}|} \qquad (3.15)$$
where $|\mathcal{E}|$ is the number of edges of the graph. In particular, on an unweighted regular graph the stationary distribution is uniform.
In continuous time there are two different ways to associate a walk with a weighted or unweighted graph. One way (and we use this way unless otherwise mentioned) is just to use (3.13) as the definition of the transition rates $q_{vx}$. In the language of Chapter 2, this is the continuization of the discrete-time walk, and has the same stationary distribution and mean hitting times as the discrete-time walk. The alternative definition, which we call the fluid model, uses the weights directly as transition rates:
$$q_{vx} = w_{vx}, \quad x \neq v. \qquad (3.16)$$
In this model the stationary distribution is always uniform (cf. Section 3.2.1). In the case of an unweighted regular graph the two models are identical up to a deterministic time rescaling, but for non-regular graphs there are typically no exact relations between numerical quantities for the two continuous-time models. Note that, given an arbitrary continuous-time reversible chain, we can define edge-weights via
$$w_{vx} := \pi_v q_{vx}$$
but the weights do not completely determine the chain: we can specify the $\pi_v$ independently and then solve for the $q_{vx}$’s.
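The contrast between the two continuous-time conventions can be checked numerically. The sketch below (a made-up three-vertex weighted graph, not from the text) builds both generators and confirms that the continuization keeps the stationary distribution $\pi_v = w_v/w$ while the fluid model's stationary distribution is uniform:

```python
# Made-up weighted graph on vertices 0, 1, 2.
w = {(0, 1): 2.0, (1, 2): 1.0, (0, 2): 3.0}
n = 3

def weight(v, x):
    return w.get((v, x), w.get((x, v), 0.0))

w_v = [sum(weight(v, x) for x in range(n)) for v in range(n)]

def stationary(q):
    # Power iteration on the uniformized chain P = I + Q/c, which has
    # the same stationary distribution as the chain with generator Q.
    c = 2.0 * max(-q[v][v] for v in range(n))
    pi = [1.0 / n] * n
    for _ in range(20000):
        pi = [sum(pi[v] * ((1.0 if v == x else 0.0) + q[v][x] / c)
                  for v in range(n))
              for x in range(n)]
    return pi

# Continuization: rates q(v,x) = p(v,x) = w_vx / w_v (diagonal -1,
# since there are no self-loops here).
q_cont = [[weight(v, x) / w_v[v] if x != v else -1.0 for x in range(n)]
          for v in range(n)]
# Fluid model (3.16): rates q(v,x) = w_vx.
q_fluid = [[weight(v, x) if x != v else -w_v[v] for x in range(n)]
           for v in range(n)]

pi_cont = stationary(q_cont)    # proportional to w_v
pi_fluid = stationary(q_fluid)  # uniform

for v in range(n):
    assert abs(pi_cont[v] - w_v[v] / sum(w_v)) < 1e-9
    assert abs(pi_fluid[v] - 1.0 / n) < 1e-9
print(pi_cont, pi_fluid)
```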
Though there’s no point in writing out all the specializations of the general theory of Chapter 2, let us emphasize the simple expressions for mean return times of discrete-time walk obtained from Chapter 2 Lemma 5 and the expressions (3.14)–(3.15) for the stationary distribution.
Lemma 3.5: For random walk on an $n$-vertex graph,
$$E_v T_v^+ = \frac{w}{w_v} \text{ in the weighted case}; \qquad E_v T_v^+ = \frac{2|\mathcal{E}|}{d_v} \text{ in the unweighted case}.$$
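This mean-return-time formula can be confirmed on a small example. The sketch below (a made-up four-vertex unweighted graph) computes $E_v T_v^+$ by first-step analysis, via fixed-point iteration for the hitting times, and checks it against $2|\mathcal{E}|/d_v$:

```python
# Made-up unweighted graph: a 4-cycle plus the chord (0, 2).
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n = 4
m = len(edges)
nbrs = {v: [] for v in range(n)}
for v, x in edges:
    nbrs[v].append(x)
    nbrs[x].append(v)

def mean_hitting_times(target):
    # Fixed-point iteration for h[v] = E_v T_target:
    # h[target] = 0, h[v] = 1 + average of h over neighbours of v.
    h = [0.0] * n
    for _ in range(5000):
        h = [0.0 if v == target else
             1.0 + sum(h[x] for x in nbrs[v]) / len(nbrs[v])
             for v in range(n)]
    return h

for v in range(n):
    h = mean_hitting_times(v)
    # First-step decomposition of the return time.
    mean_return = 1.0 + sum(h[x] for x in nbrs[v]) / len(nbrs[v])
    assert abs(mean_return - 2.0 * m / len(nbrs[v])) < 1e-6
print("E_v T_v^+ = 2|E|/d_v checked")
```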
Chess moves.
Here is a classic homework problem for an undergraduate Markov chains course.
Start a knight at a corner square of an otherwise-empty chessboard. Move the knight at random, by choosing uniformly from the legal knight-moves at each step. What is the mean number of moves until the knight returns to the starting square?
It’s a good question, because if you don’t know Markov chain theory it looks too messy to do by hand, whereas using Markov chain theory it becomes very simple. The knight is performing random walk on a graph (the 64 squares are the vertices, and the possible knight-moves are the edges). It is not hard to check that the graph is connected, so by the elementary Lemma 3.5, for a corner square $c$ the mean return time is
$$E_c T_c^+ = \frac{2|\mathcal{E}|}{d_c} = \frac{2|\mathcal{E}|}{2},$$
and by drawing a sketch in the margin the reader can count the number of edges to be $|\mathcal{E}| = 168$.
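For readers who would rather not count edges by hand, the computation can be sketched in a few lines: build the knight's-move graph on the $8 \times 8$ board, count its edges, and apply Lemma 3.5 at the corner.

```python
# The 8 possible knight displacements.
moves = [(1, 2), (2, 1), (-1, 2), (-2, 1),
         (1, -2), (2, -1), (-1, -2), (-2, -1)]
squares = [(r, c) for r in range(8) for c in range(8)]

def knight_nbrs(sq):
    r, c = sq
    return [(r + dr, c + dc) for dr, dc in moves
            if 0 <= r + dr < 8 and 0 <= c + dc < 8]

# Each undirected edge is counted once from each endpoint.
num_edges = sum(len(knight_nbrs(sq)) for sq in squares) // 2
d_corner = len(knight_nbrs((0, 0)))

print(num_edges, d_corner, 2 * num_edges / d_corner)
# prints: 168 2 168.0  -- mean return time 168 moves
```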
The following cute variation of Lemma 3.5 is sometimes useful. Given the discrete-time random walk $(X_t)$, consider the process
$$Z_t := (X_{t-1}, X_t),$$
recording the present position at time $t$ and also the previous position. Clearly $(Z_t)$ is a Markov chain whose state-space is the set of directed edges, and its stationary distribution ($\pi^*$, say) is
$$\pi^*_{(v,x)} = \pi_v p_{vx} = \frac{w_{vx}}{w}$$
in the general weighted case, and hence
$$\pi^*_{(v,x)} = \frac{1}{2|\mathcal{E}|}$$
in the unweighted case.
in the unweighted case. Now given an edge , we can apply ††margin: 9/10/99 versionChapter 2 Lemma 5 to and the state to deduce the following.
Given an edge $(v,x)$, define $T^+_{(v,x)}$ to be the time of the first traversal of the edge in the direction $v \to x$, that is,
$$T^+_{(v,x)} := \min\{t \ge 1 : X_{t-1} = v,\ X_t = x\}.$$
Then
$$E\big(T^+_{(v,x)} \,\big|\, X_{-1} = v,\ X_0 = x\big) = \frac{1}{\pi^*_{(v,x)}} = \frac{w}{w_{vx}}.$$
Since between two successive traversals of $(v,x)$ the walk must travel from $x$ to $v$ and then from $v$ to $x$, this yields the following. For an edge $(v,x)$,
$$E_v T_x + E_x T_v \le \frac{w}{w_{vx}}.$$
We shall soon see (Section 3.3.3) that this inequality has a natural interpretation in terms of electrical resistance, but it is worth remembering that the result is more elementary than that.
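The inequality is easy to test numerically. The sketch below (a made-up unweighted five-vertex graph, where the bound reads $E_v T_x + E_x T_v \le 2|\mathcal{E}|$) checks it on every edge:

```python
# Made-up unweighted graph: a 5-cycle plus the chord (0, 2).
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 2)]
n = 5
m = len(edges)
nbrs = {v: [] for v in range(n)}
for v, x in edges:
    nbrs[v].append(x)
    nbrs[x].append(v)

def hit(target):
    # h[v] = E_v T_target via fixed-point iteration of the
    # first-step equations.
    h = [0.0] * n
    for _ in range(5000):
        h = [0.0 if v == target else
             1.0 + sum(h[x] for x in nbrs[v]) / len(nbrs[v])
             for v in range(n)]
    return h

for v, x in edges:
    commute = hit(x)[v] + hit(v)[x]
    assert commute <= 2 * m + 1e-6   # commute time <= 2|E| on an edge
print("commute-time bound verified on every edge")
```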
Here is another variant of Lemma 3.5.
For random walk on a weighted $n$-vertex graph,
$$\sum_{(v,x)} \frac{w_{vx}}{w}\,\big(E_v T_x + E_x T_v\big) = n - 1,$$
where the sum is over undirected edges $(v,x)$.
Proof. Writing $\sum_{\rightarrow}$ for the sum over directed edges $(v,x)$, the left side equals
$$\sum_{\rightarrow} \frac{w_{vx}}{w}\, E_x T_v = \sum_v \pi_v \sum_x p_{vx} E_x T_v = \sum_v \pi_v \big(E_v T_v^+ - 1\big) = \sum_v (1 - \pi_v) = n - 1,$$
the last two equalities using $E_v T_v^+ = 1 + \sum_x p_{vx} E_x T_v$ and Lemma 3.5.
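The identity can also be checked numerically. The sketch below (a made-up four-vertex weighted graph) computes the hitting times for the weighted walk and verifies that the edge-sum equals $n - 1$:

```python
# Made-up weighted graph on 4 vertices.
w = {(0, 1): 2.0, (1, 2): 1.0, (2, 3): 4.0, (3, 0): 1.0, (0, 2): 3.0}
n = 4

def weight(v, x):
    return w.get((v, x), w.get((x, v), 0.0))

w_v = [sum(weight(v, x) for x in range(n)) for v in range(n)]
w_tot = sum(w_v)   # total edge-weight, each edge counted twice

def hit(target):
    # h[v] = E_v T_target for the weighted walk (transition
    # probabilities w_vx / w_v), by fixed-point iteration.
    h = [0.0] * n
    for _ in range(5000):
        h = [0.0 if v == target else
             1.0 + sum(weight(v, x) * h[x] for x in range(n)) / w_v[v]
             for v in range(n)]
    return h

# Sum over undirected edges of (w_vx / w) * commute time.
total = sum((wt / w_tot) * (hit(x)[v] + hit(v)[x])
            for (v, x), wt in w.items())
assert abs(total - (n - 1)) < 1e-4
print(total)
```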
Imagine a finite number of identical buckets which can hold unit quantity of fluid. Some pairs of buckets are connected by tubes through their bottoms. If a tube connects buckets $i$ and $j$ then, when the quantities of fluid in buckets $i$ and $j$ are $h_i$ and $h_j$, the flow rate through the tube should be proportional to the pressure difference and hence should be $w_{ij}(h_i - h_j)$ in the direction $i \to j$, where $w_{ij}$ is a parameter. Neglecting the fluid in the tubes, the quantities of fluid $h_j(t)$ at time $t$ will evolve according to the differential equations
$$\frac{d h_j(t)}{dt} = \sum_i w_{ij}\,\big(h_i(t) - h_j(t)\big).$$
These of course are the same equations as the forward equations [(4) of Chapter 2] for $p_j(t)$ (the probability of being in state $j$ at time $t$) for the continuous-time chain with transition rates $q_{ij} = w_{ij}$. Hence we call this particular way of defining a continuous-time chain in terms of a weighted graph the fluid model. Our main purpose in mentioning this notion is to distinguish it from the electrical network analogy in the next section. Our intuition about fluids says that as $t \to \infty$ the fluid will distribute itself uniformly amongst buckets, which corresponds to the elementary fact that the stationary distribution of the “fluid model” chain is always uniform. Our intuition also says that increasing a “specific flow rate” parameter $w_{ij}$ will make the fluid settle faster, and this corresponds to a true fact about the “fluid model” Markov chain (in terms of the eigenvalue interpretation of asymptotic convergence rate; see Corollary 3.28). On the other hand the same assertion for the usual discrete-time chain or its continuization is simply false.
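The bucket dynamics are easy to simulate. The sketch below (a made-up three-bucket network) Euler-integrates the differential equations above and confirms that the total fluid is conserved while the levels approach the uniform value:

```python
# Made-up network: buckets 0-1 and 1-2 connected by tubes.
w = {(0, 1): 1.0, (1, 2): 2.0}
n = 3

def weight(i, j):
    return w.get((i, j), w.get((j, i), 0.0))

h = [1.0, 0.0, 0.0]   # all fluid starts in bucket 0; total quantity 1
dt = 0.01
for _ in range(20000):  # crude Euler integration up to time t = 200
    # dh_j/dt = sum_i w_ij (h_i - h_j)
    flow = [sum(weight(i, j) * (h[i] - h[j]) for i in range(n))
            for j in range(n)]
    h = [h[j] + dt * flow[j] for j in range(n)]

assert abs(sum(h) - 1.0) < 1e-9        # fluid is conserved
for j in range(n):
    assert abs(h[j] - 1.0 / n) < 1e-6  # and settles to the uniform level
print(h)
```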