5 Examples: Special Graphs and Trees (April 23 1996)

5.2 Special graphs

In this section we record results about some specific easy-to-analyze graphs. As in Section 5.1.3, we focus on the parameters $\tau^*,\tau_0,\tau_1,\tau_2,\tau_c$ discussed in Chapter 4; orders of magnitude of these parameters (in the asymptotic setting discussed with each example) are summarized in terms of $n$, the number of vertices, in the following table. A minor theme is that some of the graphs are known or conjectured to be extremal for our parameters. In the context of extremality we ignore the parameter $\tau_1$, since its exact definition is a little arbitrary.

jjj David: (1) Shall I add complete bipartite to table? (2) Please fill in missing entries for torus.

Orders of magnitude of parameters [$\tau=\Theta(\mbox{entry})$] for special graphs.

Example                          $\tau^*$    $\tau_0$    $\tau_1$    $\tau_2$    $\tau_c$
5.7. cycle                       $n^2$       $n^2$       $n^2$       $n^2$       $n$
5.8. path                        $n^2$       $n^2$       $n^2$       $n^2$       $n$
5.9. complete graph              $n$         $n$         $1$         $1$         $1$
5.10. star                       $n$         $n$         $1$         $1$         $1$
5.11. barbell                    $n^3$       $n^3$       $n^3$       $n^3$       $n^2$
5.12. lollipop                   $n^3$       $n^2$       $n^2$       $n^2$       $n$
5.13. necklace                   $n^2$       $n^2$       $n^2$       $n^2$       $n$
5.14. balanced $r$-tree          $n\log n$   $n\log n$   $n$         $n$         $n$
5.15. $d$-cube ($d=\log_2n$)     $n$         $n$         $d\log d$   $d$         $d$
5.16. dense regular graphs       $n$         $n$         $1$         $1$         $1$
5.17. $d$-dimensional torus
      $d=2$                      jjj?        $n\log n$   $n^{2/d}$   $n^{2/d}$   jjj? $n^{1/d}$
      $d\geq 3$                  jjj?        $n$         $n^{2/d}$   $n^{2/d}$   jjj? $n^{1/d}$
5.19. rook’s walk                $n$         $n$         $1$         $1$         $1$

In simpler cases we also record the $t$-step transition probabilities $P_i(X_t=j)$ in discrete and continuous time. In fact one could write out exact expressions for $P_i(X_t=j)$, and indeed for hitting time distributions, in almost all these examples, but complicated exact expressions are seldom very illuminating.

qqq names of graphs vary—suggestions for “standard names” from readers of drafts are welcome.

Example 5.7

The $n$-cycle.

This is just the graph $0-1-2-\cdots-(n-1)-0$ on $n$ vertices. By rotational symmetry, it is enough to give formulas for the random walk started at $0$. If $(\hat{X}_t)$ is random walk on (all) the integers, then $X_t=\phi(\hat{X}_t)$ is random walk on the $n$-cycle, for

\phi(i)=i\mbox{ mod }n.

Thus results for the $n$-cycle can be deduced from results for the integers. For instance,

E_0T_i=i(n-i)   (5.24)

by Lemma 5.2, because this is the mean exit time from $(i-n,i)$ for random walk on the integers. We can now calculate

\max_{ij}E_iT_j=\lfloor n^2/4\rfloor
\tau^*\equiv\max_{ij}(E_iT_j+E_jT_i)=2\lfloor n^2/4\rfloor   (5.25)
\tau_0=n^{-1}\sum_jE_0T_j=(n^2-1)/6   (5.26)

where for the final equality we used the formula

\sum_{i=0}^{n-1}i(n-i)=\frac{n(n^2-1)}{6}.
As at (5.9) and (5.10) we can get an expression for the distribution at time $t$ from the Binomial distribution (in discrete time) or the Poisson distribution (in continuous time). The former is

P_0(X_t=i)=\sum_{j:2j-t=i\ {\rm mod}\ n}\frac{t!}{j!(t-j)!}2^{-t}.

A more useful expression is obtained from the spectral representation. The $n$ eigenvalues of the transition matrix are $\cos(2\pi m/n)$, $0\leq m\leq n-1$. That is, $1$ and (if $n$ is even) $-1$ are simple eigenvalues, with respective normalized eigenvectors $u_{i0}=1/\sqrt{n}$ and $u_{i,n/2}=(-1)^i/\sqrt{n}$ ($0\leq i\leq n-1$). The multiplicity of $\cos(2\pi m/n)$ is $2$ for $0<m<n/2$; the corresponding normalized eigenvectors are $u_{im}=\sqrt{2/n}\cos(2\pi im/n)$ and $u_{i,-m}=\sqrt{2/n}\sin(2\pi im/n)$ ($0\leq i\leq n-1$). Thus

P_0(X_t=i)=\frac{1}{n}\sum_{m=0}^{n-1}(\cos(2\pi m/n))^t\cos(2\pi im/n),

a fact most easily derived using Fourier analysis.
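The spectral expression can be compared numerically against the $t$-step distribution computed by direct iteration of the transition matrix; the sketch below (pure Python, not part of the text) does exactly that for one small case:

```python
import math

def walk_dist(n, t):
    # t-step distribution on the n-cycle started at 0, by direct iteration:
    # the walk steps to each of its two neighbours with probability 1/2.
    p = [0.0] * n
    p[0] = 1.0
    for _ in range(t):
        p = [(p[(i - 1) % n] + p[(i + 1) % n]) / 2 for i in range(n)]
    return p

def spectral(n, t, i):
    # the display above: (1/n) * sum_m cos(2*pi*m/n)^t * cos(2*pi*i*m/n)
    return sum(math.cos(2 * math.pi * m / n) ** t *
               math.cos(2 * math.pi * i * m / n) for m in range(n)) / n

n, t = 7, 6
p = walk_dist(n, t)
assert all(abs(p[i] - spectral(n, t, i)) < 1e-12 for i in range(n))
```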

jjj Cite Diaconis book [123]? Continue same paragraph:

So the relaxation time is

\tau_2=\frac{1}{1-\cos(2\pi/n)}\sim\frac{n^2}{2\pi^2}.
As an aside, note that the eigentime identity (Chapter 3 yyy) gives the curious identity

\frac{n^2-1}{6}=\sum_{m=1}^{n-1}\frac{1}{1-\cos(2\pi m/n)}

whose $n\rightarrow\infty$ limit is the well-known formula $\sum_{m=1}^\infty m^{-2}=\pi^2/6$.
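The identity is easy to test numerically; a throwaway stdlib check (not from the text):

```python
import math

# Check (n^2 - 1)/6 = sum_{m=1}^{n-1} 1/(1 - cos(2*pi*m/n)) for several n.
for n in (3, 10, 25):
    s = sum(1.0 / (1.0 - math.cos(2 * math.pi * m / n)) for m in range(1, n))
    assert abs(s - (n * n - 1) / 6) < 1e-8
```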

If $n$ is even, the discrete-time random walk is periodic. This parity problem vanishes in continuous time, for which we have the formula

P_0(X(t)=i)=\frac{1}{n}\sum_{m=0}^{n-1}\exp(-t(1-\cos(2\pi m/n)))\cos(2\pi im/n).   (5.27)

Turning to total variation convergence, we remain in continuous time and consider the distance functions $\bar{d}_n(t)$ and $d_n(t)$ of Chapter 4 yyy. The reader familiar with the notion of weak convergence of random walks to Brownian motion (on the circle, in this setting) will see immediately that

\bar{d}_n(tn^2)\rightarrow\bar{d}_\infty(t)

where the limit is ``$\bar{d}$ for Brownian motion on the circle'', which can be written as

\bar{d}_\infty(t)\equiv 1-2P((t^{1/2}Z)\mbox{ mod }1\in(1/4,3/4))

where $Z$ has the standard Normal distribution. So

\tau_1\sim cn^2

for the constant $c$ such that $\bar{d}_\infty(c)=e^{-1}$, whose numerical value $c\doteq 0.063$ has no real significance.

jjj David: You got $0.054$. Please check. Continue same paragraph:


d_n(tn^2)\rightarrow d_\infty(t)\equiv\frac{1}{2}\int_0^1|f_t(u)-1|\,du,

where $f_t$ is the density of $(t^{1/2}Z)\mbox{ mod }1$.

As for $\tau_c$, the sup in its definition is attained by some $A$ of the form $[0,i-1]$, so

\tau_c=\max_i\frac{\frac{i}{n}(1-\frac{i}{n})}{1/n}=\frac{1}{n}\left\lfloor\frac{n^2}{4}\right\rfloor\sim\frac{n}{2}.
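A minimal exact-arithmetic check of this maximization (my own sketch; the boundary flow $Q(A,A^c)=1/n$ comes from the two cut edges, each carrying probability flow $\frac{1}{n}\cdot\frac12$):

```python
from fractions import Fraction

# tau_c for the n-cycle: an arc A = {0,...,i-1} has pi(A) = i/n, and the
# two boundary edges together carry flow Q(A, A^c) = 1/n.
def tau_c_cycle(n):
    return max(Fraction(i, n) * Fraction(n - i, n) / Fraction(1, n)
               for i in range(1, n))

for n in (5, 8, 13):
    assert tau_c_cycle(n) == Fraction(n * n // 4, n)   # = floor(n^2/4)/n
```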

As remarked in Chapter 4 yyy, this provides a counter-example to reversing inequalities in Theorem yyy. But if we consider $\max_A(\pi(A)E_\pi T_A)$, the max is attained with $A=[\frac{n}{2}-\alpha n,\frac{n}{2}+\alpha n]$, say, where $0\leq\alpha<1/2$. By Lemma 5.2, for $x\in(-\frac12+\alpha,\frac12-\alpha)$,

E_{\lfloor(x\ \mbox{\rm\scriptsize mod }1)n\rfloor}T_A\sim\left({\textstyle\frac12}-\alpha-x\right)\left({\textstyle\frac12}-\alpha+x\right)n^2,

and so

E_\pi T_A\sim n^2\int_{-\frac12+\alpha}^{\frac12-\alpha}\left(\frac12-\alpha-x\right)\left(\frac12-\alpha+x\right)\,dx=\frac{4(\frac12-\alpha)^3n^2}{3}.


Hence

\max_A(\pi(A)E_\pi T_A)\sim n^2\sup_{0<\alpha<1/2}\frac{4(\frac12-\alpha)^3\,2\alpha}{3}=\frac{9n^2}{512},

consistent with Chapter 4 Open Problem yyy.

xxx level of detail for $\bar{d}$ results, here and later.

Remark. Parameters $\tau^*$, $\tau_0$, $\tau_1$, and $\tau_2$ are all $\Theta(n^2)$ in this example, and in Chapter 6 we'll see that they are $O(n^2)$ over the class of regular graphs. However, the exact maximum values over all $n$-vertex regular graphs (or the constants $c$ in the $\sim cn^2$ asymptotics) are not known. See Chapter 6 for the natural conjectures.

Example 5.8

The $n$-path.

This is just the graph $0-1-2-\cdots-(n-1)$ on $n$ vertices. If $(\hat{X}_t)$ is random walk on (all) the integers, then $X_t=\phi(\hat{X}_t)$ is random walk on the $n$-path, for the ``concertina'' map

\phi(i)=\left\{\begin{array}{cl}i\mbox{ mod }2(n-1)&\mbox{if $i$ mod }2(n-1)\leq n-1\\ 2(n-1)-(i\mbox{ mod }2(n-1))&\mbox{otherwise.}\end{array}\right.

Of course the stationary distribution is not quite uniform:

\pi_i=\frac{1}{n-1},\ 1\leq i\leq n-2;\qquad\pi_0=\pi_{n-1}=\frac{1}{2(n-1)}.

Again, results for the $n$-path can be deduced from results for the integers. Using Lemma 5.2,

E_iT_j=(j-i)(j+i),\ 0\leq i<j\leq n-1.   (5.28)
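As with the cycle, (5.28) can be sanity-checked mechanically (a sketch, not from the text): $h(i)=(j-i)(j+i)$ must satisfy the first-step equations for hitting $j$ on the path, including the reflecting-end equation at $0$.

```python
# Check (5.28) on the n-path: h(i) = (j - i)(j + i) = E_i T_j must satisfy
# h(j) = 0, the reflecting-end equation h(0) = 1 + h(1), and the interior
# equations h(i) = 1 + (h(i-1) + h(i+1))/2 for 0 < i < j.
def h(i, j):
    return (j - i) * (j + i)

n = 9
for j in range(1, n):
    assert h(j, j) == 0
    assert h(0, j) == 1 + h(1, j)
    for i in range(1, j):
        assert h(i, j) == 1 + (h(i - 1, j) + h(i + 1, j)) / 2
```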

From this, or from the more general results in Section 5.1.2, we obtain

\max_{ij}E_iT_j=(n-1)^2   (5.29)
\tau^*\equiv\max_{ij}(E_iT_j+E_jT_i)=2(n-1)^2   (5.30)
\tau_0=\sum_j\pi_jE_0T_j=\frac13(n-1)^2+\frac16   (5.31)

We can also regard $X_t$ as being derived from random walk $\tilde{X}_t$ on the $(2n-2)$-cycle via $X_t=\min(\tilde{X}_t,2n-2-\tilde{X}_t)$. So we can deduce the spectral representation from that in the previous example:

P_i(X_t=j)=\sqrt{\pi_j/\pi_i}\sum_{m=0}^{n-1}\lambda_m^tu_{im}u_{jm}

where, for $0\leq m\leq n-1$,

\lambda_m=\cos(\pi m/(n-1))


u_{0m}=\sqrt{\pi_0};\qquad u_{n-1,m}=\sqrt{\pi_{n-1}}(-1)^m;
u_{im}=\sqrt{2\pi_i}\cos(\pi im/(n-1)),\ \ 1\leq i\leq n-2.

In particular, the relaxation time is

\tau_2=\frac{1}{1-\cos(\pi/(n-1))}\sim\frac{2n^2}{\pi^2}.
Furthermore, $\bar{d}_n(t)=\bar{\tilde{d}}_{2n-2}(t)$ and $d_n(t)=\tilde{d}_{2n-2}(t)$ for all $t$, so

d_n(t(2n)^2)\rightarrow d_\infty(t)

where the limits are those in the previous example. Thus $\tau_1\sim cn^2$, where $c\doteq 0.25$ is $4$ times the corresponding constant for the $n$-cycle.

xxx explain: BM on $[0,1]$ and circle described in Chapter 16.

It is easy to see that

\tau_c=\left\{\begin{array}{cl}\frac{n-1}{2}&\mbox{if $n$ is even}\\ \frac{n-1}{2}-\frac{1}{2(n-1)}&\mbox{if $n$ is odd.}\end{array}\right.

In Section 5.3.2 we will see that the $n$-path attains the exact maximum values of our parameters over all $n$-vertex trees.

Example 5.9

The complete graph.

For the complete graph on $n$ vertices, the $t$-step probabilities for the chain started at $i$ can be calculated by considering the induced $2$-state chain which indicates whether or not the walk is at $i$. This gives, in discrete time,

P_i(X_t=i)=\frac{1}{n}+\left(1-\frac{1}{n}\right)\left(-\frac{1}{n-1}\right)^t
P_i(X_t=j)=\frac{1}{n}-\frac{1}{n}\left(-\frac{1}{n-1}\right)^t,\ \ j\neq i   (5.32)

and, in continuous time,

P_i(X_t=i)=\frac{1}{n}+\left(1-\frac{1}{n}\right)\exp\left(-\frac{nt}{n-1}\right)
P_i(X_t=j)=\frac{1}{n}-\frac{1}{n}\exp\left(-\frac{nt}{n-1}\right),\ \ j\neq i   (5.33)

It is clear that the hitting time to $j\neq i$ has Geometric($1/(n-1)$) distribution (in continuous time, Exponential($1/(n-1)$) distribution), and so in particular

E_iT_j=n-1,\ \ j\neq i.   (5.34)

Thus we can calculate the parameters

\tau^*\equiv\max_{ij}(E_jT_i+E_iT_j)=2(n-1)   (5.35)
\max_{ij}E_iT_j=n-1   (5.36)
\tau_0\equiv n^{-1}\sum_jE_iT_j=(n-1)^2/n.   (5.37)
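The $t$-step formulas (5.32) are easy to confirm in exact arithmetic (a sketch, not part of the text): by symmetry the distribution at time $t$ is determined by $P_i(X_t=i)$, which satisfies the one-step recursion used in the $2$-state reduction.

```python
from fractions import Fraction

# Check (5.32) for the complete graph on n vertices by exact iteration:
# P_i(X_{t+1} = i) = (1 - P_i(X_t = i))/(n - 1), and by symmetry the
# remaining mass is spread evenly over the other n - 1 states.
n = 6
p_same = Fraction(1)  # P_i(X_0 = i)
for t in range(1, 9):
    p_same = (1 - p_same) / (n - 1)
    p_other = (1 - p_same) / (n - 1)
    assert p_same == Fraction(1, n) + (1 - Fraction(1, n)) * Fraction(-1, n - 1) ** t
    assert p_other == Fraction(1, n) - Fraction(1, n) * Fraction(-1, n - 1) ** t
```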

From (5.32) the discrete-time eigenvalues are $\lambda_2=\lambda_3=\cdots=\lambda_n=-1/(n-1)$. So the relaxation time is

\tau_2=(n-1)/n.   (5.38)

The total variation functions are

\bar{d}(t)=\exp\left(-\frac{nt}{n-1}\right),\qquad d(t)=\frac{n-1}{n}\exp\left(-\frac{nt}{n-1}\right),

so that

\tau_1=(n-1)/n.   (5.39)

It is easy to check

\tau_c=(n-1)/n.
We have already proved (Chapter 3 yyy) that the complete graph attains the exact minimum of $\tau^*$, $\max_{ij}E_iT_j$, $\tau_0$, and $\tau_2$ over all $n$-vertex graphs. The same holds for $\tau_c$, by considering (in an arbitrary graph) a vertex of minimum degree.

Example 5.10

The $n$-star.

This is the graph on $n\geq 3$ vertices $\{0,1,2,\ldots,n-1\}$ with edges $01$, $02$, $03$, \ldots, $0(n-1)$. The stationary distribution is

\pi_0=1/2,\qquad\pi_i=1/(2(n-1)),\ \ i\geq 1.

In discrete time the walk is periodic. Starting from the leaf $1$, the walk at even times is simply independent and uniform on the leaves, so

P_1(X_{2t}=i)=1/(n-1),\ i\geq 1

for $t\geq 1$. At odd times, the walk is at $0$. Writing these $t$-step probabilities as

P_1(X_t=i)=\frac{1}{2(n-1)}(1+(-1)^t)1_{(i\geq 1)}+\frac12(1+(-1)^{t+1})1_{(i=0)},\ \ t\geq 1

we see that the discrete-time eigenvalues are $\lambda_2=\cdots=\lambda_{n-1}=0,\ \lambda_n=-1$, and hence the relaxation time is

\tau_2=1.
The mean hitting times are

E_1T_0=1;\qquad E_1T_j=2(n-1),\ \ j\geq 2,

where the latter comes from the fact that $T_j/2$ has Geometric($1/(n-1)$) distribution, using the ``independent uniform on leaves at even times'' property. Then

E_0T_j=2n-3,\ \ j\geq 1.
We can calculate the parameters

\tau^*\equiv\max_{ij}(E_iT_j+E_jT_i)=4n-4   (5.40)
\max_{ij}E_iT_j=2n-2   (5.41)
\tau_0=\sum_jE_0T_j\pi_j=n-\frac32.   (5.42)

In continuous time we find

P_1(X_t=1)=\frac{1}{2(n-1)}(1+e^{-2t})+\frac{n-2}{n-1}e^{-t}
P_1(X_t=i)=\frac{1}{2(n-1)}(1+e^{-2t})-\frac{1}{n-1}e^{-t},\ \ i>1
P_1(X_t=0)=\frac12(1-e^{-2t})
P_0(X_t=0)=\frac12(1+e^{-2t})
P_0(X_t=1)=\frac{1}{2(n-1)}(1-e^{-2t})

This leads to

\bar{d}(t)=e^{-t},\qquad d(t)=\frac{1}{2(n-1)}e^{-2t}+\frac{n-2}{n-1}e^{-t},

from which

\tau_1=1.
Clearly (isolate a leaf)

\tau_c=1-\frac{1}{2(n-1)}.
We shall see in Section 5.3.2 that the $n$-star attains the exact minimum of our parameters over all $n$-vertex trees.

Example 5.11

The barbell.

Here is a graph on $n=2m_1+m_2$ vertices ($m_1\geq 2$, $m_2\geq 0$). Start with two complete graphs on $m_1$ vertices. Distinguish vertices $v_l\neq v_L$ in one graph (``the left bell'') and vertices $v_R\neq v_r$ in the other graph (``the right bell''). Then connect the graphs via a path $v_L,w_1,w_2,\ldots,w_{m_2},v_R$ containing $m_2$ new vertices.

xxx picture

A point of the construction is that the mean time to go from a typical point $v_l$ in the left bell to a typical point $v_r$ in the right bell is roughly $m_1^2m_2$. To argue this informally, it takes mean time about $m_1$ to hit $v_L$; then there is chance $1/m_1$ to hit $w_1$, so it takes mean time about $m_1^2$ to hit $w_1$; and from $w_1$ there is chance about $1/m_2$ to hit the right bell before returning into the left bell, so it takes mean time about $m_1^2m_2$ to enter the right bell.

The exact result, argued below, is

\max_{ij}E_iT_j=E_{v_l}T_{v_r}=m_1(m_1-1)(m_2+1)+(m_2+1)^2+4(m_1-1)+4\frac{m_2+1}{m_1}.   (5.43)
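Formula (5.43) can be checked in exact arithmetic on a small instance by solving the first-step equations directly. The sketch below (pure Python, not part of the text; the vertex numbering is my own) takes $m_1=3$, $m_2=2$, so $n=8$ and (5.43) gives $18+9+8+4=39$:

```python
from fractions import Fraction

# Barbell with m1 = 3, m2 = 2: vertices 0,1,2 form the left bell
# (v_l = 0, v_L = 2), 3,4 are the bar, 5,6,7 form the right bell
# (v_R = 5, v_r = 7).
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (5, 7), (6, 7)]
nbrs = {v: [] for v in range(8)}
for a, c in edges:
    nbrs[a].append(c)
    nbrs[c].append(a)

target = 7
states = [v for v in range(8) if v != target]
idx = {v: k for k, v in enumerate(states)}

# First-step equations: h(v) = 1 + (1/deg v) * sum_{w ~ v} h(w), h(target) = 0.
m = len(states)
A = [[Fraction(0)] * m for _ in range(m)]
b = [Fraction(1)] * m
for v in states:
    A[idx[v]][idx[v]] = Fraction(1)
    for w in nbrs[v]:
        if w != target:
            A[idx[v]][idx[w]] -= Fraction(1, len(nbrs[v]))

# Exact Gauss-Jordan elimination over the rationals.
for col in range(m):
    piv = next(r for r in range(col, m) if A[r][col] != 0)
    A[col], A[piv] = A[piv], A[col]
    b[col], b[piv] = b[piv], b[col]
    for r in range(m):
        if r != col and A[r][col] != 0:
            f = A[r][col] / A[col][col]
            A[r] = [x - f * y for x, y in zip(A[r], A[col])]
            b[r] -= f * b[col]
h = {v: b[idx[v]] / A[idx[v]][idx[v]] for v in states}

assert h[0] == 39   # (5.43): 3*2*3 + 3**2 + 4*2 + 4*3/3 = 39
```

The intermediate values agree with the decomposition (5.45)-(5.47) as well: $h[2]=E_{v_L}T_{v_r}=37$ and $h[5]=E_{v_R}T_{v_r}=10$.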

It is cleaner to consider asymptotics as

n\rightarrow\infty,\ \ m_1/n\rightarrow\alpha,\ \ m_2/n\rightarrow 1-2\alpha

with $0<\alpha<1/2$. Then

\max_{ij}E_iT_j\sim\alpha^2(1-2\alpha)n^3\sim\frac{n^3}{27}\mbox{ for }\alpha=1/3

where $\alpha=1/3$ is the asymptotic maximizer here and for the other parameters below. Similarly

\tau^*\sim 2\alpha^2(1-2\alpha)n^3\sim\frac{2n^3}{27}\mbox{ for }\alpha=1/3.

The stationary distribution $\pi$ puts mass $\rightarrow 1/2$ on each bell. Also, by (5.45)–(5.47) below, $E_{v_l}T_v=o(E_{v_l}T_{v_r})$ uniformly for vertices $v$ in the left bell and $E_{v_l}T_v\sim E_{v_l}T_{v_r}\sim\alpha^2(1-2\alpha)n^3$ uniformly for vertices $v$ in the right bell. Hence

\tau_0\equiv\sum_v\pi_vE_{v_l}T_v\sim\frac12E_{v_l}T_{v_r}\sim\frac12\alpha^2(1-2\alpha)n^3

and so we have proved the ``$\tau_0$'' part of

\mbox{each of }\{\tau_0,\tau_1,\tau_2\}\sim\frac12\alpha^2(1-2\alpha)n^3\sim\frac{n^3}{54}\mbox{ for }\alpha=1/3.   (5.44)

Consider the relaxation time $\tau_2$. For the function $g$ defined to be $+1$ on the left bell, $-1$ on the right bell and linear on the bar, the Dirichlet form gives

{\cal E}(g,g)=\frac{2}{(m_2+1)(m_1(m_1-1)+m_2+1)}\sim\frac{2}{\alpha^2(1-2\alpha)n^3}.

Since the variance of $g$ tends to $1$, the extremal characterization of $\tau_2$ shows that $\frac12\alpha^2(1-2\alpha)n^3$ is an asymptotic lower bound for $\tau_2$. But in general $\tau_2\leq\tau_0$, so having already proved (5.44) for $\tau_0$ we must have the same asymptotics for $\tau_2$. Finally, without going into details, it is not hard to show that for fixed $t>0$,

\bar{d}_n\left(\frac12\alpha^2(1-2\alpha)n^3t\right)\rightarrow e^{-t},\qquad d_n\left(\frac12\alpha^2(1-2\alpha)n^3t\right)\rightarrow\frac12e^{-t}

from which the ``$\tau_1$'' assertion of (5.44) follows.

jjj Proof? (It's not so terrifically easy, either! How much do we want to include?) I've (prior to writing this) carefully written out an argument similar to the present one, also involving approximate exponentiality of a hitting time distribution, for the balanced $r$-tree below. Here is a rough sketch of the argument for $\bar{d}$ here; note that it uses results about the next example (the lollipop). (The argument for $d$ is similar.) The pair $(v_l,v_r)$ of initial states achieves $\bar{d}(t)$ for every $t$ (although the following can be made to work without knowing this ``obvious fact'' a priori). Couple chains starting in these states by having them move symmetrically in the obvious fashion. Certainly these copies will couple by the time $T$ the copy started at $v_l$ has reached the center vertex $w_{m_2/2}$ of the bar. We claim that the distribution of $T$ is approximately exponential, and its expected value is $\sim\frac12m_1^2m_2\sim\frac12\alpha^2(1-2\alpha)n^3$ by the first displayed result for the lollipop example, with $m_2$ changed to $m_2/2$ there. (In keeping with this observation, I'll refer to the ``half-stick'' lollipop in the next paragraph.)

jjj (cont.) To get approximate exponentiality for the distribution of $T$, first argue easily that it's approximately the same as that of $T_{w_{m_2/2}}$ for the half-stick lollipop started in stationarity. But that distribution is, in turn, approximately exponential by Chapter 3 Proposition yyy, since $\tau_2=\Theta(n^2)=o(n^3)$ for the half-stick lollipop.

Proof of (5.43). The mean time in question is the sum of the following mean times:

E_{v_l}T_{v_L}=m_1-1   (5.45)
E_{v_L}T_{v_R}=m_1(m_1-1)(m_2+1)+(m_2+1)^2   (5.46)
E_{v_R}T_{v_r}=3(m_1-1)+4\frac{m_2+1}{m_1}.   (5.47)

Here (5.45) is just the result (5.34) for the complete graph. And (5.46) is obtained by summing over the edges of the ``bar'' the expression

E_{w_i}T_{w_{i+1}}=m_1(m_1-1)+2i+1,\ \ i=0,\ldots,m_2   (5.48)

obtained from the general formula for mean hitting time across an essential edge of a graph (Lemma 5.1), where $w_0=v_L$ and $w_{m_2+1}=v_R$. To argue (5.47), we start with the $1$-step recurrence

E_{v_R}T_{v_r}=1+\frac{1}{m_1}E_{w_{m_2}}T_{v_r}+\frac{m_1-2}{m_1}E_xT_{v_r}

where $x$ denotes a vertex of the right bell distinct from $v_R$ and $v_r$. Now

E_{w_{m_2}}T_{v_r}=E_{w_{m_2}}T_{v_R}+E_{v_R}T_{v_r}=m_1(m_1-1)+2m_2+1+E_{v_R}T_{v_r},

using the formula (5.48) for the mean passage time from $w_{m_2}$ to $v_R$. Starting from $x$, the time until a hit on either $v_R$ or $v_r$ has Geometric($2/(m_1-1)$) distribution, and the two vertices are equally likely to be hit first. So

E_xT_{v_r}=\frac{m_1-1}{2}+\frac12E_{v_R}T_{v_r}.
The last three expressions give an equation for $E_{v_R}T_{v_r}$ whose solution is (5.47). And it is straightforward to check that $E_{v_l}T_{v_r}$ does achieve the maximum, using (5.45)–(5.47) to bound the general $E_iT_j$.

It is straightforward to check

\tau_c\sim\frac12\alpha^2n^2,

the sup being attained by a cut across the middle of the bar.

Example 5.12

The lollipop.

xxx picture

This is just the barbell without the right bell. That is, we start with a complete graph on $m_1$ vertices and add $m_2$ new vertices in a path. So there are $n=m_1+m_2$ vertices, and $w_{m_2}$ is now a leaf. In this example, by (5.45) and (5.46), with $m_2$ in place of $m_2+1$, we have

\max_{ij}E_iT_j=E_{v_l}T_{w_{m_2}}=m_1(m_1-1)m_2+(m_1-1)+m_2^2.

In the asymptotic setting with

n\rightarrow\infty,\ \ m_1/n\rightarrow\alpha,\ \ m_2/n\rightarrow 1-\alpha

where $0<\alpha<1$, we have

\max_{ij}E_iT_j\sim\alpha^2(1-\alpha)n^3\sim\frac{4n^3}{27}\mbox{ for }\alpha=2/3,   (5.49)

where $\alpha=2/3$ gives the asymptotic maximum.

Let us discuss the other parameters only briefly, in the asymptotic setting. Clearly $E_{w_{m_2}}T_{v_L}=m_2^2\sim(1-\alpha)^2n^2$ and it is not hard to check

E_{w_{m_2}}T_{v_l}=\max_vE_vT_{v_l}\sim(1-\alpha)^2n^2,   (5.50)


so

\tau^*=\max_{ij}(E_iT_j+E_jT_i)=E_{v_l}T_{w_{m_2}}+E_{w_{m_2}}T_{v_l}\sim\alpha^2(1-\alpha)n^3.

Because the stationary distribution puts mass $\Theta(1/n)$ on the ``bar'', (5.50) is also enough to show that $\tau_0=O(n^2)$. So by the general inequalities between our parameters, to show

\mbox{each of }\{\tau_0,\tau_1,\tau_2\}=\Theta(n^2)   (5.51)

it is enough to show that $\tau_2=\Omega(n^2)$. But for the function $g$ defined to be $0$ on the ``bell'', $1$ at the end $w_{m_2}$ of the ``bar'', and linear along the bar, a brief calculation gives

{\cal E}(g,g)=\Theta(n^{-3}),\qquad{\rm var}_\pi\,g=\Theta(n^{-1})

so that $\tau_2\geq({\rm var}_\pi\,g)/{\cal E}(g,g)=\Omega(n^2)$, as required.

Finally, in the asymptotic setting it is straightforward to check that $\tau_c$ is achieved by $A=\{w_1,\ldots,w_{m_2}\}$, giving

\tau_c\sim 2(1-\alpha)n.

Remark. The barbell and lollipop are the natural candidates for the $n$-vertex graphs which maximize our parameters. The precise conjectures and known results will be discussed in Chapter 6.

jjj We need to put somewhere—Chapter 4 on $\tau_c$? Chapter 6 on max parameters over $n$-vertex graphs? in the barbell example?—the fact that $\mbox{\bf max}\,\tau_c$ is attained, when $n$ is even, by the barbell with $m_2=0$, the max value being $(n^2-2n+4)/8\sim n^2/8$. Similarly, when $n$ is odd, the maximizing graph is formed by joining complete graphs on $\lfloor n/2\rfloor$ and $\lceil n/2\rceil$ vertices respectively by a single edge, and the max value is easy to write down (I've kept a record) but not so pretty; however, this value too is $\sim n^2/8$, which is probably all we want to say anyway. Here is the first draft of a proof:

For random walk on an unweighted graph, $\tau_c$ is the maximum over nonempty proper subsets $A$ of the ratio

\frac{(\mbox{deg }A)(\mbox{deg }A^c)}{2|{\cal E}|(A,A^c)},   (5.52)

where $\mbox{deg }A$ is defined to be the sum of the degrees of vertices in $A$ and $(A,A^c)$ is the number of directed cut edges from $A$ to $A^c$.

jjj Perhaps it would be better for exposition to stick with undirected edges and introduce a factor $1/2$?

Maximizing now over choice of graphs, the max in question is no larger than the maximum $M$, over all choices of $n_1>0$, $n_2>0$, $e_1$, $e_2$, and $e'$ satisfying $n_1+n_2=n$ and $0\leq e_i\leq{n_i\choose 2}$ for $i=1,2$ and $1\leq e'\leq n_1n_2$, of the ratio

\frac{(2e_1+e')(2e_2+e')}{2(e_1+e_2+e')e'}.   (5.53)

(We don't claim equality because we don't check that each $n_i$-graph is connected. But we'll show that $M$ is in fact achieved by the connected graph claimed above.)

Simple calculus shows that the ratio (5.53) is (as one would expect) increasing in $e_1$ and $e_2$ and decreasing in $e'$. Thus, for given $n_1$, (5.53) is maximized by considering complete graphs of sizes $n_1$ and $n_2=n-n_1$ joined by a single edge. Call the maximum value $M(n_1)$. If $n$ is even, it is then easy to see that $M(n_1)$ is maximized by $n_1=n/2$, giving $M=(n^2-2n+4)/8$, as desired.
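The maximization of (5.53) is small enough to brute-force for a given even $n$; the sketch below (my own check, not part of the text) does so for $n=6$, where the maximum is attained by two $K_3$'s joined by a single edge:

```python
from fractions import Fraction
from itertools import product

# Brute-force the maximization of ratio (5.53) for n = 6: n1 + n2 = 6,
# 0 <= e_i <= C(n_i, 2), 1 <= e' <= n1 * n2.
def ratio(e1, e2, ep):
    return Fraction((2 * e1 + ep) * (2 * e2 + ep),
                    2 * (e1 + e2 + ep) * ep)

n = 6
best = Fraction(0)
for n1 in range(1, n):
    n2 = n - n1
    c1, c2 = n1 * (n1 - 1) // 2, n2 * (n2 - 1) // 2
    for e1, e2, ep in product(range(c1 + 1), range(c2 + 1),
                              range(1, n1 * n2 + 1)):
        best = max(best, ratio(e1, e2, ep))

# Two K_3's joined by one edge: (2*3 + 1)**2 / (2*7*1) = 7/2 = (n^2-2n+4)/8.
assert best == Fraction(7, 2) == Fraction(n * n - 2 * n + 4, 8)
```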

For the record, here are the slightly tricky details if $n$ is odd. Write $\nu=n/2$ and $n_1=\nu-y$ and put $x=y^2$. A short calculation gives $M(n_1)=1+g(x)$, where $g(x)\equiv[(a-x)(b-x)-1]/(2x+c)$ with $a=\nu^2$, $b=(\nu-1)^2$, and $c=2\nu(\nu-1)+2$. Easy calculus shows that $g$ is $U$-shaped over $[0,\nu]$ and then that $g(1/4)\geq g(\nu^2)$. Thus $M(n_1)$ is maximized when $n_1=\nu-\frac12=\lfloor n/2\rfloor$.

Example 5.13

The necklace.

This graph, pictured below, is 33-regular with n=4m+2n=4m+2 vertices, consisting of mm subgraphs linked in a line, the two end subgraphs being different from the intervening ones. This is an artificial graph designed to mimic the nn-path while being regular, and this construction (or some similar one) is the natural candidate for the nn-vertex graph which asymptotically maximizes our parameters over regular graphs.

xxx picture (labeled vertices $a,b,c,d,e,f,g,h$; $m-2$ repeats of the middle section)

This example affords a nice illustration of the use of the commute interpretation of resistance. Applying voltage $1$ at vertex $a$ and voltage $0$ at $e$, a brief calculation gives the potentials at intervening vertices as

g(b)=4/7,\ \ g(c)=5/7,\ \ g(d)=4/7

and gives the effective resistance $r_{ae}=7/8$. Since the effective resistance between $f$ and $g$ equals $1$, we see the maximal effective resistance is

r_{ah}=\frac78+(2m-3)+\frac78=2m-\frac54.


giving

\tau^*=E_aT_h+E_hT_a=3\times(4m+2)\times\left(2m-\frac54\right)\sim\frac{3n^2}{2}.

One could do elementary exact calculations of other parameters, but it is simpler to get asymptotics from the Brownian motion limit, which implies that for each parameter (excluding $\tau_c$) the asymptotic ratio between this example and the $n$-path is the same. So

\tau_0\sim\frac{n^2}{4},\qquad\tau_2\sim\frac{3n^2}{2\pi^2}.

jjj I haven't checked this carefully, and I have also abstained from writing anything further about $\tau_1$.

Finally, it is clear that $\tau_c\sim 3n/4$, achieved by breaking a ``link'' between ``beads'' in the middle of the necklace.

Example 5.14

The balanced $r$-tree.

Take $r\geq 2$ and $h\geq 1$. The balanced $r$-tree is the rooted tree where all leaves are at distance $h$ from the root, where the root has degree $r$, and where the other internal vertices have degree $r+1$. Call $h$ the height of the tree. For $h=1$ we have the $(r+1)$-star, and for $r=2$ we have the balanced binary tree. The number of vertices is

n=\frac{r^{h+1}-1}{r-1}.
The chain $\hat{X}$ induced (in the sense of Chapter 4 Section yyy) by the function

f(i)=h-(\mbox{distance from $i$ to the root})

is random walk on $\{0,\ldots,h\}$, biased towards $0$, with reflecting barriers, as in Example 5.5: from each interior state the walk moves towards $0$ with probability $r/(r+1)$ and towards $h$ with probability $1/(r+1)$.
In fact, the transition probabilities for $X$ can be expressed in terms of $\hat{X}$ as follows. Given vertices $v_1$ and $v_2$ with $f(v_1)=f_1$ and $f(v_2)=f_2$, the paths $[\mbox{root},v_1]$ and $[\mbox{root},v_2]$ intersect in the path $[\mbox{root},v_3]$, say, with $f(v_3)=f_3\geq f_1\vee f_2$. Then

P_{v_1}(X_t=v_2)=\sum_{m=f_3}^hP_{f_1}\left(\max_{0\leq s\leq t}\hat{X}_s=m,\hat{X}_t=f_2\right)r^{-(m-f_2)}.

As a special case, suppose that $v_1$ is on the path from the root to $v_2$; in this case $v_3=v_1$. Using the essential edge lemma (or Theorem 5.20 below) we can calculate

E_{v_2}T_{v_1}=2(r-1)^{-2}(r^{f_1+1}-r^{f_2+1})-2(r-1)^{-1}(f_1-f_2)-(f_1-f_2),
E_{v_1}T_{v_2}=2(n-1)(f_1-f_2)-E_{v_2}T_{v_1}.   (5.54)

Using this special case we can deduce the general formula for mean hitting times. Indeed, $E_{v_1}T_{v_2}=E_{v_1}T_{v_3}+E_{v_3}T_{v_2}$, which leads to

E_{v_1}T_{v_2}=2(n-1)(f_3-f_2)+2(r-1)^{-2}(r^{f_2+1}-r^{f_1+1})-2(r-1)^{-1}(f_2-f_1)-(f_2-f_1).   (5.55)

The maximum value $2h(n-1)$ is attained when $v_1$ and $v_2$ are leaves and $v_3$ is the root. So

\frac12\tau^*=\max_{v,x}E_vT_x=2(n-1)h.   (5.56)

(The $\tau^*$ part is simpler via (5.88) below.) Another special case is that, for a leaf $v$,

E_vT_{\mbox{\scriptsize root}}=2(r-1)^{-2}(r^{h+1}-r)-2h(r-1)^{-1}-h\sim 2n/(r-1),   (5.57)
E_{\mbox{\scriptsize root}}T_v=2(n-1)h-E_vT_{\mbox{\scriptsize root}}\sim 2nh   (5.58)

where asymptotics are as $h\rightarrow\infty$ for fixed $r$. Since $E_{\mbox{\scriptsize root}}T_w$ is decreasing in $f(w)$, it follows that

\tau_0=\sum_w\pi_wE_{\mbox{\scriptsize root}}T_w\leq(1+o(1))2nh.

On the other hand, we claim $\tau_0\geq(1+o(1))2nh$, so that

\tau_0\sim 2nh.

To sketch the proof, given a vertex $w$, let $v$ be a leaf such that $w$ lies on the path from $v$ to the root. Then

E_{\mbox{\scriptsize root}}T_w=E_{\mbox{\scriptsize root}}T_v-E_wT_v,

and $E_wT_v\leq 2(n-1)f(w)$ by (5.54). But the stationary distribution puts nearly all its mass on vertices $w$ with $f(w)$ of constant order, and $n=o(nh)$.
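The exact expression (5.57) for $E_vT_{\mbox{\scriptsize root}}$ underlying these asymptotics can be verified mechanically (a sketch, not part of the text): by symmetry the expected time $u(d)$ to move from depth $d$ to depth $d-1$ satisfies $u(h)=1$ and, after solving the one-step equation at an internal vertex, $u(d)=(r+1)+r\,u(d+1)$.

```python
from fractions import Fraction

def leaf_to_root(r, h):
    # u(h) = 1: a leaf steps straight to its parent; then
    # u(d) = (r + 1) + r * u(d + 1) for internal depths d = h-1, ..., 1.
    u, total = 1, 1
    for _ in range(h - 1):
        u = (r + 1) + r * u
        total += u          # E_leaf T_root = u(1) + ... + u(h)
    return total

def formula(r, h):
    # (5.57): 2(r-1)^{-2}(r^{h+1} - r) - 2h(r-1)^{-1} - h
    return (Fraction(2, (r - 1) ** 2) * (r ** (h + 1) - r)
            - Fraction(2 * h, r - 1) - h)

for r, h in ((2, 2), (2, 5), (3, 4)):
    assert leaf_to_root(r, h) == formula(r, h)
```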

We claim next that

\tau_1\sim\tau_2\sim 2n/(r-1).

Since $\tau_2\leq\tau_1$, it is enough to show

\tau_1\leq(1+o(1))\frac{2n}{r-1}   (5.59)

and

\tau_2\geq(1+o(1))\frac{2n}{r-1}.   (5.60)

Proof of (5.59). Put

t_n \equiv \frac{2n}{r-1}

for brevity. We begin the proof by recalling the results (5.22) and (5.19) for the induced walk $\hat{X}$:

E_{\hat\pi}\hat{T}_h \sim \frac{2r^{h+1}}{(r-1)^2} \sim t_n.   (5.61)

By Proposition yyy of Chapter 3,

\sup_t\left|P_{\hat\pi}(\hat{T}_h > t) - \exp\left(-\frac{t}{E_{\hat\pi}\hat{T}_h}\right)\right| \leq \frac{\hat\tau_2}{E_{\hat\pi}\hat{T}_h} = \Theta(n^{-1}) = o(1).   (5.62)

For $\hat{X}$ started at $0$, let $\hat{S}$ be a stopping time at which the chain has exactly the stationary distribution. Then, for $0 \leq s \leq t$,

P_0(\hat{T}_h > t) \leq P_0(\hat{S} > s) + P_{\hat\pi}(\hat{T}_h > t-s).

Since $\hat\tau^{(2)}_1 = O(h) = O(\log n)$ by (5.23), we can arrange to have $E_0\hat{S} = O(\log n)$. Fixing $\epsilon > 0$ and choosing $t = (1+\epsilon)t_n$ and (say) $s = (\log n)^2$, (5.62) and (5.61) in conjunction with Markov's inequality yield

P_0(\hat{T}_h > (1+\epsilon)t_n) = \exp\left[-\frac{(1+\epsilon)t_n - (\log n)^2}{E_{\hat\pi}\hat{T}_h}\right] + O((\log n)^{-1}) + O(n^{-1}) \rightarrow e^{-(1+\epsilon)}.

Returning to the continuous-time walk on the tree, for $n$ sufficiently large we have

P_v(T_{\mathrm{root}} > (1+\epsilon)t_n) \leq P_0(\hat{T}_h > (1+\epsilon)t_n) \leq e^{-1}

for every vertex $v$. Now a simple coupling argument (jjj spell out details? Couple the induced walks; the tree-walks will agree once the induced walk started farther from the origin has reached the origin.) shows that

\bar{d}_n((1+\epsilon)t_n) \leq e^{-1}

for all large $n$. Hence $\tau_1 \leq (1+\epsilon)t_n$ for all large $n$, and (5.59) follows.

Proof of (5.60).

jjj [This requires further exposition in both Chapters 3 and 4-1. In Chapter 3, it needs to be made clear that one of the inequalities having to do with CM hitting time distributions says precisely that $E_{\alpha_A}T_A \geq E_\pi T_A/\pi(A^c) \geq E_\pi T_A$. In Chapter 4-1 (2/96 version), it needs to be noted that Lemma 2(a) (concerning $\tau_2$ for the joining of two copies of a graph) extends to the joining of any finite number of copies.]

Let $G$ denote a balanced $r$-tree of height $h$. Let $G''$ denote a balanced $r$-tree of height $h-1$ with root $y$, and construct a tree $G'$ from $G''$ by adding an edge from $y$ to an additional vertex $z$. We can construct $G$ by joining $r$ copies of $G'$ at the vertex $z$, which becomes the root of $G$. Let $\pi'$ and $\pi''$ denote the respective stationary distributions for the random walks on $G'$ and $G''$, and use the notation $T'$ and $T''$, respectively, for hitting times on these graphs. By Chapter 4 jjj,

\tau_2 = E_{\alpha'}T'_z   (5.63)

where $\alpha'$ is the quasistationary distribution on $G'$ associated with the hitting time $T'_z$. By Chapter 3 jjj, the expectation (5.63) is no smaller than $E_{\pi'}T'_z$, which by the collapsing principle equals

\pi'(G'')\left(E_{\pi''}T''_y + E_yT'_z\right) = \pi'(G'')\left(E_{\pi''}T''_y + E_yT_z\right).

But it is easy to see that this last quantity equals $(1+o(1))E_\pi T_z$, which is asymptotically equivalent to $2n/(r-1)$ by (5.61).

From the discussion at the beginning of Section 5.3.1, it follows that $\tau_c$ is achieved at any of the $r$ subtrees of the root. This gives
\tau_c \sim \frac{2(r-1)}{r^2}\,n.
An extension of the balanced rr-tree example is treated in Section 5.2.1 below.

Example 5.15

The dd-cube.

This is a graph with vertex-set ${\bf I} = \{0,1\}^d$ and hence with $n = 2^d$ vertices. Write ${\bf i} = (i_1,\ldots,i_d)$ for a vertex, and write $|{\bf i}-{\bf j}| = \sum_u|i_u-j_u|$. Then $({\bf i},{\bf j})$ is an edge if and only if $|{\bf i}-{\bf j}| = 1$, and in general $|{\bf i}-{\bf j}|$ is the graph-distance between ${\bf i}$ and ${\bf j}$. Thus discrete-time random walk proceeds at each step by choosing a coordinate at random and changing its parity.

It is easier to use the continuized walk ${\bf X}(t) = (X_1(t),\ldots,X_d(t))$, because the component processes $(X_u(t))$ are independent as $u$ varies, and each is in fact just the continuous-time random walk on the $2$-path with transition rate $1/d$. This follows from an elementary fact about the superposition of (marked) Poisson processes.

Thus, in continuous time,

P_{\bf i}({\bf X}(t) = {\bf j}) = \prod_{u=1}^d\left[\frac{1}{2}\left(1 + (-1)^{|i_u-j_u|}e^{-2t/d}\right)\right] = 2^{-d}\left(1 - e^{-2t/d}\right)^{|{\bf i}-{\bf j}|}\left(1 + e^{-2t/d}\right)^{d-|{\bf i}-{\bf j}|}.   (5.64)

By expanding the right side, we see that the continuous-time eigenvalues are

\lambda_k = 2k/d \mbox{ with multiplicity } {d\choose k}, \quad k = 0,1,\ldots,d.   (5.65)

(Of course, this is just the general fact that the eigenvalues of a $d$-fold product of continuous-time chains are

(\lambda_{i_1} + \cdots + \lambda_{i_d};\ 1 \leq i_1,\ldots,i_d \leq n)   (5.66)

where $(\lambda_i;\ 1 \leq i \leq n)$ are the eigenvalues of the marginal chain.)

In particular,

\tau_2 = d/2.   (5.67)

By the eigentime identity (Chapter 3 yyy)

\tau_0 = \sum_{m\geq 2}\frac{1}{\lambda_m} = \frac{d}{2}\sum_{k=1}^d k^{-1}{d\choose k} = 2^d\left(1 + d^{-1} + O(d^{-2})\right),   (5.68)

the asymptotics being easy analysis.

From (5.64) it is also straightforward to derive the discrete-time $t$-step transition probabilities:

P_{\bf i}({\bf X}_t = {\bf j}) = 2^{-d}\sum_{m=0}^d\left(1 - \frac{2m}{d}\right)^t\sum_r(-1)^r{|{\bf i}-{\bf j}|\choose r}{d-|{\bf i}-{\bf j}|\choose m-r}.

Starting the walk at ${\bf 0}$, let $Y_t = |{\bf X}(t)|$. Then $Y$ is the birth-and-death chain on states $\{0,1,\ldots,d\}$ with transition rates (transition probabilities, in discrete time)

q_{i,i+1} = \frac{d-i}{d}, \quad q_{i,i-1} = \frac{i}{d}, \quad 0 \leq i \leq d.

xxx box picture

This is the Ehrenfest urn model mentioned in many textbooks. In our terminology we may regard $Y$ as random walk on the weighted linear graph (Section 5.1.2) with weights

w_i = {d-1\choose i-1}, \quad w = 2^d.

In particular, writing $T^Y$ for hitting times for $Y$, symmetry and (5.13) give

\frac{1}{2}\tau^{*Y} = \frac{1}{2}\left(E_0T^Y_d + E_dT^Y_0\right) = E_0T^Y_d = 2^{d-1}\sum_{i=1}^d\frac{1}{{d-1\choose i-1}}.

On the $d$-cube, it is "obvious" that $E_{\bf 0}T_{\bf j}$ is maximized by ${\bf j} = {\bf 1}$, and this can be verified by observing in (5.64) that $P_{\bf 0}({\bf X}(t) = {\bf j})$ is minimized by ${\bf j} = {\bf 1}$, and hence $Z_{{\bf 0}{\bf j}}$ is minimized by ${\bf j} = {\bf 1}$, so we can apply Chapter 2 yyy. Thus

\frac{1}{2}\tau^* = \max_{{\bf i}{\bf j}}E_{\bf i}T_{\bf j} = E_{\bf 0}T_{\bf 1} = 2^{d-1}\sum_{i=1}^d\frac{1}{{d-1\choose i-1}} = 2^d\left(1 + 1/d + O(1/d^2)\right).   (5.69)
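The Ehrenfest reduction makes (5.69) easy to test by machine. The sketch below (our own; the function names are not from the text) computes $E_0T^Y_d$ from the standard birth-and-death mean-hitting-time formula $E_0T_d = \sum_{k<d}\left(\sum_{j\leq k}\pi_j\right)/(\pi_k p_k)$ and checks it against the closed form $2^{d-1}\sum_i 1/{d-1\choose i-1}$, in exact arithmetic.

```python
from fractions import Fraction
from math import comb

def ehrenfest_hitting_time(d):
    """E_0 T_d for the discrete-time Ehrenfest chain on {0,...,d}
    (up-probability (d-i)/d from state i), via the standard
    birth-and-death formula."""
    pi = [Fraction(comb(d, j), 2 ** d) for j in range(d + 1)]
    total = Fraction(0)
    for k in range(d):
        p_k = Fraction(d - k, d)
        total += sum(pi[: k + 1], Fraction(0)) / (pi[k] * p_k)
    return total

def closed_form(d):
    """Right side of (5.69): 2^(d-1) * sum_i 1 / C(d-1, i-1)."""
    return 2 ** (d - 1) * sum(Fraction(1, comb(d - 1, i - 1))
                              for i in range(1, d + 1))
```

For $d = 2$ both sides give $4$, and for $d = 3$ both give $10$.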

The asymptotics are the same as in (5.68). In fact it is easy to use (5.64) to show

Z_{{\bf i}{\bf i}} = 2^{-d}\tau_0 = 1 + d^{-1} + O(d^{-2}),

Z_{{\bf i}{\bf j}} = O(d^{-2}) \mbox{ uniformly over } |{\bf i}-{\bf j}| \geq 2,

and then by Chapter 2 yyy

E_{\bf i}T_{\bf j} = 2^d\left(1 + d^{-1} + O(d^{-2})\right) \mbox{ uniformly over } |{\bf i}-{\bf j}| \geq 2.



Since (in discrete time) $E_{\bf i}T^+_{\bf i} = 1/\pi_{\bf i} = 2^d$ and, by symmetry, $E_{\bf i}T^+_{\bf i} = 1 + E_{\bf j}T_{\bf i}$ for each neighbor ${\bf j}$ of ${\bf i}$, it follows that

E_{\bf i}T_{\bf j} = 2^d - 1 \mbox{ if } |{\bf i}-{\bf j}| = 1.

xxx refrain from writing out exact $E_{\bf i}T_{\bf j}$—refs

To discuss total variation convergence, we have by symmetry (writing ${\bf d}$ to distinguish from the dimension $d$)

\bar{\bf d}(t) = \|P_{\bf 0}({\bf X}(t)\in\cdot) - P_{\bf 1}({\bf X}(t)\in\cdot)\|,

{\bf d}(t) = \|P_{\bf 0}({\bf X}(t)\in\cdot) - \pi(\cdot)\|.

Following Diaconis et al [114] we shall sketch an argument leading to

{\bf d}\left(\frac{1}{4}d\log d + sd\right) \rightarrow L(s) \equiv P\left(|Z| \leq \frac{1}{2}e^{-2s}\right), \quad -\infty < s < \infty,   (5.70)

where $Z$ has the standard Normal distribution. This implies

\tau_1 \sim \frac{1}{4}d\log d.   (5.71)

For the discrete-time walk made aperiodic by incorporating chance $1/(d+1)$ of holding, (5.70) and (5.71) remain true, though a rigorous proof seems complicated: see [114].

Fix $u$, and consider ${\bf j} = {\bf j}(u)$ such that $|{\bf j}| - d/2 \sim ud^{1/2}/2$. Using $1 - \exp(-\delta) \approx \delta - \frac{1}{2}\delta^2$ as $\delta \rightarrow 0$ in (5.64), we can calculate for $t = t(d) = \frac{1}{4}d\log d + sd$ with $s$ fixed that

2^dP_{\bf 0}({\bf X}(t) = {\bf j}) \rightarrow \exp\left(-\frac{e^{-4s}}{2} - ue^{-2s}\right).

Note the limit is $>1$ when $u < u_0(s) \equiv -e^{-2s}/2$. Now

{\bf d}(t) = \frac{1}{2}\sum_{\bf j}\left|P_{\bf 0}({\bf X}(t) = {\bf j}) - 2^{-d}\right| \sim \sum\left(P_{\bf 0}({\bf X}(t) = {\bf j}) - 2^{-d}\right),

where the second sum is over ${\bf j}(u)$ with $u < u_0(s)$. But from (5.64) we can write this sum as

P\left(B\left(\frac{1}{2}(1 - d^{-1/2}e^{-2s})\right) \leq |{\bf j}(u_0(s))|\right) - P\left(B\left(\frac{1}{2}\right) \leq |{\bf j}(u_0(s))|\right),

where $B(p)$ denotes a Binomial$(d,p)$ random variable. By the Normal approximation to the Binomial, this converges to

P(Z \leq -u_0(s)) - P(Z \leq u_0(s))

as stated.

As an aside, symmetry and Chapter 4 yyy give

\tau_0 \leq E_{\bf 0}T_{\bf 1} \leq \tau^{(2)}_1 + \tau_0,

and so the difference $E_{\bf 0}T_{\bf 1} - \tau_0$ is $O(d\log d)$, which is much smaller than what the series expansions (5.68) and (5.69) imply.

The fact that the "half-cube" $A = \{{\bf i}\in{\bf I}:\ i_d = 0\}$, yielding the value

\tau_c = d/2,

achieves the sup in the definition of $\tau_c$ can be proved using a slightly tricky induction argument. However, the result follows immediately from (5.67) together with the general inequality $\tau_2 \geq \tau_c$.

Example 5.16

Dense regular graphs.

Consider an $r$-regular $n$-vertex graph with $r > n/2$. Of course here we are considering a class of graphs rather than a specific example. The calculations below show that these graphs necessarily mimic the complete graph (as far as smallness of the random walk parameters is concerned) in the asymptotic setting $r/n \rightarrow c > 1/2$.

The basic fact is that, for any pair $i,j$ of vertices, there must be at least $2r-n$ other vertices $k$ such that $i$-$k$-$j$ is a path. To prove this, let $a_1$ (resp., $a_2$) be the number of vertices $k \neq i,j$ such that exactly $1$ (resp., $2$) of the edges $(k,i),(k,j)$ exist. Then $a_1 + a_2 \leq n-2$ by counting vertices, and $a_1 + 2a_2 \geq 2(r-1)$ by counting edges, and these inequalities imply $a_2 \geq 2r-n$.
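The counting argument can be checked by machine on any convenient $r$-regular family. The sketch below uses a circulant graph (our own choice of family, not from the text) and verifies that every pair of vertices has at least $2r-n$ common neighbors.

```python
def common_neighbor_min(n, r):
    """Minimum over vertex pairs (i, j) of the number of vertices k != i, j
    adjacent to both, in the circulant r-regular graph on Z_n joining i to
    i+-1, ..., i+-r//2 (mod n).  Here r must be even; this is simply one
    convenient r-regular family for testing the 2r - n bound."""
    offsets = [o for s in range(1, r // 2 + 1) for o in (s, -s)]
    nbrs = [{(i + o) % n for o in offsets} for i in range(n)]
    return min(
        len((nbrs[i] & nbrs[j]) - {i, j})
        for i in range(n) for j in range(i + 1, n)
    )
```

For instance, with $n = 12$ and $r = 8$ the bound $2r - n = 4$ holds with room to spare.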

Thus, by Thompson's principle (Chapter 3, yyy) the effective resistance satisfies $r_{ij} \leq \frac{2}{2r-n}$, and so the commute interpretation of resistance implies

\tau^* \leq \frac{2rn}{2r-n} \sim \frac{2cn}{2c-1}.   (5.72)

A simple "greedy coupling" argument (Chapter 14, Example yyy) shows

\tau_1 \leq \frac{r}{2r-n} \sim \frac{c}{2c-1}.   (5.73)

This is also a bound on $\tau_2$ and on $\tau_c$, because $\tau_c \leq \tau_2 \leq \tau_1$ always, and special case 2 below shows that this bound on $\tau_c$ cannot be improved asymptotically (nor hence can the bound on $\tau_1$ or $\tau_2$). Because $E_\pi T_j \leq n\tau_2$ for regular graphs (Chapter 3 yyy), we get

\max_j E_\pi T_j \leq \frac{nr}{2r-n} \sim \frac{cn}{2c-1}.

This implies

\tau_0 \leq \frac{nr}{2r-n} \sim \frac{cn}{2c-1},

which also follows from (5.72) and $\tau_0 \leq \tau^*/2$. We can also argue, in the notation of Chapter 4 yyy, that

\max_{ij}E_iT_j \leq \tau^{(2)}_1 + \max_jE_\pi T_j \leq \frac{4e}{e-1}\tau_1 + n\tau_1 \leq (1+o(1))\frac{nr}{2r-n} \sim \frac{cn}{2c-1}.

Special case 1. The orders of magnitude may change for $c = 1/2$. Take two complete $(n/2)$-graphs, break one edge in each (say edges $(v_1,v_2)$ and $(w_1,w_2)$) and add edges $(v_1,w_1)$ and $(v_2,w_2)$. This gives an $n$-vertex $((n/2)-1)$-regular graph for which all our parameters are $\Theta(n^2)$.

jjj I haven’t checked this.

Special case 2. Can the bound $\tau_c \leq r/(2r-n) \sim c/(2c-1)$ be asymptotically improved? Eric Ordentlicht has provided the following natural counterexample. Again start with two $(n/2)$-complete graphs on vertices $(v_i)$ and $(w_i)$. Now add the edges $(v_i,w_j)$ for which $0 \leq (j-i) \bmod (n/2) \leq r-(n/2)$. This gives an $n$-vertex $r$-regular graph. By considering the set $A$ consisting of the vertices $v_i$, a brief calculation gives

\tau_c \geq \frac{r}{2r-n+2} \sim \frac{c}{2c-1}.

Example 5.17

The $d$-dimensional torus $Z^d_m$.

The torus is the set of $d$-dimensional integers ${\bf i} = (i_1,\ldots,i_d)$ modulo $m$, considered in the natural way as a $2d$-regular graph on $n = m^d$ vertices. It is much simpler to work with the random walk in continuous time, ${\bf X}(t) = (X_1(t),\ldots,X_d(t))$, because its component processes $(X_u(t))$ are independent as $u$ varies; and each is just continuous-time random walk on the $m$-cycle, slowed down by a factor $1/d$. Thus we can immediately write the time-$t$ transition probabilities for ${\bf X}$ in terms of the corresponding probabilities $p_{0,j}(t)$ for continuous-time random walk on the $m$-cycle (see Example 5.7 above) as

p_{{\bf 0},{\bf j}}(t) = \prod_{u=1}^d p_{0,j_u}(t/d).

Since the eigenvalues on the $m$-cycle are $(1 - \cos(2\pi k/m),\ 0 \leq k \leq m-1)$, by (5.66) the eigenvalues of ${\bf X}$ are

\lambda_{(k_1\ldots k_d)} = \frac{1}{d}\sum_{u=1}^d\left(1 - \cos(2\pi k_u/m)\right), \quad 0 \leq k_u \leq m-1.

In particular, we see that the relaxation time satisfies

\tau_2 = \frac{d}{1 - \cos(2\pi/m)} \sim \frac{dm^2}{2\pi^2} = \frac{d}{2\pi^2}\,n^{2/d},

where here and below asymptotics are as $m \rightarrow \infty$ for fixed $d$. This relaxation time could more simply be derived from the $n$-cycle result via the general "product chain" result of Chapter 4 yyy. But writing out all the eigenvalues enables us to use the eigentime identity:

\tau_0 = \sum\frac{1}{\lambda_{(k_1\ldots k_d)}} = \sum\left(\frac{1}{d}\sum_{u=1}^d\left(1 - \cos(2\pi k_u/m)\right)\right)^{-1}

(the sum excluding $(0,\ldots,0)$), and hence

\tau_0 \sim m^dR_d   (5.74)

where

R_d \equiv \int_0^1\cdots\int_0^1\frac{1}{\frac{1}{d}\sum_{u=1}^d(1 - \cos(2\pi x_u))}\,dx_1\cdots dx_d   (5.75)

provided the integral converges. The reader who is a calculus whiz will see that in fact $R_d < \infty$ only for $d \geq 3$, but this is seen more easily in the alternative approach of Chapter 15, Section yyy.
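For $d = 3$ one can watch (5.74)-(5.75) happen numerically: the eigentime sum divided by $n = m^d$ settles, as $m$ grows, near the value of $R_3$ (roughly $1.5$, the classical Watson-integral value for $Z^3$). A rough sketch (function name ours, not from the text):

```python
import math
from itertools import product

def tau0_over_n(m, d=3):
    """(1/m^d) times the sum of 1/lambda over the nonzero eigenvalues of
    the continuous-time walk on the d-dimensional torus Z_m^d; by (5.74)
    this is a Riemann-sum approximation to R_d."""
    s = 0.0
    for k in product(range(m), repeat=d):
        lam = sum(1.0 - math.cos(2.0 * math.pi * ki / m) for ki in k) / d
        if lam > 0.0:  # skips only the (0,...,0) eigenvalue, which is exactly 0
            s += 1.0 / lam
    return s / m ** d
```

Already for $m$ around $24$ the value is close to its limit, consistent with the convergence of the integral for $d \geq 3$.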

xxx more stuff: connection to transience, recurrent potential, etc

xxx new copy from lectures

xxx $\tau_1$, $\tau_c$

jjj David: I will let you develop the rest of this example. Note that $\tau_1$ is considered very briefly in Chapter 15, eq. (17) in the 3/6/96 version. Here are a few comments on $\tau_c$. First suppose that $m > 2$ is even and $d \geq 2$. Presumably, $\tau_c$ is achieved by the following half-torus:

A := \{{\bf i} = (i_1,\ldots,i_d)\in Z^d_m:\ 0 \leq i_d < m/2\}.

In the notation of (5.52) observe

|{\cal E}| = dn, \quad \mathrm{deg}\,A = dn, \quad \mathrm{deg}\,A^c = dn, \quad (A,A^c) = 2m^{d-1} = 2n/m,

so that this choice of $A$ yields the value

\pi(A)\pi(A^c)\,\frac{2|{\cal E}|}{(A,A^c)} = \frac{1}{4}\cdot\frac{2dn}{2n/m} = \frac{dm}{4} = \frac{d}{4}\,n^{1/d}.

[By Example 5.15 (the $d$-cube) this last result is also true for $m = 2$, and (for even $m \geq 2$) it is by Example 5.7 (the $n$-cycle) also true for $d = 1$.] If we have correctly conjectured the maximizing $A$, then

\tau_c = \frac{d}{4}\,n^{1/d} \mbox{ if $m$ is even},

and presumably(??)

\tau_c = \Theta(n^{1/d})
in any case.

Example 5.18

Chess moves.

Here is a classic homework problem for an undergraduate Markov chains course.

Start a knight at a corner square of an otherwise-empty chessboard. Move the knight at random, by choosing uniformly from the legal knight-moves at each step. What is the mean number of moves until the knight returns to the starting square?

It's a good question, because if you don't know Markov chain theory it looks too messy to do by hand, whereas using Markov chain theory it becomes very simple. The knight is performing random walk on a graph (the 64 squares are the vertices, and the possible knight-moves are the edges). It is not hard to check that the graph is connected, so by the elementary Chapter 3 yyy, for a corner square $v$ the mean return time is

E_vT^+_v = \frac{1}{\pi_v} = \frac{2|{\cal E}|}{d_v} = |{\cal E}|,

and by drawing a sketch in the margin the reader can count the number of edges $|{\cal E}|$ to be $168$.
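For readers who would rather let the machine draw the sketch, here is a minimal count (our own code) of the knight-graph edges on the $8\times 8$ board, recovering $|{\cal E}| = 168$ and the mean return time $2|{\cal E}|/d_v = 168$ from a corner, whose degree is $2$.

```python
def knight_graph_edges(n=8):
    """Count the edges of the knight-move graph on an n-by-n board,
    returning (edge count, degree dictionary keyed by square)."""
    jumps = [(1, 2), (2, 1), (-1, 2), (-2, 1),
             (1, -2), (2, -1), (-1, -2), (-2, -1)]
    deg = {}
    for x in range(n):
        for y in range(n):
            deg[(x, y)] = sum(
                0 <= x + dx < n and 0 <= y + dy < n for dx, dy in jumps
            )
    assert sum(deg.values()) % 2 == 0  # handshake lemma sanity check
    return sum(deg.values()) // 2, deg

edges, deg = knight_graph_edges()
mean_return = 2 * edges / deg[(0, 0)]  # E_v T_v^+ = 2|E| / d_v
```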

Other chess pieces—queen, king, rook—define different graphs (the bishop's is of course not connected, and the pawn's not undirected). One might expect that the conventional ordering of the "strength" of the pieces as (queen, rook, knight, king) is reflected in the parameters $\tau_0$ and $\tau_2$ (jjj how about the other taus?) being increasing in this ordering. The reader is invited to perform the computations. (jjj: an undergraduate project?) We have done so only for the rook's move, treated in the next example.

The computations for the queen, knight, and king are simplified if the walks are made on a toroidal chessboard. (There is no difference for the rook.)

jjj Chess on a bagel, anyone? Continue same paragraph:

Then Fourier analysis (see Diaconis [123]) on the abelian group $Z^2_m$ (with $m=8$) can be brought to bear, and the eigenvalues are easy to compute. We omit the details, but the results for (queen, rook, knight, king) are asymptotically

\tau_0 = \left(m^2 + \frac{7}{12}m + O(1),\ m^2 + m + O(1),\ \mbox{jjj?}\,(1+o(1))c_{\mathrm{knight}}m^2\log m,\ \mbox{jjj?}\,(1+o(1))c_{\mathrm{king}}m^2\log m\right)

\tau_2 \sim \left(\frac{4}{3},\ 2,\ \frac{1}{5\pi^2}m^2,\ \frac{2}{3\pi^2}m^2\right)

as $m \rightarrow \infty$, in conformance with our expectations, and numerically

\tau_0 = (65.04,\ 67.38,\ 69.74,\ 79.36)

\tau_2 = (1.29,\ 1.75,\ 1.55,\ 4.55)

for $m=8$. The only surprise is the inverted $\tau_2$ ordering for (rook, knight).

Example 5.19

Rook's random walk on an $m$-by-$m$ chessboard.

jjj Do we want to do this also on a $d$-dimensional grid? We need to mention how this is a serious example, used with Metropolis for sampling from log-concave distributions; reference is [33][32]?

Number the rows and columns of the chessboard each $0$ through $m-1$ in arbitrary fashion, and denote the square of the chessboard at row $i_1$ and column $i_2$ by ${\bf i} = (i_1,i_2)$. In continuous time, the rook's random walk $({\bf X}(t))$ is the product of two continuous-time random walks on the complete graph $K_m$ on $m$ vertices, each run at rate $1/2$. Thus (cf. Example 5.9)

P_{\bf i}({\bf X}(t) = {\bf j}) = \prod_{u=1}^2\left[\frac{1}{m} + \left(\delta_{i_u,j_u} - \frac{1}{m}\right)\exp\left(-\frac{mt}{2(m-1)}\right)\right],   (5.76)

which can be expanded to get the discrete-time multistep transition probabilities, if desired. We recall that the eigenvalues for discrete-time random walk on $K_m$ are $1$ with multiplicity $1$ and $-1/(m-1)$ with multiplicity $m-1$. It follows [recall (5.66)] that the eigenvalues for the continuous-time rook's walk are

0, \quad \frac{m}{2(m-1)}, \quad \frac{m}{m-1} \quad \mbox{with resp.\ multiplicities} \quad 1, \quad 2(m-1), \quad (m-1)^2.

In particular,

\tau_2 = \frac{2(m-1)}{m},   (5.77)

which equals $1.75$ for $m=8$ and converges to $2$ as $m$ grows. Applying the eigentime identity, a brief calculation gives

\tau_0 = \frac{(m-1)^2(m+3)}{m},   (5.78)

which equals $67.375$ for $m=8$ and is $m^2 + m + O(1)$ for $m$ large.
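The brief calculation behind (5.78) is a one-liner from the eigenvalue list above; here is a sketch (our naming) carrying it out in exact arithmetic and matching the closed form.

```python
from fractions import Fraction

def rook_tau0(m):
    """tau_0 for the continuous-time rook's walk via the eigentime
    identity: sum of 1/lambda over the nonzero eigenvalues m/(2(m-1))
    (multiplicity 2(m-1)) and m/(m-1) (multiplicity (m-1)^2)."""
    lam1 = Fraction(m, 2 * (m - 1))
    lam2 = Fraction(m, m - 1)
    return 2 * (m - 1) / lam1 + (m - 1) ** 2 / lam2
```

For $m = 8$ this gives $539/8 = 67.375$, agreeing with (5.78).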

Starting the walk ${\bf X}$ at ${\bf 0} = (0,0)$, let $Y(t)$ denote the Hamming distance $H({\bf X}(t),{\bf 0})$ of ${\bf X}(t)$ from ${\bf 0}$, i.e., the number of coordinates ($0$, $1$, or $2$) in which ${\bf X}(t)$ differs from ${\bf 0}$. Then $Y$ is a birth-and-death chain with transition rates

q_{01} = 1, \quad q_{10} = \frac{1}{2(m-1)}, \quad q_{12} = \frac{1}{2}, \quad q_{21} = \frac{1}{m-1}.

This is useful for computing mean hitting times. Of course

E_{\bf i}T_{\bf j} = 0 \mbox{ if } H({\bf i},{\bf j}) = 0.



it follows that

E𝐢T𝐣=m2-1 if H(𝐢,𝐣)=1H({\bf i},{\bf j})=1.E_{\bf i}T_{\bf j}=m^{2}-1\mbox{\ if $H({\bf i},{\bf j})=1$.}

Finally, it is clear that $E_2T^Y_1 = m-1$, so that

E_2T^Y_0 = E_2T^Y_1 + E_1T^Y_0 = (m-1) + (m^2-1) = m^2 + m - 2,

and hence

E_{\bf i}T_{\bf j} = m^2 + m - 2 \mbox{ if } H({\bf i},{\bf j}) = 2.

These calculations show

\frac{1}{2}\tau^* = \max_{{\bf i}{\bf j}}E_{\bf i}T_{\bf j} = m^2 + m - 2,

which equals $70$ for $m=8$, and they provide another proof of (5.78).

From (5.76) it is easy to derive

\bar{d}_m(t) = \left(2 - \frac{2}{m}\right)\exp\left(-\frac{mt}{2(m-1)}\right) - \left(1 - \frac{2}{m}\right)\exp\left(-\frac{mt}{m-1}\right)

and thence

\tau_1 = -2\,\frac{m-1}{m}\left[\ln\left(1 - \left(1 - e^{-1}\frac{m(m-2)}{(m-1)^2}\right)^{1/2}\right) + \ln\left(\frac{m-1}{m-2}\right)\right],

which rounds to $2.54$ for $m=8$ and converges to $-2\ln\left(1 - (1-e^{-1})^{1/2}\right) \doteq 3.17$ as $m$ becomes large.
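Since $\bar{d}_m$ is strictly decreasing on $t \geq 0$, the closed form can be double-checked by solving $\bar{d}_m(t) = e^{-1}$ numerically (this sketch assumes the usual threshold definition of $\tau_1$; function names are ours).

```python
import math

def dbar(t, m):
    """\\bar d_m(t) from the display above; equals 1 at t = 0 and decreases."""
    a = m / (2.0 * (m - 1))
    return (2 - 2.0 / m) * math.exp(-a * t) - (1 - 2.0 / m) * math.exp(-2 * a * t)

def tau1_bisect(m):
    """Solve dbar(t, m) = 1/e by bisection on [0, 50]."""
    lo, hi = 0.0, 50.0
    for _ in range(200):
        mid = (lo + hi) / 2
        if dbar(mid, m) > math.exp(-1):
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def tau1_closed(m):
    """The closed form displayed above."""
    return -2 * (m - 1) / m * (
        math.log(1 - math.sqrt(1 - math.exp(-1) * m * (m - 2) / (m - 1) ** 2))
        + math.log((m - 1) / (m - 2))
    )
```

The two computations agree to machine precision, and for $m = 8$ both land near $2.54$.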

Any set $A$ of the form $\{(i_1,i_2):\ i_u \in J\}$ with either $u=1$ or $u=2$ and $J$ a nonempty proper subset of $\{0,\ldots,m-1\}$ achieves the value

\tau_c = \frac{2(m-1)}{m}.

A direct proof is messy, but this follows immediately from the general inequality $\tau_c \leq \tau_2$, (5.77), and a brief calculation that the indicated $A$ indeed gives the indicated value.

xxx other examples left to reader? complete bipartite; ladders

jjj Note: I’ve worked these out and have handwritten notes. How much do we want to include, if at all? (I could at least put the results in the table.)

5.2.1 Biased walk on a balanced tree

Consider again the balanced $r$-tree setup of Example 5.14. Fix a parameter $0 < \lambda < \infty$. We now consider biased random walk $(X_t)$ on the tree, where from each non-leaf vertex other than the root the transition goes to the parent with probability $\lambda/(\lambda+r)$ and to each child with probability $1/(\lambda+r)$. As in Example 5.14 (the case $\lambda = 1$), the chain $\hat{X}$ induced by the function

f(i) = h - (\mbox{distance from $i$ to the root})

is (biased) reflecting random walk on $\{0,\ldots,h\}$ with respective probabilities $\lambda/(\lambda+r)$ and $r/(\lambda+r)$ of moving to the right and left from any $i \neq 0,h$; the ratio of these two transition probabilities is

\rho \equiv \lambda/r.

The stationary distribution $\hat\pi$ for $\hat{X}$ is a modified geometric:

\hat\pi_m = \frac{1}{\hat{w}}\times\left\{\begin{array}{cl} 1 & \mbox{if $m = 0$}\\ (1+\rho)\rho^{m-1} & \mbox{if $1 \leq m \leq h-1$}\\ \rho^{h-1} & \mbox{if $m = h$}\end{array}\right.

where

\hat{w} = 2\sum_{m=0}^{h-1}\rho^m = \left\{\begin{array}{cl} 2(1-\rho^h)/(1-\rho) & \mbox{if $\rho \neq 1$}\\ 2h & \mbox{if $\rho = 1$.}\end{array}\right.
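The claimed $\hat\pi$ can be verified mechanically: the sketch below (our own naming) builds the transition matrix of the induced chain on $\{0,\ldots,h\}$, with reflection at both ends, and checks that $\hat\pi P = \hat\pi$.

```python
def induced_chain_check(r, lam, h):
    """Return max |pi-hat P - pi-hat| for the induced chain X-hat:
    right with prob lam/(lam+r), left with prob r/(lam+r) at interior
    states; a leaf (state 0) must step to its parent, and the root
    (state h) steps to a uniform child, so both endpoints reflect."""
    rho = lam / r
    P = [[0.0] * (h + 1) for _ in range(h + 1)]
    P[0][1] = 1.0
    P[h][h - 1] = 1.0
    for i in range(1, h):
        P[i][i + 1] = lam / (lam + r)
        P[i][i - 1] = r / (lam + r)
    w = 2 * sum(rho ** m for m in range(h))  # w-hat from the display above
    pi = ([1.0 / w]
          + [(1 + rho) * rho ** (m - 1) / w for m in range(1, h)]
          + [rho ** (h - 1) / w])
    assert abs(sum(pi) - 1.0) < 1e-12
    piP = [sum(pi[i] * P[i][j] for i in range(h + 1)) for j in range(h + 1)]
    return max(abs(a - b) for a, b in zip(piP, pi))
```

The check passes for biased and unbiased parameters alike, including the $\rho = 1$ case $\lambda = r$.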

Since the stationary distribution $\pi$ for $X$ assigns the same probability to each of the $r^{h-f(v)}$ vertices $v$ with a given value of $f(v)$, a brief calculation shows that $\pi_vp_{vx} = \lambda^{f(v)}/\hat{w}r^h$ for any edge $(v = \mbox{child},\ x = \mbox{parent})$ in the tree. In the same notation, it follows that $X$ is random walk on the balanced $r$-tree with edge weights $w_{vx} = \lambda^{f(v)}$ and total weight $w = \sum_{v,x}w_{vx} = \hat{w}r^h$.

The distribution $\hat\pi$ concentrates near the leaves-level if $\rho < 1$ and near the root-level if $\rho > 1$; it is nearly uniform on the $h$ levels if $\rho = 1$. On the other hand, the weight assigned by the distribution $\pi$ to an individual vertex $v$ is a decreasing function of $f(v)$ (thus favoring vertices near the leaves) if $\lambda < 1$ (i.e., $\rho < 1/r$) and is an increasing function (thus favoring vertices near the root) if $\lambda > 1$; it is uniform on the vertices in the unbiased case $\lambda = 1$.

The mean hitting time calculations of Example 5.14 can all be extended to the biased case. For example, for $\lambda \neq 1$ the general formula (5.55) becomes [using the same notation as at (5.55)]

E_{v_1}T_{v_2} = \hat{w}r^h\,\frac{\lambda^{-f_3} - \lambda^{-f_2}}{\lambda^{-1} - 1} + 2(\rho^{-1} - 1)^{-2}\left(\rho^{-(f_2+1)} - \rho^{-(f_1+1)}\right) - 2(\rho^{-1} - 1)^{-1}(f_2 - f_1) - (f_2 - f_1)   (5.79)

if $\rho \neq 1$ and

E_{v_1}T_{v_2} = \hat{w}r^h\,\frac{\lambda^{-f_3} - \lambda^{-f_2}}{\lambda^{-1} - 1} + f_2^2 - f_1^2

if $\rho = 1$. The maximum value is attained when $v_1$ and $v_2$ are leaves and $v_3$ is the root. So if $\lambda \neq 1$,

\frac{1}{2}\tau^* = \max_{v,x}E_vT_x = \hat{w}r^h\,\frac{\lambda^{-h} - 1}{\lambda^{-1} - 1}.   (5.80)

The orders of magnitude for all of the $\tau$-parameters (with $r$ and $\lambda$, and hence $\rho$, fixed as $h$, and hence $n$, becomes large) are summarized on a case-by-case basis in the next table. Following are some of the highlights in deriving these results; the details, and derivation of exact formulas and more detailed asymptotic results, are left to the reader.

Orders of magnitude of parameters [$\tau = \Theta(\mbox{entry})$]

for $\lambda$-biased walk on a balanced $r$-tree of height $h$ ($\rho = \lambda/r$).

Value of $\rho$   $\tau^*$   $\tau_0$   $\tau_1$   $\tau_2$   $\tau_c$
$\rho < 1/r$   $\rho^{-h}$   $\rho^{-h}$   $\rho^{-h}$   $\rho^{-h}$   $\rho^{-h}$
$\rho = 1/r$ ($\equiv$ Example 5.14)   $nh$   $nh$   $n$   $n$   $n$
$1/r < \rho < 1$   $n$   $n$   $\rho^{-h}$   $\rho^{-h}$   $\rho^{-h}$
$\rho = 1$   $nh$   $n$   $h$   $h$   $h$
$\rho > 1$   $n$   $n$   $h$   $1$   $1$

For $\tau_0 = \sum_x\pi_xE_{\mathrm{root}}T_x$ we have $\tau_0 \leq E_{\mathrm{root}}T_{\mathrm{leaf}}$. If $\rho < 1/r$, this bound is tight:

\tau_0 \sim E_{\mathrm{root}}T_{\mathrm{leaf}} \sim \frac{2\rho^{-h}(\lambda - \rho)}{(1-\rho)^2(1-\lambda)};

for $\rho > 1/r$ a more careful calculation is required.

If $\rho < 1$, then the same arguments as for the unbiased case ($\rho = 1/r$) show

\tau_1 \sim \tau_2 \sim 2\rho^{-(h-1)}/(1-\rho)^2.

In this case it is not hard to show that

\tau_c = \Theta(\rho^{-h})

as well. If $\rho = 1$, then it is not hard to show that

\tau_1 = \Theta(h), \quad \tau_c \sim 2\left(1 - \frac{1}{r}\right)h

with $\tau_c$ achieved at a branch of the root (excluding the root), and so

\tau_2 = \Theta(h)

as well. If $\rho > 1$, then since $\hat{X}$ has positive drift equal to $(\rho-1)/(\rho+1)$, it follows that

\tau_1 = \Theta(h).

The value $\tau_c$ is achieved by isolating a leaf, giving

\tau_c \rightarrow 1,

and so, by the inequalities $\tau_c \leq \tau_2 \leq 8\tau_c^2$ of Chapter 4, Section yyy,

\tau_2 = \Theta(1)

as well.

jjj Limiting value of $\tau_2$ when $\rho > 1$ is that of $\tau_2$ for the biased infinite tree? Namely?