13 Continuous State, Infinite State and Random Environment (June 23, 2001)

13.2 Infinite graphs

There is a huge research literature concerning random walks on infinite discrete groups, and more generally on infinite graphs, and the recent monograph of Woess [339] provides an in-depth treatment. This section focuses narrowly on two aspects of an issue not emphasized in [339]: what does the study of random walks on infinite graphs tell us about random walks on finite graphs? One aspect of this issue is that random walks on certain specific infinite graphs may be used to get approximations or inequalities for random walks on specific finite graphs. We treat three examples.

  • The infinite lattice $Z^d$ as an approximation to the discrete torus $Z^d_N$ for large $N$ (section 13.2.4).

  • The infinite degree-$r$ tree ${\bf T}^r$ and bounds for $r$-regular expander graphs of large size (section 13.2.6).

  • The hierarchical tree ${\bf T}^r_{\rm hier}$ as an approximation to balanced $(r-1)$-ary trees (section 13.2.9).

The second aspect concerns properties such as transience, non-trivial boundary, and “spectral radius $<1$”, which have been well-studied as qualitative properties which an infinite-state chain either possesses or does not possess. What are the quantitative finite-state analogs of such properties? Here actual theorems are scarce; we present conceptual discussion in sections 13.2.3 and 13.2.10 as a spur to future research.

13.2.1 Set-up

We assume the reader has some acquaintance with classical theory (e.g., [133] Chapter 5) for a countable-state irreducible Markov chain, which emphasizes the trichotomy transient or null-recurrent or positive-recurrent. We use the phrase general chain to refer to the case of an arbitrary irreducible transition matrix 𝐏{\bf P}, without any reversibility assumption.

Recall from Chapter 3 section 2 the identification, in the finite-state setting, of reversible chains and random walks on weighted graphs. Given a reversible chain we defined edge-weights $w_{ij}=\pi_i p_{ij}=\pi_j p_{ji}$; conversely, given edge-weights we defined random walk as the reversible chain

p_{vx} = w_{vx}/w_v; \qquad w_v = \sum_x w_{vx}. \qquad (13.19)

In the infinite setting it is convenient (for reasons explained below) to take the “weighted graph” viewpoint. Thus the setting of this section is that we are given a connected weighted graph satisfying

w_v \equiv \sum_x w_{vx} < \infty \ \forall v, \qquad \sum_v w_v = \infty \qquad (13.20)

and we study the associated random walk $(X_t)$, i.e., the discrete-time chain with $p_{vx}=w_{vx}/w_v$. So in the unweighted setting ($w_e\equiv 1$), we have nearest-neighbor random walk on a locally finite, infinite graph.

To explain why we adopt this set-up, say $\pi$ is invariant for ${\bf P}$ if

\sum_i \pi_i p_{ij} = \pi_j \ \forall j; \qquad \pi_j > 0 \ \forall j.

Consider asymmetric random walk on $Z$, say

p_{i,i+1} = 2/3, \quad p_{i,i-1} = 1/3; \qquad -\infty < i < \infty. \qquad (13.21)

One easily verifies that each of the two measures $\pi_i = 1$ and $\pi_i = 2^i$ is invariant. Such nonuniqueness makes it awkward to seek to define reversibility of ${\bf P}$ via the detailed balance equations

\pi_i p_{ij} = \pi_j p_{ji} \ \forall i,j \qquad (13.22)

without a prior definition of $\pi$. Stating definitions via weighted graphs avoids this difficulty.
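Since the balance at a state $j$ of the chain (13.21) involves only its two neighbors, the invariance of both measures reduces to checking $\pi_{j-1}\,\frac{2}{3}+\pi_{j+1}\,\frac{1}{3}=\pi_j$; a minimal numerical sketch (illustrative only, with an arbitrary range of states):

```python
# Check that pi_i = 1 and pi_i = 2^i are both invariant for the walk (13.21).
# The balance at state j involves only j-1 and j+1:
#   sum_i pi_i p_{ij} = pi_{j-1} * (2/3) + pi_{j+1} * (1/3).
def balances(pi, j):
    # relative tolerance, since pi(j) = 2^j grows large
    return abs(pi(j - 1) * 2 / 3 + pi(j + 1) * 1 / 3 - pi(j)) < 1e-12 * pi(j)

assert all(balances(lambda i: 1.0, j) for j in range(-50, 50))
assert all(balances(lambda i: 2.0 ** i, j) for j in range(-30, 30))
```

The same one-line computation ($2^{j-1}\cdot\frac{2}{3}+2^{j+1}\cdot\frac{1}{3}=2^j$) can of course be done by hand.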

The second assumption in (13.20), that $\sum_v w_v = \infty$, excludes the positive-recurrent case (see Theorem 13.4 below); in that case the questions one asks, such as whether the relaxation time $\tau_2$ is finite, can be analyzed by the same techniques as in the finite-state setting.

Our intuitive interpretation of “reversible” in Chapter 3 was “a movie of the chain looks the same run forwards or run backwards”. But the chain corresponding to the weighted graph with weights $w_{i,i+1}=2^i$, which is the chain (13.21) with $\pi_i=2^i$, has a particle moving towards $+\infty$ and so certainly doesn’t satisfy this intuitive notion. On the other hand, a probabilistic interpretation of an infinite invariant measure $\pi$ is that if we start at time 0 with independent Poisson($\pi_v$) numbers of particles at vertices $v$, and let the particles move independently according to ${\bf P}$, then the particle process is stationary in time. So the detailed balance equations (13.22) correspond to the intuitive “movie” notion of reversibility for the infinite particle process, rather than for a single chain.

13.2.2 Recurrence and Transience

The next Theorem summarizes parts of the standard theory of general chains (e.g., [133] Chapter 5). Write $\rho_v := P_v(T^+_v < \infty)$ and let $N_v(\infty)$ be the total number of visits (including time 0) to $v$.

Theorem 13.4

For a general chain, one of the following alternatives holds.

Recurrent: $\rho_v = 1$ and $E_v N_v(\infty) = \infty$ and $P_v(N_w(\infty) = \infty) = 1$ for all $v,w$.

Transient: $\rho_v < 1$ and $E_v N_v(\infty) < \infty$ and $P_v(N_w(\infty) < \infty) = 1$ for all $v,w$.

In the recurrent case there exists an invariant measure $\pi$, unique up to constant multiples, and the chain is either

positive-recurrent: $E_v T^+_v < \infty \ \forall v$ and $\sum_v \pi_v < \infty$; or

null-recurrent: $E_v T^+_v = \infty \ \forall v$ and $\sum_v \pi_v = \infty$.

In the transient and null-recurrent cases, $P_v(X_t = w) \rightarrow 0$ as $t\rightarrow\infty$ for all $v,w$.

Specializing to random walk on a weighted graph, the measure $(w_v)$ is invariant, and the second assumption in (13.20) implies that the walk cannot be positive-recurrent. By a natural abuse of language we call the weighted graph recurrent or transient. Because $E_v N_v(\infty) = \sum_t p^{(t)}_{vv}$, Theorem 13.4 contains the “classical” method to establish transience or recurrence by considering the $t\rightarrow\infty$ behavior of $p^{(t)}_{vv}$. This method works easily for random walk on $Z^d$ (section 13.2.4).

Some of the “electrical network” story from Chapter 3 extends immediately to the infinite setting. Recall the notion of a flow ${\bf f}$, and the net flow $f_{(x)}$ out of a vertex $x$. Say ${\bf f}$ is a unit flow from $x$ to infinity if $f_{(x)} = 1$ and $f_{(v)} = 0$ for all $v \neq x$. Thompson’s principle (Chapter 3 Proposition 35) extends to the infinite setting, by considering subsets $A_n \downarrow \phi$ (the empty set) with $A_n^c$ finite.

Theorem 13.5

Consider a weighted graph satisfying (13.20). For each $v$,

\inf\left\{ \frac{1}{2}\sum_e f_e^2/w_e : {\bf f} \mbox{ a unit flow from } v \mbox{ to infinity} \right\} = \frac{1}{w_v(1-\rho_v)}.

In particular, the random walk is transient iff for some (all) $v$ there exists a unit flow ${\bf f}$ from $v$ to infinity such that $\sum_e f_e^2/w_e < \infty$.

By analogy with the finite setting, we can regard the inf as the effective resistance between $v$ and infinity, although (see section LABEL:IEN) we shall not attempt an axiomatic treatment of infinite electrical networks.

Theorem 13.5 has the following immediate corollary: of course (a) and (b) are logically equivalent.

Corollary 13.6

(a) If a weighted graph is recurrent, then so is any subgraph.

(b) To show that a weighted graph is transient, it suffices to find a transient subgraph.

Thus the classical fact that $Z^2$ is recurrent implies that every subgraph of $Z^2$ is recurrent, a fact which is hard to prove by bounding $t$-step transition probabilities. In the other direction, it is possible (but not trivial) to prove that $Z^3$ is transient by exhibiting a flow: indeed Doyle and Snell [131] construct a transient tree-like subgraph of $Z^3$.

Here is a different formulation of the same idea.

Corollary 13.7

The return probability $\rho_v = P_v(T_v^+ < \infty)$ cannot increase if a new edge (not incident at $v$) is added, or the weight of an existing edge (not incident at $v$) is increased.

13.2.3 The finite analog of transience

Recall the mean hitting time parameter $\tau_0$ from Chapter 4. For a sequence of $n$-state reversible chains, consider the property

n^{-1}\tau_0(n) \mbox{ is bounded as } n\rightarrow\infty. \qquad (13.23)

We assert, as a conceptual paradigm, that property (13.23) is the analog of the “transient” property for a single infinite-state chain. The connection is easy to see algebraically for symmetric chains (Chapter 7), where $\tau_0 = E_\pi T_v$ for each $v$, so that by Chapter 2 Lemma 10

n^{-1}\tau_0(n) = \sum_{t=0}^\infty \left( P_v(X_t = v) - n^{-1} \right).

The boundedness (in $n$) of this sum is a natural analog of the transience condition

\sum_{t=0}^\infty P_v(X_t = v) < \infty
for a single infinite-state chain. So in principle the methods used to determine transience or recurrence in the infinite-state case ([339] Chapter 1) should be usable to determine whether property (13.23) holds for finite families, and indeed Proposition 37 of Chapter 3 provides a tool for this purpose. In practice these extremal methods haven’t yet proved very successful; early papers [85] proved (13.23) for expanders in this way, but other methods are easier (see our proof of Chapter 9 Theorem 1). There is well-developed theory ([339] section 6) which establishes recurrence for infinite planar graphs under mild assumptions. It is natural to conjecture that under similar assumptions, a planar $n$-vertex graph has $\tau_0 = \Theta(n\log n)$, as in the case of $Z^2$ in Proposition 13.8 below.

13.2.4 Random walk on $Z^d$

We consider the lattice $Z^d$ as an infinite $2d$-regular unweighted graph. Write $X_t$ for simple random walk on $Z^d$, and write $\widetilde X_t$ for the continuized random walk. Of course, general random flights (i.e. “random walks”, in everyone’s terminology except ours) and their numerous variations comprise a well-studied classical topic in probability theory. See Hughes [184] for a wide-ranging intermediate-level treatment, emphasizing physics applications. Our discussion here is very narrow, relating to topics treated elsewhere in this book.

To start some calculations, for $d=1$ consider

\tilde p(t) \equiv P_0(\widetilde X_t = 0)
 = P(J^+_t = J^-_t), where $J^+_t$ and $J^-_t$ are the independent Poisson($t/2$) numbers of $+1$ and $-1$ jumps
 = \sum_{n=0}^\infty \left( \frac{e^{-t/2}(t/2)^n}{n!} \right)^2
 = e^{-t} I_0(t)

where $I_0(t) := \sum_{n=0}^\infty \frac{(t/2)^{2n}}{(n!)^2}$ is the modified Bessel function of the first kind of order 0. Now ${\rm var}\,\widetilde X_t = t$, and as a consequence of the local CLT (or by quoting asymptotics of the Bessel function $I_0$) we have

\tilde p(t) \sim (2\pi t)^{-1/2} \mbox{ as } t\rightarrow\infty. \qquad (13.24)
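The Poisson-matching formula for $\tilde p(t)$ and the asymptotic (13.24) are easy to check numerically; a small sketch (the truncation level `terms` is an arbitrary choice, and the sum is done in log scale to avoid overflow):

```python
import math

# p~(t) = P(two independent Poisson(t/2) jump counts agree)
#       = sum_n [ e^{-t/2} (t/2)^n / n! ]^2, evaluated in log scale.
def p_tilde(t, terms=500):
    return sum(math.exp(-t + 2 * (n * math.log(t / 2) - math.lgamma(n + 1)))
               for n in range(terms))

# compare with the asymptotic (13.24): p~(t) * sqrt(2 pi t) -> 1,
# with a correction of order 1/(8t)
for t in [10.0, 100.0]:
    ratio = p_tilde(t) * math.sqrt(2 * math.pi * t)
    assert abs(ratio - 1) < 2 / t
```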

As discussed in Chapter 4 section 6.2 and Chapter 5 Example 17, a great advantage of working in continuous time in dimensions $d\geq 2$ is that the coordinate processes are independent copies of slowed-down one-dimensional processes, so that $\tilde p^{(d)}(t) \equiv P_0(\widetilde X_t = 0)$ in dimension $d$ satisfies

\tilde p^{(d)}(t) = (\tilde p(t/d))^d = e^{-t}(I_0(t/d))^d. \qquad (13.25)

In particular, from (13.24),

\tilde p^{(d)}(t) \sim \left( \frac{d}{2\pi} \right)^{d/2} t^{-d/2} \mbox{ as } t\rightarrow\infty. \qquad (13.26)

One can do a similar analysis in the discrete-time case. In dimension $d=1$,

p(t) \equiv P_0(X_t = 0) = 2^{-t}\binom{t}{t/2}, \ t \mbox{ even} \qquad (13.27)
 \sim 2(2\pi t)^{-1/2} \mbox{ as } t\rightarrow\infty, \ t \mbox{ even}.

This agrees with (13.26) but with an extra “artificial” factor of 2 arising from periodicity. A more tedious argument gives the analog of (13.26) in discrete time for general $d$:

p^{(d)}(t) \sim 2\left( \frac{d}{2\pi} \right)^{d/2} t^{-d/2} \mbox{ as } t\rightarrow\infty, \ t \mbox{ even}. \qquad (13.28)
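The $d=1$ case (13.27) and its factor-of-2 asymptotic can be checked exactly with binomial coefficients; a quick sketch (the value $t=10000$ is an arbitrary choice):

```python
import math

# Discrete-time return probability (13.27): for even t,
# p(t) = 2^{-t} C(t, t/2), and p(t) ~ 2 (2 pi t)^{-1/2}.
def p_discrete(t):
    assert t % 2 == 0
    return math.comb(t, t // 2) / 2 ** t

t = 10_000
ratio = p_discrete(t) / (2 * (2 * math.pi * t) ** -0.5)
assert abs(ratio - 1) < 1e-3   # Stirling correction is O(1/t)
```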

From the viewpoint of classical probability, one can regard (13.26, 13.28) as the special case $j=0$ of the local CLT: in continuous time in dimension $d$,

\sup_j \left| P_0(\widetilde X_t = j) - \left( \frac{d}{2\pi} \right)^{d/2} t^{-d/2} \exp(-d|j|^2/(2t)) \right| = o(t^{-d/2}) \mbox{ as } t\rightarrow\infty

where $|j|$ denotes Euclidean norm.

The occupation time $N_0(t)$ satisfies $E_0 N_0(t) = \int_0^t \tilde p(s)\,ds$ (continuous time) and $E_0 N_0(t) = \sum_{s=0}^{t-1} p(s)$ (discrete time). In either case, as $t\rightarrow\infty$,

(d=1)\quad E_0 N_0(t) \sim \sqrt{2/\pi}\; t^{1/2} \qquad (13.29)

(d=2)\quad E_0 N_0(t) \sim \pi^{-1}\log t \qquad (13.30)

(d\geq 3)\quad E_0 N_0(t) \rightarrow R_d \equiv \int_0^\infty \tilde p^{(d)}(t)\,dt = \int_0^\infty e^{-t}(I_0(t/d))^d\,dt \qquad (13.31)

where $R_d < \infty$ for $d\geq 3$ by (13.26). This is the classical argument for establishing transience in $d\geq 3$ and recurrence in $d\leq 2$, by applying Theorem 13.4. Note that the return probability $\rho^{(d)} := P_0(T^+_0 < \infty)$ is related to $E_0 N_0(\infty)$ by $E_0 N_0(\infty) = \frac{1}{1-\rho^{(d)}}$; in other words

\rho^{(d)} = \frac{R_d - 1}{R_d}, \qquad d\geq 3.

Textbooks sometimes give the impression that calculating $\rho^{(d)}$ is hard, but one can simply evaluate the integral (13.31) numerically. Or see [174] for a table.
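For instance, a sketch of such a numerical evaluation for $d=3$ (the truncation point $T$, the trapezoidal grid, and the series/asymptotic switchover for $I_0$ are arbitrary choices; the tail beyond $T$ is estimated from (13.26)):

```python
import math

def log_I0(x):
    # log of the modified Bessel function I0: power series for small x,
    # two-term asymptotic e^x / sqrt(2 pi x) * (1 + 1/(8x)) for large x
    if x < 30:
        term, s, n = 1.0, 1.0, 0
        while term > 1e-17 * s:
            n += 1
            term *= (x / 2) ** 2 / n ** 2
            s += term
        return math.log(s)
    return x - 0.5 * math.log(2 * math.pi * x) + math.log1p(1 / (8 * x))

def R(d, T=4000.0, n=200_000):
    # trapezoidal rule for the integral (13.31) on [0, T],
    # plus a tail estimate from the asymptotic (13.26)
    h = T / n
    total = 0.0
    for i in range(n + 1):
        t = i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * math.exp(d * log_I0(t / d) - t)
    total *= h
    tail = (d / (2 * math.pi)) ** (d / 2) * T ** (1 - d / 2) / (d / 2 - 1)
    return total + tail

R3 = R(3)
rho3 = (R3 - 1) / R3   # should be close to the classical value ~0.3405
assert abs(rho3 - 0.3405) < 0.005
```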

The quantity $\rho^{(d)}$ has the following sample-path interpretation. Let $V_t$ be the number of distinct vertices visited by the walk before time $t$. Then

t^{-1} V_t \rightarrow 1 - \rho^{(d)} \mbox{ a.s.}, \qquad d\geq 3. \qquad (13.32)

The proof of this result is a textbook application of the ergodic theorem for stationary processes: see [133] Theorem 6.3.1.
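A Monte Carlo sketch of (13.32) for $d=3$ (the step count and seed are arbitrary choices; $1-\rho^{(3)}\approx 0.66$):

```python
import random

# Simulate one long simple random walk on Z^3 and measure the
# fraction of distinct vertices visited, as in (13.32).
random.seed(0)
steps = 200_000
pos = (0, 0, 0)
visited = {pos}
moves = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
for _ in range(steps):
    dx, dy, dz = random.choice(moves)
    pos = (pos[0] + dx, pos[1] + dy, pos[2] + dz)
    visited.add(pos)
frac = len(visited) / steps
assert 0.60 < frac < 0.72   # limit is 1 - rho^(3) ~ 0.66
```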

13.2.5 The torus $Z^d_m$

We now discuss how random walk on $Z^d$ relates to $m\rightarrow\infty$ asymptotics for random walk on the finite torus $Z^d_m$, discussed in Chapter 5. We now use a superscript $\cdot^{(m)}$ to denote the length parameter. From Chapter 5 Example 17 we have

\tau_1^{(m)} = \Theta(m^2) \qquad (13.33)

where asymptotics are as $m\rightarrow\infty$ for fixed $d$. One can interpret this as a consequence of the $dm^2$ time rescaling in the weak convergence of rescaled random walk to Brownian motion on the $d$-dimensional torus, for which (cf. sections 13.1.1 and 13.1.2) $\tau_2 = 2\pi^{-2}$. At (74)–(75) of Chapter 5 we saw that the eigentime identity gave an exact formula for the mean hitting time parameter $\tau_0^{(m)}$, whose asymptotics are, for $d\geq 3$,

m^{-d}\tau_0^{(m)} \rightarrow \hat R_d \equiv \int_0^1 \cdots \int_0^1 \frac{1}{\frac{1}{d}\sum_{u=1}^d (1 - \cos(2\pi x_u))} \, dx_1 \cdots dx_d < \infty. \qquad (13.34)

Here we give an independent analysis of this result, and the case d=2d=2.

Proposition 13.8

(d=1)\quad \tau_0^{(m)} \sim \frac{1}{6} m^2 \qquad (13.35)

(d=2)\quad \tau_0^{(m)} \sim 2\pi^{-1} m^2 \log m \qquad (13.36)

(d\geq 3)\quad \tau_0^{(m)} \sim R_d m^d \qquad (13.37)

for $R_d$ defined by (13.31). In particular, the expressions for $R_d$ and $\hat R_d$ at (13.31) and (13.34) are equal, for $d\geq 3$.

The d=1d=1 result is from Chapter 5 (26). We now prove the other cases.

Proof. We may construct the continuized random walk $\widetilde X^{(m)}_t$ on $Z^d_m$ from the continuized random walk $\widetilde X_t$ on $Z^d$ by

\widetilde X^{(m)}_t = \widetilde X_t \bmod m \qquad (13.38)

and then $P_0(\widetilde X^{(m)}_t = 0) \geq P_0(\widetilde X_t = 0)$. So

m^{-d}\tau_0^{(m)} = \int_0^\infty \left( P_0(\widetilde X^{(m)}_t = 0) - m^{-d} \right) dt \qquad (13.39)

(Chapter 2, Corollary 12 and (8))

 = \int_0^\infty \left( P_0(\widetilde X^{(m)}_t = 0) - m^{-d} \right)^+ dt \quad \mbox{by complete monotonicity}
 \geq \int_0^\infty \left( P_0(\widetilde X_t = 0) - m^{-d} \right)^+ dt
 \rightarrow \int_0^\infty P_0(\widetilde X_t = 0)\, dt = R_d.

Consider the case $d\geq 3$. To complete the proof, we need the corresponding upper bound, for which it is sufficient to show

\int_0^\infty \left( P_0(\widetilde X^{(m)}_t = 0) - m^{-d} - P_0(\widetilde X_t = 0) \right)^+ dt \rightarrow 0 \mbox{ as } m\rightarrow\infty. \qquad (13.40)

To verify (13.40) without detailed calculations, we first establish a one-dimensional bound

(d=1)\qquad \tilde p^{(m)}(t) \leq \frac{1}{m} + \tilde p(t). \qquad (13.41)

To obtain (13.41) we appeal to a coupling construction (the reflection coupling, described in continuous space in section 13.1.3; the discrete-space setting here is similar) which shows that continuized random walks $\widetilde X^{(m)}, \widetilde Y^{(m)}$ on $Z_m$ with $\widetilde X^{(m)}_0 = 0$ and $\widetilde Y^{(m)}_0$ distributed uniformly can be coupled so that

\widetilde Y^{(m)}_t = 0 \mbox{ on the event } \{\widetilde X^{(m)}_t = 0,\ T \leq t\}

where $T$ is the first time that $\widetilde X^{(m)}$ goes distance $\lfloor m/2 \rfloor$ from 0. And by considering the construction (13.38),

P(\widetilde X^{(m)}_t = 0) \leq P(\widetilde X_t = 0) + P(\widetilde X^{(m)}_t = 0,\ T \leq t)

and (13.41) follows, since $P(\widetilde Y^{(m)}_t = 0) = 1/m$.
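Inequality (13.41) can also be checked numerically by folding the law of $\widetilde X_t$ (a difference of two independent Poisson($t/2$) counts) onto $Z_m$ via (13.38); a sketch, with arbitrary choices of $m$, the times $t$, and the truncation levels:

```python
import math

def p_line(j, t, terms=400):
    # P(X~_t = j) on Z: P(N+ - N- = j) for independent Poisson(t/2)
    # counts N+, N-; each term evaluated in log scale
    j = abs(j)
    return sum(math.exp(-t + (2 * n + j) * math.log(t / 2)
                        - math.lgamma(n + 1) - math.lgamma(n + j + 1))
               for n in range(terms))

def p_torus(m, t):
    # fold the Z walk onto Z_m, as in the construction (13.38)
    return sum(p_line(k * m, t) for k in range(-30, 31))

for t in [1.0, 5.0, 25.0, 125.0]:
    m = 10
    # (13.41): p~^(m)(t) <= 1/m + p~(t)
    assert p_torus(m, t) <= 1 / m + p_line(0, t) + 1e-12
```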

Since the $d$-dimensional probabilities relate to the one-dimensional probabilities via $P_0(\widetilde X^{(m)}_t = 0) = (\tilde p^{(m)}(t/d))^d$, and similarly on the infinite lattice, we can use inequality (13.41) to bound the integrand in (13.40) as follows.

P_0(\widetilde X^{(m)}_t = 0) - m^{-d} - P_0(\widetilde X_t = 0)
 \leq \left( \frac{1}{m} + \tilde p(t/d) \right)^d - m^{-d} - (\tilde p(t/d))^d
 = \sum_{j=1}^{d-1} \binom{d}{j} (\tilde p(t/d))^j \left( \frac{1}{m} \right)^{d-j}
 = \frac{\tilde p(t/d)}{m} \sum_{j=1}^{d-1} \binom{d}{j} (\tilde p(t/d))^{j-1} \left( \frac{1}{m} \right)^{d-1-j}
 \leq \frac{\tilde p(t/d)}{m} \sum_{j=1}^{d-1} \binom{d}{j} \left[ \max\left( \tilde p(t/d), \frac{1}{m} \right) \right]^{d-2}
 = (2^d - 2)\, \frac{\tilde p(t/d)}{m} \left[ \max\left( \tilde p(t/d), \frac{1}{m} \right) \right]^{d-2}
 \leq (2^d - 2)\, \frac{\tilde p(t/d)}{m} \left[ (\tilde p(t/d))^{d-2} + \left( \frac{1}{m} \right)^{d-2} \right]
 = (2^d - 2) \left[ \frac{(\tilde p(t/d))^{d-1}}{m} + \frac{\tilde p(t/d)}{m^{d-1}} \right].

The fact (13.24) that $\tilde p(t) = \Theta(t^{-1/2})$ for large $t$ easily implies that the integral in (13.40) over $0 \leq t \leq m^3$ tends to zero. But by (13.33) and submultiplicativity of $\bar d(t)$,

0 \leq P_0(\widetilde X^{(m)}_t = 0) - m^{-d} \leq d(t) \leq \bar d(t) \leq B_1 \exp\left( -\frac{t}{B_2 m^2} \right) \qquad (13.42)

where $B_1, B_2$ depend only on $d$. This easily implies that the integral in (13.40) over $m^3 \leq t < \infty$ tends to zero, completing the proof of (13.37).

In the case $d=2$, we fix $b>0$ and truncate the integral in (13.39) at $bm^2$ to get

m^{-2}\tau_0^{(m)} \geq -b + \int_0^{bm^2} P_0(\widetilde X_t = 0)\, dt
 = -b + (1+o(1))\,\pi^{-1}\log(bm^2) \quad \mbox{by (13.30)}
 = (1+o(1))\,\frac{2}{\pi}\log m.

Thus

\tau_0^{(m)} \geq (1+o(1))\,\frac{2}{\pi}\, m^2 \log m.

For the corresponding upper bound, since $\int_0^{m^2} P_0(\widetilde X_t = 0)\, dt \sim \frac{2}{\pi}\log m$ by (13.30), and $m^{-2}\tau_0^{(m)} = \int_0^\infty \left( P_0(\widetilde X^{(m)}_t = 0) - m^{-2} \right) dt$, it suffices to show that

\int_0^\infty \left( P_0(\widetilde X^{(m)}_t = 0) - m^{-2} - P_0(\widetilde X_t = 0) \right)^+ dt + \int_{m^2}^\infty \left( P_0(\widetilde X^{(m)}_t = 0) - m^{-2} \right)^+ dt = o(\log m). \qquad (13.43)

To bound the first of these two integrals, we observe from (13.41) that $P_0(\widetilde X^{(m)}_t = 0) \leq (m^{-1} + \tilde p(t/2))^2$, and so the integrand is bounded by $\frac{2}{m}\tilde p(t/2)$. Using (13.24), the first integral is $O(1) = o(\log m)$. To analyze the second integral in (13.43) we consider separately the ranges $m^2 \leq t \leq m^2\log^{3/2} m$ and $m^2\log^{3/2} m \leq t < \infty$. Over the first range, we again use (13.41) to bound the integrand by $\frac{2}{m}\tilde p(t/2) + (\tilde p(t/2))^2$. Again using (13.24), the integral is bounded by

(1+o(1))\,\frac{2}{\pi^{1/2} m} \int_{m^2}^{m^2\log^{3/2} m} t^{-1/2}\, dt + (1+o(1))\,\pi^{-1} \int_{m^2}^{m^2\log^{3/2} m} t^{-1}\, dt
 = \Theta(\log^{3/4} m) + \Theta(\log\log m) = o(\log m).

To bound the integral over the second range, we use (13.42) and find

\int_{m^2\log^{3/2} m}^\infty \left( P_0(\widetilde X^{(m)}_t = 0) - m^{-2} \right) dt \leq B_1 B_2 m^2 \exp\left( -\frac{\log^{3/2} m}{B_2} \right) = o(1) = o(\log m).


13.2.6 The infinite degree-$r$ tree

Fix $r\geq 3$ and write ${\bf T}^r$ for the infinite tree of degree $r$. We picture ${\bf T}^r$ as a “family tree”, where the root $\phi$ has $r$ children, and each other vertex has one parent and $r-1$ children. Since ${\bf T}^r$ is a vertex-transitive graph (recall Chapter 7 section 1.1; for $r$ even, ${\bf T}^r$ is the Cayley graph of the free group on $r/2$ generators), one can study many more general “random flights” on ${\bf T}^r$ (see Notes), but we shall consider only the simple random walk $(X_t)$.

We can get some information about the walk without resorting to calculations. The “depth” process $d(X_t, \phi)$ is clearly the “reflecting asymmetric random walk” on $Z^+ := \{0,1,2,\ldots\}$ with

p_{0,1} = 1; \qquad p_{i,i-1} = 1/r; \qquad p_{i,i+1} = (r-1)/r, \quad i\geq 1.

By comparison with asymmetric random walk on all of $Z$, which has drift $(r-2)/r$, we see that

t^{-1} d(X_t, \phi) \rightarrow \frac{r-2}{r} \mbox{ a.s. as } t\rightarrow\infty. \qquad (13.44)
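A quick simulation of the depth chain illustrating the drift (13.44), here for $r=3$ (step count and seed are arbitrary choices):

```python
import random

# The depth process d(X_t, phi) on T^r: from 0 it must step to 1;
# otherwise it steps away from the root w.p. (r-1)/r and towards it
# w.p. 1/r. Its linear growth rate should be (r-2)/r, by (13.44).
random.seed(1)
r, steps, depth = 3, 200_000, 0
for _ in range(steps):
    if depth == 0:
        depth = 1
    elif random.random() < (r - 1) / r:
        depth += 1
    else:
        depth -= 1
assert abs(depth / steps - (r - 2) / r) < 0.02   # (r-2)/r = 1/3
```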

In particular, the number of returns to $\phi$ is finite and so the walk is transient. Now consider the return probability $\rho = P_\phi(T^+_\phi < \infty)$ and note that (by considering the first step) $\rho = P_\phi(T_c < \infty)$ where $c$ is a child of $\phi$. Considering the first two steps, we obtain the equation $\rho = \frac{1}{r} + \frac{r-1}{r}\rho^2$, and since by transience $\rho < 1$, we see that

\rho := P_\phi(T^+_\phi < \infty) = P_\phi(T_c < \infty) = \frac{1}{r-1}. \qquad (13.45)

Thus

E_\phi N_\phi(\infty) = \frac{1}{1-\rho} = \frac{r-1}{r-2}. \qquad (13.46)

As at (13.32), $\rho$ has a sample-path interpretation: the number $V_t$ of distinct vertices visited by the walk before time $t$ satisfies

t^{-1} V_t \rightarrow 1 - \rho = \frac{r-2}{r-1} \mbox{ a.s. as } t\rightarrow\infty.

By transience, amongst the children of $\phi$ there is some vertex $L_1$ which is visited last by the walk; then amongst the children of $L_1$ there is some vertex $L_2$ which is visited last by the walk; and so on, to define a “path to infinity” $\phi = L_0, L_1, L_2, \ldots$. By symmetry, given $L_1, L_2, \ldots, L_{i-1}$ the conditional distribution of $L_i$ is uniform over the children of $L_{i-1}$, so in the natural sense we can describe $(L_i)$ as the uniform random path to infinity.

13.2.7 Generating function arguments

While the general qualitative behavior of random walk on ${\bf T}^r$ is clear from the arguments above, more precise quantitative estimates are most naturally obtained via generating function arguments. For any state $i$ of a Markov chain, the generating functions $G_i(z) := \sum_{t=0}^\infty P_i(X_t = i) z^t$ and $F_i(z) := \sum_{t=1}^\infty P_i(T^+_i = t) z^t$ are related by

G_i = 1 + F_i G_i \qquad (13.47)

(this is a small variation on Chapter 2 Lemma 19). Consider simple symmetric reflecting random walk on $Z^+$. Clearly

G_0(z) = \sum_{t=0}^\infty \binom{2t}{t} 2^{-2t} z^{2t} = (1-z^2)^{-1/2},

the latter identity being the series expansion of $(1-x)^{-1/2}$. So by (13.47)

F_0(z) := \sum_{t=1}^\infty P_0(T^+_0 = 2t) z^{2t} = 1 - (1-z^2)^{1/2}.

Consider an excursion of length $2t$, that is, a path $(0 = i_0, i_1, \ldots, i_{2t-1}, i_{2t} = 0)$ with $i_j > 0$ for $1 \leq j \leq 2t-1$. This excursion has chance $2^{1-2t}$ for the symmetric walk on $Z^+$, and has chance $((r-1)/r)^{t-1}(1/r)^t$ for the asymmetric walk $d(X_t, \phi)$. So

\frac{P_\phi(T^+_\phi = 2t)}{P_0(T^+_0 = 2t)} = \frac{r}{2(r-1)} \left( \frac{4(r-1)}{r^2} \right)^t

where the numerator refers to simple RW on the tree, and the denominator refers to simple symmetric reflecting RW on $Z^+$. So on the tree,

F_\phi(z) = \frac{r}{2(r-1)} F_0\left( z \sqrt{\frac{4(r-1)}{r^2}} \right) = \frac{r}{2(r-1)} \left( 1 - \left( 1 - \frac{4(r-1)z^2}{r^2} \right)^{1/2} \right).

Then (13.47) gives an expression for $G_\phi(z)$ which simplifies to

G_\phi(z) = \frac{2(r-1)}{r - 2 + \sqrt{r^2 - 4(r-1)z^2}}. \qquad (13.48)

In particular, $G_\phi$ has radius of convergence $1/\beta$, where

\beta = 2r^{-1}\sqrt{r-1} < 1. \qquad (13.49)
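The coefficients of $G_\phi$ are the return probabilities $P_\phi(X_t = \phi)$, which can also be computed by dynamic programming on the depth chain; a sketch for $r=3$ checking the total mass (13.46) and the geometric rate $\beta$ of (13.49) (the horizon $T=400$ is an arbitrary choice):

```python
# Dynamic programming for P_phi(X_t = phi) on T^r via the depth chain,
# checking E_phi N_phi(infty) = (r-1)/(r-2) and decay rate beta.
r = 3
T = 400
dist = [1.0] + [0.0] * (T + 1)   # distribution of the depth process
p = []                           # p[t] = P_phi(X_t = phi)
for t in range(T + 1):
    p.append(dist[0])
    new = [0.0] * (T + 2)
    new[1] += dist[0]            # forced step away from the root
    for i in range(1, T + 1):
        new[i + 1] += (r - 1) / r * dist[i]
        new[i - 1] += dist[i] / r
    dist = new
    # mass beyond depth T+1 is dropped; it cannot return by time T

beta = 2 * (r - 1) ** 0.5 / r
assert abs(sum(p) - (r - 1) / (r - 2)) < 1e-3   # = 2 for r = 3
assert abs(p[400] / p[398] - beta ** 2) < 0.02  # rate beta, t even
```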

Without going into details, one can now use standard Tauberian arguments to show

P_\phi(X_t = \phi) \sim \alpha t^{-3/2} \beta^t, \quad t \mbox{ even} \qquad (13.50)

for a computable constant $\alpha$, and this format (for different values of $\alpha$ and $\beta$) remains true for more general radially symmetric random flights on ${\bf T}^r$ ([339] Theorem 19.30). One can also in principle expand (13.48) as a power series to obtain $P_\phi(X_t = \phi)$. Again we shall not give details, but according to Giacometti [164] one obtains

P_\phi(X_t = \phi) = \frac{r-1}{r} \left( \frac{\sqrt{r-1}}{r} \right)^t \frac{\Gamma(1+t)}{\Gamma(2+t/2)\Gamma(1+t/2)} \; {}_2F_1\!\left( \frac{t+1}{2}, 1, 2+\frac{t}{2}; \frac{4(r-1)}{r^2} \right), \quad t \mbox{ even} \qquad (13.51)

where ${}_2F_1$ is the Gauss hypergeometric function.

Finally, the $\beta$ at (13.49) can be interpreted as an eigenvalue of the infinite transition matrix $(p_{ij})$, so we anticipate a corresponding eigenfunction $f_2$ with

\sum_j p_{ij} f_2(j) = \beta f_2(i) \ \forall i, \qquad (13.52)

and one can verify this holds for

f_2(i) := \left( 1 + \frac{r-2}{r}\, i \right) (r-1)^{-i/2}. \qquad (13.53)
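In depth coordinates, (13.52) amounts to a two-term recurrence; a sketch verifying it for $r=5$ (the choice of $r$ and the depth range are arbitrary):

```python
# Check that f_2 of (13.53) is a beta-eigenfunction (13.52) for the
# radial part of the tree walk: from depth i >= 1 the walk moves to the
# parent w.p. 1/r and to a child w.p. (r-1)/r; from depth 0, to depth 1.
r = 5
beta = 2 * (r - 1) ** 0.5 / r

def f2(i):
    return (1 + (r - 2) / r * i) * (r - 1) ** (-i / 2)

assert abs(f2(1) - beta * f2(0)) < 1e-12        # root case p_{0,1} = 1
for i in range(1, 40):
    lhs = f2(i - 1) / r + (r - 1) / r * f2(i + 1)
    assert abs(lhs - beta * f2(i)) < 1e-12
```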

13.2.8 Comparison arguments

Fix $r\geq 3$ and consider a sequence $(G_n)$ of $n$-vertex $r$-regular graphs with $n\rightarrow\infty$. Write $(X^n_t)$ for the random walk on $G_n$. We can compare these random walks with the random walk $(X^\infty_t)$ on ${\bf T}^r$ via the obvious inequality

P_v(X^n_t = v) \geq P_\phi(X^\infty_t = \phi), \quad t\geq 0. \qquad (13.54)

To spell this out, there is a universal cover map $\gamma: {\bf T}^r \rightarrow G_n$ with $\gamma(\phi) = v$ and such that for each vertex $w$ of ${\bf T}^r$ the $r$ edges at $w$ are mapped to the $r$ edges of $G_n$ at $\gamma(w)$. Given the random walk $X^\infty$ on ${\bf T}^r$, the definition $X^n_t = \gamma(X^\infty_t)$ constructs random walk on $G_n$, and (13.54) holds because $\{X^n_t = v\} \supseteq \{X^\infty_t = \phi\}$.
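A concrete instance of (13.54): the complete graph $K_4$ is 3-regular, hence covered by ${\bf T}^3$, and its return probabilities dominate those on the tree. A numerical sketch (horizon 40 is arbitrary; the $K_4$ side uses its vertex symmetry to track a single probability):

```python
# Compare return probabilities on K4 (3-regular) with those on T^3,
# illustrating inequality (13.54).
r, T = 3, 40

# Tree side: dynamic programming on the depth chain of T^3.
dist = [1.0] + [0.0] * (T + 1)
tree_p = []
for t in range(T + 1):
    tree_p.append(dist[0])
    new = [0.0] * (T + 2)
    new[1] += dist[0]                  # forced step away from the root
    for i in range(1, T + 1):
        new[i + 1] += (r - 1) / r * dist[i]
        new[i - 1] += dist[i] / r
    dist = new

# K4 side: by symmetry, q_{t+1} = P(not at v at time t) * 1/3.
q, qt = 1.0, []
for t in range(T + 1):
    qt.append(q)
    q = (1 - q) / 3

assert all(qt[t] >= tree_p[t] for t in range(T + 1))
```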

It is easy to use (13.54) to obtain asymptotic lower bounds on the fundamental parameters discussed in Chapter 4. Instead of the relaxation time $\tau_2$, it is more natural here to deal directly with the second eigenvalue $\lambda_2$.

Lemma 13.9

For random walk on $n$-vertex $r$-regular graphs, with $r\geq 3$ fixed and $n\rightarrow\infty$:

(a) $\liminf n^{-1}\tau_0(n) \geq \frac{r-1}{r-2}$;

(b) $\liminf \frac{\tau_1(n)}{\log n} \geq \frac{r}{(r-2)\log(r-1)}$;

(c) $\liminf \lambda_2(n) \geq \beta := 2r^{-1}\sqrt{r-1}$.

Theory concerning expanders (Chapter 9 section 1) shows there exist graphs where the limits above are finite constants (depending on $r$), so Lemma 13.9 gives the optimal order of magnitude bound.

Proof. For (a), switch to the continuous-time walk, consider an arbitrary vertex $v$ in $G_n$, and take $t_0(n)\rightarrow\infty$ with $t_0(n)/n\rightarrow 0$. Then we repeat the argument around (13.39) in the torus setting:

n^{-1} E_\pi T_v = \int_0^\infty \left( P_v(X^n_t = v) - \frac{1}{n} \right) dt
 \geq \int_0^{t_0} \left( P_v(X^n_t = v) - \frac{1}{n} \right) dt
 \geq -\frac{t_0}{n} + \int_0^{t_0} P_\phi(X^\infty_t = \phi)\, dt \quad \mbox{by (13.54)}
 \rightarrow \int_0^\infty P_\phi(X^\infty_t = \phi)\, dt = E_\phi N_\phi(\infty) = \frac{r-1}{r-2},

which is somewhat stronger than assertion (a). Next, the discrete-time spectral representation implies, for even $t$,

P_v(X^n_t = v) \leq \frac{1}{n} + n\,\lambda_2^t(n).

Using (13.54) and (13.50), for any $n\rightarrow\infty$, $t\rightarrow\infty$ with $t$ even,

t^{-3/2}\beta^t(\alpha - o(1)) \leq \frac{1}{n} + n\,\lambda_2^t(n), \qquad (13.55)

and (c) follows by choosing $t = t(n)\rightarrow\infty$ suitably.

For (b), the argument for (13.54) gives a coupling between the process XnX^{n} started at vv and the process XX^{\infty} started at ϕ\phi such that

dn(Xtn,v)d(Xt,ϕ)d^{n}(X^{n}_{t},v)\leq d^{\infty}(X^{\infty}_{t},\phi)

where dnd^{n} and dd^{\infty} denote graph distance. Fix ε>0\varepsilon>0 and write γ=r-2r+ε\gamma=\frac{r-2}{r}+\varepsilon. By the coupling and (13.44), P(dn(Xtn,v)γt)0P(d^{n}(X^{n}_{t},v)\geq\gamma t)\rightarrow 0 as n,tn,t\rightarrow\infty. This remains true in continuous time. Clearly τ1(n)\tau_{1}(n)\to\infty, and so by definition of τ1\tau_{1} we have

lim supπ{w:dn(w,v)γτ1(n)}e-1.\limsup\pi\{w:d^{n}(w,v)\geq\gamma\tau_{1}(n)\}\leq e^{-1}.

But by counting vertices,

π{w:dn(v,w)d}\displaystyle\pi\{w:d^{n}(v,w)\leq d\} \displaystyle\leq 1+r+r(r-1)++r(r-1)d-1n\displaystyle\frac{1+r+r(r-1)+\cdots+r(r-1)^{d-1}}{n}
\displaystyle\rightarrow 0 if d(1-ε)lognlog(r-1).\displaystyle 0\mbox{ if }d\sim(1-\varepsilon)\frac{\log n}{\log(r-1)}.

For these two limit results to be consistent we must have γτ1(n)(1-ε)lognlog(r-1)\gamma\tau_{1}(n)\geq(1-\varepsilon)\frac{\log n}{\log(r-1)} for all large nn, establishing (b).

For (c), fix a vertex v_{0} of G_{n} and use the function f_{2} at (13.53) to define f(v):=f_{2}(d(v,v_{0})) for all vertices v of G_{n}. The equality (13.52) for f_{2} on the infinite tree easily implies the inequality {\bf P}f\geq\beta f on G_{n}. Set \bar{f}:=n^{-1}\sum_{v}f(v) and write 1 for the unit function. By the Rayleigh-Ritz characterization (Chapter 4 eq. (73)), writing \langle g,h\rangle:=\sum_{i}\pi_{i}g_{i}h_{i},

\lambda_{2}(n)\displaystyle\geq \displaystyle\frac{\langle f-\bar{f}1,{\bf P}(f-\bar{f}1)\rangle}{||f-\bar{f}1||_{2}^{2}}
\displaystyle= \displaystyle\frac{\langle f,{\bf P}f\rangle-\bar{f}^{2}}{||f||_{2}^{2}-\bar{f}^{2}}
\displaystyle\geq \displaystyle\frac{\beta||f||_{2}^{2}-\bar{f}^{2}}{||f||_{2}^{2}-\bar{f}^{2}}.

As nn\to\infty we have f¯0\bar{f}\to 0 while ||f||2||f||_{2} tends to a non-zero limit, establishing (c).
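The constant in (a) is elementary: on {\bf T}^{r} the walk returns to its starting vertex with probability 1/(r-1), so the number of visits is geometric with mean (r-1)/(r-2). The following Monte Carlo sketch (our own check, not from the text; the truncation level 60, trial count, and function name are arbitrary choices) estimates E_{\phi}N_{\phi}(\infty) by simulating the distance-from-start chain, which from any distance d\geq 1 moves +1 with probability (r-1)/r and -1 with probability 1/r:

```python
import random

def mean_visits_to_start(r, trials=10000, far=60, seed=1):
    """Estimate E_phi N_phi(infinity) for discrete-time SRW on the
    infinite r-regular tree, via the distance-from-start birth-death
    chain on Z_+.  Returns to distance 0 after first exceeding `far`
    have probability (r-1)^(-far), so we stop there."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        dist, visits = 0, 1          # count the visit at time 0
        while dist <= far:
            if dist == 0:
                dist = 1             # from phi the walk must move away
            elif rng.random() < (r - 1) / r:
                dist += 1
            else:
                dist -= 1
                visits += (dist == 0)
        total += visits
    return total / trials

r = 3
est = mean_visits_to_start(r)
exact = (r - 1) / (r - 2)            # = 2 for r = 3
print(est, exact)
```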

13.2.9 The hierarchical tree

Fix r2r\geq 2. There is an infinite tree (illustrated for r=2r=2 in the figure) specified as follows. Each vertex is at some height 0,1,2,0,1,2,\ldots. A vertex at height hh has one parent vertex at height h+1h+1 and (if h1h\geq 1) rr child vertices at height h-1h-1. The height-00 vertices are leaves, and the set LL of leaves has a natural labeling by finite rr-ary strings. The figure illustrates the binary (r=2r=2) case, where L={0,1,10,11,100,101,}L=\{0,1,10,11,100,101,\ldots\}. LL forms an Abelian group under entrywise addition modulo rr, e.g. for r=2r=2 we have 1101+110=1101+0110=10111101+110=1101+0110=1011. Adopting a name used for generalizations of this construction in statistical physics, we call LL the hierarchical lattice and the tree 𝐓hierr{\bf T}^{r}_{\rm hier} the hierarchical tree.
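In code the group operation is just entrywise addition mod r after right-aligning the strings; this small sketch (the helper name hadd is ours) reproduces the binary example above:

```python
def hadd(x, y, r=2):
    """Entrywise addition mod r of two base-r strings (leaves of the
    hierarchical lattice), aligned on the right; leading zeros are
    stripped to recover the canonical label."""
    w = max(len(x), len(y))
    x, y = x.zfill(w), y.zfill(w)
    s = ''.join(str((int(a) + int(b)) % r) for a, b in zip(x, y))
    return s.lstrip('0') or '0'

print(hadd('1101', '110'))   # -> 1011, as in the text
```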


Fix a parameter 0<\lambda<r. Consider biased random walk X_{t} on the tree {\bf T}^{r}_{\rm hier}, where from each non-leaf vertex the transition goes to the parent with probability \lambda/(\lambda+r) and to each child with probability 1/(\lambda+r), and from each leaf the transition goes to the parent. Then consider Y= “X watched only on L”, that is the sequence of (not-necessarily distinct) successive leaves visited by X. The group L is distance-transitive (for Hamming distance on L) and Y is a certain isotropic random flight on L. A nice feature of this example is that without calculation we can see that Y is recurrent if and only if \lambda\leq 1. Consider the path of ancestors of the leaf 0. The chain X must spend an infinite total time on that path, because each excursion off the path is an excursion into a finite side-branch. Watched on that path, X behaves as asymmetric simple random walk on Z_{+} with up-probability \lambda/(\lambda+1), which is recurrent if and only if \lambda\leq 1; so X, and hence Y, visits 0 infinitely often if and only if \lambda\leq 1.

Another nice feature is that we can give a fairly explicit expression for the tt-step transition probabilities of YY. Writing HH for the maximum height reached by XX in an excursion from the leaves, then

P(H\geq h)=P_{1}(\hat{T}_{h}<\hat{T}_{0})=\frac{{\textstyle\frac{r}{\lambda}}-1}{({\textstyle\frac{r}{\lambda}})^{h}-1},\quad h\geq 1

where T^\hat{T} denotes hitting time for the height process. Writing MtM_{t} for the maximum height reached in tt excursions,

P(M_{t}<h)=(P(H<h))^{t}=\left(1-\frac{{\textstyle\frac{r}{\lambda}}-1}{({\textstyle\frac{r}{\lambda}})^{h}-1}\right)^{t}.
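The formula for P(H\geq h) is ordinary gambler's ruin for the height chain, which from any height k\geq 1 steps up with probability \lambda/(\lambda+r) and down with probability r/(\lambda+r). A Monte Carlo sketch of this (the parameter values, trial count, and function name are our arbitrary choices):

```python
import random

def p_reach_before_return(h, lam, r, trials=100000, seed=2):
    """Estimate P(H >= h) = P_1(T_h < T_0) for the height chain started
    at 1: up w.p. lam/(lam+r), down w.p. r/(lam+r)."""
    up = lam / (lam + r)
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        k = 1
        while 0 < k < h:
            k += 1 if rng.random() < up else -1
        hits += (k == h)
    return hits / trials

lam, r, h = 1.0, 2, 4
rho = r / lam
exact = (rho - 1) / (rho ** h - 1)   # closed form above; = 1/15 here
est = p_reach_before_return(h, lam, r)
print(est, exact)
```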

It is clear by symmetry that the distribution of YtY_{t} is conditionally uniform on the leaves which are descendants of the maximal-height vertex previously visited by XX. So for leaves v,xv,x with branchpoint at height dd,

Pv(Yt=x)=hdr-hP(Mt=h).P_{v}(Y_{t}=x)=\sum_{h\geq d}r^{-h}P(M_{t}=h).

Since P(Mt=h)=P(Mt<h+1)-P(Mt<h)P(M_{t}=h)=P(M_{t}<h+1)-P(M_{t}<h), we have found the “fairly explicit expression” promised above. A brief calculation gives the following time-asymptotics. Fix s>0s>0 and consider ts(rλ)jt\sim s(\frac{r}{\lambda})^{j} with jj\rightarrow\infty; then

Pv(Yt=v)r-jf(s); where P_{v}(Y_{t}=v)\sim r^{-j}f(s);\ \ \mbox{ where }
f(s)=\sum_{i=-\infty}^{\infty}r^{-i}\left(\exp(-s({\textstyle\frac{r}{\lambda}}-1)({\textstyle\frac{r}{\lambda}})^{-i-1})-\exp(-s({\textstyle\frac{r}{\lambda}}-1)({\textstyle\frac{r}{\lambda}})^{-i})\right).

In particular,

P_{v}(Y_{t}=v)=\Theta(t^{-d/2})\mbox{ as }t\rightarrow\infty,\quad d=\frac{2\log r}{\log r-\log\lambda}. (13.56)

Comparing with (13.26), this gives a sense in which YY mimics simple random walk on ZdZ^{d}, for dd defined above. Note that dd increases continuously from 00 to \infty as λ\lambda increases from 00 to rr, and that YY is recurrent if and only if d2d\leq 2.
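The displayed formulas are easy to evaluate. The sketch below (our own check; the parameter values and truncation level hmax are arbitrary) verifies the normalization \sum_{x}P_{v}(Y_{t}=x)=1, using the fact that exactly r^{d}-r^{d-1} leaves have branchpoint with v at height d\geq 1 (and one, namely v itself, at height 0), and illustrates the \Theta(t^{-d/2}) decay:

```python
import math

def M_cdf(t, h, lam, r):
    """P(M_t < h): none of t excursions reaches height h (every
    excursion reaches height 1, so P(M_t < 1) = 0)."""
    if h <= 1:
        return 0.0
    rho = r / lam
    return (1.0 - (rho - 1.0) / (rho ** h - 1.0)) ** t

def M_pmf(t, lam, r, hmax):
    """[P(M_t = h) for h = 1, ..., hmax - 1]."""
    return [M_cdf(t, h + 1, lam, r) - M_cdf(t, h, lam, r)
            for h in range(1, hmax)]

lam, r, hmax = 1.5, 2, 200
d_spec = 2 * math.log(r) / (math.log(r) - math.log(lam))  # d of (13.56)

# Normalization check at t = 1000: r^dd - r^(dd-1) leaves branch from v
# at height dd >= 1, plus v itself at height 0.
t = 1000
pmf = M_pmf(t, lam, r, hmax)
counts = [1] + [r ** dd - r ** (dd - 1) for dd in range(1, hmax)]
def p_leaf(dd):    # P_v(Y_t = x) for a leaf x with branchpoint height dd
    return sum(r ** -h * p for h, p in zip(range(1, hmax), pmf) if h >= dd)
total = sum(c * p_leaf(dd) for dd, c in enumerate(counts))

# Return probability P_v(Y_t = v), rescaled by t^{d/2}:
def p_return(t):
    return sum(r ** -h * p
               for h, p in zip(range(1, hmax), M_pmf(t, lam, r, hmax)))
ratios = [p_return(T) * T ** (d_spec / 2)
          for T in (10 ** 3, 10 ** 4, 10 ** 5, 10 ** 6)]
print(total, ratios)
```

The rescaled return probabilities stay within a bounded band, as (13.56) asserts (the band is not constant: the limit function is log-periodic in t).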

Though we don’t go into details, random walk on the hierarchical lattice is a natural infinite-state analog of biased random walk on the balanced finite tree (Chapter 5 section 2.1). In particular, results in the latter context showed that, writing n for the number of vertices, \tau_{0}(n)=O(n) if and only if \lambda>1, that is if and only if d>2. This is the condition for transience of the infinite-state walk, confirming the paradigm of section 13.2.3.

13.2.10 Towards a classification theory for sequences of finite chains

Three chapters of Woess [339] treat in detail three properties that random walk on an infinite graph may or may not possess:

  • transience

  • spectral radius <1<1

  • non-trivial boundary.

Can these be related to properties for sequences of finite chains? We already mentioned (section 13.2.3) that the property τ0(n)=O(n)\tau_{0}(n)=O(n) seems to be the analog of transience. In this speculative section we propose definitions of three other properties for sequences of finite chains, which we name

  • compactness

  • infinite-dimensionality

  • expander-like.

Future research will show whether these are useful definitions! Intuitively we expect that every reasonably “natural” sequence should fall into one of these three classes.

For simplicity we consider reversible random walks on Cayley graphs. It is also convenient to continuize. The resulting chains are special cases of (reversible) Lévy processes. We define the general Lévy process to be a continuous-time process with stationary independent increments on a (continuous or discrete) group. Thus the setting for the rest of this section is a sequence (X^{(n)}_{t}) of reversible Lévy processes on finite groups G^{(n)} of size n\to\infty through some subsequence. Because we work in continuous time, the eigenvalues satisfy 0=\lambda^{(n)}_{1}<\lambda^{(n)}_{2}\leq\cdots.

(A): Compactness. Say the sequence (Xt(n))(X^{(n)}_{t}) is compact if there exists a (discrete or continuous) compact set SS and a reversible Lévy process X~t\widetilde{X}_{t} on SS such that

(i) \tilde{d}(t)\equiv||P_{x}(\widetilde{X}_{t}\in\cdot)-\pi||\rightarrow 0 as t\rightarrow\infty (by transitivity this does not depend on the starting point x);

(ii) \frac{\lambda_{j}(n)}{\lambda_{2}(n)}\rightarrow\tilde{\lambda}_{j}\mbox{ as }n\rightarrow\infty,\ j\geq 2; where 1=\tilde{\lambda}_{2}\leq\tilde{\lambda}_{3}\leq\cdots are the eigenvalues of (\widetilde{X}_{t}), normalized so that \tilde{\lambda}_{2}=1;

(iii) d^{(n)}(t\ \tau_{2}(n))\rightarrow\tilde{d}(t)\mbox{ as }n\rightarrow\infty;\ t>0.

These properties formalize the idea that the sequence of random walks forms discrete approximations to a limit Lévy process on a compact group, at least as far as mixing times are concerned. Simple random walk on Z^{d}_{m}, with limit Brownian motion on the torus R^{d}/Z^{d} (section 13.1.2), forms the obvious example. Properties (i) and (iii) imply, in particular, that

τ1(n)/τ2(n) is bounded as n.\tau_{1}(n)/\tau_{2}(n)\mbox{ is bounded as }n\rightarrow\infty. (13.57)
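For the d=1 case of that example — continuized simple random walk on the m-cycle, whose generator has eigenvalue gaps 1-\cos(2\pi k/m) — the ratio condition (ii) can be checked directly: the ratios converge to 1,1,4,4,9,9,\ldots, the normalized eigenvalues of Brownian motion on the circle, which come in cosine/sine pairs. A sketch of the computation (function name ours):

```python
import math

def gap_ratios(m, count=6):
    """Eigenvalue gaps 1 - cos(2*pi*k/m), k = 1..m-1, of the generator
    of continuized simple random walk on the m-cycle, sorted
    increasingly and normalized by the spectral gap (the k = 1, m-1
    pair)."""
    gaps = sorted(1 - math.cos(2 * math.pi * k / m) for k in range(1, m))
    return [g / gaps[0] for g in gaps[:count]]

print(gap_ratios(5000))   # approaches 1, 1, 4, 4, 9, 9, ...
```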

One might hope that a converse is true:

Does every sequence satisfying (13.57) have a compact subsequence?

Unfortunately, we are convinced that the answer is “no”, for the following reason. Take a compact sequence (X^{(n)}_{t}), whose limit Lévy process has total variation function \tilde{d}(t) as at (i). Now consider a product chain (X^{(n)}_{t},Y^{(n)}_{t}), where the components run independently, and where Y^{(n)} has the cut-off property (Chapter 7) and \tau^{Y}_{1}(n)\sim\tau_{2}^{X}(n). Note that by Chapter 7-1 Lemma 1 we have \tau^{Y}_{2}(n)=o(\tau^{Y}_{1}(n)). If the product chain had a subsequential limit, then its total variation function at (i), say d^{\prime}(t), must satisfy

d(t)\displaystyle d^{\prime}(t) =\displaystyle= d~(t), t>1\displaystyle\tilde{d}(t),\quad t>1
=\displaystyle 1,\quad t<1.

But it seems intuitively clear (though we do not know a proof) that every Lévy process on a compact set has continuous d()d(\cdot). This suggests the following conjecture.

Conjecture 13.10

For any sequence of reversible Lévy processes satisfying (13.57), there exists a subsequence satisfying the definition of compact except that condition (iii) is replaced by

(iv): t00:\exists t_{0}\geq 0:

d(n)(tτ2(n))\displaystyle d^{(n)}(t\ \tau_{2}(n)) \displaystyle\to 1; t<t0\displaystyle 1;\quad t<t_{0}
\displaystyle\to d~(t); t>t0.\displaystyle\tilde{d}(t);\quad t>t_{0}.

Before describing the other two classes of chains, we need a definition and some motivating background. In the present setting, the property “trivial boundary” is equivalent (see Notes) to the property

limt||Pv(Xt)-Pw(Xt)||=0,v,w.\lim_{t\to\infty}||P_{v}(X_{t}\in\cdot)-P_{w}(X_{t}\in\cdot)||=0,\quad\forall v% ,w. (13.58)

This suggests that an analogous finite-state property might involve whether the variation distance for nearby starts becomes small before time τ1\tau_{1}. Say that a sequence (Ln(ε))(L_{n}(\varepsilon)) of subsets is an asymptotic ε\varepsilon-neighborhood  if

||P_{\phi}(X_{\varepsilon\tau_{1}}\in\cdot)-P_{v}(X_{\varepsilon\tau_{1}}\in\cdot)||\rightarrow 0\mbox{ as }n\rightarrow\infty

uniformly over vLn(ε)v\in L_{n}(\varepsilon); here ϕ\phi is an arbitrary reference vertex. From Chapter 7-1 Lemma 1(b) we can deduce that, if the cut-off property holds, such a neighborhood must have size |Ln(ε)|=o(n)|L_{n}(\varepsilon)|=o(n).

(B): Infinite dimensional. Say the sequence (Xt(n))(X^{(n)}_{t}) is infinite-dimensional if the following three properties hold.

(i) \tau_{1}(n)=\Theta(\tau_{2}(n)\log\log n)

(ii) The cut-off property holds

(iii) There exists some \delta(\varepsilon), increasing from 0 to 1 as \varepsilon increases from 0 to 1, such that a maximal-size asymptotic \varepsilon-neighborhood (L_{n}(\varepsilon)) has

\log|L_{n}(\varepsilon)|=(\log n)^{\delta(\varepsilon)+o(1)}\mbox{ as }n\rightarrow\infty.

This definition is an attempt to abstract the essential properties of random walk on the d-cube (Chapter 5 Example 15), where properties (i) and (ii) were already shown. We outline below a proof of property (iii) in that example. Another fundamental example where (i) and (ii) hold is card-shuffling by random transpositions (Chapter 7 Example 18), and we conjecture that property (iii) also holds there. Conceptually, this class of infinite-dimensional sequences is intended (cf. (13.58)) as the analog of a single random walk with trivial boundary on an infinite-dimensional graph.

Property (iii) for the dd-cube. Let (X(t))(X(t)) be continuous-time random walk on the dd-cube, and (X*(t))(X^{*}(t)) continuous-time random walk on the bb-cube, where bdb\leq d. The natural coupling shows

\mbox{if }d(v,w)=b\mbox{ then }||P_{v}(X(t)\in\cdot)-P_{w}(X(t)\in\cdot)||=||P_{\bf 0}(X^{*}(tb/d)\in\cdot)-P_{\bf 1}(X^{*}(tb/d)\in\cdot)||.

Take dd\to\infty with

b(d)dα, t(d)14εdlogdb(d)\sim d^{\alpha},\quad t(d)\sim{\textstyle\frac{1}{4}}\varepsilon d\log d

for some 0<α,ε<10<\alpha,\varepsilon<1, so that

\lim_{d}\frac{t(d)b(d)/d}{{\textstyle\frac{1}{4}}b(d)\log b(d)}={\textstyle\frac{\varepsilon}{\alpha}}.

Since the variation cut-off for the bb-cube is at 14blogb\frac{1}{4}b\log b, we see that for vertices vv and ww at distance b(d)b(d),

||Pv(X(t(d)))-Pw(X(t(d)))||\displaystyle||P_{v}(X(t(d))\in\cdot)-P_{w}(X(t(d))\in\cdot)|| \displaystyle\to 1, ε>α\displaystyle 1,\quad\varepsilon>\alpha
\displaystyle\to 0, ε<α.\displaystyle 0,\quad\varepsilon<\alpha.

So a maximal-size asymptotic ε\varepsilon-neighborhood (Ln(ε))(L_{n}(\varepsilon)) of 𝟎{\bf 0} must be of the form {w:d(w,𝟎)dε+o(1)}\{w:d(w,{\bf 0})\leq d^{\varepsilon+o(1)}\}. So

\log|L_{n}(\varepsilon)|=\log{d\choose d^{\varepsilon+o(1)}}=d^{\varepsilon+o(1)}=(\log n)^{\varepsilon+o(1)}

as required.
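Because each coordinate of the continuized d-cube walk flips at rate 1/d independently, both sides of the coupling identity above reduce to an exact total variation distance between two product measures on the b disagreeing coordinates, so the identity can be checked numerically. A sketch (the parameter values and function name are our arbitrary choices):

```python
import math

def tv_opposite(b, q):
    """TV distance between the laws of b independent bits each equal to
    1 w.p. q, versus each equal to 1 w.p. 1-q (walks started at opposite
    corners of the b-cube).  Grouping states by their number of ones k
    gives the binomial sum below."""
    return 0.5 * sum(math.comb(b, k) *
                     abs(q ** k * (1 - q) ** (b - k)
                         - (1 - q) ** k * q ** (b - k))
                     for k in range(b + 1))

d, b, t = 20, 7, 5.0
# d-cube walk from v, w with d(v,w) = b: the d - b agreeing coordinates
# couple exactly, and each disagreeing coordinate has been flipped an
# odd number of times w.p. (1 - exp(-2t/d))/2.
lhs = tv_opposite(b, (1 - math.exp(-2 * t / d)) / 2)
# b-cube walk from bold-0 and bold-1, run for time s = t*b/d:
s = t * b / d
rhs = tv_opposite(b, (1 - math.exp(-2 * s / b)) / 2)
print(lhs, rhs)
```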

Finally, we want an analog of a random walk with non-trivial boundary, expressed using property (ii) below.

(C): Expander-like. Say the sequence (Xt(n))(X^{(n)}_{t}) is expander-like if

(i) \tau_{1}(n)=\Theta(\tau_{2}(n)\log n)

(ii) every asymptotic ε\varepsilon-neighborhood (Ln(ε))(L_{n}(\varepsilon)) has

log|Ln(ε)|=(logn)o(1) as n\log|L_{n}(\varepsilon)|=(\log n)^{o(1)}\mbox{ as }n\rightarrow\infty

(iii) The cut-off property holds.

Recall from Chapter 9 section 1 that, for symmetric graphs which are rr-regular expanders for fixed rr, we have τ2(n)=Θ(1)\tau_{2}(n)=\Theta(1) and τ1(n)=Θ(logn)\tau_{1}(n)=\Theta(\log n). But it is not known whether properties (ii) and (iii) always hold in this setting.