# 13.2 Infinite graphs

There is a huge research literature concerning random walks on infinite discrete groups, and more generally on infinite graphs, and the recent monograph of Woess [339] provides an in-depth treatment. This section focuses narrowly on two aspects of an issue not emphasized in [339]: what does the study of random walks on infinite graphs tell us about random walks on finite graphs? One aspect of this issue is that random walks on certain specific infinite graphs may be used to get approximations or inequalities for random walks on specific finite graphs. We treat three examples.

• The infinite lattice $Z^{d}$ as an approximation to the discrete torus $Z^{d}_{N}$ for large $N$ (section 13.2.4).

• The infinite degree-$r$ tree ${\bf T}^{r}$ and bounds for $r$-regular expander graphs of large size (section 13.2.6).

• The hierarchical tree ${\bf T}^{r}_{\rm hier}$ as an approximation to balanced $(r-1)$-ary trees (section 13.2.9).

The second aspect concerns properties such as transience, non-trivial boundary, and “spectral radius $<1$”, which have been well-studied as qualitative properties which an infinite-state chain either possesses or does not possess. What are the quantitative finite-state analogs of such properties? Here actual theorems are scarce; we present conceptual discussion in sections 13.2.3 and 13.2.10 as a spur to future research.

## 13.2.1 Set-up

We assume the reader has some acquaintance with classical theory (e.g., [133] Chapter 5) for a countable-state irreducible Markov chain, which emphasizes the trichotomy transient or null-recurrent or positive-recurrent. We use the phrase general chain to refer to the case of an arbitrary irreducible transition matrix ${\bf P}$, without any reversibility assumption.

Recall from Chapter 3 section 2 the identification, in the finite-state setting, of reversible chains and random walks on weighted graphs. Given a reversible chain we defined edge-weights $w_{ij}=\pi_{i}p_{ij}=\pi_{j}p_{ji}$; conversely, given edge-weights we defined random walk as the reversible chain

 $p_{vx}=w_{vx}/w_{v};\ w_{v}=\sum_{x}w_{vx}.$ (13.19)

In the infinite setting it is convenient (for reasons explained below) to take the “weighted graph” viewpoint. Thus the setting of this section is that we are given a connected weighted graph satisfying

 $w_{v}\equiv\sum_{x}w_{vx}<\infty\ \forall v,\ \ \ \sum_{v}w_{v}=\infty$ (13.20)

and we study the associated random walk $(X_{t})$, i.e., the discrete-time chain with $p_{vx}=w_{vx}/w_{v}$. So in the unweighted setting ($w_{e}\equiv 1$), we have nearest-neighbor random walk on a locally finite, infinite graph.

To explain why we adopt this set-up, say $\pi$ is invariant for ${\bf P}$ if

 $\sum_{i}\pi_{i}p_{ij}=\pi_{j}\ \forall j;\ \ \pi_{j}>0\ \forall j.$

Consider asymmetric random walk on $Z$, say

 $p_{i,i+1}=2/3,\ p_{i,i-1}=1/3;\ -\infty<i<\infty.$ (13.21)

One easily verifies that each of the two measures $\pi_{i}=1$ and $\pi_{i}=2^{i}$ is invariant. Such nonuniqueness makes it awkward to seek to define reversibility of ${\bf P}$ via the detailed balance equations

 $\pi_{i}p_{ij}=\pi_{j}p_{ji}\ \forall i,j$ (13.22)

without a prior definition of $\pi$. Stating definitions via weighted graphs avoids this difficulty.
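The nonuniqueness claim for (13.21) can be checked mechanically with exact rational arithmetic; the following sketch (the helper name is ours) verifies the invariance equation on a window of states.

```python
from fractions import Fraction

# Asymmetric walk (13.21): step right w.p. 2/3, left w.p. 1/3.
p_right, p_left = Fraction(2, 3), Fraction(1, 3)

def is_invariant(pi, lo=-20, hi=20):
    # Check the balance equation sum_i pi_i p_ij = pi_j on a window of states j;
    # only states j-1 and j+1 contribute to the sum.
    return all(pi(j - 1) * p_right + pi(j + 1) * p_left == pi(j)
               for j in range(lo, hi + 1))

assert is_invariant(lambda i: Fraction(1))       # pi_i = 1
assert is_invariant(lambda i: Fraction(2) ** i)  # pi_i = 2^i
```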

The second assumption in (13.20), that $\sum_{v}w_{v}=\infty$, excludes the positive-recurrent case (see Theorem 13.4 below); we set that case aside because there the questions one asks, such as whether the relaxation time $\tau_{2}$ is finite, can be analyzed by the same techniques as in the finite-state setting.

Our intuitive interpretation of “reversible” in Chapter 3 was “a movie of the chain looks the same run forwards or run backwards”. But the chain corresponding to the weighted graph with weights $w_{i,i+1}=2^{i}$, which is the chain (13.21) with $\pi_{i}=2^{i}$, has a particle moving towards $+\infty$ and so certainly doesn’t satisfy this intuitive notion. On the other hand, a probabilistic interpretation of an infinite invariant measure $\pi$ is that if we start at time $0$ with independent Poisson($\pi_{v}$) numbers of particles at vertices $v$, and let the particles move independently according to ${\bf P}$, then the particle process is stationary in time. So the detailed balance equations (13.22) correspond to the intuitive “movie” notion of reversible for the infinite particle process, rather than for a single chain.

## 13.2.2 Recurrence and Transience

The next Theorem summarizes parts of the standard theory of general chains (e.g., [133] Chapter 5). Write $\rho_{v}:=P_{v}(T^{+}_{v}<\infty)$ and let $N_{v}(\infty)$ be the total number of visits (including time 0) to $v$.

###### Theorem 13.4

For a general chain, one of the following alternatives holds.

Recurrent. $\rho_{v}=1$ and $E_{v}N_{v}(\infty)=\infty$ and $P_{v}(N_{w}(\infty)=\infty)=1$ for all $v,w$.

Transient. $\rho_{v}<1$ and $E_{v}N_{v}(\infty)<\infty$ and $P_{v}(N_{w}(\infty)<\infty)=1$ for all $v,w$.

In the recurrent case there exists an invariant measure $\pi$, unique up to constant multiples, and the chain is either

positive-recurrent: $E_{v}T_{v}^{+}<\infty\ \forall v$ and $\sum_{v}\pi_{v}<\infty$; or

null-recurrent: $E_{v}T_{v}^{+}=\infty\ \forall v$ and $\sum_{v}\pi_{v}=\infty$.

In the transient and null-recurrent cases, $P_{v}(X_{t}=w)\rightarrow 0$ as $t\rightarrow\infty$ for all $v,w$.

Specializing to random walk on a weighted graph, the measure $(w_{v})$ is invariant, and the second assumption in (13.20) implies that the walk cannot be positive-recurrent. By a natural abuse of language we call the weighted graph recurrent or transient. Because $E_{v}N_{v}(\infty)=\sum_{t}p^{(t)}_{vv}$, Theorem 13.4 contains the “classical” method to establish transience or recurrence by considering the $t\rightarrow\infty$ behavior of $p^{(t)}_{vv}$. This method works easily for random walk on $Z^{d}$ (section 13.2.4).

Some of the “electrical network” story from Chapter 3 extends immediately to the infinite setting. Recall the notion of a flow ${\bf f}$, and the net flow $f_{(x)}$ out of a vertex $x$. Say ${\bf f}$ is a unit flow from $x$ to infinity if $f_{(x)}=1$ and $f_{(v)}=0\ \forall v\neq x$. Thompson’s principle (Chapter 3 Proposition 35) extends to the infinite setting, by considering subsets $A_{n}\downarrow\emptyset$ (the empty set) with $A_{n}^{c}$ finite.

###### Theorem 13.5

Consider a weighted graph satisfying (13.20). For each $v$,

 $P_{v}(T^{+}_{v}=\infty)=\left(w_{v}\inf\left\{\sum_{e}f^{2}_{e}/w_{e}:\ {\bf f}\mbox{ a unit flow from }v\mbox{ to infinity}\right\}\right)^{-1}.$

In particular, the random walk is transient iff for some (all) $v$ there exists a unit flow ${\bf f}$ from $v$ to infinity such that $\sum_{e}f^{2}_{e}/w_{e}<\infty$.

By analogy with the finite setting, we can regard the inf as the effective resistance between $v$ and infinity, although (see section LABEL:IEN) we shall not attempt an axiomatic treatment of infinite electrical networks.

Theorem 13.5 has the following immediate corollary; of course (a) and (b) are logically equivalent.

###### Corollary 13.6

(a) If a weighted graph is recurrent, then so is any subgraph.

(b) To show that a weighted graph is transient, it suffices to find a transient subgraph.

Thus the classical fact that $Z^{2}$ is recurrent implies that a subgraph of $Z^{2}$ is recurrent, a fact which is hard to prove by bounding $t$-step transition probabilities. In the other direction, it is possible (but not trivial) to prove that $Z^{3}$ is transient by exhibiting a flow: indeed Doyle and Snell [131] construct a transient tree-like subgraph of $Z^{3}$.

Here is a different formulation of the same idea.

###### Corollary 13.7

The return probability $\rho_{v}=P_{v}(T_{v}^{+}<\infty)$ cannot increase if a new edge (not incident at $v$) is added, or the weight of an existing edge (not incident at $v$) is increased.

## 13.2.3 The finite analog of transience

Recall the mean hitting time parameter $\tau_{0}$ from Chapter 4. For a sequence of $n$-state reversible chains, consider the property

 $n^{-1}\tau_{0}(n)\mbox{ is bounded as }n\rightarrow\infty.$ (13.23)

We assert, as a conceptual paradigm, that property (13.23) is the analog of the “transient” property for a single infinite-state chain. The connection is easy to see algebraically for symmetric chains (Chapter 7), where $\tau_{0}=E_{\pi}T_{v}$ for each $v$, so that by Chapter 2 Lemma 10

 $n^{-1}\tau_{0}=z_{vv}=\sum_{t=0}^{\infty}(p_{vv}^{(t)}-n^{-1}).$

The boundedness (in $n$) of this sum is a natural analog of the transience condition

 $\sum_{t=0}^{\infty}p_{vv}^{(t)}<\infty$

for a single infinite-state chain. So in principle the methods used to determine transience or recurrence in the infinite-state case ([339] Chapter 1) should be usable to determine whether property (13.23) holds for finite families, and indeed Proposition 37 of Chapter 3 provides a tool for this purpose. In practice these extremal methods haven’t yet proved very successful; early papers [85] proved (13.23) for expanders in this way, but other methods are easier (see our proof of Chapter 9 Theorem 1). There is well-developed theory ([339] section 6) which establishes recurrence for infinite planar graphs under mild assumptions. It is natural to conjecture that under similar assumptions, a planar $n$-vertex graph has $\tau_{0}=\Theta(n\log n)$, as in the case of $Z^{2}$ in Proposition 13.8 below.

## 13.2.4 Random walk on $Z^{d}$

We consider the lattice $Z^{d}$ as an infinite $2d$-regular unweighted graph. Write $X_{t}$ for simple random walk on $Z^{d}$, and write $\widetilde{X}_{t}$ for the continuized random walk. Of course, general random flights (i.e. “random walks”, in everyone’s terminology except ours) and their numerous variations comprise a well-studied classical topic in probability theory. See Hughes [184] for a wide-ranging intermediate-level treatment, emphasizing physics applications. Our discussion here is very narrow, relating to topics treated elsewhere in this book.

To start some calculations, for $d=1$ consider

 $\tilde{p}(t)\equiv P_{0}(\widetilde{X}_{t}=0)=P(J^{+}_{t}=J^{-}_{t}),\quad J^{+}_{t},J^{-}_{t}\mbox{ independent Poisson}(t/2)\mbox{ numbers of }+1\mbox{ and }-1\mbox{ jumps},$
 $=\sum_{n=0}^{\infty}\left(\frac{e^{-t/2}(t/2)^{n}}{n!}\right)^{2}=e^{-t}I_{0}(t)$

where $I_{0}(t):=\sum_{n=0}^{\infty}\frac{(t/2)^{2n}}{(n!)^{2}}$ is the modified Bessel function of the first kind of order $0$. Now ${\rm var}\ \widetilde{X}_{t}=t$, and as a consequence of the local CLT (or by quoting asymptotics of the Bessel function $I_{0}$) we have

 $\tilde{p}(t)\sim(2\pi t)^{-1/2}\mbox{ as }t\rightarrow\infty.$ (13.24)
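One can check (13.24) numerically by summing the Poisson series for $\tilde{p}(t)$ in log space to avoid overflow; a quick sketch (the series cutoff is our own choice):

```python
import math

def p_tilde(t):
    # p~(t) = P_0(X~_t = 0) = sum_n (e^{-t/2} (t/2)^n / n!)^2, summed in log
    # space; the cutoff (mean plus many standard deviations) is ours.
    terms = int(t) + 40 * (int(math.sqrt(t)) + 1)
    return sum(math.exp(2 * (-t / 2 + n * math.log(t / 2) - math.lgamma(n + 1)))
               for n in range(terms))

t = 400.0
# Relative error of the asymptotic (2 pi t)^{-1/2} is O(1/t)
assert abs(p_tilde(t) * math.sqrt(2 * math.pi * t) - 1) < 0.01
```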

As discussed in Chapter 4 section 6.2 and Chapter 5 Example 17, a great advantage of working in continuous time in dimensions $d\geq 2$ is that the coordinate processes are independent copies of slowed-down one-dimensional processes, so that $\tilde{p}^{(d)}(t)\equiv P_{0}(\widetilde{X}_{t}=0)$ in dimension $d$ satisfies

 $\tilde{p}^{(d)}(t)=(\tilde{p}(t/d))^{d}=e^{-t}(I_{0}(t/d))^{d}.$ (13.25)

In particular, from (13.24),

 $\tilde{p}^{(d)}(t)\sim({\textstyle\frac{d}{2\pi}})^{d/2}t^{-d/2}\mbox{ as }t\rightarrow\infty.$ (13.26)

One can do a similar analysis in the discrete time case. In dimension $d=1$,

 $p(t)\equiv P_{0}(X_{t}=0)=2^{-t}{t\choose t/2}\sim\left({\textstyle\frac{2}{\pi t}}\right)^{1/2},\ t\mbox{ even},\ t\rightarrow\infty.$ (13.27)

This agrees with (13.26) but with an extra “artificial” factor of $2$ arising from periodicity. A more tedious argument gives the analog of (13.26) in discrete time for general $d$:

 $p^{(d)}(t)\sim 2({\textstyle\frac{d}{2\pi}})^{d/2}t^{-d/2}\mbox{ as even }t\rightarrow\infty.$ (13.28)

From the viewpoint of classical probability, one can regard (13.26,13.28) as the special case $j=0$ of the local CLT: in continuous time in dimension $d$,

 $\sup_{j}\left|P_{0}(\widetilde{X}_{t}=j)-({\textstyle\frac{d}{2\pi}})^{d/2}t^{-d/2}\exp(-d|j|^{2}/(2t))\right|=o(t^{-d/2})\mbox{ as }t\rightarrow\infty$

where $|j|$ denotes Euclidean norm.
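The factor-2 periodicity effect in (13.27) is easy to check numerically via log-gamma (the time $t=10^{6}$ is an arbitrary choice):

```python
import math

def p_discrete(t):
    # (13.27): P_0(X_t = 0) = 2^{-t} C(t, t/2) for even t, via log-gamma
    return math.exp(math.lgamma(t + 1) - 2 * math.lgamma(t // 2 + 1)
                    - t * math.log(2))

t = 10**6
# The ratio to the continuous-time rate (2 pi t)^{-1/2} tends to 2, not 1
assert abs(p_discrete(t) * math.sqrt(2 * math.pi * t) / 2 - 1) < 1e-3
```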

The occupation time $N_{0}(t)$ satisfies $E_{0}N_{0}(t)=\int_{0}^{t}\tilde{p}(s)\ ds$ (continuous time) and $=\sum_{s=0}^{t-1}p(s)$ (discrete time). In either case, as $t\rightarrow\infty$,

 $\displaystyle(d=1)\ \ E_{0}N_{0}(t)\sim\sqrt{{\textstyle\frac{2}{\pi}}}\ t^{1/2}$ (13.29)
 $\displaystyle(d=2)\ \ E_{0}N_{0}(t)\sim{\textstyle\frac{1}{\pi}}\log t$ (13.30)
 $\displaystyle(d\geq 3)\ \ E_{0}N_{0}(t)\rightarrow R_{d}\equiv\int_{0}^{\infty}\tilde{p}^{(d)}(t)\,dt=\int_{0}^{\infty}e^{-t}(I_{0}(t/d))^{d}\,dt$ (13.31)

where $R_{d}<\infty$ for $d\geq 3$ by (13.26). This is the classical argument for establishing transience in $d\geq 3$ and recurrence in $d\leq 2$, by applying Theorem 13.4. Note that the return probability $\rho^{(d)}:=P_{0}(T^{+}_{0}<\infty)$ is related to $E_{0}N_{0}(\infty)$ by $E_{0}N_{0}(\infty)=\frac{1}{1-\rho^{(d)}}$; in other words

 $\rho^{(d)}=\frac{R_{d}-1}{R_{d}},\ d\geq 3.$

Textbooks sometimes give the impression that calculating $\rho^{(d)}$ is hard, but one can just calculate numerically the integral (13.31). Or see [174] for a table.
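Indeed, here is a sketch of such a numerical calculation for $d=3$, using Simpson's rule plus the asymptotic tail from (13.26); the truncation point, tolerances, and helper names are our own choices.

```python
import math

def ive0(x):
    # e^{-x} I_0(x), summing the Bessel series in log space to avoid overflow
    if x == 0.0:
        return 1.0
    s, n = 0.0, 0
    while True:
        term = math.exp(2 * (n * math.log(x / 2) - math.lgamma(n + 1)) - x)
        s += term
        n += 1
        if n > x / 2 and term < s * 1e-17:
            return s

def integrand(t, d=3):
    # e^{-t} (I_0(t/d))^d = (e^{-t/d} I_0(t/d))^d, the integrand of (13.31)
    return ive0(t / d) ** d

# Simpson's rule on [0, T], plus the asymptotic tail from (13.26)
T, n = 800.0, 3200
h = T / n
R3 = (integrand(0.0) + integrand(T)
      + sum((4 if k % 2 else 2) * integrand(k * h) for k in range(1, n))) * h / 3
R3 += 2 * (3 / (2 * math.pi)) ** 1.5 / math.sqrt(T)   # tail of t^{-3/2} decay
rho3 = (R3 - 1) / R3
assert 0.33 < rho3 < 0.35   # the d=3 return probability is about 0.3405
```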

The quantity $\rho^{(d)}$ has the following sample path interpretation. Let $V_{t}$ be the number of distinct vertices visited by the walk before time $t$. Then

 $t^{-1}V_{t}\rightarrow 1-\rho^{(d)}\mbox{ a.s. },\ d\geq 3.$ (13.32)

The proof of this result is a textbook application of the ergodic theorem for stationary processes: see [133] Theorem 6.3.1.

## 13.2.5 The torus $Z^{d}_{m}$

We now discuss how random walk on $Z^{d}$ relates to $m\rightarrow\infty$ asymptotics for random walk on the finite torus $Z^{d}_{m}$, discussed in Chapter 5. We now use superscript $\cdot^{(m)}$ to denote the length parameter. From Chapter 5 Example 17 we have

 $\tau_{2}^{(m)}=\frac{d}{1-\cos(2\pi/m)}\sim\frac{dm^{2}}{2\pi^{2}}$
 $\tau_{1}^{(m)}=\Theta(m^{2})$ (13.33)

where asymptotics are as $m\rightarrow\infty$ for fixed $d$. One can interpret this as a consequence of the $dm^{2}$ time rescaling in the weak convergence of rescaled random walk to Brownian motion on the $d$-dimensional torus, for which (cf. sections 13.1.1 and 13.1.2) $\tau_{2}=2\pi^{-2}$. At (74)–(75) of Chapter 5 we saw that the eigentime identity gave an exact formula for the mean hitting time parameter $\tau_{0}^{(m)}$, whose asymptotics are, for $d\geq 3$,

 $m^{-d}\tau_{0}^{(m)}\rightarrow\hat{R}_{d}\equiv\int_{0}^{1}\ldots\int_{0}^{1}\frac{1}{{\textstyle\frac{1}{d}}\sum_{u=1}^{d}(1-\cos(2\pi x_{u}))}\ dx_{1}\ldots dx_{d}<\infty.$ (13.34)

Here we give an independent analysis of this result, and also treat the case $d=2$.

###### Proposition 13.8
 $\displaystyle(d=1)\ \ \ \ \tau^{(n)}_{0}\sim{\textstyle\frac{1}{6}}n^{2}$ (13.35)
 $\displaystyle(d=2)\ \ \ \ \tau^{(m)}_{0}\sim 2\pi^{-1}m^{2}\log m$ (13.36)
 $\displaystyle(d\geq 3)\ \ \tau^{(m)}_{0}\sim R_{d}m^{d}$ (13.37)

for $R_{d}$ defined by (13.31). In particular, the expressions for $R_{d}$ and $\hat{R}_{d}$ at (13.31) and (13.34) are equal, for $d\geq 3$.
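As a numerical illustration of this equality, one can evaluate the eigentime sum for the continuized walk on a modest torus (the side length $m=30$ is an arbitrary choice; agreement with $\hat{R}_{3}\approx 1.516$ is only up to $O(1/m)$ effects):

```python
import math

d, m = 3, 30   # dimension and (arbitrary) torus side length

def rate(k):
    # Eigenvalue of the continuized walk on Z_m^3 at frequency k = (k1,k2,k3),
    # as in the integrand of (13.34)
    return sum(1 - math.cos(2 * math.pi * ki / m) for ki in k) / d

# Eigentime identity: tau_0 = sum of reciprocals of the nonzero eigenvalues
tau0 = sum(1 / rate((k1, k2, k3))
           for k1 in range(m) for k2 in range(m) for k3 in range(m)
           if (k1, k2, k3) != (0, 0, 0))
assert 1.3 < tau0 / m**3 < 1.7   # near R_3 = 1.516..., up to O(1/m) effects
```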

The $d=1$ result is from Chapter 5 (26). We now prove the other cases.

Proof. We may construct continuized random walk $\widetilde{X}^{(m)}_{t}$ on $Z^{d}_{m}$ from continuized random walk $\widetilde{X}_{t}$ on $Z^{d}$ by

 $\widetilde{X}^{(m)}_{t}=\widetilde{X}_{t}\bmod m$ (13.38)

and then $P_{0}(\widetilde{X}^{(m)}_{t}=0)\geq P_{0}(\widetilde{X}_{t}=0)$. So

 $\displaystyle m^{-d}\tau_{0}^{(m)}=\int_{0}^{\infty}\left(P_{0}(\widetilde{X}^{(m)}_{t}=0)-m^{-d}\right)\ dt\mbox{\ \ (Chapter 2, Corollary 12 and (8))}$ (13.39)
 $\displaystyle=\int_{0}^{\infty}\left(P_{0}(\widetilde{X}^{(m)}_{t}=0)-m^{-d}\right)^{+}dt\mbox{ by complete monotonicity}$
 $\displaystyle\geq\int_{0}^{\infty}\left(P_{0}(\widetilde{X}_{t}=0)-m^{-d}\right)^{+}\ dt$
 $\displaystyle\rightarrow\int_{0}^{\infty}P_{0}(\widetilde{X}_{t}=0)\ dt=R_{d}.$

Consider the case $d\geq 3$. To complete the proof, we need the corresponding upper bound, for which it is sufficient to show

 $\int_{0}^{\infty}\left(P_{0}(\widetilde{X}_{t}^{(m)}=0)-m^{-d}-P_{0}(\widetilde{X}_{t}=0)\right)^{+}dt\rightarrow 0\mbox{ as }m\rightarrow\infty.$ (13.40)

To verify (13.40) without detailed calculations, we first establish a 1-dimensional bound

 $(d=1)\ \ \ \ \ \ \ \ \tilde{p}^{(m)}(t)\leq{\textstyle\frac{1}{m}}+\tilde{p}(t).$ (13.41)

To obtain (13.41) we appeal to a coupling construction (the reflection coupling, described in continuous-space in section 13.1.3 – the discrete-space setting here is similar) which shows that continuized random walks $\widetilde{X}^{(m)},\widetilde{Y}^{(m)}$ on $Z_{m}$ with $\widetilde{X}^{(m)}_{0}=0$ and $\widetilde{Y}^{(m)}_{0}$ distributed uniformly can be coupled so that

 $\widetilde{Y}^{(m)}_{t}=0\mbox{ on the event }\{\widetilde{X}^{(m)}_{t}=0,\ T\leq t\}$

where $T$ is the first time that $\widetilde{X}^{(m)}$ goes distance $\lfloor m/2\rfloor$ from $0$. And by considering the construction (13.38)

 $P(\widetilde{X}^{(m)}_{t}=0)\leq P(\widetilde{X}_{t}=0)+P(\widetilde{X}^{(m)}_{t}=0,\ T\leq t)$

and (13.41) follows, since $P(\widetilde{Y}^{(m)}_{t}=0)=1/m$.
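Inequality (13.41) can be sanity-checked numerically, computing the torus return probability by spectral decomposition and $\tilde{p}(t)=e^{-t}I_{0}(t)$ by its series (a sketch; the helper names and the grid of times are ours):

```python
import math

def ive0(x):
    # e^{-x} I_0(x) via its series, summed in log space to avoid overflow
    if x == 0.0:
        return 1.0
    s, n = 0.0, 0
    while True:
        term = math.exp(2 * (n * math.log(x / 2) - math.lgamma(n + 1)) - x)
        s += term
        n += 1
        if n > x / 2 and term < s * 1e-17:
            return s

def p_torus(t, m):
    # Return probability of the continuized walk on Z_m, by spectral decomposition
    return sum(math.exp(-t * (1 - math.cos(2 * math.pi * k / m)))
               for k in range(m)) / m

m = 10
for i in range(1, 201):
    t = 0.5 * i
    assert p_torus(t, m) <= 1 / m + ive0(t) + 1e-12   # inequality (13.41)
```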

Since the $d$-dimensional probabilities relate to the 1-dimensional probabilities via $P_{0}(\widetilde{X}_{t}^{(m)}=0)=\left(\tilde{p}^{(m)}(t/d)\right)^{d}$ and similarly on the infinite lattice, we can use inequality (13.41) to bound the integrand in (13.40) as follows.

 $\displaystyle P_{0}(\widetilde{X}_{t}^{(m)}=0)-m^{-d}-P_{0}(\widetilde{X}_{t}=0)\leq\left({\textstyle\frac{1}{m}}+\tilde{p}(t/d)\right)^{d}-m^{-d}-(\tilde{p}(t/d))^{d}$
 $\displaystyle=\sum_{j=1}^{d-1}{d\choose j}(\tilde{p}(t/d))^{j}({\textstyle\frac{1}{m}})^{d-j}$
 $\displaystyle=\frac{\tilde{p}(t/d)}{m}\sum_{j=1}^{d-1}{d\choose j}(\tilde{p}(t/d))^{j-1}({\textstyle\frac{1}{m}})^{d-1-j}$
 $\displaystyle\leq\frac{\tilde{p}(t/d)}{m}\sum_{j=1}^{d-1}{d\choose j}\left[\max(\tilde{p}(t/d),{\textstyle\frac{1}{m}})\right]^{d-2}$
 $\displaystyle=(2^{d}-2)\frac{\tilde{p}(t/d)}{m}\left[\max(\tilde{p}(t/d),{\textstyle\frac{1}{m}})\right]^{d-2}$
 $\displaystyle\leq(2^{d}-2)\frac{\tilde{p}(t/d)}{m}\left[(\tilde{p}(t/d))^{d-2}+({\textstyle\frac{1}{m}})^{d-2}\right]$
 $\displaystyle=(2^{d}-2)\left[\frac{(\tilde{p}(t/d))^{d-1}}{m}+\frac{\tilde{p}(t/d)}{m^{d-1}}\right].$

The fact (13.24) that $\tilde{p}(t)=\Theta(t^{-1/2})$ for large $t$ easily implies that the integral in (13.40) over $0\leq t\leq m^{3}$ tends to zero. But by (13.33) and submultiplicativity of $\bar{d}(t)$,

 $0\leq P_{0}(\widetilde{X}_{t}^{(m)}=0)-m^{-d}\leq d(t)\leq\bar{d}(t)\leq B_{1}% \exp(-{\textstyle\frac{t}{B_{2}m^{2}}})$ (13.42)

where $B_{1},B_{2}$ depend only on $d$. This easily implies that the integral in (13.40) over $m^{3}\leq t<\infty$ tends to zero, completing the proof of (13.37).

In the case $d=2$, we fix $b>0$ and truncate the integral in (13.39) at $bm^{2}$ to get

 $\displaystyle m^{-2}\tau_{0}^{(m)}\geq-b+\int_{0}^{bm^{2}}P_{0}(\widetilde{X}_{t}=0)\ dt$
 $\displaystyle=-b+(1+o(1)){\textstyle\frac{1}{\pi}}\log(bm^{2})\mbox{ by (13.30)}$
 $\displaystyle=(1+o(1)){\textstyle\frac{2}{\pi}}\log m.$

Therefore

 $\tau_{0}^{(m)}\geq(1+o(1)){\textstyle\frac{2}{\pi}}m^{2}\log m.$

For the corresponding upper bound, since $\int_{0}^{m^{2}}P_{0}(\widetilde{X}_{t}=0)dt\sim{\textstyle\frac{2}{\pi}}\log m$ by (13.30), and $m^{-2}\tau_{0}^{(m)}=\int_{0}^{\infty}\left(P_{0}(\widetilde{X}^{(m)}_{t}=0)-m% ^{-2}\right)\ dt$ , it suffices to show that

 $\int_{0}^{\infty}\left(P_{0}(\widetilde{X}_{t}^{(m)}=0)-m^{-2}-P_{0}(\widetilde{X}_{t}=0)\right)^{+}\ dt$
 $+\int_{m^{2}}^{\infty}\left(P_{0}(\widetilde{X}_{t}^{(m)}=0)-m^{-2}\right)^{+}dt=o(\log m).$ (13.43)

To bound the first of these two integrals, we observe from (13.41) that $P_{0}(\widetilde{X}^{(m)}_{t}=0)\leq(m^{-1}+\tilde{p}(t/2))^{2}$, and so the integrand is bounded by ${\textstyle\frac{2}{m}}\tilde{p}(t/2)$. Using (13.24), the first integral is $O(1)=o(\log m)$. To analyze the second integral in (13.43) we consider separately the ranges $m^{2}\leq t\leq m^{2}\log^{3/2}m$ and $m^{2}\log^{3/2}m\leq t<\infty$. Over the first range, we again use (13.41) to bound the integrand by ${\textstyle\frac{2}{m}}\tilde{p}(t/2)+(\tilde{p}(t/2))^{2}$. Again using (13.24), the integral is bounded by

 $(1+o(1))\frac{2}{\pi^{1/2}m}\int_{m^{2}}^{m^{2}\log^{3/2}m}t^{-1/2}dt\ +\ (1+o(1))\pi^{-1}\int_{m^{2}}^{m^{2}\log^{3/2}m}t^{-1}dt$
 $=\Theta(\log^{3/4}m)+\Theta(\log\log m)=o(\log m).$

To bound the integral over the second range, we use (13.42) and find

 $\displaystyle\int_{m^{2}\log^{3/2}m}^{\infty}\left(P_{0}(\widetilde{X}^{(m)}_{t}=0)-m^{-2}\right)dt\leq B_{1}B_{2}m^{2}\exp(-{\textstyle\frac{\log^{3/2}m}{B_{2}}})=o(1)=o(\log m).$

$\Box$

## 13.2.6 The infinite degree-$r$ tree

Fix $r\geq 3$ and write ${\bf T}^{r}$ for the infinite tree of degree $r$. We picture ${\bf T}^{r}$ as a “family tree”, where the root $\phi$ has $r$ children, and each other vertex has one parent and $r-1$ children. Since ${\bf T}^{r}$ is a vertex-transitive graph (recall Chapter 7 section 1.1; for $r$ even, ${\bf T}^{r}$ is the Cayley graph of the free group on $r/2$ generators), one can study many more general “random flights” on ${\bf T}^{r}$ (see Notes), but we shall consider only the simple random walk $(X_{t})$.

We can get some information about the walk without resorting to calculations. The “depth” process $d(X_{t},\phi)$ is clearly the “reflecting asymmetric random walk” on $Z^{+}:=\{0,1,2,\ldots\}$ with

 $p_{0,1}=1;\ p_{i,i-1}=1/r;\ p_{i,i+1}=(r-1)/r,\ i\geq 1.$

By comparison with asymmetric random walk on all $Z$, which has drift $(r-2)/r$, we see that

 $t^{-1}d(X_{t},\phi)\rightarrow\frac{r-2}{r}\mbox{ a.s. as }t\rightarrow\infty.$ (13.44)

In particular, the number of returns to $\phi$ is finite and so the walk is transient. Now consider the return probability $\rho=P_{\phi}(T^{+}_{\phi}<\infty)$ and note that (by considering the first step) $\rho=P_{\phi}(T_{c}<\infty)$ where $c$ is a child of $\phi$. Considering the first two steps, we obtain the equation $\rho=\frac{1}{r}+\frac{r-1}{r}\rho^{2}$, and since by transience $\rho<1$, we see that

 $\rho:=P_{\phi}(T^{+}_{\phi}<\infty)=P_{\phi}(T_{c}<\infty)=\frac{1}{r-1}.$ (13.45)

So

 $E_{\phi}N_{\phi}(\infty)=\frac{1}{1-\rho}=\frac{r-1}{r-2}.$ (13.46)

As at (13.32), $\rho$ has a sample path interpretation: the number $V_{t}$ of distinct vertices visited by the walk before time $t$ satisfies

 $t^{-1}V_{t}\rightarrow 1-\rho={\textstyle\frac{r-2}{r-1}}\mbox{ a.s. as }t\rightarrow\infty.$

By transience, amongst the children of $\phi$ there is some vertex $L_{1}$ which is visited last by the walk; then amongst the children of $L_{1}$ there is some vertex $L_{2}$ which is visited last by the walk; and so on, to define a “path to infinity” $\phi=L_{0},L_{1},L_{2},\ldots$. By symmetry, given $L_{1},L_{2},\ldots,L_{i-1}$ the conditional distribution of $L_{i}$ is uniform over the children of $L_{i-1}$, so in the natural sense we can describe $(L_{i})$ as the uniform random path to infinity.
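Both the return probability (13.45) and the visited-vertex limit above are easy to check by simulation (a sketch; the depth cap, sample sizes, and vertex encoding are our own choices):

```python
import random

random.seed(0)
r = 3

# (i) rho = P_phi(T_phi^+ < infty) = 1/(r-1), estimated from the depth chain:
# from 0 step to 1; from i >= 1 go down w.p. 1/r, up w.p. (r-1)/r.  Reaching
# depth 60 is treated as escape: the chance of returning from there is
# (r-1)^{-60}, negligible.
def excursion_returns():
    depth = 1
    while 0 < depth < 60:
        depth += -1 if random.random() < 1 / r else 1
    return depth == 0

n = 20000
rho_hat = sum(excursion_returns() for _ in range(n)) / n
assert abs(rho_hat - 1 / (r - 1)) < 0.03   # exact value 1/2 for r = 3

# (ii) V_t / t -> 1 - rho = (r-2)/(r-1); a vertex is encoded as the tuple of
# child-labels along its path from the root phi (the empty tuple).
path, visited, T = [], {()}, 30000
for _ in range(T):
    if not path:
        path.append(random.randrange(r))        # the root has r children
    elif random.randrange(r) == 0:
        path.pop()                              # to the parent w.p. 1/r
    else:
        path.append(random.randrange(r - 1))    # to one of the r-1 children
    visited.add(tuple(path))
assert abs(len(visited) / T - (r - 2) / (r - 1)) < 0.05
```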

## 13.2.7 Generating function arguments

While the general qualitative behavior of random walk on ${\bf T}^{r}$ is clear from the arguments above, more precise quantitative estimates are most naturally obtained via generating function arguments. For any state $i$ of a Markov chain, the generating functions $G_{i}(z):=\sum_{t=0}^{\infty}P_{i}(X_{t}=i)z^{t}$ and $F_{i}(z):=\sum_{t=1}^{\infty}P_{i}(T^{+}_{i}=t)z^{t}$ are related by

 $G_{i}=1+F_{i}G_{i}$ (13.47)

(this is a small variation on Chapter 2 Lemma 19). Consider simple symmetric reflecting random walk on $Z^{+}$. Clearly

 $G_{0}(z)=\sum_{t=0}^{\infty}{2t\choose t}2^{-2t}z^{2t}=(1-z^{2})^{-1/2},$

the latter identity being the series expansion of $(1-x)^{-1/2}$. So by (13.47)

 $F_{0}(z):=\sum_{t=1}^{\infty}P_{0}(T^{+}_{0}=2t)z^{2t}=1-(1-z^{2})^{1/2}.$

Consider an excursion of length $2t$, that is, a path $(0=i_{0},i_{1},\ldots,i_{2t-1},i_{2t}=0)$ with $i_{j}>0,1\leq j\leq 2t-1$. This excursion has chance $2^{1-2t}$ for the symmetric walk on $Z^{+}$, and has chance $((r-1)/r)^{t-1}(1/r)^{t}$ for the asymmetric walk $d(X_{t},\phi)$. So

 $\frac{P_{\phi}(T^{+}_{\phi}=2t)}{P_{0}(T^{+}_{0}=2t)}=\frac{r}{2(r-1)}\ \left(\frac{4(r-1)}{r^{2}}\right)^{t}$

where the numerator refers to simple RW on the tree, and the denominator refers to simple symmetric reflecting RW on $Z^{+}$. So on the tree,

 $F_{\phi}(z)=\frac{r}{2(r-1)}F_{0}\left(z\sqrt{{\textstyle\frac{4(r-1)}{r^{2}}}}\right)=\frac{r}{2(r-1)}\left(1-\left(1-\frac{4(r-1)z^{2}}{r^{2}}\right)^{1/2}\right).$

Then (13.47) gives an expression for $G_{\phi}(z)$ which simplifies to

 $G_{\phi}(z)=\frac{2(r-1)}{r-2+\sqrt{r^{2}-4(r-1)z^{2}}}.$ (13.48)

In particular, $G_{\phi}$ has radius of convergence $1/\beta$, where

 $\beta=2r^{-1}\sqrt{r-1}<1.$ (13.49)
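Formula (13.48) can be checked against direct dynamic programming on the depth chain (a sketch; the test value $z=0.9$ and the truncation at $T=400$ steps are ours):

```python
import math

r, z, T = 3, 0.9, 400   # z is inside the radius of convergence 1/beta > 1

# Dynamic programming for the depth chain d(X_t, phi): from 0 go to 1;
# from i >= 1, down w.p. 1/r, up w.p. (r-1)/r.  Accumulate
# G_phi(z) = sum_t P(depth_t = 0) z^t; the omitted tail is negligible.
dist = [1.0] + [0.0] * (T + 1)
G_num = 0.0
for t in range(T):
    G_num += dist[0] * z ** t
    new = [0.0] * (T + 2)
    new[1] += dist[0]
    for i in range(1, T + 1):
        new[i - 1] += dist[i] / r
        new[i + 1] += dist[i] * (r - 1) / r
    dist = new

G_closed = 2 * (r - 1) / (r - 2 + math.sqrt(r * r - 4 * (r - 1) * z * z))  # (13.48)
assert abs(G_num - G_closed) < 1e-6
```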

Without going into details, one can now use standard Tauberian arguments to show

 $P_{\phi}(X_{t}=\phi)\sim\alpha\,t^{-3/2}\beta^{t}\mbox{ as even }t\rightarrow\infty$ (13.50)

for a computable constant $\alpha$, and this format (for different values of $\alpha$ and $\beta$) remains true for more general radially symmetric random flights on ${\bf T}^{r}$ ([339] Theorem 19.30). One can also in principle expand (13.48) as a power series to obtain $P_{\phi}(X_{t}=\phi)$. Again we shall not give details, but according to Giacometti [164] one obtains

 $\displaystyle P_{\phi}(X_{t}=\phi)=\frac{r-1}{r}\ \left(\frac{\sqrt{r-1}}{r}\right)^{t}\ \frac{\Gamma(1+t)}{\Gamma(2+t/2)\Gamma(1+t/2)}\ \times\ _{2}F_{1}({\textstyle\frac{t+1}{2}},1,2+{\textstyle\frac{t}{2}},{\textstyle\frac{4(r-1)}{r^{2}}}),\ t\mbox{ even}$ (13.51)

where $\ {}_{2}F_{1}$ is the Gauss hypergeometric function.

Finally, the $\beta$ at (13.49) can be interpreted as an eigenvalue for the infinite transition matrix $(p_{ij})$, so we anticipate a corresponding eigenfunction $f_{2}$ with

 $\sum_{j}p_{ij}f_{2}(j)=\beta f_{2}(i)\ \forall i,$ (13.52)

and one can verify this holds for

 $f_{2}(i):=(1+{\textstyle\frac{r-2}{r}}i)(r-1)^{-i/2}.$ (13.53)
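For a radial function this verification reduces to a two-term recurrence, which is easy to confirm numerically (a sketch):

```python
import math

def f2(i, r):
    # Candidate eigenfunction (13.53)
    return (1 + (r - 2) / r * i) * (r - 1) ** (-i / 2)

for r in (3, 4, 5):
    beta = 2 * math.sqrt(r - 1) / r
    # Radial form of (13.52): at the root, P f2(0) = f2(1); at depth i >= 1,
    # P f2(i) = (1/r) f2(i-1) + ((r-1)/r) f2(i+1)
    assert abs(f2(1, r) - beta * f2(0, r)) < 1e-12
    for i in range(1, 40):
        lhs = f2(i - 1, r) / r + (r - 1) / r * f2(i + 1, r)
        assert abs(lhs - beta * f2(i, r)) < 1e-12
```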

## 13.2.8 Comparison arguments

Fix $r\geq 3$ and consider a sequence $(G_{n})$ of $n$-vertex $r$-regular graphs with $n\rightarrow\infty$. Write $(X^{n}_{t})$ for the random walk on $G_{n}$. We can compare these random walks with the random walk $(X^{\infty}_{t})$ on ${\bf T}^{r}$ via the obvious inequality

 $P_{v}(X^{n}_{t}=v)\geq P_{\phi}(X^{\infty}_{t}=\phi),\ t\geq 0.$ (13.54)

To spell this out, there is a universal cover map $\gamma:{\bf T}^{r}\rightarrow G_{n}$ with $\gamma(\phi)=v$ and such that for each vertex $w$ of ${\bf T}^{r}$ the $r$ edges at $w$ are mapped to the $r$ edges of $G_{n}$ at $\gamma(w)$. Given the random walk $X^{\infty}$ on ${\bf T}^{r}$, the definition $X^{n}_{t}=\gamma(X^{\infty}_{t})$ constructs random walk on $G_{n}$, and (13.54) holds because $\{X^{n}_{t}=v\}\supseteq\{X^{\infty}_{t}=\phi\}$.

It is easy to use (13.54) to obtain asymptotic lower bounds on the fundamental parameters discussed in Chapter 4. Instead of the relaxation time $\tau_{2}$, it is more natural here to deal directly with the second eigenvalue $\lambda_{2}$.

###### Lemma 13.9

For random walk on $n$-vertex $r$-regular graphs, with $r\geq 3$ fixed and $n\rightarrow\infty$

(a) $\liminf n^{-1}\tau_{0}(n)\geq\frac{r-1}{r-2}$;

(b) $\liminf\frac{\tau_{1}(n)}{\log n}\geq\frac{r}{(r-2)\log(r-1)}$;

(c) $\liminf\lambda_{2}(n)\geq\beta:=2r^{-1}\sqrt{r-1}$.

Theory concerning expanders (Chapter 9 section 1) shows there exist graphs where the limits above are finite constants (depending on $r$), so Lemma 13.9 gives the optimal order of magnitude bound.

Proof. For (a), switch to the continuous-time walk, consider an arbitrary vertex $v$ in $G_{n}$, and take $t_{0}(n)\rightarrow\infty$ with $t_{0}(n)/n\rightarrow 0$. Then we repeat the argument around (13.39) in the torus setting:

 $\displaystyle n^{-1}E_{\pi}T_{v}=\int_{0}^{\infty}(P_{v}(X^{n}_{t}=v)-{\textstyle\frac{1}{n}})\ dt\geq\int_{0}^{t_{0}}(P_{v}(X^{n}_{t}=v)-{\textstyle\frac{1}{n}})\ dt$
 $\displaystyle\geq-\frac{t_{0}}{n}+\int_{0}^{t_{0}}P_{\phi}(X^{\infty}_{t}=\phi)\ dt\mbox{ by (13.54)}$
 $\displaystyle\rightarrow\int_{0}^{\infty}P_{\phi}(X^{\infty}_{t}=\phi)\ dt=E_{\phi}N_{\phi}(\infty)=\frac{r-1}{r-2},$

which is somewhat stronger than assertion (a). Next, the discrete-time spectral representation implies

 $P_{v}(X^{n}_{t}=v)\leq{\textstyle\frac{1}{n}}+n\beta^{t}(n).$

Using (13.54) and (13.50), for any $n\rightarrow\infty,t\rightarrow\infty$ with $t$ even,

 $t^{-3/2}\beta^{t}(\alpha-o(1))\leq{\textstyle\frac{1}{n}}+n\beta^{t}(n).$ (13.55)

For (b), the argument for (13.54) gives a coupling between the process $X^{n}$ started at $v$ and the process $X^{\infty}$ started at $\phi$ such that

 $d^{n}(X^{n}_{t},v)\leq d^{\infty}(X^{\infty}_{t},\phi)$

where $d^{n}$ and $d^{\infty}$ denote graph distance. Fix $\varepsilon>0$ and write $\gamma=\frac{r-2}{r}+\varepsilon$. By the coupling and (13.44), $P(d^{n}(X^{n}_{t},v)\geq\gamma t)\rightarrow 0$ as $n,t\rightarrow\infty$. This remains true in continuous time. Clearly $\tau_{1}(n)\to\infty$, and so by definition of $\tau_{1}$ we have

 $\limsup\pi\{w:d^{n}(w,v)\geq\gamma\tau_{1}(n)\}\leq e^{-1}.$

But by counting vertices,

 $\displaystyle\pi\{w:d^{n}(v,w)\leq d\}$ $\displaystyle\leq$ $\displaystyle\frac{1+r+r(r-1)+\cdots+r(r-1)^{d-1}}{n}$ $\displaystyle\rightarrow$ $\displaystyle 0\mbox{ if }d\sim(1-\varepsilon)\frac{\log n}{\log(r-1)}.$

For these two limit results to be consistent we must have $\gamma\tau_{1}(n)\geq(1-\varepsilon)\frac{\log n}{\log(r-1)}$ for all large $n$, establishing (b).

For (c), fix a vertex $v_{0}$ of $G_{n}$ and use the function $f_{2}$ at (13.53) to define $f(v):=f_{2}(d(v,v_{0}))$ for all vertices $v$ of $G_{n}$. The equality (13.52) for $f_{2}$ on the infinite tree easily implies the inequality ${\bf P}f\geq\beta f$ on $G_{n}$. Set $\bar{f}:=n^{-1}\sum_{v}f(v)$ and write $1$ for the unit function. By the Rayleigh-Ritz characterization (Chapter 4 eq. (73)), writing $\langle g,h\rangle:=\sum_{i}\pi_{i}g_{i}h_{i}$,

 $\displaystyle\lambda_{2}(n)\geq\frac{\langle f-\bar{f}1,{\bf P}(f-\bar{f}1)\rangle}{||f-\bar{f}1||_{2}^{2}}=\frac{\langle f,{\bf P}f\rangle-\bar{f}^{2}}{||f||_{2}^{2}-\bar{f}^{2}}\geq\frac{\beta||f||_{2}^{2}-\bar{f}^{2}}{||f||_{2}^{2}-\bar{f}^{2}}.$

As $n\to\infty$ we have $\bar{f}\to 0$ while $||f||_{2}$ tends to a non-zero limit, establishing (c).

## 13.2.9 The hierarchical tree

Fix $r\geq 2$. There is an infinite tree (illustrated for $r=2$ in the figure) specified as follows. Each vertex is at some height $0,1,2,\ldots$. A vertex at height $h$ has one parent vertex at height $h+1$ and (if $h\geq 1$) $r$ child vertices at height $h-1$. The height-$0$ vertices are leaves, and the set $L$ of leaves has a natural labeling by finite $r$-ary strings. The figure illustrates the binary ($r=2$) case, where $L=\{0,1,10,11,100,101,\ldots\}$. $L$ forms an Abelian group under entrywise addition modulo $r$, e.g. for $r=2$ we have $1101+110=1101+0110=1011$. Adopting a name used for generalizations of this construction in statistical physics, we call $L$ the hierarchical lattice and the tree ${\bf T}^{r}_{\rm hier}$ the hierarchical tree.

Fix a parameter $0<\lambda<r$. Consider biased random walk $X_{t}$ on the tree ${\bf T}^{r}_{\rm hier}$, where from each non-leaf vertex the transition goes to the parent with probability $\lambda/(\lambda+r)$ and to each child with probability $1/(\lambda+r)$. Then consider $Y=$ “$X$ watched only on $L$”, that is, the sequence of (not-necessarily-distinct) successive leaves visited by $X$. The group $L$ is distance-transitive (for Hamming distance on $L$) and $Y$ is a certain isotropic random flight on $L$. A nice feature of this example is that without calculation we can see that $Y$ is recurrent if and only if $\lambda\leq 1$. For consider the path of ancestors of $0$. The chain $X$ must spend an infinite time on that path (side-branches are finite); on that path $X$ behaves as asymmetric simple random walk on $Z^{+}$, which is recurrent if and only if $\lambda\leq 1$; so $X$ and thence $Y$ visits $0$ infinitely often if and only if $\lambda\leq 1$.

Another nice feature is that we can give a fairly explicit expression for the $t$-step transition probabilities of $Y$. Writing $H$ for the maximum height reached by $X$ in an excursion from the leaves, then

 $P(H\geq h)=P_{1}(\hat{T}_{h}<\hat{T}_{0})=\frac{{\textstyle\frac{r}{\lambda}}-1}{({\textstyle\frac{r}{\lambda}})^{h}-1},\quad h\geq 1$

where $\hat{T}$ denotes hitting time for the height process. Writing $M_{t}$ for the maximum height reached in $t$ excursions,

 $P(M_{t}<h)=\left(1-P(H\geq h)\right)^{t},\quad h\geq 1.$

It is clear by symmetry that the distribution of $Y_{t}$ is conditionally uniform on the leaves which are descendants of the maximal-height vertex previously visited by $X$. So for leaves $v,x$ with branchpoint at height $d$,

 $P_{v}(Y_{t}=x)=\sum_{h\geq d}r^{-h}P(M_{t}=h).$

Since $P(M_{t}=h)=P(M_{t}<h+1)-P(M_{t}<h)$, we have found the “fairly explicit expression” promised above. A brief calculation gives the following time-asymptotics. Fix $s>0$ and consider $t\sim s(\frac{r}{\lambda})^{j}$ with $j\rightarrow\infty$; then

 $P_{v}(Y_{t}=v)\sim r^{-j}f(s);\ \ \mbox{ where }$
 $f(s)=\sum_{i=-\infty}^{\infty}r^{-i}\left(\exp\left(-s({\textstyle\frac{r}{\lambda}}-1)({\textstyle\frac{r}{\lambda}})^{-i-1}\right)-\exp\left(-s({\textstyle\frac{r}{\lambda}}-1)({\textstyle\frac{r}{\lambda}})^{-i}\right)\right).$

In particular,

 $P_{v}(Y_{t}=v)=\Theta(t^{-d/2})\mbox{ as }t\rightarrow\infty,\quad d=\frac{2\log r}{\log r-\log\lambda}.$ (13.56)

Comparing with (13.26), this gives a sense in which $Y$ mimics simple random walk on $Z^{d}$, for $d$ defined above. Note that $d$ increases continuously from $0$ to $\infty$ as $\lambda$ increases from $0$ to $r$, and that $Y$ is recurrent if and only if $d\leq 2$.
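The asymptotics (13.56) can be checked numerically from the formulas above. In the sketch below the function names and the truncation height `hmax` are our own choices, not from the text; with $r=2$, $\lambda=1/2$ we get $d=1$, and multiplying $t$ by $r/\lambda=4$ (i.e. $j\to j+1$ at fixed $s$) should roughly halve the return probability:

```python
import math

def p_H_ge(h, r, lam):
    # P(H >= h): probability an excursion of the height chain, started at
    # height 1, reaches h before returning to 0 (gambler's-ruin formula).
    rho = r / lam
    return (rho - 1.0) / (rho ** h - 1.0)

def p_return(t, r, lam, hmax=60):
    # P_v(Y_t = v) = sum_{h>=1} r^{-h} P(M_t = h), where
    # P(M_t < h) = (1 - P(H >= h))^t and the branchpoint height is d = 0.
    total = 0.0
    for h in range(1, hmax):
        p_M_eq_h = (1.0 - p_H_ge(h + 1, r, lam)) ** t \
                 - (1.0 - p_H_ge(h, r, lam)) ** t
        total += r ** (-h) * p_M_eq_h
    return total

r, lam, t = 2, 0.5, 4096
d = 2 * math.log(r) / (math.log(r) - math.log(lam))   # here d = 1
ratio = p_return(4 * t, r, lam) / p_return(t, r, lam)
print(d, ratio)  # ratio should be close to (r/lam)**(-d/2) = 0.5
```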

Though we don’t go into details, random walk on the hierarchical lattice is a natural infinite-state analog of biased random walk on the balanced finite tree (Chapter 5 section 2.1). In particular, results in the latter context showed that, writing $n$ for the number of vertices, $\tau_{0}(n)=O(n)$ if and only if $\lambda>1$, that is, if and only if $d>2$. This is the condition for transience of the infinite-state walk, confirming the paradigm of section 13.2.3.

## 13.2.10 Towards a classification theory for sequences of finite chains

Three chapters of Woess [339] treat in detail three properties that random walk on an infinite graph may or may not possess:

• transience

• spectral radius $<1$

• non-trivial boundary.

Can these be related to properties for sequences of finite chains? We already mentioned (section 13.2.3) that the property $\tau_{0}(n)=O(n)$ seems to be the analog of transience. In this speculative section we propose definitions of three other properties for sequences of finite chains, which we name

• compactness

• infinite-dimensionality

• expander-like.

Future research will show whether these are useful definitions! Intuitively we expect that every reasonably “natural” sequence should fall into one of these three classes.

For simplicity we consider reversible random walks on Cayley graphs. It is also convenient to continuize. The resulting chains are special cases of (reversible) Lévy processes, where we define a general Lévy process to be a continuous-time process with stationary independent increments on a (continuous or discrete) group. Thus the setting for the rest of this section is a sequence $(X^{(n)}_{t})$ of reversible Lévy processes on finite groups $G^{(n)}$ of size $n$, with $n\to\infty$ through some subsequence. Because we work in continuous time, the eigenvalues satisfy $0=\lambda^{(n)}_{1}<\lambda^{(n)}_{2}\leq\cdots$.

(A): Compactness. Say the sequence $(X^{(n)}_{t})$ is compact if there exists a (discrete or continuous) compact set $S$ and a reversible Lévy process $\widetilde{X}_{t}$ on $S$ such that

(i) $\tilde{d}(t)\equiv||P_{\phi}(\widetilde{X}_{t}\in\cdot)-\pi||\rightarrow 0$ as $t\rightarrow\infty$, where $\phi$ is an arbitrary initial state (for a Lévy process the choice does not matter);

(ii) $\frac{\lambda_{j}(n)}{\lambda_{2}(n)}\rightarrow\tilde{\lambda}_{j}$ as $n\rightarrow\infty$, for each $j\geq 2$; where $1=\tilde{\lambda}_{2}\leq\tilde{\lambda}_{3}\leq\cdots$ are the eigenvalues of $(\widetilde{X}_{t})$;

(iii) $d^{(n)}(t\,\tau_{2}(n))\rightarrow\tilde{d}(t)$ as $n\rightarrow\infty$, for each $t>0$.

These properties formalize the idea that the sequence of random walks forms discrete approximations to a limit Lévy process on a compact group, at least as far as mixing times are concerned. Simple random walk on $Z^{d}_{m}$, and the limit Brownian motion on the torus $R^{d}/Z^{d}$ (section 13.1.2), form the obvious example. Properties (i) and (iii) imply, in particular, that

 $\tau_{1}(n)/\tau_{2}(n)\mbox{ is bounded as }n\rightarrow\infty.$ (13.57)
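As a concrete illustration of property (ii), consider the cycle $Z_{m}$ (the $d=1$ case of $Z^{d}_{m}$): the continuized walk has eigenvalues $1-\cos(2\pi k/m)$, $k=0,\ldots,m-1$, and the ratios $\lambda_{j}(m)/\lambda_{2}(m)$ approach $1,1,4,4,9,9,\ldots$, the squared frequencies of the limit Brownian motion on the circle, each with multiplicity $2$. A minimal numerical sketch (the function name is ours):

```python
import math

def cycle_eigenvalues(m):
    # Eigenvalues of the continuized simple random walk on the cycle Z_m,
    # namely 1 - cos(2*pi*k/m) for k = 0, ..., m-1, sorted increasingly.
    return sorted(1.0 - math.cos(2.0 * math.pi * k / m) for k in range(m))

m = 4001
lam = cycle_eigenvalues(m)           # lam[0] = 0 is the trivial eigenvalue
ratios = [lam[j] / lam[1] for j in range(1, 7)]
print(ratios)  # approaches [1, 1, 4, 4, 9, 9] as m grows
```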

One might hope that a converse is true:

Does every sequence satisfying (13.57) have a compact subsequence?

Unfortunately, we are convinced that the answer is “no”, for the following reason. Take a compact sequence $(X^{(n)}_{t})$, where the limit Lévy process has variation function $\tilde{d}(t)$ as at (i). Now consider a product chain $(X^{(n)}_{t},Y^{(n)}_{t})$, where the components run independently, and where $Y^{(n)}$ has the cut-off property (Chapter 7) and $\tau^{Y}_{1}(n)\sim\tau_{2}^{X}(n)$. Note that by Chapter 7-1 Lemma 1 we have $\tau^{Y}_{2}(n)=o(\tau^{Y}_{1}(n))$. If the product chain had a subsequential limit, then its total variation function at (i), say $d^{\prime}(t)$, would have to satisfy

 $d^{\prime}(t)=\tilde{d}(t),\quad t>1;\qquad d^{\prime}(t)=1,\quad t<1.$

But it seems intuitively clear (though we do not know a proof) that every Lévy process on a compact set has continuous $d(\cdot)$. This suggests the following conjecture.

###### Conjecture 13.10

For any sequence of reversible Lévy processes satisfying (13.57), there exists a subsequence satisfying the definition of compact except that condition (iii) is replaced by

(iv): $\exists t_{0}\geq 0:$

 $d^{(n)}(t\,\tau_{2}(n))\rightarrow 1,\quad t<t_{0};\qquad d^{(n)}(t\,\tau_{2}(n))\rightarrow\tilde{d}(t),\quad t>t_{0}.$

Before describing the other two classes of chains, we need a definition and some motivating background. In the present setting, the property “trivial boundary” is equivalent (see Notes) to the property

 $\lim_{t\to\infty}||P_{v}(X_{t}\in\cdot)-P_{w}(X_{t}\in\cdot)||=0,\quad\forall v% ,w.$ (13.58)

This suggests that an analogous finite-state property might involve whether the variation distance for nearby starts becomes small before time $\tau_{1}$. Say that a sequence $(L_{n}(\varepsilon))$ of subsets is an asymptotic $\varepsilon$-neighborhood if

 $||P_{\phi}(X_{\varepsilon\tau_{1}}\in\cdot)-P_{v}(X_{\varepsilon\tau_{1}}\in\cdot)||\rightarrow 0\mbox{ as }n\rightarrow\infty$

uniformly over $v\in L_{n}(\varepsilon)$; here $\phi$ is an arbitrary reference vertex. From Chapter 7-1 Lemma 1(b) we can deduce that, if the cut-off property holds, such a neighborhood must have size $|L_{n}(\varepsilon)|=o(n)$.

(B): Infinite dimensional. Say the sequence $(X^{(n)}_{t})$ is infinite-dimensional if the following three properties hold.

(i) $\tau_{1}(n)=\Theta(\tau_{2}(n)\log\log n)$

(ii) The cut-off property holds

(iii) There exists some $\delta(\varepsilon)$, increasing from $0$ to $1$ as $\varepsilon$ increases from $0$ to $1$, such that a maximal-size asymptotic $\varepsilon$-neighborhood $(L_{n}(\varepsilon))$ has

 $\log|L_{n}(\varepsilon)|=(\log n)^{\delta(\varepsilon)+o(1)}\mbox{ as }n\rightarrow\infty.$

This definition is an attempt to abstract the essential properties of random walk on the $d$-cube (Chapter 5 Example 15), where properties (i) and (ii) were already shown. We outline below a proof of property (iii) in that example. Another fundamental example where (i) and (ii) hold is card-shuffling by random transpositions (Chapter 7 Example 18), and we conjecture that property (iii) also holds there. Conceptually, this class of infinite-dimensional sequences is intended (cf. (13.58)) as the analog of a single random walk with trivial boundary on an infinite-dimensional graph.

Property (iii) for the $d$-cube. Let $(X(t))$ be continuous-time random walk on the $d$-cube, and $(X^{*}(t))$ continuous-time random walk on the $b$-cube, where $b\leq d$. The natural coupling shows

 $\mbox{if }d(v,w)=b\mbox{ then }\ \ ||P_{v}(X(t)\in\cdot)-P_{w}(X(t)\in\cdot)||=||P_{\bf 0}(X^{*}(tb/d)\in\cdot)-P_{\bf 1}(X^{*}(tb/d)\in\cdot)||.$

Take $d\to\infty$ with

 $b(d)\sim d^{\alpha},\quad t(d)\sim{\textstyle\frac{1}{4}}\varepsilon d\log d$

for some $0<\alpha,\varepsilon<1$, so that

 $\frac{t(d)\,b(d)/d}{{\textstyle\frac{1}{4}}b(d)\log b(d)}\rightarrow{\textstyle\frac{\varepsilon}{\alpha}}\quad\mbox{ as }d\rightarrow\infty.$

Since the variation cut-off for the $b$-cube is at $\frac{1}{4}b\log b$, we see that for vertices $v$ and $w$ at distance $b(d)$,

 $||P_{v}(X(t(d))\in\cdot)-P_{w}(X(t(d))\in\cdot)||\rightarrow\left\{\begin{array}{ll}0,&\varepsilon>\alpha\\ 1,&\varepsilon<\alpha.\end{array}\right.$

So a maximal-size asymptotic $\varepsilon$-neighborhood $(L_{n}(\varepsilon))$ of ${\bf 0}$ must be of the form $\{w:d(w,{\bf 0})\leq d^{\varepsilon+o(1)}\}$. Hence

 $\log|L_{n}(\varepsilon)|=\log{d\choose d^{\varepsilon+o(1)}}=d^{\varepsilon+o(1)}=(\log n)^{\varepsilon+o(1)}$

as required.
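The dichotomy above is easy to probe numerically. Since the coordinates of the continuized walk on the $d$-cube flip independently (each at rate $1/d$), the variation distance between starts at distance $b$ reduces by symmetry to the distance between two binomial laws with success probabilities $p$ and $1-p$, where $p=(1-e^{-2t/d})/2$; this reduction is our own simplification, and the function names below are ours. Convergence is slow (polynomial in $\log d$), so the values are merely suggestive:

```python
import math

def tv_binom(b, p):
    # Total variation distance between Binomial(b, p) and Binomial(b, 1-p),
    # computed stably via log-binomial coefficients.
    tv = 0.0
    for k in range(b + 1):
        log_c = math.lgamma(b + 1) - math.lgamma(k + 1) - math.lgamma(b - k + 1)
        pk = math.exp(log_c + k * math.log(p) + (b - k) * math.log(1 - p))
        qk = math.exp(log_c + k * math.log(1 - p) + (b - k) * math.log(p))
        tv += abs(pk - qk)
    return 0.5 * tv

def tv_at(d, alpha, eps):
    # Variation distance between starts at distance b = d**alpha on the
    # d-cube at time t = (eps/4) d log d; the per-coordinate occupation
    # probability is p = (1 - exp(-2t/d))/2 = (1 - d**(-eps/2))/2.
    b = round(d ** alpha)
    p = 0.5 * (1.0 - d ** (-eps / 2.0))
    return tv_binom(b, p)

d = 10 ** 6
# eps < alpha: distance stays large; eps > alpha: distance becomes small.
print(tv_at(d, 0.5, 0.2), tv_at(d, 0.5, 0.8))
```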

Finally, we want an analog of a random walk with non-trivial boundary, expressed using property (ii) below.

(C): Expander-like. Say the sequence $(X^{(n)}_{t})$ is expander-like if

(i) $\tau_{1}(n)=\Theta(\tau_{2}(n)\log n)$

(ii) every asymptotic $\varepsilon$-neighborhood $(L_{n}(\varepsilon))$ has

 $\log|L_{n}(\varepsilon)|=(\log n)^{o(1)}\mbox{ as }n\rightarrow\infty$

(iii) The cut-off property holds.

Recall from Chapter 9 section 1 that, for symmetric graphs which are $r$-regular expanders for fixed $r$, we have $\tau_{2}(n)=\Theta(1)$ and $\tau_{1}(n)=\Theta(\log n)$. But it is not known whether properties (ii) and (iii) always hold in this setting.