# 5.2 Special graphs

In this section we record results about some specific easy-to-analyze graphs. As in Section 5.1.3, we focus on the parameters $\tau^{*},\tau_{0},\tau_{1},\tau_{2},\tau_{c}$ discussed in Chapter 4; orders of magnitude of these parameters (in the asymptotic setting discussed with each example) are summarized in terms of $n$, the number of vertices, in the following table. A minor theme is that some of the graphs are known or conjectured to be extremal for our parameters. In the context of extremality we ignore the parameter $\tau_{1}$ since its exact definition is a little arbitrary.

jjj David: (1) Shall I add complete bipartite to table? (2) Please fill in missing entries for torus.

Orders of magnitude of parameters [$\tau=\Theta(\mbox{entry})$] for special graphs.

| Example | $\tau^{*}$ | $\tau_{0}$ | $\tau_{1}$ | $\tau_{2}$ | $\tau_{c}$ |
|---|---|---|---|---|---|
| 5.7. cycle | $n^{2}$ | $n^{2}$ | $n^{2}$ | $n^{2}$ | $n$ |
| 5.8. path | $n^{2}$ | $n^{2}$ | $n^{2}$ | $n^{2}$ | $n$ |
| 5.9. complete graph | $n$ | $n$ | $1$ | $1$ | $1$ |
| 5.10. star | $n$ | $n$ | $1$ | $1$ | $1$ |
| 5.11. barbell | $n^{3}$ | $n^{3}$ | $n^{3}$ | $n^{3}$ | $n^{2}$ |
| 5.12. lollipop | $n^{3}$ | $n^{2}$ | $n^{2}$ | $n^{2}$ | $n$ |
| 5.13. necklace | $n^{2}$ | $n^{2}$ | $n^{2}$ | $n^{2}$ | $n$ |
| 5.14. balanced $r$-tree | $n\log n$ | $n\log n$ | $n$ | $n$ | $n$ |
| 5.15. $d$-cube ($d=\log_{2}n$) | $n$ | $n$ | $d\log d$ | $d$ | $d$ |
| 5.16. dense regular graphs | $n$ | $n$ | $1$ | $1$ | $1$ |
| 5.17. $d$-dimensional torus, $d=2$ | jjj? | $n\log n$ | $n^{2/d}$ | $n^{2/d}$ | jjj? $n^{1/d}$ |
| 5.17. $d$-dimensional torus, $d\geq 3$ | jjj? | $n$ | $n^{2/d}$ | $n^{2/d}$ | jjj? $n^{1/d}$ |
| 5.19. rook’s walk | $n$ | $n$ | $1$ | $1$ | $1$ |

In simpler cases we also record the $t$-step transition probabilities $P_{i}(X_{t}=j)$ in discrete and continuous time. In fact one could write out exact expressions for $P_{i}(X_{t}=j)$ and indeed for hitting time distributions in almost all these examples, but complicated exact expressions are seldom very illuminating.

qqq names of graphs vary—suggestions for “standard names” from readers of drafts are welcome.

###### Example 5.7

The $n$-cycle.

This is just the graph on vertices $\{0,1,2,\ldots,n-1\}$ with an edge between $i$ and $(i+1)\mbox{ mod }n$ for each $i$. By rotational symmetry, it is enough to give formulas for random walk started at $0$. If $(\hat{X}_{t})$ is random walk on (all) the integers, then $X_{t}=\phi(\hat{X}_{t})$ is random walk on the $n$-cycle, for

 $\phi(i)=i\mbox{ mod }n.$

Thus results for the $n$-cycle can be deduced from results for the integers. For instance,

 $E_{0}T_{i}=i(n-i)$ (5.24)

by Lemma 5.2, because this is the mean exit time from $(i-n,i)$ for random walk on the integers. We can now calculate

 $\max_{ij}E_{i}T_{j}=\lfloor n^{2}/4\rfloor$
 $\tau^{*}\equiv\max_{ij}(E_{i}T_{j}+E_{j}T_{i})=2\lfloor n^{2}/4\rfloor$ (5.25)
 $\tau_{0}=n^{-1}\sum_{j}E_{0}T_{j}=(n^{2}-1)/6$ (5.26)

where for the final equality we used the formula

 $\sum_{m=1}^{n}m^{2}=\frac{n^{3}}{3}+\frac{n^{2}}{2}+\frac{n}{6}.$

As at (5.9) and (5.10) we can get an expression for the distribution at time $t$ from the Binomial distribution (in discrete time) or the Poisson distribution (in continuous time). The former is

 $P_{0}(X_{t}=i)=\sum_{j:2j-t=i\ {\rm mod}\ n}\frac{t!}{j!(t-j)!}2^{-t}.$

A more useful expression is obtained from the spectral representation. The $n$ eigenvalues of the transition matrix are $\cos(2\pi m/n)$, $0\leq m\leq n-1$. That is, $1$ and (if $n$ is even) $-1$ are simple eigenvalues, with respective normalized eigenvectors $u_{i0}=1/\sqrt{n}$ and $u_{i,n/2}=(-1)^{i}/\sqrt{n}$ ($0\leq i\leq n-1$). The multiplicity of $\cos(2\pi m/n)$ is $2$ for $0<m<n/2$; the corresponding normalized eigenvectors are $u_{im}=\sqrt{2/n}\cos(2\pi im/n)$ and $u_{i,-m}=\sqrt{2/n}\sin(2\pi im/n)$ ($0\leq i\leq n-1$). Thus

 $P_{0}(X_{t}=i)=\frac{1}{n}\sum_{m=0}^{n-1}(\cos(2\pi m/n))^{t}\cos(2\pi im/n),$

a fact most easily derived using Fourier analysis.

jjj Cite Diaconis book [123]? Continue same paragraph:

So the relaxation time is

 $\tau_{2}=\frac{1}{1-\cos(2\pi/n)}\sim\frac{n^{2}}{2\pi^{2}}.$

As an aside, note that the eigentime identity (Chapter 3 yyy) gives the curious identity

 $\frac{n^{2}-1}{6}=\sum_{m=1}^{n-1}\frac{1}{1-\cos(2\pi m/n)}$

whose $n\rightarrow\infty$ limit is the well-known formula $\sum_{m=1}^{\infty}m^{-2}=\pi^{2}/6$.
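The spectral facts above are easy to confirm numerically (a sketch of my own, with $n=16$ chosen arbitrarily): the eigenvalues of the cycle's transition matrix, the relaxation time, and the eigentime identity all match the stated formulas.

```python
# Check that the n-cycle eigenvalues are cos(2 pi m / n), that tau_2 equals
# the inverse spectral gap, and that the eigentime identity gives (n^2-1)/6.
import math
import numpy as np

n = 16
P = np.zeros((n, n))
for j in range(n):
    P[j, (j - 1) % n] = P[j, (j + 1) % n] = 0.5
eig = np.sort(np.linalg.eigvalsh(P))  # P is symmetric
expected = np.sort([math.cos(2 * math.pi * m / n) for m in range(n)])
assert np.allclose(eig, expected)

tau2 = 1 / (1 - math.cos(2 * math.pi / n))
assert abs(tau2 - 1 / (1 - eig[-2])) < 1e-6   # second-largest eigenvalue
eigentime = sum(1 / (1 - lam) for lam in eig[:-1])  # omit lambda = 1
assert abs(eigentime - (n ** 2 - 1) / 6) < 1e-6     # eigentime identity
```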

If $n$ is even, the discrete-time random walk is periodic. This parity problem vanishes in continuous time, for which we have the formula

 $P_{0}(X(t)=i)=\frac{1}{n}\sum_{m=0}^{n-1}\exp(-t(1-\cos(2\pi m/n)))\cos(2\pi im/n).$ (5.27)

Turning to total variation convergence, we remain in continuous time and consider the distance functions $\bar{d}_{n}(t)$ and $d_{n}(t)$ of Chapter 4 yyy. The reader familiar with the notion of weak convergence of random walks to Brownian motion (on the circle, in this setting) will see immediately that

 $\bar{d}_{n}(tn^{2})\rightarrow\bar{d}_{\infty}(t)$

where the limit is “$\bar{d}$ for Brownian motion on the circle”, which can be written as

 $\bar{d}_{\infty}(t)\equiv 1-2P((t^{1/2}Z)\mbox{ mod }1\ \in(1/4,3/4))$

where $Z$ has the standard Normal distribution. So

 $\tau_{1}\sim cn^{2}$

for the constant $c$ such that $\bar{d}_{\infty}(c)=e^{-1}$, whose numerical value $c\doteq 0.063$ has no real significance.

jjj David: You got $0.054$. Please check. Continue same paragraph:

Similarly

 $d_{n}(tn^{2})\rightarrow d_{\infty}(t)\equiv{1\over 2}\int_{0}^{1}|f_{t}(u)-1|\,du,$

where $f_{t}$ is the density of $(t^{1/2}Z)\mbox{ mod }1$.

As for $\tau_{c}$, the sup in its definition is attained by some $A$ of the form $[0,i-1]$, so

 $\tau_{c}=\max_{i}\frac{\frac{i}{n}(1-\frac{i}{n})}{1/n}=\frac{1}{n}\left\lfloor\frac{n^{2}}{4}\right\rfloor\sim\frac{n}{4}.$

As remarked in Chapter 4 yyy, this provides a counter-example to reversing inequalities in Theorem yyy. But if we consider $\max_{A}(\pi(A)E_{\pi}T_{A})$, the max is attained with $A=[\frac{n}{2}-\alpha n,\frac{n}{2}+\alpha n]$, say, where $0\leq\alpha<1/2$. By Lemma 5.2, for $x\in(-\frac{1}{2}+\alpha,\frac{1}{2}-\alpha)$,

 $E_{\lfloor(x\ \mbox{\rm\scriptsize mod }1)n\rfloor}T_{A}\sim\left({\textstyle\frac{1}{2}}-\alpha-x\right)\left({\textstyle\frac{1}{2}}-\alpha+x\right)n^{2},$

and so

 $E_{\pi}T_{A}\sim n^{2}\int_{-\frac{1}{2}+\alpha}^{\frac{1}{2}-\alpha}\left(\frac{1}{2}-\alpha-x\right)\left(\frac{1}{2}-\alpha+x\right)\,dx=\frac{4(\frac{1}{2}-\alpha)^{3}n^{2}}{3}.$

Thus

 $\max_{A}(\pi(A)E_{\pi}T_{A})\sim n^{2}\sup_{0<\alpha<1/2}\frac{4(\frac{1}{2}-\alpha)^{3}\ 2\alpha}{3}=\frac{9n^{2}}{512},$

consistent with Chapter 4 Open Problem yyy.

xxx level of detail for $\bar{d}$ results, here and later.

Remark. Parameters $\tau^{*}$, $\tau_{0}$, $\tau_{1}$, and $\tau_{2}$ are all $\Theta(n^{2})$ in this example, and in Chapter 6 we’ll see that they are $O(n^{2})$ over the class of regular graphs. However, the exact maximum values over all $n$-vertex regular graphs (or the constants $c$ in the $\sim cn^{2}$ asymptotics) are not known. See Chapter 6 for the natural conjectures.

###### Example 5.8

The $n$-path.

This is just the graph on vertices $\{0,1,2,\ldots,n-1\}$ with an edge between each pair of consecutive integers. If $(\hat{X}_{t})$ is random walk on (all) the integers, then $X_{t}=\phi(\hat{X}_{t})$ is random walk on the $n$-path, for the “concertina” map

 $\phi(i)=\min\left(i\ \mbox{mod}\ (2n-2),\ (2n-2)-(i\ \mbox{mod}\ (2n-2))\right).$

Of course the stationary distribution is not quite uniform:

 $\pi_{i}=\frac{1}{n-1},\ 1\leq i\leq n-2;\ \ \pi_{0}=\pi_{n-1}=\frac{1}{2(n-1)}.$

Again, results for the $n$-path can be deduced from results for the integers. Using Lemma 5.2,

 $E_{i}T_{j}=(j-i)(j+i),\ \ 0\leq i<j\leq n-1.$ (5.28)

From this, or from the more general results in Section 5.1.2, we obtain

 $\max_{ij}E_{i}T_{j}=(n-1)^{2}$ (5.29)
 $\tau^{*}\equiv\max_{ij}(E_{i}T_{j}+E_{j}T_{i})=2(n-1)^{2}$ (5.30)
 $\tau_{0}=\sum_{j}\pi_{j}E_{0}T_{j}=\frac{1}{3}(n-1)^{2}+\frac{1}{6}$ (5.31)
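These exact values can be checked by direct linear algebra (a sketch, not from the text; the helper name and $n=10$ are mine):

```python
# Check of (5.28) and (5.31) for the n-path via the first-step equations.
import numpy as np

n = 10
P = np.zeros((n, n))
P[0, 1] = P[n - 1, n - 2] = 1.0  # reflecting endpoints
for j in range(1, n - 1):
    P[j, j - 1] = P[j, j + 1] = 0.5

def hit(target):
    """Vector of E_i T_target, i = 0..n-1."""
    keep = [j for j in range(n) if j != target]
    A = np.eye(n - 1) - P[np.ix_(keep, keep)]
    full = np.zeros(n)
    full[keep] = np.linalg.solve(A, np.ones(n - 1))
    return full

for j in range(1, n):
    hj = hit(j)
    for i in range(j):
        assert abs(hj[i] - (j - i) * (j + i)) < 1e-8      # (5.28)

pi = np.full(n, 1 / (n - 1))
pi[0] = pi[-1] = 1 / (2 * (n - 1))  # stationary distribution
tau0 = sum(pi[j] * hit(j)[0] for j in range(n))
assert abs(tau0 - ((n - 1) ** 2 / 3 + 1 / 6)) < 1e-8      # (5.31)
```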

We can also regard $X_{t}$ as being derived from random walk $\tilde{X}_{t}$ on the $(2n-2)$-cycle via $X_{t}=\min(\tilde{X}_{t},2n-2-\tilde{X}_{t})$. So we can deduce the spectral representation from that in the previous example:

 $P_{i}(X_{t}=j)=\sqrt{\pi_{j}/\pi_{i}}\sum_{m=0}^{n-1}\lambda_{m}^{t}u_{im}u_{jm}$

where, for $0\leq m\leq n-1$,

 $\lambda_{m}=\cos(\pi m/(n-1))$

and

 $u_{0m}=\sqrt{\pi_{m}};\ \ \ u_{n-1,m}=\sqrt{\pi_{m}}(-1)^{m};$
 $u_{im}=\sqrt{2\pi_{m}}\cos(\pi im/(n-1)),\ \ 1\leq i\leq n-2.$

In particular, the relaxation time is

 $\tau_{2}=\frac{1}{1-\cos(\pi/(n-1))}\sim\frac{2n^{2}}{\pi^{2}}.$

Furthermore, $\bar{d}_{n}(t)=\bar{\tilde{d}}_{2n-2}(t)$ and $d_{n}(t)=\tilde{d}_{2n-2}(t)$ for all $t$, so

 $\bar{d}_{n}(t(2n)^{2})\rightarrow\bar{d}_{\infty}(t)$
 $d_{n}(t(2n)^{2})\rightarrow d_{\infty}(t)$

where the limits are those in the previous example. Thus $\tau_{1}\sim cn^{2}$, where $c\doteq 0.25$ is $4$ times the corresponding constant for the $n$-cycle.

xxx explain: BM on $[0,1]$ and circle described in Chapter 16.

It is easy to see that

 $\tau_{c}\sim n/2.$

In Section 5.3.2 we will see that the $n$-path attains the exact maximum values of our parameters over all $n$-vertex trees.

###### Example 5.9

The complete graph.

For the complete graph on $n$ vertices, the $t$-step probabilities for the chain started at $i$ can be calculated by considering the induced $2$-state chain which indicates whether or not the walk is at $i$. This gives, in discrete time,

 $P_{i}(X_{t}=i)=\frac{1}{n}+\left(1-\frac{1}{n}\right)\left(-\frac{1}{n-1}\right)^{t}$
 $P_{i}(X_{t}=j)=\frac{1}{n}-\frac{1}{n}\left(-\frac{1}{n-1}\right)^{t},\ \ j\neq i$ (5.32)

and, in continuous time,

 $P_{i}(X_{t}=i)=\frac{1}{n}+\left(1-\frac{1}{n}\right)\exp\left(-\frac{nt}{n-1}\right)$
 $P_{i}(X_{t}=j)=\frac{1}{n}-\frac{1}{n}\exp\left(-\frac{nt}{n-1}\right),\ \ j\neq i$ (5.33)

It is clear that the hitting time to $j\neq i$ has Geometric($1/(n-1)$) distribution (in continuous time, Exponential($1/(n-1)$) distribution), and so in particular

 $E_{i}T_{j}=n-1,\ j\neq i.$ (5.34)

Thus we can calculate the parameters

 $\tau^{*}\equiv\max_{ij}(E_{j}T_{i}+E_{i}T_{j})=2(n-1)$ (5.35)
 $\max_{ij}E_{i}T_{j}=n-1$ (5.36)
 $\tau_{0}\equiv n^{-1}\sum_{j}E_{i}T_{j}=(n-1)^{2}/n.$ (5.37)
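The two-state induced-chain formula (5.32) is easy to confirm by comparing against matrix powers (an independent check of my own, with $n=7$):

```python
# Check of (5.32) for K_n: compare matrix powers of P with the two-state
# induced-chain formula for the t-step transition probabilities.
import numpy as np

n = 7
P = (np.ones((n, n)) - np.eye(n)) / (n - 1)  # uniform over other vertices
Pt = np.eye(n)
for t in range(1, 9):
    Pt = Pt @ P
    ret = 1 / n + (1 - 1 / n) * (-1 / (n - 1)) ** t
    off = 1 / n - (1 / n) * (-1 / (n - 1)) ** t
    assert abs(Pt[0, 0] - ret) < 1e-12
    assert abs(Pt[0, 1] - off) < 1e-12
```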

From (5.32) the discrete-time eigenvalues are $\lambda_{2}=\lambda_{3}=\cdots=\lambda_{n}=-1/(n-1).$ So the relaxation time is

 $\tau_{2}=(n-1)/n.$ (5.38)

The total variation functions are

 $\bar{d}(t)=\exp\left(-\frac{nt}{n-1}\right),\ \ d(t)=\frac{n-1}{n}\exp\left(-\frac{nt}{n-1}\right),$

so

 $\tau_{1}=(n-1)/n.$ (5.39)

It is easy to check

 $\tau_{c}=(n-1)/n.$

We have already proved (Chapter 3 yyy) that the complete graph attains the exact minimum of $\tau^{*},\max_{ij}E_{i}T_{j},\tau_{0}$, and $\tau_{2}$ over all $n$-vertex graphs. The same holds for $\tau_{c}$, by considering (in an arbitrary graph) a vertex of minimum degree.

###### Example 5.10

The $n$-star.

This is the graph on $n\geq 3$ vertices $\{0,1,2,\ldots,n-1\}$ with edges $\{0,1\},\{0,2\},\{0,3\},\ldots,\{0,n-1\}$. The stationary distribution is

 $\pi_{0}=1/2,\ \pi_{i}=1/(2(n-1)),\ \ i\geq 1.$

In discrete time the walk is periodic. Starting from the leaf $1$, the walk at even times is simply independent and uniform on the leaves, so

 $P_{1}(X_{2t}=i)=1/(n-1),\ i\geq 1$

for $t\geq 1$. At odd times, the walk is at $0$. Writing these $t$-step probabilities as

 $P_{1}(X_{t}=i)=\frac{1}{2(n-1)}(1+(-1)^{t})1_{(i\geq 1)}+\frac{1}{2}(1+(-1)^{t+1})1_{(i=0)},\ \ t\geq 1$

we see that the discrete-time eigenvalues are $\lambda_{2}=\cdots=\lambda_{n-1}=0,\ \lambda_{n}=-1$ and hence the relaxation time is

 $\tau_{2}=1.$

The mean hitting times are

 $E_{1}T_{0}=1$
 $E_{1}T_{j}=2(n-1),\ \ j\geq 2,$

where the latter comes from the fact that $T_{j}/2$ has Geometric($1/(n-1)$) distribution, using the “independent uniform on leaves at even times” property. Then

 $E_{0}T_{1}=2n-3.$

We can calculate the parameters

 $\tau^{*}\equiv\max_{ij}(E_{i}T_{j}+E_{j}T_{i})=4n-4$ (5.40)
 $\max_{ij}E_{i}T_{j}=2n-2$ (5.41)
 $\tau_{0}=\sum_{j}E_{0}T_{j}\pi_{j}=n-{\textstyle\frac{3}{2}}.$ (5.42)
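These star formulas can be verified by solving the first-step equations (a numerical sketch of my own; $n=9$ and the helper name are arbitrary):

```python
# Check of the n-star mean hitting times and tau_0 = n - 3/2 in (5.42),
# via the first-step linear equations.
import numpy as np

n = 9  # vertex 0 is the center, 1..n-1 are the leaves
P = np.zeros((n, n))
P[0, 1:] = 1 / (n - 1)
P[1:, 0] = 1.0

def hit(target):
    """Vector of E_i T_target, i = 0..n-1."""
    keep = [j for j in range(n) if j != target]
    A = np.eye(n - 1) - P[np.ix_(keep, keep)]
    full = np.zeros(n)
    full[keep] = np.linalg.solve(A, np.ones(n - 1))
    return full

assert abs(hit(0)[1] - 1) < 1e-10             # E_1 T_0 = 1
assert abs(hit(2)[1] - 2 * (n - 1)) < 1e-10   # E_1 T_j = 2(n-1)
assert abs(hit(1)[0] - (2 * n - 3)) < 1e-10   # E_0 T_1 = 2n-3

pi = np.full(n, 1 / (2 * (n - 1)))
pi[0] = 0.5
tau0 = sum(pi[j] * hit(j)[0] for j in range(n))
assert abs(tau0 - (n - 1.5)) < 1e-10          # (5.42)
```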

In continuous time we find

 $\displaystyle P_{1}(X_{t}=1)$ $\displaystyle=$ $\displaystyle\frac{1}{2(n-1)}(1+e^{-2t})+\frac{n-2}{n-1}e^{-t}$ $\displaystyle P_{1}(X_{t}=i)$ $\displaystyle=$ $\displaystyle\frac{1}{2(n-1)}(1+e^{-2t})-\frac{1}{n-1}e^{-t},\ i>1$ $\displaystyle P_{1}(X_{t}=0)$ $\displaystyle=$ $\displaystyle{\textstyle\frac{1}{2}}(1-e^{-2t})$ $\displaystyle P_{0}(X_{t}=0)$ $\displaystyle=$ $\displaystyle{\textstyle\frac{1}{2}}(1+e^{-2t})$ $\displaystyle P_{0}(X_{t}=1)$ $\displaystyle=$ $\displaystyle\frac{1}{2(n-1)}(1-e^{-2t})$

 $\bar{d}(t)=e^{-t},\ \ d(t)=\frac{1}{2(n-1)}e^{-2t}+\frac{n-2}{n-1}e^{-t},$

from which

 $\tau_{1}=1.$

Clearly (isolate a leaf)

 $\tau_{c}=1-\frac{1}{2(n-1)}.$

We shall see in Section 5.3.2 that the $n$-star attains the exact minimum of our parameters over all $n$-vertex trees.

###### Example 5.11

The barbell.

Here is a graph on $n=2m_{1}+m_{2}$ vertices ($m_{1}\geq 2,m_{2}\geq 0$). Start with two complete graphs on $m_{1}$ vertices. Distinguish vertices $v_{l}\neq v_{L}$ in one graph (“the left bell”) and vertices $v_{R}\neq v_{r}$ in the other graph (“the right bell”). Then connect the graphs via a path $v_{L},w_{1},w_{2},\ldots,w_{m_{2}},v_{R}$ containing $m_{2}$ new vertices.

xxx picture

A point of the construction is that the mean time to go from a typical point $v_{l}$ in the left bell to a typical point $v_{r}$ in the right bell is roughly $m_{1}^{2}m_{2}$. To argue this informally, it takes mean time about $m_{1}$ to hit $v_{L}$; then there is chance $1/m_{1}$ to hit $w_{1}$, so it takes mean time about $m_{1}^{2}$ to hit $w_{1}$; and from $w_{1}$ there is chance about $1/m_{2}$ to hit the right bell before returning into the left bell, so it takes mean time about $m_{1}^{2}m_{2}$ to enter the right bell.

The exact result, argued below, is

 $\max_{ij}E_{i}T_{j}=E_{v_{l}}T_{v_{r}}=m_{1}(m_{1}-1)(m_{2}+1)+(m_{2}+1)^{2}+4(m_{1}-1)+4\frac{m_{2}+1}{m_{1}}.$ (5.43)
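The exact value (5.43) can be confirmed numerically on a small barbell (a sketch, not from the text; the vertex numbering and the instance $m_{1}=5$, $m_{2}=4$ are mine):

```python
# Numerical check of (5.43): build the barbell, solve the first-step
# equations for E_{v_l} T_{v_r}, and compare with the closed form.
import numpy as np

m1, m2 = 5, 4
n = 2 * m1 + m2
# 0..m1-1: left bell (v_l = 0, v_L = m1-1)
# m1..m1+m2-1: bar vertices w_1..w_{m2}
# m1+m2..n-1: right bell (v_R = m1+m2, v_r = n-1)
adj = np.zeros((n, n))
for a in range(m1):
    for b in range(a + 1, m1):
        adj[a, b] = adj[b, a] = 1
for a in range(m1 + m2, n):
    for b in range(a + 1, n):
        adj[a, b] = adj[b, a] = 1
bar = [m1 - 1] + list(range(m1, m1 + m2)) + [m1 + m2]
for a, b in zip(bar, bar[1:]):
    adj[a, b] = adj[b, a] = 1
P = adj / adj.sum(axis=1, keepdims=True)

target = n - 1  # v_r
keep = [j for j in range(n) if j != target]
A = np.eye(n - 1) - P[np.ix_(keep, keep)]
h = np.linalg.solve(A, np.ones(n - 1))
lhs = h[0]  # E_{v_l} T_{v_r}
rhs = (m1 * (m1 - 1) * (m2 + 1) + (m2 + 1) ** 2
       + 4 * (m1 - 1) + 4 * (m2 + 1) / m1)
assert abs(lhs - rhs) < 1e-8
```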

It is cleaner to consider asymptotics as

 $n\rightarrow\infty,\ m_{1}/n\rightarrow\alpha,\ m_{2}/n\rightarrow 1-2\alpha$

with $0<\alpha<1/2$. Then

 $\max_{ij}E_{i}T_{j}\sim\alpha^{2}(1-2\alpha)n^{3}\sim\frac{n^{3}}{27}\mbox{ for }\alpha=1/3$

where $\alpha=1/3$ is the asymptotic maximizer here and for the other parameters below. Similarly

 $\tau^{*}\sim 2\alpha^{2}(1-2\alpha)n^{3}\sim\frac{2n^{3}}{27}\mbox{ for }\alpha=1/3.$

The stationary distribution $\pi$ puts mass $\rightarrow 1/2$ on each bell. Also, by (5.45)–(5.47) below, $E_{v_{l}}T_{v}=o(E_{v_{l}}T_{v_{r}})$ uniformly for vertices $v$ in the left bell and $E_{v_{l}}T_{v}\sim E_{v_{l}}T_{v_{r}}\sim\alpha^{2}(1-2\alpha)n^{3}$ uniformly for vertices $v$ in the right bell. Hence

 $\tau_{0}\equiv\sum_{v}\pi_{v}E_{v_{l}}T_{v}\sim\frac{1}{2}E_{v_{l}}T_{v_{r}}% \sim\frac{1}{2}\alpha^{2}(1-2\alpha)n^{3}$

and so we have proved the “$\tau_{0}$” part of

 $\mbox{each of }\{\tau_{0},\tau_{1},\tau_{2}\}\sim{\textstyle\frac{1}{2}}\alpha^{2}(1-2\alpha)n^{3}\sim\frac{n^{3}}{54}\mbox{ for }\alpha=1/3.$ (5.44)

Consider the relaxation time $\tau_{2}$. For the function $g$ defined to be $+1$ on the left bell, $-1$ on the right bell and linear on the bar, the Dirichlet form gives

 $\mbox{{\cal E}}(g,g)=\frac{2}{(m_{2}+1)(m_{1}(m_{1}-1)+m_{2}+1)}\sim\frac{2}{\alpha^{2}(1-2\alpha)n^{3}}.$

Since the variance of $g$ tends to $1$, the extremal characterization of $\tau_{2}$ shows that ${1\over 2}\alpha^{2}(1-2\alpha)n^{3}$ is an asymptotic lower bound for $\tau_{2}$. But in general $\tau_{2}\leq\tau_{0}$, so having already proved (5.44) for $\tau_{0}$ we must have the same asymptotics for $\tau_{2}$. Finally, without going into details, it is not hard to show that for fixed $t>0$,

 $\bar{d}_{n}\left({1\over 2}\alpha^{2}(1-2\alpha)n^{3}t\right)\rightarrow e^{-t},\ d_{n}\left({1\over 2}\alpha^{2}(1-2\alpha)n^{3}t\right)\rightarrow{1\over 2}e^{-t}$

from which the “$\tau_{1}$” assertion of (5.44) follows.

jjj Proof? (It’s not so terrifically easy, either! How much do we want to include?) I’ve (prior to writing this) carefully written out an argument similar to the present one, also involving approximate exponentiality of a hitting time distribution, for the balanced $r$-tree below. Here is a rough sketch for the argument for $\bar{d}$ here; note that it uses results about the next example (the lollipop). (The argument for $d$ is similar.) The pair $(v_{l},v_{r})$ of initial states achieves $\bar{d}(t)$ for every $t$ (although the following can be made to work without knowing this “obvious fact” a priori). Couple chains starting in these states by having them move symmetrically in the obvious fashion. Certainly these copies will couple by the time $T$ the copy started at $v_{l}$ has reached the center vertex $w_{m_{2}/2}$ of the bar. We claim that the distribution of $T$ is approximately exponential, and its expected value is $\sim\frac{1}{2}m^{2}_{1}m_{2}\sim\frac{1}{2}\alpha^{2}(1-2\alpha)n^{3}$ by the first displayed result for the lollipop example, with $m_{2}$ changed to $m_{2}/2$ there. (In keeping with this observation, I’ll refer to the “half-stick” lollipop in the next paragraph.)

jjj (cont.) To get approximate exponentiality for the distribution of $T$, first argue easily that it’s approximately the same as that of $T_{w_{m_{2}/2}}$ for the half-stick lollipop started in stationarity. But that distribution is, in turn, approximately exponential by Chapter 3 Proposition yyy, since $\tau_{2}=\Theta(n^{2})=o(n^{3})$ for the half-stick lollipop.

Proof of (5.43). The mean time in question is the sum of the following mean times:

 $E_{v_{l}}T_{v_{L}}=m_{1}-1$ (5.45)
 $E_{v_{L}}T_{v_{R}}=m_{1}(m_{1}-1)(m_{2}+1)+(m_{2}+1)^{2}$ (5.46)
 $E_{v_{R}}T_{v_{r}}=3(m_{1}-1)+4\frac{m_{2}+1}{m_{1}}.$ (5.47)

Here (5.45) is just the result (5.34) for the complete graph. And (5.46) is obtained by summing over the edges of the “bar” the expression

 $E_{w_{i}}T_{w_{i+1}}=m_{1}(m_{1}-1)+2i+1,\ \ i=0,\ldots,m_{2}$ (5.48)

obtained from the general formula for mean hitting time across an essential edge of a graph (Lemma 5.1), where $w_{0}=v_{L}$ and $w_{m_{2}+1}=v_{R}$. To argue (5.47), we start with the $1$-step recurrence

 $E_{v_{R}}T_{v_{r}}=1+\frac{1}{m_{1}}E_{w_{m_{2}}}T_{v_{r}}+\frac{m_{1}-2}{m_{1}}E_{x}T_{v_{r}}$

where $x$ denotes a vertex of the right bell distinct from $v_{R}$ and $v_{r}$. Now

 $E_{w_{m_{2}}}T_{v_{r}}=m_{1}(m_{1}-1)+2m_{2}+1+E_{v_{R}}T_{v_{r}}$

using the formula (5.48) for the mean passage time from $w_{m_{2}}$ to $v_{R}$. Starting from $x$, the time until a hit on either $v_{R}$ or $v_{r}$ has Geometric($2/(m_{1}-1)$) distribution, and the two vertices are equally likely to be hit first. So

 $E_{x}T_{v_{r}}=(m_{1}-1)/2+{\textstyle\frac{1}{2}}E_{v_{R}}T_{v_{r}}.$

The last three expressions give an equation for $E_{v_{R}}T_{v_{r}}$ whose solution is (5.47). And it is straightforward to check that $E_{v_{l}}T_{v_{r}}$ does achieve the maximum, using (5.45)–(5.47) to bound the general $E_{i}T_{j}$.

It is straightforward to check

 $\tau_{c}\sim\frac{\alpha^{2}n^{2}}{2}.$
###### Example 5.12

The lollipop.

xxx picture

This is just the barbell without the right bell. That is, we start with a complete graph on $m_{1}$ vertices and add $m_{2}$ new vertices in a path. So there are $n=m_{1}+m_{2}$ vertices, and $w_{m_{2}}$ is now a leaf. In this example, by (5.45) and (5.46), with $m_{2}$ in place of $m_{2}+1$, we have

 $\max_{ij}E_{i}T_{j}=E_{v_{l}}T_{w_{m_{2}}}=m_{1}(m_{1}-1)m_{2}+(m_{1}-1)+m_{2}^{2}.$
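This exact formula can be checked the same way as for the barbell (an illustration of my own; the instance $m_{1}=6$, $m_{2}=5$ is arbitrary):

```python
# Numerical check of the lollipop formula
# E_{v_l} T_{w_{m2}} = m1(m1-1)m2 + (m1-1) + m2^2.
import numpy as np

m1, m2 = 6, 5
n = m1 + m2
adj = np.zeros((n, n))
for a in range(m1):          # complete graph on 0..m1-1 (v_l = 0)
    for b in range(a + 1, m1):
        adj[a, b] = adj[b, a] = 1
path = [m1 - 1] + list(range(m1, n))  # v_L = m1-1, then w_1..w_{m2}
for a, b in zip(path, path[1:]):
    adj[a, b] = adj[b, a] = 1
P = adj / adj.sum(axis=1, keepdims=True)

keep = list(range(n - 1))    # target is the leaf w_{m2} = n-1
A = np.eye(n - 1) - P[np.ix_(keep, keep)]
h = np.linalg.solve(A, np.ones(n - 1))
assert abs(h[0] - (m1 * (m1 - 1) * m2 + (m1 - 1) + m2 ** 2)) < 1e-8
```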

In the asymptotic setting with

 $n\rightarrow\infty,m_{1}/n\rightarrow\alpha,m_{2}/n\rightarrow 1-\alpha$

where $0<\alpha<1$, we have

 $\max_{ij}E_{i}T_{j}\sim\alpha^{2}(1-\alpha)n^{3}\sim\frac{4n^{3}}{27}\mbox{ for }\alpha=2/3,$ (5.49)

where $\alpha=2/3$ gives the asymptotic maximum.

Let us discuss the other parameters only briefly, in the asymptotic setting. Clearly $E_{w_{m_{2}}}T_{v_{L}}=m^{2}_{2}\sim(1-\alpha)^{2}n^{2}$ and it is not hard to check

 $E_{w_{m_{2}}}T_{v_{l}}=\max_{v}E_{v}T_{v_{l}}\sim(1-\alpha)^{2}n^{2},$ (5.50)

whence

 $\tau^{*}=\max_{ij}(E_{i}T_{j}+E_{j}T_{i})=E_{v_{l}}T_{w_{m_{2}}}+E_{w_{m_{2}}}T_{v_{l}}\sim\alpha^{2}(1-\alpha)n^{3}.$

Because the stationary distribution puts mass $\Theta(1/n)$ on the “bar”, (5.50) is also enough to show that $\tau_{0}=O(n^{2})$. So by the general inequalities between our parameters, to show

 $\mbox{ each of }\{\tau_{0},\tau_{1},\tau_{2}\}=\Theta(n^{2})$ (5.51)

it is enough to show that $\tau_{2}=\Omega(n^{2})$. But for the function $g$ defined to be $0$ on the “bell”, $1$ at the end $w_{m_{2}}$ of the “bar,” and linear along the bar, a brief calculation gives

 $\mbox{{\cal E}}(g,g)=\Theta(n^{-3}),\ \ {\rm var}_{\pi}g=\Theta(n^{-1})$

so that $\tau_{2}\geq({\rm var}_{\pi}g)/\mbox{{\cal E}}(g,g)=\Omega(n^{2})$, as required.

Finally, in the asymptotic setting it is straightforward to check that $\tau_{c}$ is achieved by $A=\{w_{1},\ldots,w_{m_{2}}\}$, giving

 $\tau_{c}\sim 2(1-\alpha)n.$

Remark. The barbell and lollipop are the natural candidates for the $n$-vertex graphs which maximize our parameters. The precise conjectures and known results will be discussed in Chapter 6.

jjj We need to put somewhere—Chapter 4 on $\tau_{c}$? Chapter 6 on max parameters over $n$-vertex graphs? in the barbell example?—the fact that $\mbox{\bf max\,}\tau_{c}$ is attained, when $n$ is even, by the barbell with $m_{2}=0$, the max value being $(n^{2}-2n+4)/8\sim n^{2}/8$. Similarly, when $n$ is odd, the maximizing graph is formed by joining complete graphs on $\lfloor n/2\rfloor$ and $\lceil n/2\rceil$ vertices respectively by a single edge, and the max value is easy to write down (I’ve kept a record) but not so pretty; however, this value too is $\sim n^{2}/8$, which is probably all we want to say anyway. Here is the first draft of a proof:

For random walk on an unweighted graph, $\tau_{c}$ is the maximum over nonempty proper subsets $A$ of the ratio

 $\frac{(\mbox{deg\,}A)(\mbox{deg\,}A^{c})}{2|\mbox{{\cal E}}|(A,A^{c})},$ (5.52)

where $\mbox{deg\,}A$ is defined to be the sum of the degrees of vertices in $A$ and $(A,A^{c})$ is the number of directed cut edges from $A$ to $A^{c}$.

jjj Perhaps it would be better for exposition to stick with undirected edges and introduce factor $1/2$?

Maximizing now over choice of graphs, the max in question is no larger than the maximum $M$, over all choices of $n_{1}>0$, $n_{2}>0$, $e_{1}$, $e_{2}$, and $e^{\prime}$ satisfying $n_{1}+n_{2}=n$ and $0\leq e_{i}\leq{n_{i}\choose 2}$ for $i=1,2$ and $1\leq e^{\prime}\leq n_{1}n_{2}$, of the ratio

 $\frac{(2e_{1}+e^{\prime})(2e_{2}+e^{\prime})}{2(e_{1}+e_{2}+e^{\prime})e^{\prime}}.$ (5.53)

(We don’t claim equality because we don’t check that each $n_{i}$-graph is connected. But we’ll show that $M$ is in fact achieved by the connected graph claimed above.)

Simple calculus shows that the ratio (5.53) is (as one would expect) increasing in $e_{1}$ and $e_{2}$ and decreasing in $e^{\prime}$. Thus, for given $n_{1}$, (5.53) is maximized by considering complete graphs of size $n_{1}$ and $n_{2}=n-n_{1}$ joined by a single edge. Call the maximum value $M(n_{1})$. If $n$ is even, it is then easy to see that $M(n_{1})$ is maximized by $n_{1}=n/2$, giving $M=(n^{2}-2n+4)/8$, as desired.

For the record, here are the slightly tricky details if $n$ is odd. Write $\nu=n/2$ and $n_{1}=\nu-y$ and put $x=y^{2}$. A short calculation gives $M(n_{1})=1+g(x)$, where $g(x)\equiv[(a-x)(b-x)-1]/(2x+c)$ with $a=\nu^{2}$, $b=(\nu-1)^{2}$, and $c=2\nu(\nu-1)+2$. Easy calculus shows that $g$ is $U$-shaped over $[0,\nu]$ and then that $g(1/4)\geq g(\nu^{2})$. Thus $M(n_{1})$ is maximized when $n_{1}=\nu-\frac{1}{2}=\lfloor n/2\rfloor$.
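For a small even $n$ the maximization of (5.53) can also be done by brute force (an independent check of my own, exercising no calculus): scan all admissible $(n_{1},e_{1},e_{2},e^{\prime})$ and confirm the maximum is attained by two complete graphs on $n/2$ vertices joined by a single edge.

```python
# Brute-force scan of the ratio (5.53) over all admissible parameters,
# compared with the value of two K_{n/2}'s joined by a single edge.
from fractions import Fraction
from math import comb

n = 8
best = Fraction(0)
for n1 in range(1, n):
    n2 = n - n1
    for e1 in range(comb(n1, 2) + 1):
        for e2 in range(comb(n2, 2) + 1):
            for ep in range(1, n1 * n2 + 1):
                r = Fraction((2 * e1 + ep) * (2 * e2 + ep),
                             2 * (e1 + e2 + ep) * ep)
                best = max(best, r)

# value for complete graphs of size n/2 joined by one edge:
# e1 = e2 = C(n/2, 2), e' = 1, giving (2e+1)^2 / (2(2e+1)) = (2e+1)/2
claimed = Fraction(2 * comb(n // 2, 2) + 1, 2)
assert best == claimed
```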

###### Example 5.13

The necklace.

This graph, pictured below, is $3$-regular with $n=4m+2$ vertices, consisting of $m$ subgraphs linked in a line, the two end subgraphs being different from the intervening ones. This is an artificial graph designed to mimic the $n$-path while being regular, and this construction (or some similar one) is the natural candidate for the $n$-vertex graph which asymptotically maximizes our parameters over regular graphs.

This example affords a nice illustration of use of the commute interpretation of resistance. Applying voltage $1$ at vertex $a$ and voltage $0$ at $e$, a brief calculation gives the potentials at intervening vertices as

 $g(b)=4/7,\ g(c)=5/7,\ g(d)=4/7$

and gives the effective resistance $r_{ae}=7/8$. Since the effective resistance between $f$ and $g$ equals $1$, we see the maximal effective resistance is

 $r_{ah}={\textstyle\frac{7}{8}}+(2m-3)+{\textstyle\frac{7}{8}}=2m-{\textstyle% \frac{5}{4}}.$

So

 $\tau^{*}=E_{a}T_{h}+E_{h}T_{a}=3\times(4m+2)\times\left(2m-\frac{5}{4}\right)\sim\frac{3n^{2}}{2}.$

One could do elementary exact calculations of the other parameters, but it is simpler to get asymptotics from the Brownian motion limit, which implies that each parameter (excluding $\tau_{c}$) in this example is asymptotic to the same constant multiple of the corresponding parameter for the $n$-path. So

 $\tau_{0}\sim\frac{n^{2}}{4},\ \tau_{2}\sim\frac{3n^{2}}{2\pi^{2}}.$

jjj I haven’t checked this carefully, and I also have abstained from writing anything further about $\tau_{1}$.

Finally, it is clear that $\tau_{c}\sim 3n/4$, achieved by breaking a “link” between “beads” in the middle of the necklace.

###### Example 5.14

The balanced $r$-tree.

Take $r\geq 2$ and $h\geq 1$. The balanced $r$-tree is the rooted tree where all leaves are at distance $h$ from the root, where the root has degree $r$, and where the other internal vertices have degree $r+1$. Call $h$ the height of the tree. For $h=1$ we have the $(r+1)$-star, and for $r=2$ we have the balanced binary tree. The number of vertices is

 $n=1+r+r^{2}+\cdots+r^{h}=(r^{h+1}-1)/(r-1).$

The chain $\hat{X}$ induced (in the sense of Chapter 4 Section yyy) by the function

 $f(i)=h-(\mbox{distance from the root to }i)$

is random walk on $\{0,\ldots,h\}$, biased towards $0$, with reflecting barriers, as in Example 5.5 with

 $\rho=1/r.$

In fact, the transition probabilities for $X$ can be expressed in terms of $\hat{X}$ as follows. Given vertices $v_{1}$ and $v_{2}$ with $f(v_{1})=f_{1}$ and $f(v_{2})=f_{2}$, the paths $[\mbox{root},v_{1}]$ and $[\mbox{root},v_{2}]$ intersect in the path $[\mbox{root},v_{3}]$, say, with $f(v_{3})=f_{3}\geq f_{1}\vee f_{2}$. Then

 $P_{v_{1}}(X_{t}=v_{2})=\sum_{m=f_{3}}^{h}P_{f_{1}}\left(\max_{0\leq s\leq t}% \hat{X}_{s}=m,\hat{X}_{t}=f_{2}\right)r^{-(m-f_{2})}.$

As a special case, suppose that $v_{1}$ is on the path from the root to $v_{2}$; in this case $v_{3}=v_{1}$. Using the essential edge lemma (or Theorem 5.20 below) we can calculate

 $E_{v_{2}}T_{v_{1}}=2(r-1)^{-2}(r^{f_{1}+1}-r^{f_{2}+1})-2(r-1)^{-1}(f_{1}-f_{2% })-(f_{1}-f_{2}),$
 $E_{v_{1}}T_{v_{2}}=2(n-1)(f_{1}-f_{2})-E_{v_{2}}T_{v_{1}}.$ (5.54)

Using this special case we can deduce the general formula for mean hitting times. Indeed, $E_{v_{1}}T_{v_{2}}=E_{v_{1}}T_{v_{3}}+E_{v_{3}}T_{v_{2}}$, which leads to

 $E_{v_{1}}T_{v_{2}}=2(n-1)(f_{3}-f_{2})+2(r-1)^{-2}(r^{f_{2}+1}-r^{f_{1}+1})-2(r-1)^{-1}(f_{2}-f_{1})-(f_{2}-f_{1}).$ (5.55)

The maximum value $2h(n-1)$ is attained when $v_{1}$ and $v_{2}$ are leaves and $v_{3}$ is the root. So

 ${\textstyle\frac{1}{2}}\tau^{*}=\max_{v,x}E_{v}T_{x}=2(n-1)h.$ (5.56)

(The $\tau^{*}$ part is simpler via (5.88) below.) Another special case is that, for a leaf $v$,

 $E_{v}T_{\mbox{\scriptsize root}}=2(r-1)^{-2}(r^{h+1}-r)-2h(r-1)^{-1}-h\sim 2n/(r-1),$ (5.57)
 $E_{\mbox{\scriptsize root}}T_{v}=2(n-1)h-E_{v}T_{\mbox{\scriptsize root}}\sim 2nh$ (5.58)

where asymptotics are as $h\rightarrow\infty$ for fixed $r$. Since $E_{\mbox{\scriptsize root}}T_{w}$ is decreasing in $f(w)$, it follows that

 $\tau_{0}=\sum_{w}\pi_{w}E_{\mbox{\scriptsize root}}T_{w}\leq(1+o(1))2nh.$

On the other hand, we claim $\tau_{0}\geq(1+o(1))2nh$, so that

 $\tau_{0}\sim 2nh.$

To sketch the proof, given a vertex $w$, let $v$ be a leaf such that $w$ lies on the path from $v$ to the root. Then

 $E_{\mbox{\scriptsize root}}T_{w}=E_{\mbox{\scriptsize root}}T_{v}-E_{w}T_{v},$

and $E_{w}T_{v}\leq 2(n-1)f(w)$ by (5.54). But the stationary distribution puts nearly all its mass on vertices $w$ with $f(w)$ of constant order, and $n=o(nh)$.

We claim next that

 $\tau_{1}\sim\tau_{2}\sim 2n/(r-1).$

Since $\tau_{2}\leq\tau_{1}$, it is enough to show

 $\tau_{1}\leq(1+o(1))\frac{2n}{r-1}$ (5.59)

and

 $\tau_{2}\geq(1+o(1))\frac{2n}{r-1}.$ (5.60)

Proof of (5.59). Put

 $t_{n}\equiv\frac{2n}{r-1}$

for brevity. We begin the proof by recalling the results (5.22) and (5.19) for the induced walk $\hat{X}$:

 $\hat{\tau}_{2}\rightarrow\frac{(r+1)}{(r^{1/2}-1)^{2}},$
 $E_{\hat{\pi}}\hat{T}_{h}\sim\frac{2r^{h+1}}{(r-1)^{2}}\sim t_{n}.$ (5.61)

By Proposition yyy of Chapter 3,

 $\sup_{t}\left|P_{\hat{\pi}}(\hat{T}_{h}>t)-\exp\left(-\frac{t}{E_{\hat{\pi}}\hat{T}_{h}}\right)\right|\leq\frac{\hat{\tau}_{2}}{E_{\hat{\pi}}\hat{T}_{h}}=\Theta(n^{-1})=o(1).$ (5.62)

For $\hat{X}$ started at $0$, let $\hat{S}$ be a stopping time at which the chain has exactly the stationary distribution. Then, for $0\leq s\leq t$,

 $P_{0}(\hat{T}_{h}>t)\leq P_{0}(\hat{S}>s)+P_{\hat{\pi}}(\hat{T}_{h}>t-s).$

Since $\hat{\tau}^{(2)}_{1}=O(h)=O(\log n)$ by (5.23), we can arrange to have $E_{0}\hat{S}=O(\log n)$. Fixing $\epsilon>0$ and choosing $t=(1+\epsilon)t_{n}$ and (say) $s=(\log n)^{2}$, (5.62) and (5.61) in conjunction with Markov’s inequality yield

 $P_{0}(\hat{T}_{h}>(1+\epsilon)t_{n})\leq\exp\left[-\frac{(1+\epsilon)t_{n}-(\log n)^{2}}{E_{\hat{\pi}}\hat{T}_{h}}\right]+O((\log n)^{-1})+O(n^{-1})\rightarrow e^{-(1+\epsilon)}.$

Returning to the continuous-time walk on the tree, for $n$ sufficiently large we have

 $P_{v}(T_{\mbox{\scriptsize root}}>(1+\epsilon)t_{n})\leq P_{0}(\hat{T}_{h}>(1+\epsilon)t_{n})\leq e^{-1}$

for every vertex $v$. Now a simple coupling argument (jjj spell out details?: Couple the induced walks and the tree-walks will agree when the induced walk starting farther from the origin has reached the origin) shows that

 $\bar{d}_{n}((1+\epsilon)t_{n})\leq e^{-1}$

for all large $n$. Hence $\tau_{1}\leq(1+\epsilon)t_{n}$ for all large $n$, and (5.59) follows.

Proof of (5.60).

jjj [This requires further exposition in both Chapters 3 and 4-1. In Chapter 3, it needs to be made clear that one of the inequalities having to do with CM hitting time distributions says precisely that $E_{\alpha_{A}}T_{A}\geq E_{\pi}T_{A}/\pi(A^{c})\geq E_{\pi}T_{A}$. In Chapter 4-1 (2/96 version), it needs to be noted that Lemma 2(a) (concerning $\tau_{2}$ for the joining of two copies of a graph) extends to the joining of any finite number of copies.]

Let $G$ denote a balanced $r$-tree of height $h$. Let $G^{\prime\prime}$ denote a balanced $r$-tree of height $h-1$ with root $y$ and construct a tree $G^{\prime}$ from $G^{\prime\prime}$ by adding an edge from $y$ to an additional vertex $z$. We can construct $G$ by joining $r$ copies of $G^{\prime}$ at the vertex $z$, which becomes the root of $G$. Let $\pi^{\prime}$ and $\pi^{\prime\prime}$ denote the respective stationary distributions for the random walks on $G^{\prime}$ and $G^{\prime\prime}$, and use the notation $T^{\prime}$ and $T^{\prime\prime}$, respectively, for hitting times on these graphs. By Chapter 4 jjj,

 $\tau_{2}=E_{\alpha^{\prime}}T^{\prime}_{z}$ (5.63)

where $\alpha^{\prime}$ is the quasistationary distribution on $G^{\prime}$ associated with the hitting time $T^{\prime}_{z}$. By Chapter 3 jjj, the expectation (5.63) is no smaller than $E_{\pi^{\prime}}T^{\prime}_{z}$, which by the collapsing principle equals

 $\pi^{\prime}(G^{\prime\prime})\left(E_{\pi^{\prime\prime}}T^{\prime\prime}_{y}+E_{y}T^{\prime}_{z}\right)=\pi^{\prime}(G^{\prime\prime})\left(E_{\pi^{\prime\prime}}T^{\prime\prime}_{y}+E_{y}T_{z}\right).$

But it is easy to see that this last quantity equals $(1+o(1))E_{\pi}T_{z}$, which is asymptotically equivalent to $2n/(r-1)$ by (5.61).

From the discussion at the beginning of Section 5.3.1, it follows that $\tau_{c}$ is achieved at any of the $r$ subtrees of the root. This gives

 $\tau_{c}=\frac{(2r^{h}-r-1)(2r^{h}-1)}{2r(r^{h}-1)}\sim\frac{2n}{r}.$
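This expression can be checked mechanically by counting. The sketch below (ours, not part of the text) uses the quotient form $\tau(A)=\mbox{deg}\,A\cdot\mbox{deg}\,A^{c}/(2|\mbox{{\cal E}}|\,(A,A^{c}))$, which is our reading of the notation of (5.52); the function names are hypothetical.

```python
from fractions import Fraction

def tree_tau_subtree(r, h):
    # Balanced r-tree of height h; A = one of the r subtrees of the root.
    n = (r**(h + 1) - 1) // (r - 1)       # number of vertices
    E = n - 1                             # a tree has n - 1 edges
    size_A = (r**h - 1) // (r - 1)        # vertices in one root-subtree
    edges_in_A = size_A - 1               # the subtree is itself a tree
    cut = 1                               # a single edge joins A to the root
    deg_A = 2 * edges_in_A + cut          # sum of degrees over A
    deg_Ac = 2 * E - deg_A
    # quotient form tau(A) = deg A * deg A^c / (2|E| * (A, A^c))
    return Fraction(deg_A * deg_Ac, 2 * E * cut)

def tau_c_formula(r, h):
    # the displayed closed form (2r^h - r - 1)(2r^h - 1) / (2r(r^h - 1))
    return Fraction((2 * r**h - r - 1) * (2 * r**h - 1), 2 * r * (r**h - 1))
```

Both expressions agree exactly for any $r\geq 2$, $h\geq 1$.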

An extension of the balanced $r$-tree example is treated in Section 5.2.1 below.

###### Example 5.15

The $d$-cube.

This is a graph with vertex-set ${\bf I}=\{0,1\}^{d}$ and hence with $n=2^{d}$ vertices. Write ${\bf i}=(i_{1},\ldots,i_{d})$ for a vertex, and write $|{\bf i}-{\bf j}|=\sum_{u}|i_{u}-j_{u}|$. Then $({\bf i},{\bf j})$ is an edge if and only if $|{\bf i}-{\bf j}|=1$, and in general $|{\bf i}-{\bf j}|$ is the graph-distance between ${\bf i}$ and ${\bf j}$. Thus discrete-time random walk proceeds at each step by choosing a coordinate at random and changing its parity.

It is easier to use the continuized walk ${\bf X}(t)=(X_{1}(t),\ldots,X_{d}(t))$, because the component processes $(X_{u}(t))$ are independent as $u$ varies, and each is in fact just the continuous-time random walk on the $2$-path with transition rate $1/d$. This follows from an elementary fact about the superposition of (marked) Poisson processes.

Thus, in continuous time,

 $\displaystyle P_{{\bf i}}({\bf X}(t)={\bf j})$ $\displaystyle=$ $\displaystyle\prod_{u=1}^{d}\left[\frac{1}{2}\left(1+(-1)^{|i_{u}-j_{u}|}e^{-2t/d}\right)\right]$ (5.64) $\displaystyle=$ $\displaystyle 2^{-d}\left(1-e^{-2t/d}\right)^{|{\bf i}-{\bf j}|}\left(1+e^{-2t/d}\right)^{d-|{\bf i}-{\bf j}|}.$
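As a quick numerical sanity check of (5.64) (ours, not part of the text), the product form and the Hamming-distance closed form agree term by term, and each row sums to $1$; the helper names are hypothetical.

```python
import math

def p_cube(i, j, t, d):
    """Time-t transition probability on the d-cube, product form of (5.64)."""
    p = 1.0
    for iu, ju in zip(i, j):
        p *= 0.5 * (1.0 + (-1.0) ** abs(iu - ju) * math.exp(-2.0 * t / d))
    return p

def p_cube_closed(i, j, t, d):
    """Equivalent closed form in terms of the Hamming distance |i - j|."""
    k = sum(abs(iu - ju) for iu, ju in zip(i, j))
    a = math.exp(-2.0 * t / d)
    return 2.0 ** (-d) * (1.0 - a) ** k * (1.0 + a) ** (d - k)
```

The two forms agree to machine precision, and summing either over all $2^{d}$ vertices ${\bf j}$ gives $1$.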

By expanding the right side, we see that the continuous-time eigenvalues are

 $\lambda_{k}=2k/d\mbox{\ with multiplicity\ }{d\choose k},\ \ k=0,1,\ldots,d.$ (5.65)

(Of course, this is just the general fact that the eigenvalues of a $d$-fold product of continuous-time chains are

 $(\lambda_{i_{1}}+\cdots+\lambda_{i_{d}};1\leq i_{1},\ldots,i_{d}\leq n)$ (5.66)

where $(\lambda_{i};1\leq i\leq n)$ are the eigenvalues of the marginal chain.)

In particular,

 $\tau_{2}=d/2.$ (5.67)

By the eigentime identity (Chapter 3 yyy)

 $\displaystyle\tau_{0}=\sum_{m\geq 2}\frac{1}{\lambda_{m}}=\frac{d}{2}\sum_{k=1}^{d}k^{-1}{d\choose k}$ $\displaystyle=2^{d}(1+d^{-1}+O(d^{-2})),$ (5.68)

the asymptotics being easy analysis.
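The eigentime sum in (5.68) is exact and easy to evaluate; the following sketch (ours) computes it with rational arithmetic and checks the leading asymptotics. The function name is hypothetical.

```python
from fractions import Fraction
from math import comb

def tau0_cube(d):
    """Eigentime identity for the d-cube: sum of 1/lambda over the nonzero
    eigenvalues lambda_k = 2k/d, each with multiplicity C(d,k), as in (5.65)."""
    return sum(comb(d, k) * Fraction(d, 2 * k) for k in range(1, d + 1))
```

For $d=2$ (the $4$-cycle) this gives $5/2$, and for moderate $d$ the ratio $\tau_{0}/2^{d}$ is close to $1+d^{-1}$ as (5.68) asserts.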

From (5.64) it is also straightforward to derive the discrete-time $t$-step transition probabilities:

 $P_{\bf i}({\bf X}_{t}={\bf j})=2^{-d}\sum_{m=0}^{d}\left(1-\frac{2m}{d}\right)^{t}\sum_{r}(-1)^{r}{{|{\bf i}-{\bf j}|}\choose r}{{d-|{\bf i}-{\bf j}|}\choose{m-r}}.$

Starting the walk at ${\bf 0}$, let $Y_{t}=|{\bf X}(t)|$. Then $Y$ is the birth-and-death chain on states $\{0,1,\ldots,d\}$ with transition rates (transition probabilities, in discrete time)

 $q_{i,i+1}=\frac{d-i}{d},\ q_{i,i-1}=\frac{i}{d},\ \ 0\leq i\leq d.$

xxx box picture

This is the Ehrenfest urn model mentioned in many textbooks. In our terminology we may regard $Y$ as random walk on the weighted linear graph (Section 5.1.2) with weights

 $w_{i}={{d-1}\choose{i-1}},\ \ w=2^{d}.$

In particular, writing $T^{Y}$ for hitting times for $Y$, symmetry and (5.13) give

 $\frac{1}{2}\tau^{*Y}=\frac{1}{2}(E_{0}T^{Y}_{d}+E_{d}T^{Y}_{0})=E_{0}T^{Y}_{d}=2^{d-1}\sum_{i=1}^{d}\frac{1}{{{d-1}\choose{i-1}}}.$

On the $d$-cube, it is “obvious” that $E_{\bf 0}T_{\bf j}$ is maximized by ${\bf j}={\bf 1}$, and this can be verified by observing in (5.64) that $P_{\bf 0}({\bf X}(t)={\bf j})$ is minimized by ${\bf j}={\bf 1}$, and hence $Z_{{\bf 0}{\bf j}}$ is minimized by ${\bf j}={\bf 1}$, so we can apply Chapter 2 yyy. Thus

 ${1\over 2}\tau^{*}=\max_{{\bf i}{\bf j}}E_{\bf i}T_{\bf j}=E_{\bf 0}T_{\bf 1}=2^{d-1}\sum_{i=1}^{d}\frac{1}{{{d-1}\choose{i-1}}}\sim 2^{d}(1+1/d+O(1/d^{2})).$ (5.69)

The asymptotics are the same as in (5.68). In fact it is easy to use (5.64) to show

 $Z_{{\bf i}{\bf i}}=2^{-d}\tau_{0}=1+d^{-1}+O(d^{-2})$
 $Z_{{\bf i}{\bf j}}=O(d^{-2})\mbox{\ uniformly over\ }|{\bf i}-{\bf j}|\geq 2$

and then by Chapter 2 yyy

 $E_{{\bf i}}T_{{\bf j}}=2^{d}(1+d^{-1}+O(d^{-2}))\mbox{ uniformly over }|{\bf i}-{\bf j}|\geq 2.$

Since

 $1+E_{1}T^{Y}_{0}=E_{0}T^{Y}_{1}+E_{1}T^{Y}_{0}=w/w_{1}=2^{d},$

it follows that

 $E_{{\bf i}}T_{{\bf j}}=2^{d}-1\mbox{ if }|{\bf i}-{\bf j}|=1.$
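The Ehrenfest hitting-time facts used here can be recovered exactly by first-step analysis on the birth-and-death chain $Y$; the sketch below (ours, with hypothetical helper names) checks $E_{0}T^{Y}_{d}=2^{d-1}\sum_{i}1/{d-1\choose i-1}$ from (5.69) and $E_{1}T^{Y}_{0}=2^{d}-1$.

```python
from fractions import Fraction

def ehrenfest_up_times(d):
    """E_i T_{i+1}, i = 0..d-1, for the Ehrenfest chain on {0,...,d}
    (up-probability (d-i)/d, down-probability i/d, discrete time)."""
    up = []
    for i in range(d):
        p, q = Fraction(d - i, d), Fraction(i, d)
        prev = up[-1] if up else Fraction(0)
        # first-step analysis: E_i T_{i+1} = 1/p_i + (q_i/p_i) E_{i-1} T_i
        up.append(1 / p + (q / p) * prev)
    return up

def ehrenfest_E1_T0(d):
    """E_1 T_0, by the mirror recursion run downward from state d."""
    down = Fraction(1)                  # E_d T_{d-1} = 1 (the move is forced)
    for i in range(d - 1, 0, -1):
        p, q = Fraction(d - i, d), Fraction(i, d)
        down = 1 / q + (p / q) * down   # E_i T_{i-1}
    return down
```

For $d=3$ this gives $E_{0}T^{Y}_{3}=10$ and $E_{1}T^{Y}_{0}=7=2^{3}-1$, matching the displayed formulas.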

xxx refrain from writing out exact $E_{\bf i}T_{\bf j}$—refs

To discuss total variation convergence, we have by symmetry (and writing ${\bf d}$ to distinguish from dimension $d$)

 $\bar{{\bf d}}(t)=\|P_{\bf 0}({\bf X}(t)\in\cdot)-P_{\bf 1}({\bf X}(t)\in\cdot)\|$
 ${\bf d}(t)=\|P_{\bf 0}({\bf X}(t)\in\cdot)-\pi(\cdot)\|.$

Following Diaconis et al [114] we shall sketch an argument leading to

 ${\bf d}\left({\textstyle\frac{1}{4}}d\log d+sd\right)\rightarrow L(s)\equiv P\left(|Z|\leq{\textstyle\frac{1}{2}}e^{-2s}\right),\ \ -\infty<s<\infty$ (5.70)

where $Z$ has the standard Normal distribution. This implies

 $\tau_{1}\sim{\textstyle\frac{1}{4}}d\log d.$ (5.71)

For the discrete-time walk made aperiodic by incorporating chance $1/(d+1)$ of holding, (5.70) and (5.71) remain true, though rigorous proof seems complicated: see [114].

Fix $u$, and consider ${\bf j}={\bf j}(u)$ such that $|{\bf j}|-d/2\sim ud^{1/2}/2$. Using $1-\exp(-\delta)\approx\delta-\frac{1}{2}\delta^{2}$ as $\delta\rightarrow 0$ in (5.64), we can calculate for $t=t(d)=\frac{1}{4}d\log d+sd$ with $s$ fixed that

 $2^{d}P_{\bf 0}({\bf X}(t)={\bf j})\rightarrow\exp\left(-\frac{e^{-4s}}{2}-ue^{-2s}\right).$

Note the limit is $>1$ when $u<u_{0}(s):=-\frac{1}{2}e^{-2s}$. Now

 ${\bf d}(t)=\frac{1}{2}\sum_{{\bf j}}|P_{\bf 0}({\bf X}(t)={\bf j})-2^{-d}|\sim\sum(P_{\bf 0}({\bf X}(t)={\bf j})-2^{-d})$

where the second sum is over ${\bf j}(u)$ with $u<u_{0}(s)$. But from (5.64) we can write this sum as

 $P\left(B\left({\textstyle\frac{1}{2}}(1-d^{-1/2}e^{-2s})\right)\leq|{\bf j}(u_{0}(s))|\right)-P\left(B\left({\textstyle\frac{1}{2}}\right)\leq|{\bf j}(u_{0}(s))|\right)$

where $B(p)$ denotes a Binomial$(d,p)$ random variable. By the Normal approximation to Binomial, this converges to

 $P(Z\leq-u_{0}(s))-P(Z\leq u_{0}(s))$

as stated.

As an aside, symmetry and Chapter 4 yyy give

 $\tau_{0}\leq E_{\bf 0}T_{\bf 1}\leq\tau^{(2)}_{1}+\tau_{0}$

and so the difference $E_{\bf 0}T_{\bf 1}-\tau_{0}$ is $O(d\log d)$, which is much smaller than the $O(2^{d}d^{-2})$ bound obtained by comparing the series expansions (5.68) and (5.69).

The fact that the “half-cube” $A=\{{\bf i}\in{\bf I}:\,i_{d}=0\}$, yielding

 $\tau_{c}=d/2,$

achieves the sup in the definition of $\tau_{c}$ can be proved using a slightly tricky induction argument. However, the result follows immediately from (5.67) together with the general inequality $\tau_{2}\geq\tau_{c}$.

###### Example 5.16

Dense regular graphs.

Consider an $r$-regular $n$-vertex graph with $r>n/2$. Of course here we are considering a class of graphs rather than a specific example. The calculations below show that these graphs necessarily mimic the complete graph (as far as smallness of the random walk parameters is concerned) in the asymptotic setting $r/n\rightarrow c>1/2$.

The basic fact is that, for any pair $i,j$ of vertices, there must be at least $2r-n$ other vertices $k$ such that $i-k-j$ is a path. To prove this, let $a_{1}$ (resp., $a_{2}$) be the number of vertices $k\neq i,j$ such that exactly $1$ (resp., $2$) of the edges $(k,i),(k,j)$ exist. Then $a_{1}+a_{2}\leq n-2$ by counting vertices, and $a_{1}+2a_{2}\geq 2(r-1)$ by counting edges, and these inequalities imply $a_{2}\geq 2r-n$.
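The counting argument can be watched in action on a concrete instance (our illustration, not from the text): the complete graph $K_{n}$ minus a perfect matching is $(n-2)$-regular, and brute-force counting confirms that every pair of vertices has at least $2r-n$ common neighbors, with equality for some pairs.

```python
from itertools import combinations

def common_neighbors_min(n):
    """K_n minus a perfect matching (n even): an (n-2)-regular graph.
    Returns the minimum, over vertex pairs, of the number of common
    neighbors, which the counting argument bounds below by 2r - n."""
    partner = lambda v: v ^ 1    # matched pairs (0,1), (2,3), ...
    adj = {v: {u for u in range(n) if u != v and u != partner(v)}
           for v in range(n)}
    return min(len(adj[i] & adj[j]) for i, j in combinations(range(n), 2))
```

For $n=10$, $r=8$ the minimum is exactly $2r-n=6$, so the bound $a_{2}\geq 2r-n$ is tight here.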

Thus, by Thompson’s principle (Chapter 3, yyy) the effective resistance $r_{ij}\leq\frac{2}{2r-n}$ and so the commute interpretation of resistance implies

 $\tau^{*}\leq\frac{2rn}{2r-n}\sim\frac{2cn}{2c-1}.$ (5.72)

A simple “greedy coupling” argument (Chapter 14, Example yyy) shows

 $\tau_{1}\leq\frac{r}{2r-n}\sim\frac{c}{2c-1}.$ (5.73)

This is also a bound on $\tau_{2}$ and on $\tau_{c}$, because $\tau_{c}\leq\tau_{2}\leq\tau_{1}$ always, and special case 2 below shows that this bound on $\tau_{c}$ cannot be improved asymptotically (nor hence can the bound on $\tau_{1}$ or $\tau_{2}$). Because $E_{\pi}T_{j}\leq n\tau_{2}$ for regular graphs (Chapter 3 yyy), we get

 $E_{\pi}T_{j}\leq\frac{nr}{2r-n}.$

This implies

 $\tau_{0}\leq\frac{nr}{2r-n}\sim\frac{cn}{2c-1}$

which also follows from (5.72) and $\tau_{0}\leq\tau^{*}/2$. We can also argue, in the notation of Chapter 4 yyy, that

 $\max_{ij}E_{i}T_{j}\leq\tau^{(2)}_{1}+\max_{j}E_{\pi}T_{j}\leq\frac{4e}{e-1}\tau_{1}+n\tau_{1}\leq(1+o(1))\frac{nr}{2r-n}\sim\frac{cn}{2c-1}.$

Special case 1. The orders of magnitude may change for $c=1/2$. Take two complete $(n/2)$-graphs, break one edge in each (say edges $(v_{1},v_{2})$ and $(w_{1},w_{2})$) and add edges $(v_{1},w_{1})$ and $(v_{2},w_{2})$. This gives an $n$-vertex $((n/2)-1)$-regular graph for which all our parameters are $\Theta(n^{2})$.

jjj I haven’t checked this.

Special case 2. Can the bound $\tau_{c}\leq r/(2r-n)\sim c/(2c-1)$ be asymptotically improved? Eric Ordentlich has provided the following natural counter-example. Again start with two $(n/2)$-complete graphs on vertices $(v_{i})$ and $(w_{i})$. Now add the edges $(v_{i},w_{j})$ for which $0\leq(j-i)\mbox{ mod }(n/2)\leq r-(n/2)$. This gives an $n$-vertex $r$-regular graph. By considering the set $A$ consisting of the vertices $v_{i}$, a brief calculation gives

 $\tau_{c}\geq\frac{r}{2r-n+2}\sim\frac{c}{2c-1}.$
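The construction and the brief calculation can be verified by brute force. The sketch below (ours) builds the graph, confirms $r$-regularity, and evaluates $\tau(A)$ in the quotient form $\mbox{deg}\,A\cdot\mbox{deg}\,A^{c}/(2|\mbox{{\cal E}}|\,(A,A^{c}))$, our reading of the notation of (5.52).

```python
from fractions import Fraction
from itertools import product

def tau_A_halves(n, r):
    """Two complete (n/2)-graphs on the v's and w's, plus cross edges
    (v_i, w_j) whenever 0 <= (j - i) mod (n/2) <= r - n/2.
    Returns tau(A) for A = {v_i} in quotient form."""
    m = n // 2
    edges = set()
    for i, j in product(range(m), repeat=2):
        if i < j:
            edges.add((('v', i), ('v', j)))   # within the first half
            edges.add((('w', i), ('w', j)))   # within the second half
        if 0 <= (j - i) % m <= r - m:
            edges.add((('v', i), ('w', j)))   # cross edges
    deg = {}
    for a, b in edges:
        deg[a] = deg.get(a, 0) + 1
        deg[b] = deg.get(b, 0) + 1
    assert all(dv == r for dv in deg.values())   # the graph is r-regular
    cut = sum(1 for a, b in edges if a[0] != b[0])
    deg_A = sum(dv for x, dv in deg.items() if x[0] == 'v')
    deg_Ac = 2 * len(edges) - deg_A
    return Fraction(deg_A * deg_Ac, 2 * len(edges) * cut)
```

For example, $n=12$, $r=8$ gives $\tau(A)=8/6=r/(2r-n+2)$ exactly.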
###### Example 5.17

The $d$-dimensional torus $Z^{d}_{m}$.

The torus is the set of $d$-dimensional integers ${\bf i}=(i_{1},\ldots,i_{d})$ modulo $m$, considered in the natural way as a $2d$-regular graph on $n=m^{d}$ vertices. It is much simpler to work with the random walk in continuous time, ${\bf X}(t)=(X_{1}(t),\ldots,X_{d}(t))$, because its component processes $(X_{u}(t))$ are independent as $u$ varies; and each is just continuous-time random walk on the $m$-cycle, slowed down by a factor $1/d$. Thus we can immediately write the time-$t$ transition probabilities for ${\bf X}$ in terms of the corresponding probabilities $p_{0,j}(t)$ for continuous-time random walk on the $m$-cycle (see Example 5.7 above) as

 $p_{{\bf 0},{\bf j}}(t)=\prod_{u=1}^{d}p_{0,j_{u}}(t/d).$

Since the eigenvalues on the $m$-cycle are $(1-\cos(2\pi k/m),0\leq k\leq m-1)$, by (5.66) the eigenvalues of ${\bf X}$ are

 $\lambda_{(k_{1}\ldots k_{d})}=\frac{1}{d}\sum_{u=1}^{d}(1-\cos(2\pi k_{u}/m)),% \ 0\leq k_{u}\leq m-1.$

In particular, we see that the relaxation time satisfies

 $\tau_{2}\sim\frac{dm^{2}}{2\pi^{2}}=\frac{dn^{2/d}}{2\pi^{2}}$

where here and below asymptotics are as $m\rightarrow\infty$ for fixed $d$. This relaxation time could more simply be derived from the $m$-cycle result via the general “product chain” result of Chapter 4 yyy. But writing out all the eigenvalues enables us to use the eigentime identity:

 $\tau_{0}=\sum_{k_{1}}\cdots\sum_{k_{d}}1/\lambda_{(k_{1},\ldots,k_{d})}$

(the sum excluding $(0,\ldots,0)$), and hence

 $\tau_{0}\sim m^{d}R_{d}$ (5.74)

where

 $R_{d}\equiv\int_{0}^{1}\cdots\int_{0}^{1}\frac{1}{\frac{1}{d}\sum_{u=1}^{d}(1-% \cos(2\pi x_{u}))}\,dx_{1}\cdots dx_{d}$ (5.75)

provided the integral converges. The reader who is a calculus whiz will see that in fact $R_{d}<\infty$ for $d\geq 3$ only, but this is seen more easily in the alternative approach of Chapter 15, Section yyy.
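The relaxation-time asymptotics above are easy to check numerically (our sketch, with a hypothetical function name): $\tau_{2}$ is the reciprocal of the smallest nonzero eigenvalue, and it agrees with $dm^{2}/(2\pi^{2})$ to within a fraction of a percent already for moderate $m$.

```python
import math

def torus_relaxation_time(d, m):
    """Relaxation time of continuous-time walk on Z_m^d: the reciprocal
    of the smallest nonzero eigenvalue (1/d)(1 - cos(2*pi/m))."""
    return d / (1.0 - math.cos(2.0 * math.pi / m))
```

For $d=3$, $m=50$ the exact value and the asymptotic $dm^{2}/(2\pi^{2})$ differ by about $0.1\%$.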

xxx more stuff: connection to transience, recurrent potential, etc

xxx new copy from lectures

xxx $\tau_{1},\tau_{c}$

jjj David: I will let you develop the rest of this example. Note that $\tau_{1}$ is considered very briefly in Chapter 15, eq. (17) in 3/6/96 version. Here are a few comments for $\tau_{c}$. First suppose that $m>2$ is even and $d\geq 2$. Presumably, $\tau_{c}$ is achieved by the following half-torus:

 $A:=\{{\bf i}=(i_{1},\ldots,i_{d})\in Z^{d}_{m}:\,0\leq i_{d}<m/2\}.$

In the notation of (5.52) observe

 $|\mbox{{\cal E}}|=dn,\ \ \mbox{deg\,}A=dn,\ \ \mbox{deg\,}A^{c}=dn,\ \ (A,A^{c})=2m^{d-1}=2n/m,$

whence

 $\tau(A)={\textstyle\frac{d}{4}}n^{1/d}.$

[By Example 5.15 (the $d$-cube) this last result is also true for $m=2$, and (for even $m\geq 2$) it is by Example 5.7 (the $n$-cycle) also true for $d=1$.] If we have correctly conjectured the maximizing $A$, then

 $\tau_{c}={\textstyle\frac{d}{4}}n^{1/d}\mbox{\ \ for even\ }m,$

and presumably(??)

 $\tau_{c}\sim{\textstyle\frac{d}{4}}n^{1/d}$

in any case.
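The half-torus counts above can be confirmed by brute force on small cases; the sketch below (ours) also evaluates $\tau(A)$ in the quotient form $\mbox{deg}\,A\cdot\mbox{deg}\,A^{c}/(2|\mbox{{\cal E}}|\,(A,A^{c}))$, our reading of (5.52), recovering $\frac{d}{4}n^{1/d}=dm/4$.

```python
from fractions import Fraction
from itertools import product

def half_torus_tau(d, m):
    """Count |E|, the cut, and degree sums for the half-torus in Z_m^d
    (m even); splitting on the first coordinate, equivalent by symmetry."""
    verts = list(product(range(m), repeat=d))
    edges = set()
    for v in verts:
        for u in range(d):
            w = list(v)
            w[u] = (w[u] + 1) % m
            edges.add(frozenset((v, tuple(w))))
    n = len(verts)
    assert len(edges) == d * n                    # |E| = dn
    in_A = lambda v: v[0] < m // 2
    cut = sum(1 for e in edges if len({in_A(v) for v in e}) == 2)
    assert cut == 2 * n // m                      # (A, A^c) = 2 m^{d-1}
    deg_A = sum(2 * d for v in verts if in_A(v))  # the graph is 2d-regular
    deg_Ac = 2 * len(edges) - deg_A
    return Fraction(deg_A * deg_Ac, 2 * len(edges) * cut)
```

For instance $d=2$, $m=4$ gives $\tau(A)=2=dm/4$, and $d=3$, $m=4$ gives $3$.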

###### Example 5.18

Chess moves.

Here is a classic homework problem for an undergraduate Markov chains course.

Start a knight at a corner square of an otherwise-empty chessboard. Move the knight at random, by choosing uniformly from the legal knight-moves at each step. What is the mean number of moves until the knight returns to the starting square?

It’s a good question, because if you don’t know Markov chain theory it looks too messy to do by hand, whereas using Markov chain theory it becomes very simple. The knight is performing random walk on a graph (the 64 squares are the vertices, and the possible knight-moves are the edges). It is not hard to check that the graph is connected, so by the elementary Chapter 3 yyy for a corner square $v$ the mean return time is

 $E_{v}T^{+}_{v}=\frac{1}{\pi_{v}}=\frac{2|\mbox{{\cal E}}|}{d_{v}}=|\mbox{{\cal E}}|,$

and by drawing a sketch in the margin the reader can count the number of edges $|\mbox{{\cal E}}|$ to be $168$.

Other chess pieces—queen, king, rook—define different graphs (the bishop’s is of course not connected, and the pawn’s not undirected). One might expect that the conventional ordering of the “strength” of the pieces as (queen, rook, knight, king) is reflected in parameters $\tau_{0}$ and $\tau_{2}$ (jjj how about the other taus?) being increasing in this ordering. The reader is invited to perform the computations. (jjj: an undergraduate project?) We have done so only for the rook’s move, treated in the next example.

The computations for the queen, knight, and king are simplified if the walks are made on a toroidal chessboard. (There is no difference for the rook.)

jjj Chess on a bagel, anyone? Continue same paragraph:

Then Fourier analysis (see Diaconis [123]) on the abelian group $Z^{2}_{m}$ (with $m=8$) can be brought to bear, and the eigenvalues are easy to compute. We omit the details, but the results for (queen, rook, knight, king) are asymptotically

 $\displaystyle\tau_{0}$ $\displaystyle=$ $\displaystyle(m^{2}+{\textstyle\frac{7}{12}}m+O(1),\,m^{2}+m+O(1),$ $\displaystyle\ \ \mbox{{\bf jjj}?}(1+o(1))c_{\mbox{\scriptsize knight}}m^{2}\log m,\,\mbox{{\bf jjj}?}(1+o(1))c_{\mbox{\scriptsize king}}m^{2}\log m)$ $\displaystyle\tau_{2}$ $\displaystyle\sim$ $\displaystyle\left({\textstyle\frac{4}{3}},\,2,\,{\textstyle\frac{1}{5\pi^{2}}}m^{2},\,{\textstyle\frac{2}{3\pi^{2}}}m^{2}\right)$

as $m\rightarrow\infty$, in conformance with our expectations, and numerically

 $\displaystyle\tau_{0}$ $\displaystyle=$ $\displaystyle(65.04,\,67.38,\,69.74,\,79.36)$ $\displaystyle\tau_{2}$ $\displaystyle=$ $\displaystyle(1.29,\,1.75,\,1.55,\,4.55)$

for $m=8$. The only surprise is the inverted $\tau_{2}$ ordering for (rook, knight).

###### Example 5.19

Rook’s random walk on an $m$-by-$m$ chessboard.

jjj Do we want to do this also on a $d$-dimensional grid? We need to mention how this is a serious example, used with Metropolis for sampling from log concave distributions; reference is [33][32]?

Number the rows and columns of the chessboard each $0$ through $m-1$ in arbitrary fashion, and denote the square of the chessboard at row $i_{1}$ and column $i_{2}$ by ${\bf i}=(i_{1},i_{2})$. In continuous time, the rook’s random walk $({\bf X}(t))$ is the product of two continuous-time random walks on the complete graph $K_{m}$ on $m$ vertices, each run at rate $1/2$. Thus (cf. Example 5.9)

 $P_{\bf i}({\bf X}(t)={\bf j})=\prod_{u=1}^{2}\left[\frac{1}{m}+\left(\delta_{i_{u},j_{u}}-\frac{1}{m}\right)\exp\left(-\frac{mt}{2(m-1)}\right)\right],$ (5.76)

which can be expanded to get the discrete-time multistep transition probabilities, if desired. We recall that the eigenvalues for discrete-time random walk on $K_{m}$ are $1$ with multiplicity $1$ and $-1/(m-1)$ with multiplicity $m-1$. It follows [recall (5.66)] that the eigenvalues for the continuous-time rook’s walk are

 $0,\ \frac{m}{2(m-1)},\ \frac{m}{m-1}\mbox{\ \ with resp.~{}multiplicities\ \ }1,\ 2(m-1),\ (m-1)^{2}.$

In particular,

 $\tau_{2}=\frac{2(m-1)}{m},$ (5.77)

which equals $1.75$ for $m=8$ and converges to $2$ as $m$ grows. Applying the eigentime identity, a brief calculation gives

 $\tau_{0}=\frac{(m-1)^{2}(m+3)}{m},$ (5.78)

which equals $67.375$ for $m=8$ and $m^{2}+m+O(1)$ for $m$ large.
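The eigentime calculation behind (5.78) is short enough to verify exactly (our sketch; the function name is hypothetical): the multiplicities account for all $m^{2}$ states, and summing the reciprocal nonzero eigenvalues reproduces $(m-1)^{2}(m+3)/m$.

```python
from fractions import Fraction

def rook_tau0(m):
    """Eigentime identity for the rook's walk on an m-by-m board, using
    the continuous-time spectrum: eigenvalue m/(2(m-1)) with multiplicity
    2(m-1), and m/(m-1) with multiplicity (m-1)^2."""
    lam1, mult1 = Fraction(m, 2 * (m - 1)), 2 * (m - 1)
    lam2, mult2 = Fraction(m, m - 1), (m - 1) ** 2
    assert 1 + mult1 + mult2 == m * m   # multiplicities cover all states
    return mult1 / lam1 + mult2 / lam2
```

At $m=8$ this returns $539/8=67.375$, matching (5.78).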

Starting the walk ${\bf X}$ at ${\bf 0}=(0,0)$, let $Y(t)$ denote the Hamming distance $H({\bf X}(t),{\bf 0})$ of ${\bf X}(t)$ from ${\bf 0}$, i.e., the number of coordinates ($0$, $1$, or $2$) in which ${\bf X}(t)$ differs from ${\bf 0}$. Then $Y$ is a birth-and-death chain with transition rates

 $q_{01}=1,\ \ q_{10}=\frac{1}{2(m-1)},\ \ q_{12}=\frac{1}{2},\ \ q_{21}=\frac{1% }{m-1}.$

This is useful for computing mean hitting times. Of course

 $E_{\bf i}T_{\bf j}=0\mbox{\ if\ }H({\bf i},{\bf j})=0.$

Since

 $1+E_{1}T^{Y}_{0}=E_{0}T^{Y}_{1}+E_{1}T^{Y}_{0}=m^{2},$

it follows that

 $E_{\bf i}T_{\bf j}=E_{1}T^{Y}_{0}=m^{2}-1\mbox{\ if\ }H({\bf i},{\bf j})=1.$

Finally, it is clear that $E_{2}T^{Y}_{1}=m-1$, so that

 $E_{2}T^{Y}_{0}=E_{2}T^{Y}_{1}+E_{1}T^{Y}_{0}=m^{2}+m-2,$

whence

 $E_{\bf i}T_{\bf j}=E_{2}T^{Y}_{0}=m^{2}+m-2\mbox{\ if\ }H({\bf i},{\bf j})=2.$

These calculations show

 ${\textstyle\frac{1}{2}}\tau^{*}=\max_{{\bf i}{\bf j}}E_{\bf i}T_{\bf j}=m^{2}+m-2,$

which equals $70$ for $m=8$, and they provide another proof of (5.78).
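The birth-and-death reduction makes these hitting times a two-unknown linear system, which can be solved exactly (our sketch, hypothetical function name): writing $h_{1}=E_{1}T^{Y}_{0}$ and $h_{2}=E_{2}T^{Y}_{0}$, first-step analysis gives $h_{2}=1/q_{21}+h_{1}$ and $h_{1}=1/(q_{10}+q_{12})+\frac{q_{12}}{q_{10}+q_{12}}h_{2}$.

```python
from fractions import Fraction

def rook_hitting_times(m):
    """Mean hitting times of 0 for the Hamming-distance chain Y with
    rates q10 = 1/(2(m-1)), q12 = 1/2, q21 = 1/(m-1)."""
    q10 = Fraction(1, 2 * (m - 1))
    q12 = Fraction(1, 2)
    q21 = Fraction(1, m - 1)
    out1 = q10 + q12                     # total rate out of state 1
    # solve h1 = 1/out1 + (q12/out1) * (1/q21 + h1) for h1
    h1 = (1 / out1 + (q12 / out1) * (1 / q21)) / (1 - q12 / out1)
    h2 = 1 / q21 + h1
    return h1, h2
```

This yields $h_{1}=m^{2}-1$ and $h_{2}=m^{2}+m-2$ exactly; for $m=8$, $(63,70)$.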

From (5.76) it is easy to derive

 $\bar{d}_{m}(t)=\left(2-\frac{2}{m}\right)\exp\left(-\frac{mt}{2(m-1)}\right)-\left(1-\frac{2}{m}\right)\exp\left(-\frac{mt}{m-1}\right)$

and thence

 $\tau_{1}=-2\frac{m-1}{m}\left[\ln\left(1-\left(1-e^{-1}\frac{m(m-2)}{(m-1)^{2}}\right)^{1/2}\right)+\ln\left(\frac{m-1}{m-2}\right)\right],$

which rounds to $2.54$ for $m=8$ and converges to $-2\ln(1-(1-e^{-1})^{1/2})\doteq 3.17$ as $m$ becomes large.
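Both numerical claims are one-liners to confirm (our sketch; the function name is hypothetical):

```python
import math

def rook_tau1(m):
    """The displayed variation-threshold formula for the rook's walk."""
    a = 1.0 - math.sqrt(1.0 - math.exp(-1.0) * m * (m - 2) / (m - 1) ** 2)
    return -2.0 * (m - 1) / m * (math.log(a) + math.log((m - 1) / (m - 2)))

# limiting value as m grows
limit = -2.0 * math.log(1.0 - math.sqrt(1.0 - math.exp(-1.0)))
```

Evaluating gives $\tau_{1}\approx 2.54$ at $m=8$ and a limit $\approx 3.17$.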

Any set $A$ of the form $\{(i_{1},i_{2}):\,i_{u}\in J\}$ with either $u=1$ or $u=2$ and $J$ a nonempty proper subset of $\{0,\ldots,m-1\}$ achieves the value

 $\tau_{c}=2\frac{m-1}{m}.$

A direct proof is messy, but this follows immediately from the general inequality $\tau_{c}\leq\tau_{2}$, (5.77), and a brief calculation that the indicated $A$ indeed gives the indicated value.

jjj Note: I’ve worked these out and have handwritten notes. How much do we want to include, if at all? (I could at least put the results in the table.)

## 5.2.1 Biased walk on a balanced tree

Consider again the balanced $r$-tree setup of Example 5.14. Fix a parameter $0<\lambda<\infty$. We now consider biased random walk $(X_{t})$ on the tree, where from each non-leaf vertex other than the root the transition goes to the parent with probability $\lambda/(\lambda+r)$ and to each child with probability $1/(\lambda+r)$. As in Example 5.14 (the case $\lambda=1$), the chain $\hat{X}$ induced by the function

 $f(v):=h-(\mbox{depth of }v)$

is (biased) reflecting random walk on $\{0,\ldots,h\}$ with respective probabilities $\lambda/(\lambda+r)$ and $r/(\lambda+r)$ of moving to the right and left from any $i\not=0,h$; the ratio of these two transition probabilities is

 $\rho=\lambda/r.$

The stationary distribution $\hat{\pi}$ for $\hat{X}$ is a modified geometric:

 $\hat{\pi}_{m}=\hat{w}^{-1}\times\left\{\begin{array}{ll}1,&m=0\\ (1+\rho)\rho^{m-1},&1\leq m\leq h-1\\ \rho^{h-1},&m=h\end{array}\right.$

where

 $\hat{w}=2\sum_{f=0}^{h-1}\rho^{f}=\frac{2(1-\rho^{h})}{1-\rho}\mbox{\ if\ }\rho\neq 1\mbox{\ \ and\ \ }\hat{w}=2h\mbox{\ if\ }\rho=1.$

Since the stationary distribution $\pi$ for $X$ assigns the same probability to each of the $r^{h-f(v)}$ vertices $v$ with a given value of $f(v)$, a brief calculation shows that $\pi_{v}p_{vx}=\lambda^{f(v)}/\hat{w}r^{h}$ for any edge $(v=\mbox{\ child},x=\mbox{\ parent})$ in the tree. In the same notation, it follows that $X$ is random walk on the balanced $r$-tree with edge weights $w_{vx}=\lambda^{f(v)}$ and total weight $w=\sum_{v,x}w_{vx}=\hat{w}r^{h}$.

The distribution $\hat{\pi}$ concentrates near the root-level if $\rho<1$ and near the leaves-level if $\rho>1$; it is nearly uniform on the $h$ levels if $\rho=1$. On the other hand, the weight assigned by the distribution $\pi$ to an individual vertex $v$ is a decreasing function of $f(v)$ (thus favoring vertices near the leaves) if $\lambda<1$ (i.e., $\rho<1/r$) and is an increasing function (thus favoring vertices near the root) if $\lambda>1$; it is uniform on the vertices in the unbiased case $\lambda=1$.

The mean hitting time calculations of Example 5.14 can all be extended to the biased case. For example, for $\lambda\neq 1$ the general formula (5.55) becomes [using the same notation as at (5.55)]

 $\displaystyle E_{v_{1}}T_{v_{2}}$ $\displaystyle=$ $\displaystyle\hat{w}r^{h}\frac{\lambda^{-f_{3}}-\lambda^{-f_{2}}}{\lambda^{-1}-1}+2(\rho^{-1}-1)^{-2}\left(\rho^{-(f_{2}+1)}-\rho^{-(f_{1}+1)}\right)$ (5.79) $\displaystyle\ \ -2(\rho^{-1}-1)^{-1}(f_{2}-f_{1})-(f_{2}-f_{1})$

if $\rho\neq 1$ and

 $E_{v_{1}}T_{v_{2}}=\hat{w}r^{h}\frac{\lambda^{-f_{3}}-\lambda^{-f_{2}}}{\lambda^{-1}-1}+f_{2}^{2}-f_{1}^{2}$

if $\rho=1$. The maximum value is attained when $v_{1}$ and $v_{2}$ are leaves and $v_{3}$ is the root. So if $\lambda\neq 1$,

 ${\textstyle\frac{1}{2}}\tau^{*}=\max_{v,x}E_{v}T_{x}=\hat{w}r^{h}\frac{\lambda^{-h}-1}{\lambda^{-1}-1}.$ (5.80)
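Formula (5.80) can be checked against a direct exact computation on a small tree. The sketch below (ours; heap indexing, the restriction to $r=2$, and all names are our choices) solves the hitting-time equations for the $\lambda$-biased walk with rational arithmetic. Taking the normalization $\hat{w}=2\sum_{f=0}^{h-1}\rho^{f}$, the case $h=3$, $\lambda=3$ (so $\rho=3/2$, $\hat{w}=19/2$) gives $\hat{w}r^{h}(\lambda^{-h}-1)/(\lambda^{-1}-1)=988/9\approx 109.78$, which the direct solve reproduces.

```python
from fractions import Fraction

def biased_tree_hit(h, lam):
    """E_{v1} T_{v2} between the leftmost and rightmost leaves of a balanced
    binary tree (r = 2) of height h, for the lambda-biased walk: internal
    vertices go to the parent w.p. lam/(lam+2) and to each child w.p.
    1/(lam+2); leaves move to the parent, the root picks a child at random."""
    r = 2
    N = 2 ** (h + 1) - 1                       # heap-indexed vertices 1..N
    lam = Fraction(lam)
    def moves(v):                              # (next vertex, probability)
        kids = [2 * v, 2 * v + 1]
        if v == 1:                             # root: uniform over children
            return [(k, Fraction(1, r)) for k in kids]
        if kids[0] > N:                        # leaf: forced move to parent
            return [(v // 2, Fraction(1))]
        return [(v // 2, lam / (lam + r))] + [(k, 1 / (lam + r)) for k in kids]
    target, start = 2 ** h, N                  # leaves in different root-subtrees
    unknowns = [v for v in range(1, N + 1) if v != target]
    idx = {v: i for i, v in enumerate(unknowns)}
    # hitting-time equations: h_v = 1 + sum_u p(v,u) h_u, with h_target = 0
    A = [[Fraction(0)] * len(unknowns) for _ in unknowns]
    b = [Fraction(1) for _ in unknowns]
    for v in unknowns:
        A[idx[v]][idx[v]] += 1
        for u, p in moves(v):
            if u != target:
                A[idx[v]][idx[u]] -= p
    for c in range(len(A)):                    # exact Gauss-Jordan elimination
        piv = next(i for i in range(c, len(A)) if A[i][c] != 0)
        A[c], A[piv], b[c], b[piv] = A[piv], A[c], b[piv], b[c]
        for i in range(len(A)):
            if i != c and A[i][c] != 0:
                f = A[i][c] / A[c][c]
                A[i] = [x - f * y for x, y in zip(A[i], A[c])]
                b[i] -= f * b[c]
    return b[idx[start]] / A[idx[start]][idx[start]]
```

The unbiased case $\lambda=1$, $h=2$ gives $24$, agreeing with the commute interpretation $w\,R/2=2(n-1)h$ for two leaves through the root.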

The orders of magnitude for all of the $\tau$-parameters (with $r$ and $\lambda$, and hence $\rho$, fixed as $h$, and hence $n$, becomes large) are summarized on a case-by-case basis in the next table. Following are some of the highlights in deriving these results; the details, and derivation of exact formulas and more detailed asymptotic results, are left to the reader.

Orders of magnitude of parameters [$\tau=\Theta(\mbox{entry})$]

for $\lambda$-biased walk on a balanced $r$-tree of height $h$ ($\rho=\lambda/r$).

Value of $\rho$ $\tau^{*}$ $\tau_{0}$ $\tau_{1}$ $\tau_{2}$ $\tau_{c}$
$\rho<1/r$ $\rho^{-h}$ $\rho^{-h}$ $\rho^{-h}$ $\rho^{-h}$ $\rho^{-h}$
$\rho=1/r$ ($\equiv$ Example 5.14) $nh$ $nh$ $n$ $n$ $n$
$1/r<\rho<1$ $n$ $n$ $\rho^{-h}$ $\rho^{-h}$ $\rho^{-h}$
$\rho=1$ $nh$ $n$ $h$ $h$ $h$
$\rho>1$ $n$ $n$ $h$ $1$ $1$

For $\tau_{0}=\sum_{x}\pi_{x}E_{\mbox{\scriptsize root}}T_{x}$ we have $\tau_{0}\leq E_{\mbox{ \scriptsize root}}T_{\mbox{\rm\scriptsize leaf}}$. If $\rho<1/r$, this bound is tight:

 $\tau_{0}\sim E_{\mbox{\scriptsize root}}T_{\mbox{\rm\scriptsize leaf}}\sim\frac{2\rho^{-h}}{(1-\rho)^{2}(1-\lambda)}(\lambda-\rho);$

for $\rho>1/r$ a more careful calculation is required.

If $\rho<1$, then the same arguments as for the unbiased case ($\rho=1/r$) show

 $\tau_{1}\sim\tau_{2}\sim 2\rho^{-(h-1)}/(1-\rho)^{2}.$

In this case it is not hard to show that

 $\tau_{c}=\Theta(\rho^{-h})$

as well. If $\rho=1$, then it is not hard to show that

 $\tau_{1}=\Theta(h),\ \ \ \tau_{c}\sim 2(1-{\textstyle\frac{1}{r}})h$

with $\tau_{c}$ achieved at a branch of the root (excluding the root), and so

 $\tau_{2}=\Theta(h)$

as well. If $\rho>1$, then since $\hat{X}$ has positive drift equal to $(\rho-1)/(\rho+1)$, it follows that

 $\tau_{1}\sim\frac{\rho+1}{\rho-1}h.$

The value $\tau_{c}$ is achieved by isolating a leaf, giving

 $\tau_{c}\rightarrow 1,$

and so, by the inequalities $\tau_{c}\leq\tau_{2}\leq 8\tau_{c}^{2}$ of Chapter 4, Section yyy,

 $\tau_{2}=\Theta(1)$

as well.

jjj Limiting value of $\tau_{2}$ when $\rho>1$ is that of $\tau_{2}$ for biased infinite tree? Namely?