The parameter $\tau_{2}$ is the relaxation time, defined in terms of the eigenvalue $\lambda_{2}$ (Chapter 3 section yyy) as

$\tau_{2}=(1-\lambda_{2})^{-1}\mbox{ in discrete time};\qquad\tau_{2}=\lambda_{2}^{-1}\mbox{ in continuous time.}$

In Chapter 3 yyy we proved a lower bound for an $n$-state discrete-time chain:

$\tau_{2}\geq 1-\frac{1}{n}$

which is attained by random walk on the complete graph. We saw in Chapter 3 Theorem yyy the extremal characterization

$\tau_{2}=\sup\{||g||_{2}^{2}/{\cal E}(g,g):\sum_{i}\pi_{i}g(i)=0\}.$   (4.22)
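The extremal characterization (4.22) is easy to check numerically. The following sketch (our illustration, not from the text; the weighted path and the test function are arbitrary choices) builds a small reversible chain, computes $\tau_{2}$ from $\lambda_{2}$, and confirms that the Rayleigh quotient in (4.22) is maximized by the second eigenvector while any other centered $g$ gives a lower bound.

```python
import numpy as np

# Hypothetical example: random walk on a weighted 5-vertex path (a reversible
# chain); the edge weights are arbitrary illustrative choices.
w = np.array([1.0, 2.0, 0.5, 3.0])
n = len(w) + 1
W = np.zeros((n, n))
for i, we in enumerate(w):
    W[i, i+1] = W[i+1, i] = we
deg = W.sum(axis=1)
P = W / deg[:, None]                   # transition matrix
pi = deg / deg.sum()                   # stationary distribution

# tau_2 from the spectrum of the symmetrized matrix D^{1/2} P D^{-1/2}
D, Dinv = np.diag(np.sqrt(pi)), np.diag(1/np.sqrt(pi))
lam, vecs = np.linalg.eigh(D @ P @ Dinv)      # eigenvalues ascending
tau2 = 1/(1 - lam[-2])

def rayleigh(g):
    """||g||_2^2 / E(g,g) for the pi-centered version of g."""
    g = g - pi @ g
    dirichlet = 0.5*np.sum(pi[:, None]*P*(g[:, None] - g[None, :])**2)
    return (pi @ g**2) / dirichlet

g2 = Dinv @ vecs[:, -2]                # eigenvector of lambda_2, pi-centered
print(tau2, rayleigh(g2))              # the sup in (4.22) is attained here
print(rayleigh(np.arange(n, dtype=float)))   # any test g gives a lower bound
```

Any tractable test function plugged into `rayleigh` yields a rigorous lower bound on $\tau_{2}$; the eigenvector attains equality.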

The next three lemmas give inequalities between $\tau_{2}$ and the parameters studied earlier in this chapter. Write $\pi_{*}\equiv\min_{i}\pi_{i}$.

Lemma 4.23. In continuous time,

$\tau_{2}\leq\tau_{1}\leq\tau_{2}\left(1+\frac{1}{2}\log\frac{1}{\pi_{*}}\right).$

In discrete time,

$\tau_{1}^{(2)}\leq\frac{4e}{e-1}\ \tau_{2}\left(1+\frac{1}{2}\log\frac{1}{\pi_{*}}\right).$

Lemma 4.24. $\tau_{2}\leq\tau_{0}\leq(n-1)\tau_{2}$.

Lemma 4.25. $\tau^{*}\leq\frac{2\tau_{2}}{\pi_{*}}$.

Proof of Lemma 4.23. Consider first the continuous time case. By the spectral representation, as $t\rightarrow\infty$ we have $p_{ii}(t)-\pi_{i}\sim c_{i}\exp(-t/\tau_{2})$ with some $c_{i}\neq 0$. But by Lemma 4.5 we have $|p_{ii}(t)-\pi_{i}|=O(\exp(-t/\tau_{1}))$. This shows $\tau_{2}\leq\tau_{1}$. For the right inequality, the spectral representation gives

$p_{ii}(t)-\pi_{i}\leq e^{-t/\tau_{2}}.$   (4.23)

Recalling the definition (4.14) of $\hat{d}$,

$\displaystyle\bar{d}(t)\leq 2d(t)\leq\sqrt{\hat{d}(2t)}\mbox{ by Lemma yyy(b)}\leq\sqrt{\max_{i}\frac{e^{-2t/\tau_{2}}}{\pi_{i}}}\mbox{ by (4.14) and (4.23)}=\pi_{*}^{-1/2}e^{-t/\tau_{2}}$

and the result follows. The upper bound on $\tau_{1}^{(2)}$ holds in continuous time by Lemmas 4.11 and 4.12, and so holds in discrete time because $\tau_{1}^{(2)}$ and $\tau_{2}$ are unaffected by continuization.

Proof of Lemma 4.24. $\tau_{2}\leq\tau_{0}$ because $\tau_{2}$ is the first term in the eigentime identity for $\tau_{0}$. For the other bound, Chapter 3 Lemma yyy gives the inequality in

$\tau_{0}=\sum_{j}\pi_{j}E_{\pi}T_{j}\leq\sum_{j}(1-\pi_{j})\tau_{2}=(n-1)\tau_{2}.$
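Both bounds of Lemma 4.24, and the eigentime identity $\tau_{0}=\sum_{m\geq 2}(1-\lambda_{m})^{-1}$ behind the left one, can be verified numerically. A sketch (the randomly weighted complete graph is an arbitrary illustrative choice, not from the text): compute $\tau_{0}$ from mean hitting times and compare with the eigenvalue sum.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical example: random walk on a randomly weighted complete graph.
n = 6
W = rng.uniform(0.5, 2.0, (n, n)); W = (W + W.T)/2; np.fill_diagonal(W, 0)
deg = W.sum(axis=1); P = W/deg[:, None]; pi = deg/deg.sum()

# Mean hitting times E_i T_j: solve (I - P)h = 1 on states != j, with h(j) = 0.
ET = np.zeros((n, n))
for j in range(n):
    idx = [i for i in range(n) if i != j]
    A = np.eye(n-1) - P[np.ix_(idx, idx)]
    ET[idx, j] = np.linalg.solve(A, np.ones(n-1))

tau0 = pi @ ET @ pi                    # sum_j pi_j E_pi T_j

lam = np.sort(np.linalg.eigvalsh(
    np.diag(np.sqrt(pi)) @ P @ np.diag(1/np.sqrt(pi))))
tau2 = 1/(1 - lam[-2])
eigentime = np.sum(1/(1 - lam[:-1]))   # sum over m >= 2 of (1-lambda_m)^{-1}

print(tau0, eigentime)                 # eigentime identity: these agree
print(tau2, (n-1)*tau2)                # tau0 lies between these two values
```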

Proof of Lemma 4.25. Fix states $a,b$ such that $E_{a}T_{b}+E_{b}T_{a}=\tau^{*}$ and fix a function $0\leq g\leq 1$ attaining the sup in the extremal characterization (Chapter 3 Theorem yyy), so that

$\tau^{*}=\frac{1}{{\cal E}(g,g)},\ \ g(a)=0,\ g(b)=1.$

Write $c=\sum_{i}\pi_{i}g(i)$. Applying the extremal characterization of $\tau_{2}$ to the centered function $g-c$,

$\tau_{2}\geq\frac{||g-c||_{2}^{2}}{{\cal E}(g-c,g-c)}=\frac{{\rm var}_{\pi}\,g(X_{0})}{{\cal E}(g,g)}=\tau^{*}\ {\rm var}_{\pi}\,g(X_{0}).$

But

$\displaystyle{\rm var}_{\pi}\,g(X_{0})\geq\pi_{a}c^{2}+\pi_{b}(1-c)^{2}\geq\inf_{0\leq y\leq 1}(\pi_{a}y^{2}+\pi_{b}(1-y)^{2})=\frac{\pi_{a}\pi_{b}}{\pi_{a}+\pi_{b}}\geq\frac{1}{2}\min(\pi_{a},\pi_{b})\geq\pi_{*}/2$

establishing the lemma. $\Box$

Simple examples show that the bounds in these lemmas cannot
be much improved in general.
Specifically

(a) on the complete graph (Chapter 5 Example yyy),
$\tau_{0}=(n-1)\tau_{2}$
and $\tau^{*}=\frac{2\tau_{2}}{\pi_{*}}$.

(b) On the barbell (Chapter 5 Example yyy),
$\tau_{2},\tau_{1}$ and $\tau_{0}$ are asymptotic to each other.

(c) In the $M/M/1/n$ queue,
$\tau_{1}/\tau_{2}=\Theta(\log 1/\pi_{*})$
as $n\rightarrow\infty$.
$\Box$

In the context of Lemma 4.23, if we want to relate $\tau_{1}^{{\mbox{disc}}}$ itself to eigenvalues in discrete time we need to take almost-periodicity into account and use $\beta=\max(\lambda_{2},-\lambda_{n})$ in place of $\tau_{2}$. Rephrasing the proof of Lemma 4.23 gives

Lemma 4.26. In discrete time,

$\lceil\frac{1}{\log 1/\beta}\rceil\leq\tau_{1}^{{\mbox{disc}}}\leq\lceil\frac{1+\frac{1}{2}\log 1/\pi_{*}}{\log 1/\beta}\rceil.$

Regarding a discrete-time chain as random walk on a weighted graph, let $\Delta$ be the diameter of the graph. By considering the definition of the variation distance $\bar{d}(t)$ and initial vertices $i,j$ at distance $\Delta$, it is obvious that $\bar{d}(t)=1$ for $t<\Delta/2$, and hence $\tau_{1}^{{\mbox{disc}}}\geq\lceil\Delta/2\rceil$. Combining with the upper bound in Lemma 4.26 leads to a relationship between the diameter and the eigenvalues of a weighted graph.

$\log\frac{1}{\beta}\leq\frac{2+\log(1/\pi_{*})}{\Delta}.$

This topic will be discussed further in Chapter yyy.
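As a quick sanity check (our example, not the text's): for random walk on the $n$-cycle with $n$ odd, $\beta=\cos(\pi/n)$ and $\Delta=(n-1)/2$, and the diameter inequality is easily confirmed numerically.

```python
import numpy as np

# Random walk on the n-cycle, n odd (so the walk is aperiodic and beta < 1).
n = 15
P = np.zeros((n, n))
for i in range(n):
    P[i, (i+1) % n] = P[i, (i-1) % n] = 0.5
lam = np.sort(np.linalg.eigvalsh(P))[::-1]   # P symmetric; pi is uniform
beta = max(lam[1], -lam[-1])                 # equals cos(pi/n) for odd n
pi_star = 1.0/n
Delta = (n - 1)//2                           # graph diameter of the cycle
print(np.log(1/beta), (2 + np.log(1/pi_star))/Delta)
```

Here the left side is $\Theta(n^{-2})$ while the right side is $\Theta(n^{-1}\log n)$, so the inequality holds with plenty of room on the cycle.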

Perhaps the most natural probabilistic interpretation of $\tau_{2}$ is as follows. Recall that the correlation between random variables $Y,Z$ is

$\mbox{cor}(Y,Z)\equiv\frac{E(YZ)-(EY)(EZ)}{\sqrt{{\rm var}\,Y\ {\rm var}\,Z}}.$

For a stationary Markov chain define the maximal correlation function

$\rho(t)\equiv\max_{h,g}\mbox{cor}(h(X_{0}),g(X_{t})).$

This makes sense for general chains (see Notes for further comments), but under our standing assumption of reversibility we have

In continuous time,

$\rho(t)=\exp(-t/\tau_{2}),\ t\geq 0.$

In discrete time,

$\rho(t)=\beta^{t},\ t\geq 0$

where $\beta=\max(\lambda_{2},-\lambda_{n})$.

This is a translation of the Rayleigh-Ritz characterization of eigenvalues (Chapter 3 yyy) – we leave the details to the reader.
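For a finite joint distribution the maximal correlation can be computed as the second-largest singular value of the matrix with entries $\pi_{i}p_{ij}(t)/\sqrt{\pi_{i}\pi_{j}}$, i.e. of $S^{t}$ where $S=D^{1/2}PD^{-1/2}$ (a standard characterization, stated here without proof). That gives a direct numerical check of the discrete-time identity $\rho(t)=\beta^{t}$; the chain below is an arbitrary illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(1)
# A small reversible chain from a random symmetric weight matrix
# (self-loops allowed), in discrete time.
n = 5
W = rng.uniform(0.1, 1.0, (n, n)); W = (W + W.T)/2
deg = W.sum(axis=1); P = W/deg[:, None]; pi = deg/deg.sum()

S = np.diag(np.sqrt(pi)) @ P @ np.diag(1/np.sqrt(pi))   # symmetric version of P
lam = np.linalg.eigvalsh(S)                              # ascending
beta = max(lam[-2], -lam[0])

# Maximal correlation of (X_0, X_t): the singular values of S^t are |lambda_m|^t,
# the largest being 1; the second-largest is beta^t.
for t in [1, 2, 5]:
    sv = np.linalg.svd(np.linalg.matrix_power(S, t), compute_uv=False)
    print(t, sv[1], beta**t)                             # rho(t) = beta^t
```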

Now consider a function $g$ with $E_{\pi}g(X_{0})=0$ and $||g||_{2}^{2}\equiv E_{\pi}g^{2}(X_{0})>0$. Write

$S_{t}\equiv\int_{0}^{t}g(X_{s})\,ds\mbox{ in continuous time};\qquad S_{t}\equiv\sum_{s=0}^{t-1}g(X_{s})\mbox{ in discrete time.}$

Recall from Chapter 2 yyy that for general chains there is a limit variance $\sigma^{2}=\lim_{t\rightarrow\infty}t^{-1}{\rm var}\ S_{t}$. Reversibility gives extra qualitative and quantitative information. The first result refers to the stationary chain.

Proposition 4.29. In continuous time, $t^{-1}{\rm var}_{\pi}\,S_{t}\uparrow\sigma^{2}$, where

$0<\sigma^{2}\leq 2\tau_{2}||g||_{2}^{2}.$

And $A(t/\tau_{2})\,t\sigma^{2}\leq{\rm var}_{\pi}\,S_{t}\leq t\sigma^{2}$, where

$A(u)\equiv\int_{0}^{u}\left(1-\frac{s}{u}\right)e^{-s}ds=1-u^{-1}(1-e^{-u})\uparrow 1\mbox{ as }u\uparrow\infty.$

In discrete time,

$t^{-1}{\rm var}_{\pi}\,S_{t}\rightarrow\sigma^{2}\leq 2\tau_{2}||g||_{2}^{2}$

$\sigma^{2}t\left(1-\frac{2\tau_{2}}{t}\right)\leq{\rm var}_{\pi}\,S_{t}\leq\sigma^{2}t+||g||_{2}^{2}$

and so in particular

${\rm var}_{\pi}\,S_{t}\leq t||g||_{2}^{2}(2\tau_{2}+\frac{1}{t}).$   (4.24)

Proof. Consider first the continuous time case. A brief calculation using the spectral representation (Chapter 3 yyy) gives

$E_{\pi}g(X_{0})g(X_{t})=\sum_{m\geq 2}g^{2}_{m}e^{-\lambda_{m}t}$   (4.25)

where $g_{m}=\sum_{i}\pi^{1/2}_{i}u_{im}g(i)$. So

$\displaystyle t^{-1}{\rm var}_{\pi}\,S_{t}=t^{-1}\int_{0}^{t}\int_{0}^{t}E_{\pi}g(X_{u})g(X_{s})\ ds\,du=2t^{-1}\int_{0}^{t}(t-s)E_{\pi}g(X_{0})g(X_{s})\,ds\qquad(4.26)$

$\displaystyle=2\int_{0}^{t}\left(1-\frac{s}{t}\right)\sum_{m\geq 2}g^{2}_{m}e^{-\lambda_{m}s}\ ds=2\sum_{m\geq 2}\frac{g^{2}_{m}}{\lambda_{m}}A(\lambda_{m}t)\qquad(4.27)$

by change of variables in the integral defining $A(u)$. The right side increases with $t$ to

$\sigma^{2}\equiv 2\sum_{m\geq 2}g^{2}_{m}/\lambda_{m},$   (4.28)

and the sum here is at most $\sum_{m\geq 2}g^{2}_{m}/\lambda_{2}=||g||_{2}^{2}\tau_{2}$. On the other hand, $A(\cdot)$ is increasing, so

$t^{-1}{\rm var}_{\pi}(S_{t})\geq 2\sum_{m\geq 2}\frac{g^{2}_{m}}{\lambda_{m}}A(\lambda_{2}t)=\sigma^{2}A(t/\tau_{2}).$

In discrete time the arguments are messier, and we will omit details of calculations. The analog of (4.26) becomes

$t^{-1}{\rm var}_{\pi}\,S_{t}=\sum_{s=-(t-1)}^{t-1}\left(1-\frac{|s|}{t}\right)\sum_{m\geq 2}g_{m}^{2}\lambda_{m}^{|s|}.$

In place of the change of variables argument for (4.27), one needs an elementary calculation to get

$t^{-1}{\rm var}_{\pi}\,S_{t}=2\sum_{m\geq 2}\frac{g_{m}^{2}}{1-\lambda_{m}}\ B(\lambda_{m},t)$   (4.29)

$\mbox{where }B(\lambda,t)=\frac{1+\lambda}{2}-\frac{\lambda(1-\lambda^{t})}{t(1-\lambda)}.$

This shows

$t^{-1}{\rm var}_{\pi}\,S_{t}\rightarrow\sigma^{2}\equiv\sum_{m\geq 2}g_{m}^{2}\frac{1+\lambda_{m}}{1-\lambda_{m}}$

and the sum is bounded above by

$\frac{1+\lambda_{2}}{1-\lambda_{2}}\sum_{m\geq 2}g_{m}^{2}=\frac{1+\lambda_{2}}{1-\lambda_{2}}||g||_{2}^{2}\leq 2\tau_{2}||g||_{2}^{2}.$

Next, rewrite (4.29) as

${\rm var}_{\pi}\,S_{t}-\sigma^{2}t=-2\sum_{m\geq 2}g_{m}^{2}\frac{\lambda_{m}(1-\lambda_{m}^{t})}{(1-\lambda_{m})^{2}}.$

Then the upper bound for ${\rm var}\ _{\pi}S_{t}$ follows by checking

$\inf_{-1\leq\lambda<1}\frac{\lambda(1-\lambda^{t})}{(1-\lambda)^{2}}\geq-\frac{1}{2}.$
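This infimum claim is elementary to confirm on a grid (a check we add; equality is approached at $\lambda=-1$ with $t$ odd):

```python
import numpy as np

# Grid check that lambda*(1 - lambda^t)/(1 - lambda)^2 >= -1/2 on [-1, 1).
lam = np.linspace(-1, 0.999, 20001)
for t in [1, 2, 3, 10, 11]:
    f = lam*(1 - lam**t)/(1 - lam)**2
    assert f.min() >= -0.5 - 1e-9
print("minimum over the grid for t = 11:", f.min())   # -1/2, at lambda = -1
```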

For the lower bound, one has to verify

$\frac{2\lambda(1-\lambda^{t})}{(1-\lambda)(1+\lambda)}\leq C\equiv\frac{2\lambda_{2}(1-\lambda_{2}^{t})}{(1-\lambda_{2})(1+\lambda_{2})},\ \ -1\leq\lambda\leq\lambda_{2}$

where in the sequel we may assume $\lambda_{2}>0$. Then

$B(\lambda_{m},t)\geq\frac{1+\lambda_{m}}{2}(1-C/t),\ m\geq 2$

and so

$t^{-1}{\rm var}_{\pi}\,S_{t}\geq\sigma^{2}\,(1-C/t).$

But

$\frac{C}{t}=\frac{2\lambda_{2}(1-\lambda_{2}^{t})}{t(1-\lambda_{2})(1+\lambda_{2})}\leq\frac{2}{t(1-\lambda_{2})}=2\tau_{2}/t$

giving the lower bound. $\Box$

Note that even in discrete time it is $\tau_{2}$ that matters in Proposition 4.29. Eigenvalues near $-1$ are irrelevant, except that for a periodic chain we have $\sigma=0$ for one particular function $g$ (which?).
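The discrete-time bounds of Proposition 4.29 can be verified exactly on a small chain, computing ${\rm var}_{\pi}S_{t}$ from the autocovariances. In the sketch below (an illustration, not from the text) the chain is made lazy so that $\lambda_{2}>0$, as assumed in the proof; the weights and test function are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(2)
# A small reversible chain; the lazy version (I+P)/2 keeps pi unchanged and
# makes every eigenvalue nonnegative.
n = 6
W = rng.uniform(0.2, 1.0, (n, n)); W = (W + W.T)/2
deg = W.sum(axis=1); P = 0.5*(np.eye(n) + W/deg[:, None]); pi = deg/deg.sum()

lam, U = np.linalg.eigh(np.diag(np.sqrt(pi)) @ P @ np.diag(1/np.sqrt(pi)))
tau2 = 1/(1 - lam[-2])

g = rng.normal(size=n); g -= pi @ g           # centered test function
norm2 = pi @ g**2                             # ||g||_2^2
gm = U.T @ (np.sqrt(pi)*g)                    # spectral coefficients g_m
sigma2 = np.sum(gm[:-1]**2*(1 + lam[:-1])/(1 - lam[:-1]))

def var_St(t):
    """Exact var_pi of S_t = g(X_0)+...+g(X_{t-1}) via autocovariances."""
    c = [pi @ (g*(np.linalg.matrix_power(P, s) @ g)) for s in range(t)]
    return sum((t - abs(s))*c[abs(s)] for s in range(-(t - 1), t))

for t in [5, 20, 100]:
    v = var_St(t)
    print(t, sigma2*t*(1 - 2*tau2/t), v, sigma2*t + norm2)   # lower <= v <= upper
```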

Continuing the study of $S_{t}\equiv\int_{0}^{t}g(X_{s})ds$ or its discrete analog for a stationary chain, standardize to the case where $E_{\pi}g(X_{0})=0,E_{\pi}g^{2}(X_{0})=1$. Proposition 4.29 provides finite-time bounds for the asymptotic approximation of variance. One would like a similar finite-time bound for the asymptotic Normal approximation of the distribution of $S_{t}$.

Is there some explicit function $\psi(b,s)\rightarrow 0$ as $s\rightarrow\infty$, not depending on the chain, such that for standardized $g$ and continuous-time chains,

$\sup_{x}\left|P_{\pi}\left(\frac{S_{t}}{\sigma t^{1/2}}\leq x\right)-P(Z\leq x)\right|\leq\psi(||g||_{\infty},t/\tau_{2})$

where $||g||_{\infty}\equiv\max_{i}|g(i)|$ and $Z$ has Normal$(0,1)$ distribution?

See the Notes for further comments. For the analogous result about large deviations see Chapter yyy.

Suppose we want to estimate the average $\bar{g}\equiv\sum_{i}\pi_{i}g(i)$ of a function $g$ defined on state space. If we could sample i.i.d. from $\pi$ we would need order $\varepsilon^{-2}$ samples to get an estimator with error about $\varepsilon\sqrt{{\rm var}_{\pi}\,g}$. Now consider the setting where we cannot directly sample from $\pi$ but instead use the “Markov Chain Monte Carlo” method of setting up a reversible chain with stationary distribution $\pi$. How many steps of the chain do we need to get the same accuracy? As in section 4.3.3, because we typically can’t quantify the closeness to $\pi$ of a feasible initial distribution, we consider bounds which hold for arbitrary initial states. In assessing the number of steps required, there are two opposite traps to avoid. The first is to say (cf. Proposition 4.29) that $\varepsilon^{-2}\tau_{2}$ steps suffice. This is wrong because the relaxation time bounds apply to the stationary chain and cannot be directly applied to a non-stationary chain. The second trap is to say that because it takes $\Theta(\tau_{1})$ steps to obtain one sample from the stationary distribution, we therefore need order $\varepsilon^{-2}\tau_{1}$ steps in order to get $\varepsilon^{-2}$ independent samples. This is wrong because we don’t need independent samples. The correct answer is order $(\tau_{1}+\varepsilon^{-2}\tau_{2})$ steps. The conceptual idea (cf. the definition of $\tau_{1}^{(2)}$) is to find a stopping time achieving distribution $\pi$ and use it as an initial state for simulating the stationary chain. More feasible to implement is the following algorithm.

Algorithm. For a specified real number $t_{1}>0$ and an integer $m_{2}\geq 1$, generate $M(t_{1})$ with Poisson$(t_{1})$ distribution. Simulate the chain $X_{t}$ from arbitrary initial distribution for $M(t_{1})+m_{2}-1$ steps and calculate

$A(g,t_{1},m_{2})\equiv\frac{1}{m_{2}}\sum_{t=M(t_{1})}^{M(t_{1})+m_{2}-1}g(X_{t}).$

$P\left(\left|A(g,t_{1},m_{2})-\bar{g}\right|>\varepsilon||g||_{2}\right)\leq s(t_{1})+\frac{2\tau_{2}+1/m_{2}}{\varepsilon^{2}m_{2}}$

where $s(t)$ is separation (recall section 4.3.1) for the continuized chain.

To make the right side approximately $\delta$ we may take

$t_{1}=\tau_{1}^{(1)}\lceil\log(2/\delta)\rceil;\ \ \ m_{2}=\lceil\frac{4\tau_{2}}{\varepsilon^{2}\delta}\rceil.$

Since the mean number of steps is $t_{1}+m_{2}-1$, this formalizes the idea that we can estimate $\bar{g}$ to within $\varepsilon||g||_{2}$ in order $(\tau_{1}+\varepsilon^{-2}\tau_{2})$ steps.
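The algorithm is straightforward to implement. A sketch (the chain, the function $g$, and the parameter values are arbitrary illustrative choices, not from the text): run a Poisson$(t_{1})$ number of burn-in steps from an arbitrary start, then average $g$ over the next $m_{2}$ steps.

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical target: random walk on a weighted 6-vertex path.
w = np.array([1.0, 3.0, 0.5, 2.0, 1.5])
n = len(w) + 1
W = np.zeros((n, n))
for i, we in enumerate(w):
    W[i, i+1] = W[i+1, i] = we
deg = W.sum(axis=1); P = W/deg[:, None]; pi = deg/deg.sum()

g = np.arange(n, dtype=float)
gbar = pi @ g                            # the target average (= 2.5 here)

def estimate(t1, m2, x0=0):
    """One run: Poisson(t1) burn-in steps, then average the next m2 values."""
    x = x0
    for _ in range(rng.poisson(t1)):
        x = rng.choice(n, p=P[x])
    total = 0.0
    for _ in range(m2):
        total += g[x]
        x = rng.choice(n, p=P[x])
    return total/m2

ests = [estimate(t1=100, m2=500) for _ in range(100)]
print(gbar, np.mean(ests))               # close on average
```

Note that the Poisson burn-in is exactly the continuization device in the statement: $X_{M(t_{1})}$ has the law of the continuized chain at time $t_{1}$, so periodicity of the discrete chain does no harm.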

xxx if don’t know tau‘s

Proof. We may suppose $\bar{g}=0$. Since $X_{M(t_{1})}$ has the distribution of the continuized chain at time $t_{1}$, we may use the definition of $s(t_{1})$ to write

$P(X_{M(t_{1})}\in\cdot)=(1-s(t_{1}))\pi+s(t_{1})\rho$

for some probability distribution $\rho$. It follows that

$P\left(\left|A(g,t_{1},m_{2})\right|>\varepsilon||g||_{2}\right)\leq s(t_{1})+P_{\pi}\left(\left|\frac{1}{m_{2}}\sum_{t=0}^{m_{2}-1}g(X_{t})\right|>\varepsilon||g||_{2}\right)$

$\leq s(t_{1})+\frac{1}{m_{2}^{2}\varepsilon^{2}||g||_{2}^{2}}\ {\rm var}_{\pi}\left(\sum_{t=0}^{m_{2}-1}g(X_{t})\right).$

Apply (4.24).

The extremal characterization (4.22) can be used to get lower bounds on $\tau_{2}$ by considering a tractable test function $g$. (xxx list examples). As mentioned in Chapter 3, it is an open problem to give an extremal characterization of $\tau_{2}$ as exactly an inf over flows or similar constructs. As an alternative, Theorem 4.32 gives a non-exact upper bound on $\tau_{2}$ involving quantities derived from arbitrary choices of paths between states. An elegant exposition of this idea, expressed by the first inequality in Theorem 4.32, was given by Diaconis and Stroock [122], and Sinclair [307] noted the second inequality. We copy their proofs.

We first state the result in the setting of random walk on a weighted graph. As in section 4.1, consider a path $x=i_{0},i_{1},\ldots,i_{m}=y$, and call this path $\gamma_{xy}$. This path has length $|\gamma_{xy}|=m$ and has “resistance”

$r(\gamma_{xy})\equiv\sum_{e\in\gamma_{xy}}1/w_{e}$

where here and below $e$ denotes a directed edge.

Theorem 4.32. For each ordered pair $(x,y)$ of vertices in a weighted graph, let $\gamma_{xy}$ be a path from $x$ to $y$. Then for discrete-time random walk,

$\tau_{2}\leq w\max_{e}\sum_{x}\sum_{y}\pi_{x}\pi_{y}r(\gamma_{xy})1_{(e\in\gamma_{xy})}$

$\tau_{2}\leq w\max_{e}\frac{1}{w_{e}}\sum_{x}\sum_{y}\pi_{x}\pi_{y}|\gamma_{xy}|1_{(e\in\gamma_{xy})}.$

Note that the two inequalities coincide on an unweighted graph.
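On a tree the paths $\gamma_{xy}$ are forced, which makes the theorem easy to try out. A sketch (the weighted path graph is an arbitrary illustrative choice, not from the text): compute the first bound and compare it with the exact $\tau_{2}$.

```python
import numpy as np

# Hypothetical example: random walk on a weighted path graph, where the
# path gamma_xy between any two vertices is unique.
wts = np.array([1.0, 0.3, 2.0, 1.0])
n = len(wts) + 1
W = np.zeros((n, n))
for i, we in enumerate(wts):
    W[i, i+1] = W[i+1, i] = we
deg = W.sum(axis=1); P = W/deg[:, None]; pi = deg/deg.sum()
w = W.sum()                               # total weight over directed edges

lam = np.sort(np.linalg.eigvalsh(
    np.diag(np.sqrt(pi)) @ P @ np.diag(1/np.sqrt(pi))))[::-1]
tau2 = 1/(1 - lam[1])

# kappa = max over directed edges e of sum over ordered pairs with e in
# gamma_xy of pi_x pi_y r(gamma_xy).  By symmetry it suffices to scan
# edges (k, k+1) against pairs x < y.
kappa = 0.0
for k in range(n - 1):
    tot = sum(pi[x]*pi[y]*np.sum(1/wts[x:y])     # r(gamma_xy) = sum of 1/w_e
              for x in range(n) for y in range(x + 1, n)
              if x <= k < y)                     # gamma_xy traverses edge k
    kappa = max(kappa, tot)

print(tau2, w*kappa)                      # Theorem 4.32: tau2 <= w*kappa
```

On the two-state graph (a single edge) this bound is tight; on longer paths it is an overestimate, but typically only by a modest factor.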

Proof. For an edge $e=(i,j)$ write $\Delta g(e)=g(j)-g(i)$. The first equality below is the fact $2\ {\rm var}\ (Y_{1})=E(Y_{1}-Y_{2})^{2}$ for i.i.d. $Y$’s, and the first inequality is Cauchy-Schwarz.

$\displaystyle 2||g||_{2}^{2}=\sum_{x}\sum_{y}\pi_{x}\pi_{y}(g(y)-g(x))^{2}=\sum_{x}\sum_{y}\pi_{x}\pi_{y}\left(\sum_{e\in\gamma_{xy}}\Delta g(e)\right)^{2}\qquad(4.30)$

$\displaystyle=\sum_{x}\sum_{y}\pi_{x}\pi_{y}r(\gamma_{xy})\left(\sum_{e\in\gamma_{xy}}\frac{1}{\sqrt{w_{e}r(\gamma_{xy})}}\sqrt{w_{e}}\Delta g(e)\right)^{2}\leq\sum_{x}\sum_{y}\pi_{x}\pi_{y}r(\gamma_{xy})\sum_{e\in\gamma_{xy}}w_{e}(\Delta g(e))^{2}\qquad(4.31)$

$\displaystyle=\sum_{x}\sum_{y}\pi_{x}\pi_{y}r(\gamma_{xy})\sum_{e}w_{e}(\Delta g(e))^{2}1_{(e\in\gamma_{xy})}\leq\kappa\sum_{e}w_{e}(\Delta g(e))^{2}=\kappa\ 2w{\cal E}(g,g)\qquad(4.32)$

where $\kappa$ is the max in the first inequality in the statement of the Theorem. The first inequality now follows from the extremal characterization (4.22). The second inequality makes a simpler use of the Cauchy-Schwarz inequality, in which we replace (4.30,4.31,4.32) by

$\displaystyle=\sum_{x}\sum_{y}\pi_{x}\pi_{y}\left(\sum_{e\in\gamma_{xy}}1\cdot\Delta g(e)\right)^{2}\qquad(4.33)$

$\displaystyle\leq\sum_{x}\sum_{y}\pi_{x}\pi_{y}|\gamma_{xy}|\sum_{e\in\gamma_{xy}}\left(\Delta g(e)\right)^{2}\leq\kappa^{\prime}\sum_{e}w_{e}(\Delta g(e))^{2}=\kappa^{\prime}\ 2w{\cal E}(g,g)$

where $\kappa^{\prime}$ is the max in the second inequality in the statement of the Theorem.

Remarks. (a) Theorem 4.32 applies to continuous-time (reversible) chains by setting $w_{ij}=\pi_{i}q_{ij}$.

(b) One can replace the deterministic choice of paths $\gamma_{xy}$ by random paths $\Gamma_{xy}$ of the form $x=V_{0},V_{1},\ldots,V_{M}=y$ of random length $M=|\Gamma_{xy}|$. The second inequality extends in the natural way, by taking expectations in (4.33) to give

$\leq\sum_{x}\sum_{y}\pi_{x}\pi_{y}\sum_{e}\left(\Delta g(e)\right)^{2}\ E\left(|\Gamma_{xy}|1_{(e\in\Gamma_{xy})}\right),$

and the conclusion is

$\tau_{2}\leq w\max_{e}\frac{1}{w_{e}}\sum_{x}\sum_{y}\pi_{x}\pi_{y}\ E\,|\Gamma_{xy}|1_{(e\in\Gamma_{xy})}.$

(c) Inequalities in the style of Theorem 4.32 are often called Poincaré inequalities because, to quote [122], they are “the discrete analog of the classical method of Poincaré for estimating the spectral gap of the Laplacian on a domain (see e.g. Bandle [39])”. I prefer the descriptive name the distinguished path method. This method has the same spirit as the coupling method for bounding $\tau_{1}$ (see Chapter yyy), in that we get to use our skill and judgement in making wise choices of paths in specific examples. xxx list examples. Though its main utility is in studying hard examples, we give some simple illustrations of its use below.

Write the conclusion of Corollary 4.33 as $\tau_{2}\leq w\max_{e}\frac{1}{w_{e}}\ F(e)$. Consider a regular unweighted graph, and let $\Gamma_{xy}$ be chosen uniformly from the set of minimum-length paths from $x$ to $y$. Suppose that $F(e)$ takes the same value $F$ for every directed edge $e$. A sufficient condition for this is that the graph be arc-transitive (see Chapter 8 yyy). Then, summing over edges in Corollary 4.33,

$\tau_{2}|\stackrel{\rightarrow}{{\cal E}}|\leq w\sum_{e}\sum_{x}\sum_{y}\pi_{x}\pi_{y}E|\Gamma_{xy}|1_{(e\in\Gamma_{xy})}=w\sum_{x}\sum_{y}\pi_{x}\pi_{y}E|\Gamma_{xy}|^{2}$

where $|\stackrel{\rightarrow}{{\cal E}}|$ is the number of directed edges. Now $w=|\stackrel{\rightarrow}{{\cal E}}|$, so we may reinterpret this inequality as follows.

Corollary 4.34. For random walk on an arc-transitive graph, $\tau_{2}\leq ED^{2}$, where $D=d(\xi_{1},\xi_{2})$ is the distance between independent uniform random vertices $\xi_{1},\xi_{2}$.

In the context of the $d$-dimensional torus $Z^{d}_{N}$, the upper bound is asymptotic (as $N\rightarrow\infty$) to $N^{2}E\left(\sum_{i=1}^{d}U_{i}\right)^{2}$, where the $U_{i}$ are independent uniform on $[0,1/2]$. This bound is asymptotic to $d(d+1/3)N^{2}/16$. Here (Chapter 5 Example yyy) in fact $\tau_{2}\sim dN^{2}/(2\pi^{2})$, so for fixed $d$ the bound is off by only a constant. On the $d$-cube (Chapter 5 Example yyy), $D$ has Binomial$(d,1/2)$ distribution and so the bound is $ED^{2}=(d^{2}+d)/4$, while in fact $\tau_{2}=d/2$.
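Both numbers quoted for the $d$-cube are easy to reproduce numerically (a check we add; $d=4$ is an arbitrary choice):

```python
import numpy as np
from itertools import product

d = 4
verts = list(product([0, 1], repeat=d))       # vertices of the d-cube
n = len(verts)
P = np.zeros((n, n))
for i, u in enumerate(verts):
    for j, v in enumerate(verts):
        if sum(a != b for a, b in zip(u, v)) == 1:   # Hamming distance 1
            P[i, j] = 1/d
lam = np.sort(np.linalg.eigvalsh(P))[::-1]
tau2 = 1/(1 - lam[1])                         # equals d/2

# D = Hamming distance between independent uniform vertices: Binomial(d, 1/2),
# so E D^2 = var + mean^2 = d/4 + d^2/4.
ED2 = np.mean([[sum(a != b for a, b in zip(u, v))**2 for v in verts]
               for u in verts])
print(tau2, ED2)      # d/2 = 2.0 and (d^2 + d)/4 = 5.0 for d = 4
```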

Intuitively one feels that the bound in Corollary 4.34 should hold for more general graphs, but the following example illustrates a difficulty.

Consider the graph on $n=2m$ vertices obtained from two complete graphs on $m$ vertices by adding $m$ edges comprising a matching of the two vertex-sets.

Here a straightforward implementation of Theorem 4.32 gives an upper bound of $2m$, while in fact $\tau_{2}=m/2$. On the other hand the conclusion of Corollary 4.34 would give an $O(1)$ bound. Thus even though this example has a strong symmetry property (vertex-transitivity: Chapter 8 yyy) no bound like Corollary 4.34 can hold.
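The value $\tau_{2}=m/2$ can be confirmed directly: the function equal to $+1$ on one clique and $-1$ on the other is an eigenvector with eigenvalue $(m-2)/m$, and this turns out to be $\lambda_{2}$. A numerical check ($m=20$ is an arbitrary choice):

```python
import numpy as np

m = 20
n = 2*m
A = np.zeros((n, n))
A[:m, :m] = 1.0; A[m:, m:] = 1.0          # two complete graphs on m vertices
np.fill_diagonal(A, 0.0)
for i in range(m):
    A[i, m + i] = A[m + i, i] = 1.0       # the matching edges
P = A/A.sum(axis=1, keepdims=True)        # every degree is m, so P = A/m
lam = np.sort(np.linalg.eigvalsh(P))[::-1]
tau2 = 1/(1 - lam[1])
print(tau2)                               # equals m/2
```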
