10 Some Graph Theory and Randomized Algorithms (September 1 1999)

10.1 Expanders

10.1.1 Definitions

The Cheeger time constant $\tau_c$ discussed in Chapter 4 section 5.1 (yyy 10/11/94 version) becomes, for an $r$-regular $n$-vertex graph,

$$\tau_c = \sup_A \frac{r\,|A|\,|A^c|}{n\,|\mathcal{E}(A,A^c)|}$$

where $\mathcal{E}(A,A^c)$ is the set of edges from a proper subset $A$ of vertices to its complement $A^c$. Our version of Cheeger's inequality (Chapter 4 Corollary 37 and Theorem 40) (yyy 10/11/94 version) is

$$\tau_c \leq \tau_2 \leq 8\tau_c^2. \qquad (10.1)$$

Definition. An expander family is a sequence $G_n$ of $r$-regular graphs (for some fixed $r > 2$), with $n \to \infty$ through some subsequence of integers, such that

$$\sup_n \tau_c(G_n) < \infty$$

or equivalently (by Cheeger’s inequality)

$$\sup_n \tau_2(G_n) < \infty.$$

One informally says “expander” for a generic graph $G_n$ in the family. The expander property is stronger than the rapid mixing property exemplified by the $d$-cube (Chapter 5 Example 15) (yyy 4/23/96 version). None of the examples in Chapter 5 is an expander family, and indeed there are no known elementary examples. Certain random constructions of regular graphs yield expanders: see Chapter 30 Proposition 1 (yyy 7/9/96 version). Explicit constructions of expander families, in particular the celebrated Ramanujan graphs, depend on group- and number-theoretic ideas outside our scope: see the elegant monograph of Lubotzky [243].

Graph parameters like $\tau_c$ are more commonly presented in inverted form (i.e. like $1/\tau_c$) as coefficients of expansion such as

$$h := \inf_A \frac{|\mathcal{E}(A,A^c)|}{r\,\min(|A|,|A^c|)}. \qquad (10.2)$$

A more familiar version ([95] page 26) of Cheeger’s inequality in graph theory becomes, on regular graphs,

$$h^2/2 \leq 1 - \lambda_2 \leq 2h. \qquad (10.3)$$

Since trivially $\tau_c \leq 1/h \leq 2\tau_c$, the two versions agree up to factors of $2$. Inequalities involving coefficients of expansion are often called isoperimetric inequalities. Expanders and isoperimetric inequalities have been studied extensively in graph theory and the theory of algorithms, e.g. Chung [95] Chapters 2 and 6, the conference proceedings [157], and the introduction of Lubotzky [243].

One algorithmic motivation for Cheeger-type inequalities concerns the computational complexity of calculating parameters like $\tau_c$ and $h$. Using the definition directly requires exponential (in $n$) time; but because eigenvalues can be calculated in polynomial time, these general inequalities imply that at least crude bounds can be computed in polynomial time.
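For concreteness, here is a small numerical illustration of this gap (the code and helper names are ours, not from the text): on the 3-cube, $\lambda_2$ comes from a polynomial-time eigensolver, while the coefficient of expansion $h$ of (10.2) is brute-forced over all $2^n$ vertex subsets, and the two are compared via the bounds (10.3).

```python
import itertools
import numpy as np

d = 3
n = 2 ** d
# Adjacency matrix of the d-cube: vertices are bitstrings, edges flip one bit.
A = np.zeros((n, n))
for v in range(n):
    for b in range(d):
        A[v, v ^ (1 << b)] = 1.0

# lambda_2: second-largest eigenvalue of the transition matrix P = A/d,
# computable in polynomial time.
lam = np.sort(np.linalg.eigvalsh(A / d))
lam2 = lam[-2]

# Coefficient of expansion h as in (10.2): brute force over all proper
# subsets -- exponential in n, feasible only for tiny graphs.
def edge_boundary(S):
    S = set(S)
    return sum(A[v, w] for v in S for w in range(n) if w not in S)

h = min(edge_boundary(S) / (d * min(k, n - k))
        for k in range(1, n)
        for S in itertools.combinations(range(n), k))

# Cheeger's inequality (10.3): h^2/2 <= 1 - lambda_2 <= 2h.
assert h ** 2 / 2 <= (1 - lam2) + 1e-9
assert (1 - lam2) <= 2 * h + 1e-6
print(f"lambda_2 = {lam2:.4f}, h = {h:.4f}")
```

For the 3-cube both quantities equal $1/3$, so the upper bound in (10.3) happens to be tight here; in general the eigenvalue yields only the crude two-sided bounds.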

10.1.2 Random walk on expanders

If we don’t pay attention to numerical constants, then general results about reversible chains easily give us the orders of magnitude of other hitting and mixing time parameters for random walks on expanders.

Theorem 10.1

For random walk on an expander family, as $n \to \infty$

$$\tau_1 = \Theta(\log n) \qquad (10.4)$$
$$\tau_0 = \Theta(n) \qquad (10.5)$$
$$\tau^* = \Theta(n) \qquad (10.6)$$
$$\sup_v E_v C = \Theta(n \log n) \qquad (10.7)$$

Proof. Recall the general inequality between $\tau_1$ and $\tau_2$ (Chapter 4 Lemma 23) (yyy 10/11/94 version), which on a regular graph becomes

$$\tau_1 \leq \tau_2(1 + {\textstyle\frac{1}{2}}\log n). \qquad (10.8)$$

This immediately gives the upper bound $\tau_1 = O(\log n)$. For the lower bound, having bounded degree obviously implies that the diameter $\Delta$ of the graph satisfies $\Delta = \Omega(\log n)$. And since the mean distance between an initial vertex $v$ and the position $X_T$ of the walk at a stopping time $T$ is at most $ET$, the definition of $\tau_1^{(2)}$ implies $d(v,w) \leq 2\tau_1^{(2)}$ for any pair of vertices, that is $\tau_1^{(2)} \geq \Delta/2$. This establishes (10.4). The general Markov chain fact $\tau_0 = \Omega(n)$ is Chapter 3 Proposition 14 (yyy 1/26/93 version). Chapter 4 Lemma 25 gives $\tau_0 \leq 2n\tau_2$. Combining these and the obvious inequality $\tau_0 \leq \tau^*/2$ establishes (10.5, 10.6). Finally, the lower bound in (10.7) follows from the general lower bound in Chapter 6 Theorem 31 (yyy 10/31/94 version), while the upper bound follows from the upper bound on $\tau^*$ combined with Chapter 2 Theorem 36 (yyy 8/18/94 version).

In many ways the important aspect of Theorem 10.1 is that $\tau_1$-type mixing times are of order $\log n$. We spell out some implications below. These hold for arbitrary regular graphs, though the virtue of expanders is that $\tau_2$ is bounded.

Proposition 10.2

There exist constants $K_1, K_2$ such that the following inequalities hold on any regular graph.

(i) For each vertex $v$ there exists a stopping time $T_v$ such that $P_v(X(T_v) \in \cdot)$ is uniform and $E_v T_v \leq K_1 \tau_2 \log n$.

(ii) For lazy random walk $\widetilde{X}_t$ (with hold-probability $1/2$)

$$P_v(\widetilde{X}_t = w) \geq {\textstyle\frac{1}{n}}\left(1 - {\textstyle\frac{1}{2^j}}\right) \mbox{ for all } t \geq j K_2 \tau_2 \log n \mbox{ and all vertices } v, w.$$

Proof. Part (i) is just the definition of $\tau_1^{(2)}$, combined with (10.8) and the fact $\tau_1^{(2)} = O(\tau_1)$.

yyy relate (ii) to Chapter 4 section 3.3    

Repeated use of (i) shows that we can get independent samples from $\pi$ by sampling at random times $T_1, T_2, T_3, \ldots$ with $E(T_{j+1} - T_j) \leq K_1 \tau_2 \log n$. Alternatively, repeated use of (ii) shows that we can get almost independent samples from $\pi$ by examining the lazy chain at deterministic times, as follows.

Corollary 10.3

Fix $j$ and let $t_0 \geq j K_2 \tau_2 \log n$. Write $(Y_1, \ldots, Y_L) = (X_{t_0}, \ldots, X_{L t_0})$. Then

$$P(Y_1 = y_1, \ldots, Y_L = y_L) \geq n^{-L}\left(1 - {\textstyle\frac{L}{2^j}}\right) \mbox{ for all } L, y_1, \ldots, y_L;$$
$$\mbox{the variation distance between } {\rm dist}(Y_1, \ldots, Y_L) \mbox{ and } \pi \times \cdots \times \pi \mbox{ is at most } L/2^j.$$

Examining the lazy chain at deterministic times means sampling the original walk at random times, but at bounded random times. Thus we can get $L$ precisely independent samples using (i) in a mean number $K_1 L \tau_2 \log n$ of steps, but without a deterministic upper bound on the number of steps. Using Corollary 10.3 we get almost independent samples (up to variation distance $\varepsilon$) in a number of steps deterministically bounded by $K_2 L \log(L/\varepsilon) \tau_2 \log n$.
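The deterministic-schedule sampler of Corollary 10.3 is simple to implement; the sketch below is ours (the function name and the toy graph are our own, and the spacing $t_0 = 50$ is an arbitrary choice for this tiny example, where in practice one would take $t_0 \geq j K_2 \tau_2 \log n$).

```python
import random

def lazy_walk_samples(adj, start, t0, L, rng=None):
    """Run the lazy walk (hold-probability 1/2) on the graph with
    adjacency lists `adj`, recording its state every t0 steps."""
    rng = rng or random.Random(0)
    v = start
    samples = []
    for _ in range(L):
        for _ in range(t0):
            if rng.random() < 0.5:        # lazy step: stay put
                continue
            v = rng.choice(adj[v])        # else move to a uniform neighbour
        samples.append(v)
    return samples

# Toy example: the 3-regular 3-cube, vertices 0..7, neighbours by bit flips.
adj = {v: [v ^ (1 << b) for b in range(3)] for v in range(8)}
samples = lazy_walk_samples(adj, start=0, t0=50, L=1000)
```

By Corollary 10.3, with the proper choice of $t_0$ the $L$ recorded states are within variation distance $L/2^j$ of $L$ i.i.d. uniform draws, using a number of steps bounded in advance.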

10.1.3 Counter-example constructions

Constructions with expanders are often useful in providing counter-examples to conjectures suggested by inspecting properties of random walk on the elementary examples of graphs in Chapter 5. For example, consider upper bounds on $\tau_0$ in terms of $\tau_2$ and $n$, in our setting of regular graphs. From general results for reversible chains in Chapter 4 (10/11/94 version: Lemma 24 and below (9))

$$\max\left(\tau_2, {\textstyle\frac{(n-1)^2}{n}}\right) \leq \tau_0 \leq (n-1)\tau_2.$$
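One can check this sandwich numerically on a small regular graph. The sketch below (ours, not from the text) does so on the 3-cube, computing $\tau_2$ from the spectral gap and $\tau_0$ via the eigentime identity $\tau_0 = \sum_{m \geq 2} (1 - \lambda_m)^{-1}$ from Chapter 3.

```python
import numpy as np

d = 3
n = 2 ** d
# Adjacency matrix of the d-cube: vertices are bitstrings, edges flip one bit.
A = np.zeros((n, n))
for v in range(n):
    for b in range(d):
        A[v, v ^ (1 << b)] = 1.0

lam = np.sort(np.linalg.eigvalsh(A / d))        # eigenvalues of P, ascending
tau2 = 1.0 / (1.0 - lam[-2])                    # relaxation time
tau0 = sum(1.0 / (1.0 - l) for l in lam[:-1])   # eigentime identity

# The sandwich: max(tau2, (n-1)^2/n) <= tau0 <= (n-1) tau2.
assert max(tau2, (n - 1) ** 2 / n) <= tau0 <= (n - 1) * tau2
```

Here $\tau_2 = 3/2$ and $\tau_0 = 29/4$, comfortably inside the bounds; the point of the counter-example below is that the upper bound $(n-1)\tau_2$ cannot be improved to $O(\max(n, \tau_2)\log n)$ in general.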

The examples in Chapter 5 are consistent with a conjecture

$$\tau_0 \stackrel{?}{=} O(\max(n, \tau_2) \log n) \qquad (10.9)$$

where the $\log n$ term is needed for the 2-dimensional torus. We now outline a counter-example.

Take $m$ copies of the complete graph on $m$ vertices. Distinguish one vertex $v_i$ from each copy $i$. Add edges to make the $(v_i)$ the vertices of an $r$-regular expander. For this graph $G_m$ we have, as $m \to \infty$ with fixed $r$,

$$n = m^2; \qquad \tau_2 = \Theta(m^2); \qquad \tau_0 = \Theta(m^3)$$

contradicting conjecture (10.9). We leave the details to the reader: the key point is that random walk on $G_m$ may be decomposed as random walk on the expander, with successive steps in the expander separated by sojourns of time $\Theta(m^2)$ within a clique.
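The construction itself can be sketched in a few lines (ours, for illustration only; the 2-regular ring wiring of the distinguished vertices below is merely a placeholder for the $r$-regular expander the counter-example actually requires):

```python
import itertools

def clique_expander_graph(m):
    """m disjoint copies of K_m on vertices (i, 0..m-1), with the
    distinguished vertices (i, 0) additionally wired into a ring
    (a placeholder for an r-regular expander on the (i, 0))."""
    edges = set()
    for i in range(m):
        # each copy i is a complete graph K_m
        for a, b in itertools.combinations(range(m), 2):
            edges.add(((i, a), (i, b)))
    for i in range(m):
        # extra edges on the distinguished vertices (i, 0)
        edges.add(((i, 0), ((i + 1) % m, 0)))
    return edges

G = clique_expander_graph(5)
# n = m^2 = 25 vertices; m * C(m, 2) clique edges plus m connecting edges.
```

The walk leaves a clique only through its distinguished vertex, roughly once per $\Theta(m^2)$ steps, which is the decomposition driving $\tau_2 = \Theta(m^2)$ and $\tau_0 = \Theta(m^3)$.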