# 10.1 Expanders

## 10.1.1 Definitions

The Cheeger time constant $\tau_{c}$ discussed in Chapter 4 section 5.1 (yyy 10/11/94 version) becomes, for an $r$-regular $n$-vertex graph,

 $\tau_{c}=\sup_{A}\frac{r\,|A|\,|A^{c}|}{n\,|{\cal E}(A,A^{c})|}$

where ${\cal E}(A,A^{c})$ is the set of edges from a proper subset $A$ of vertices to its complement $A^{c}$. Our version of Cheeger’s inequality is (Chapter 4 Corollary 37 and Theorem 40) (yyy 10/11/94 version)

 $\tau_{c}\leq\tau_{2}\leq 8\tau_{c}^{2}.$ (10.1)
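For a small graph the supremum in the definition of $\tau_{c}$ can be evaluated directly by enumerating subsets; a minimal sketch (plain Python, with `tau_c` a name chosen here, not from the text):

```python
import itertools

def tau_c(n, r, edges):
    """Brute-force evaluation of the defining supremum for an r-regular
    connected graph on vertices 0..n-1; edges is a set of frozenset
    pairs {u, v}.  Enumerating all proper subsets A takes exponential
    time in n, a point returned to below."""
    best = 0.0
    for k in range(1, n):
        for A in itertools.combinations(range(n), k):
            A = set(A)
            cut = sum(1 for e in edges if len(e & A) == 1)  # |E(A, A^c)|
            best = max(best, r * len(A) * (n - len(A)) / (n * cut))
    return best

# Complete graph K_4 (3-regular): every cut (A, A^c) contains |A||A^c|
# edges, so tau_c = (n-1)/n = 3/4.
K4 = {frozenset(e) for e in itertools.combinations(range(4), 2)}
print(tau_c(4, 3, K4))  # -> 0.75
```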

Definition. An expander family is a sequence $G_{n}$ of $r$-regular graphs (for some fixed $r>2$), with $n\to\infty$ through some subsequence of integers, such that

 $\sup_{n}\tau_{c}(G_{n})<\infty$

or equivalently (by Cheeger’s inequality)

 $\sup_{n}\tau_{2}(G_{n})<\infty.$

One informally says “expander” for a generic graph $G_{n}$ in the family. The expander property is stronger than the rapid mixing property exemplified by the $d$-cube (Chapter 5 Example 15) (yyy 4/23/96 version). None of the examples in Chapter 5 is an expander family, and indeed there are no known elementary examples. Certain random constructions of regular graphs yield expanders: see Chapter 30 Proposition 1 (yyy 7/9/96 version). Explicit constructions of expander families, in particular the celebrated Ramanujan graphs, depend on group- and number-theoretic ideas outside our scope: see the elegant monograph of Lubotzky [243].

Graph parameters like $\tau_{c}$ are more commonly presented in inverted form (i.e. like $1/\tau_{c}$) as coefficients of expansion such as

 $h:=\inf_{A}\frac{|{\cal E}(A,A^{c})|}{r\,\min(|A|,|A^{c}|)}.$ (10.2)

A more familiar version ([95] page 26) of Cheeger’s inequality in graph theory becomes, on regular graphs,

 $h^{2}/2\leq 1-\lambda_{2}\leq 2h.$ (10.3)

Since trivially $\tau_{c}\leq 1/h\leq 2\tau_{c}$ the two versions agree up to factors of $2$. Inequalities involving coefficients of expansion are often called isoperimetric inequalities. Expanders and isoperimetric inequalities have been studied extensively in graph theory and the theory of algorithms, e.g. Chung [95] Chapters 2 and 6, the conference proceedings [157], and the introduction of Lubotzky [243].

One algorithmic motivation for Cheeger-type inequalities concerns the computational complexity of calculating parameters like $\tau_{c}$ and $h$. Using the definition directly requires exponential (in $n$) time; but because eigenvalues can be calculated in polynomial time, these general inequalities imply that at least crude bounds can be computed in polynomial time.
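The polynomial-time route can be sketched numerically. Below, the $6$-cycle (a $2$-regular graph, chosen only because its spectrum is easy to check, not because it is an expander) has its spectral gap computed and then bracketed into bounds on $h$ via (10.3):

```python
import numpy as np

# Transition matrix of random walk on the n-cycle (a 2-regular graph).
n = 6
P = np.zeros((n, n))
for v in range(n):
    P[v, (v - 1) % n] = P[v, (v + 1) % n] = 0.5

# P is symmetric on a regular graph, so its eigenvalues are real;
# eigenvalue computation is polynomial-time in n.
lam = np.sort(np.linalg.eigvalsh(P))[::-1]
gap = 1 - lam[1]                     # spectral gap 1 - lambda_2

# Cheeger's inequality (10.3) rearranged: gap/2 <= h <= sqrt(2*gap).
lower, upper = gap / 2, np.sqrt(2 * gap)
print(lower, upper)                  # brackets the true h = 1/3 for the 6-cycle
```

For the $6$-cycle $\lambda_{2}=\cos(2\pi/6)=1/2$, so the crude bounds are $[0.25,\,1.0]$, which indeed contain the exact value $h=1/3$ attained by a path of $3$ consecutive vertices.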

## 10.1.2 Random walk on expanders

If we don’t pay attention to numerical constants, then general results about reversible chains easily give us the orders of magnitude of other hitting and mixing time parameters for random walks on expanders.

###### Theorem 10.1

For random walk on an expander family, as $n\to\infty$

 $\displaystyle\tau_{1}=\Theta(\log n)$ (10.4)

 $\displaystyle\tau_{0}=\Theta(n)$ (10.5)

 $\displaystyle\tau^{*}=\Theta(n)$ (10.6)

 $\displaystyle\sup_{v}E_{v}C=\Theta(n\log n)$ (10.7)

Proof. Recall the general inequality between $\tau_{1}$ and $\tau_{2}$ (Chapter 4 Lemma 23) (yyy 10/11/94 version), which on a regular graph becomes

 $\tau_{1}\leq\tau_{2}(1+{\textstyle\frac{1}{2}}\log n).$ (10.8)

This immediately gives the upper bound $\tau_{1}=O(\log n)$. For the lower bound, having bounded degree obviously implies that the diameter $\Delta$ of the graph satisfies $\Delta=\Omega(\log n)$. And since the mean distance between an initial vertex $v$ and the position $X_{T}$ of the walk at a stopping time $T$ is at most $ET$, the definition of $\tau_{1}^{(2)}$ implies $d(v,w)\leq 2\tau_{1}^{(2)}$ for any pair of vertices, that is $\tau_{1}^{(2)}\geq\Delta/2$. This establishes (10.4). The general Markov chain fact $\tau_{0}=\Omega(n)$ is Chapter 3 Proposition 14 (yyy 1/26/93 version). Chapter 4 Lemma 25 gives $\tau_{0}\leq 2n\tau_{2}$. Combining these and the obvious inequality $\tau_{0}\leq\tau^{*}/2$ establishes (10.5,10.6). Finally, the lower bound in (10.7) follows from the general lower bound in Chapter 6 Theorem 31 (yyy 10/31/94 version), while the upper bound follows from the upper bound on $\tau^{*}$ combined with Chapter 2 Theorem 36 (yyy 8/18/94 version).

In many ways the important aspect of Theorem 10.1 is that $\tau_{1}$-type mixing times are of order $\log n$. We spell out some implications below. These hold for arbitrary regular graphs, though the virtue of expanders is that $\tau_{2}$ is bounded.

###### Proposition 10.2

There exist constants $K_{1},K_{2}$ such that the following inequalities hold on any regular graph.

(i) For each vertex $v$ there exists a stopping time $T_{v}$ such that $P_{v}(X(T_{v})\in\cdot)$ is uniform and $E_{v}T_{v}\leq K_{1}\tau_{2}\log n$.

(ii) For lazy random walk $\widetilde{X}_{t}$ (with hold-probability $1/2$)

 $P_{v}(\widetilde{X}_{t}=w)\geq{\textstyle\frac{1}{n}}(1-{\textstyle\frac{1}{2^{j}}})\mbox{ for all }t\geq jK_{2}\tau_{2}\log n\mbox{ and all vertices }v,w.$

Proof. Part (i) is just the definition of $\tau_{1}^{(2)}$, combined with (10.8) and the fact $\tau_{1}^{(2)}=O(\tau_{1})$.

yyy relate (ii) to Chapter 4 section 3.3

Repeated use of (i) shows that we can get independent samples from $\pi$ by sampling at random times $T_{1},T_{2},T_{3},\ldots$ with $E(T_{j+1}-T_{j})\leq K_{1}\tau_{2}\log n$. Alternatively, repeated use of (ii) shows that we can get almost independent samples from $\pi$ by examining the lazy chain at deterministic times, as follows.

###### Corollary 10.3

Fix $j$ and let $t_{0}\geq jK_{2}\tau_{2}\log n$. Write $(Y_{1},\ldots,Y_{L})=(\widetilde{X}_{t_{0}},\widetilde{X}_{2t_{0}},\ldots,\widetilde{X}_{Lt_{0}})$. Then

 $P(Y_{1}=y_{1},\ldots,Y_{L}=y_{L})\geq n^{-L}(1-{\textstyle\frac{L}{2^{j}}})\mbox{ for all }L,y_{1},\ldots,y_{L};$

 $\mbox{so the variation distance between }{\rm dist}(Y_{1},\ldots,Y_{L})\mbox{ and }\pi\times\cdots\times\pi\mbox{ is at most }L/2^{j}.$

In terms of the original walk, examining the lazy chain at deterministic times means sampling at random times, but random times with a deterministic upper bound. Thus using (i) we can get $L$ exactly independent samples in a mean number $K_{1}L\tau_{2}\log n$ of steps, but without a deterministic upper bound on the number of steps; using Corollary 10.3 we get almost independent samples (up to variation distance $\varepsilon$) in a number of steps deterministically bounded by $K_{2}L\log(L/\varepsilon)\tau_{2}\log n$.
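The deterministic-time sampling scheme of (ii) can be sketched concretely. Below, the walk is the lazy walk on the $3$-cube $Q_{3}$ (Chapter 5's rapid-mixing example, used here only as a convenient small regular graph), and the spacing $t_{0}$ is an illustrative choice, not tuned to the constant $K_{2}$:

```python
import random

def lazy_walk_samples(adj, v0, t0, L, rng):
    """Run the lazy walk (hold-probability 1/2) on a regular graph given
    by adjacency lists, recording the state every t0 steps, L times.
    By Corollary 10.3 these L samples are nearly independent uniforms
    once t0 >= j * K2 * tau_2 * log n."""
    v, out = v0, []
    for _ in range(L):
        for _ in range(t0):
            if rng.random() < 0.5:      # hold with probability 1/2
                continue
            v = rng.choice(adj[v])      # else step to a uniform neighbor
        out.append(v)
    return out

# The 3-cube Q_3: vertices 0..7, neighbors obtained by flipping one bit.
adj = {v: [v ^ (1 << i) for i in range(3)] for v in range(8)}
samples = lazy_walk_samples(adj, 0, t0=20, L=1000, rng=random.Random(1))
```

With $1000$ samples at this spacing every vertex of the cube appears with frequency close to the uniform value $1/8$.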

## 10.1.3 Counter-example constructions

Constructions with expanders are often useful in providing counter-examples to conjectures suggested by inspecting properties of random walk on the elementary examples of graphs in Chapter 5. For example, consider upper bounds on $\tau_{0}$ in terms of $\tau_{2}$ and $n$, in our setting of regular graphs. From general results for reversible chains in Chapter 4 (10/11/94 version: Lemma 24 and below (9))

 $\max(\tau_{2},{\textstyle\frac{(n-1)^{2}}{n}})\leq\tau_{0}\leq(n-1)\tau_{2}.$

The examples in Chapter 5 are consistent with a conjecture

 $\tau_{0}=^{?}O(\max(n,\tau_{2})\log n)$ (10.9)

where the $\log n$ term is needed for the 2-dimensional torus. We now outline a counter-example.

Take $m$ copies of the complete graph on $m$ vertices. Distinguish one vertex $v_{i}$ in each copy $i$. Add edges to make the $(v_{i})$ the vertices of an $r$-regular expander. For this graph $G_{m}$ we have, as $m\rightarrow\infty$ with $r$ fixed,

 $n=m^{2};\ \tau_{2}=\Theta(m^{2});\ \tau_{0}=\Theta(m^{3})$

contradicting conjecture (10.9). We leave the details to the reader: the key point is that random walk on $G_{m}$ may be decomposed as random walk on the expander, with successive steps in the expander separated by sojourns of times $\Theta(m^{2})$ within a clique.
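The construction can be probed numerically. The sketch below substitutes, for a genuine expander on the $v_{i}$, a cycle plus a random perfect matching (a random construction of the kind mentioned in section 10.1.1; this substitution is an assumption made here for illustration, not part of the text's construction), and computes $\tau_{2}=1/(1-\lambda_{2})$:

```python
import numpy as np

def G_m(m, seed=0):
    """Adjacency matrix of the counter-example graph: m copies of K_m,
    with the distinguished vertices v_i = i*m joined by a cycle plus a
    random perfect matching (a stand-in for a 3-regular expander;
    occasional overlaps with cycle edges are ignored).  m must be even."""
    n = m * m
    A = np.zeros((n, n))
    for i in range(m):                  # clique i on vertices i*m .. i*m+m-1
        for a in range(m):
            for b in range(a + 1, m):
                A[i*m + a, i*m + b] = A[i*m + b, i*m + a] = 1
    for i in range(m):                  # cycle through the v_i
        A[i*m, ((i + 1) % m) * m] = A[((i + 1) % m) * m, i*m] = 1
    perm = np.random.default_rng(seed).permutation(m)
    for k in range(0, m, 2):            # random matching on the v_i
        i, j = perm[k], perm[k + 1]
        A[i*m, j*m] = A[j*m, i*m] = 1
    return A

def tau2(A):
    """Relaxation time 1/(1 - lambda_2): the random walk's transition
    matrix D^{-1}A has the same eigenvalues as the symmetric matrix
    D^{-1/2} A D^{-1/2}."""
    d = A.sum(axis=1)
    S = A / np.sqrt(np.outer(d, d))
    lam = np.sort(np.linalg.eigvalsh(S))[::-1]
    return 1.0 / (1.0 - lam[1])

print(tau2(G_m(6)), tau2(G_m(12)))      # growth consistent with Theta(m^2)
```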