The Cheeger time constant $\tau_c$ discussed in Chapter 4 Section 5.1 (yyy 10/11/94 version) becomes, for a $d$-regular $n$-vertex graph,
$$\tau_c = \sup_A \frac{d\,|A|\,|A^c|}{n\,|E(A,A^c)|}$$
where $E(A,A^c)$ is the set of edges from a proper subset $A$ of vertices to its complement $A^c$. Our version of Cheeger's inequality is (Chapter 4 Corollary 37 and Theorem 40) (yyy 10/11/94 version)
(10.1) $\tau_c \le \tau_2 \le 8\tau_c^2.$
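Both sides of (10.1) can be checked by brute force on a small regular graph. The sketch below (example graph and tolerances are our choices) computes $\tau_c$ directly from its definition and $\tau_2 = 1/(1-\lambda_2)$ by an eigenvalue routine, on the 3-cube.

```python
# Brute-force check of Cheeger's inequality (10.1) on the 3-cube.
import itertools
import numpy as np

def cube_adjacency(k):
    """Adjacency matrix of the k-dimensional hypercube (k-regular, 2^k vertices)."""
    n = 2 ** k
    A = np.zeros((n, n))
    for v in range(n):
        for b in range(k):
            A[v, v ^ (1 << b)] = 1
    return A

A = cube_adjacency(3)
n, d = A.shape[0], 3

# Relaxation time tau_2 = 1/(1 - lambda_2) for the walk P = A/d.
eigs = np.sort(np.linalg.eigvalsh(A / d))
tau2 = 1.0 / (1.0 - eigs[-2])

# Cheeger time constant: sup over proper subsets S of d|S||S^c| / (n |edges(S,S^c)|).
tau_c = 0.0
for r in range(1, n):
    for S in itertools.combinations(range(n), r):
        S = list(S)
        Sc = [v for v in range(n) if v not in S]
        cut = A[np.ix_(S, Sc)].sum()
        tau_c = max(tau_c, d * len(S) * len(Sc) / (n * cut))

assert tau_c <= tau2 + 1e-9
assert tau2 <= 8 * tau_c ** 2 + 1e-9
```

On the 3-cube both quantities equal $3/2$ (the extremal set is a half-cube), so the left inequality in (10.1) is tight for this example.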
Definition. An expander family is a sequence of $d$-regular graphs $G_n$ (for some fixed $d \ge 3$), with $n \to \infty$ through some subsequence of integers, such that
$$\sup_n \tau_2(G_n) < \infty,$$
or equivalently (by Cheeger's inequality)
$$\sup_n \tau_c(G_n) < \infty.$$
One informally says "expander" for a generic graph in the family. The expander property is stronger than the rapid mixing property exemplified by the $d$-cube (Chapter 5 Example 15) (yyy 4/23/96 version). None of the examples in Chapter 5 is an expander family, and indeed there are no known elementary examples. Certain random constructions of regular graphs yield expanders: see Chapter 30 Proposition 1 (yyy 7/9/96 version). Explicit constructions of expander families, in particular the celebrated Ramanujan graphs, depend on group- and number-theoretic ideas outside our scope: see the elegant monograph of Lubotzky [243].
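The random constructions can be illustrated numerically. The toy construction below is our own (it is not the construction of Chapter 30): superpose $k = 2$ uniform random permutations and their inverses to get a 4-regular multigraph, which is an expander with high probability, and watch the relaxation time $\tau_2 = 1/(1-\lambda_2)$ stay bounded as $n$ grows.

```python
# Random 4-regular multigraphs have bounded relaxation time (illustration only).
import numpy as np

rng = np.random.default_rng(0)

def random_regular_multigraph(n, k):
    """2k-regular multigraph on n vertices: superpose k uniform random
    permutations and their inverses (self-loops possible)."""
    A = np.zeros((n, n))
    for _ in range(k):
        perm = rng.permutation(n)
        for v in range(n):
            A[v, perm[v]] += 1
            A[perm[v], v] += 1
    return A

for n in (50, 100, 200):
    A = random_regular_multigraph(n, 2)      # 4-regular
    P = A / 4.0                               # transition matrix of the walk
    eigs = np.sort(np.linalg.eigvalsh(P))
    gap = 1.0 - eigs[-2]                      # spectral gap 1 - lambda_2
    print(n, round(1.0 / gap, 2))             # tau_2 stays O(1) as n grows
```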
Graph parameters like $\tau_c$ are more commonly presented in inverted form (i.e. like $1/\tau_c$) as coefficients of expansion such as
(10.2) $h := \min_{A:\, 1 \le |A| \le n/2} \frac{|E(A,A^c)|}{|A|}.$
A more familiar version ([95] page 26) of Cheeger’s inequality in graph theory becomes, on regular graphs,
(10.3) $\frac{h^2}{2d^2} \le 1 - \lambda_2 \le \frac{2h}{d}.$
Since trivially $d/(2h) \le \tau_c \le d/h$, the two versions agree up to factors of 2. Inequalities involving coefficients of expansion are often called isoperimetric inequalities. Expanders and isoperimetric inequalities have been studied extensively in graph theory and the theory of algorithms, e.g. Chung [95] Chapters 2 and 6, the conference proceedings [157], and the introduction of Lubotzky [243].
One algorithmic motivation for Cheeger-type inequalities concerns the computational complexity of calculating parameters like $\tau_c$ and $h$. Using the definition directly requires exponential (in $n$) time; but because eigenvalues can be calculated in polynomial time, these general inequalities imply that at least crude bounds can be computed in polynomial time.
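To make the contrast concrete, the sketch below computes $h$ directly from its definition (a minimization over exponentially many subsets) alongside the polynomial-time eigenvalue computation, and checks inequality (10.3) on the 3-cube (example graph and tolerance are ours).

```python
# Exponential-time h versus polynomial-time eigenvalue bound, on the 3-cube.
import itertools
import numpy as np

def cube_adjacency(k):
    n = 2 ** k
    A = np.zeros((n, n))
    for v in range(n):
        for b in range(k):
            A[v, v ^ (1 << b)] = 1
    return A

A = cube_adjacency(3)
n, d = 8, 3

# Exponential-time: minimize |edges(S, S^c)| / |S| over subsets with |S| <= n/2.
h = min(
    A[np.ix_(S, [v for v in range(n) if v not in S])].sum() / len(S)
    for r in range(1, n // 2 + 1)
    for S in map(list, itertools.combinations(range(n), r))
)

# Polynomial-time: lambda_2 of the transition matrix P = A/d.
lam2 = np.sort(np.linalg.eigvalsh(A / d))[-2]

# The regular-graph form of Cheeger's inequality (10.3).
assert h ** 2 / (2 * d ** 2) <= 1 - lam2 <= 2 * h / d + 1e-9
print(h, 1 - lam2)
```

Here $h = 1$ (the extremal set is a half-cube) and $1 - \lambda_2 = 2/3$, so the right-hand inequality of (10.3) is an equality for this graph.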
If we don’t pay attention to numerical constants, then general results about reversible chains easily give us the orders of magnitude of other hitting and mixing time parameters for random walks on expanders.
Theorem 10.1 For random walk on an expander family, as $n \to \infty$:
(10.4) $\tau_1 = \Theta(\log n)$
(10.5) $\tau_0 = \Theta(n)$
(10.6) $\max_{v,x} E_vT_x = \Theta(n)$
(10.7) $\max_v E_vC = \Theta(n \log n)$
where $C$ denotes the cover time.
Proof. Recall the general inequality between $\tau_1$ and $\tau_2$ (Chapter 4 Lemma 23) (yyy 10/11/94 version), which on a regular graph becomes
(10.8) $\tau_1 \le \tau_2 \left(1 + \tfrac{1}{2} \log n\right).$
This immediately gives the upper bound $\tau_1 = O(\log n)$. For the lower bound, having bounded degree obviously implies that the diameter $\Delta$ of the graph satisfies $\Delta = \Omega(\log n)$. And since the mean distance between an initial vertex and the position of the walk at a stopping time $T$ is at most $ET$, the definition of $\tau_1$ implies $ET = \Omega(\Delta)$ for any pair of vertices, that is $\tau_1 = \Omega(\log n)$. This establishes (10.4). The general Markov chain fact $\tau_0 \ge n-1$ is Chapter 3 Proposition 14 (yyy 1/26/93 version). Chapter 4 Lemma 25 gives $\max_{v,x} E_vT_x = O(n\tau_2)$. Combining these and the obvious inequality $\tau_0 \le \max_{v,x} E_vT_x$ establishes (10.5,10.6). Finally, the lower bound in (10.7) follows from the general lower bound in Chapter 6 Theorem 31 (yyy 10/31/94 version), while the upper bound follows from the upper bound on $\max_{v,x} E_vT_x$ combined with Chapter 2 Theorem 36 (yyy 8/18/94 version).
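The facts behind (10.5) and (10.6) can be verified exactly on a small graph. The sketch below (3-cube again, our choice) computes $\tau_0$ via the eigentime identity $\tau_0 = \sum_{j \ge 2} 1/(1-\lambda_j)$ and all hitting times via the fundamental matrix $Z = (I - P + \mathbf{1}\pi^T)^{-1}$, then checks $n - 1 \le \tau_0 \le \max_{v,x} E_vT_x$.

```python
# Eigentime identity and exact hitting times on the 3-cube.
import numpy as np

def cube_adjacency(k):
    n = 2 ** k
    A = np.zeros((n, n))
    for v in range(n):
        for b in range(k):
            A[v, v ^ (1 << b)] = 1
    return A

k = 3
n = 2 ** k
P = cube_adjacency(k) / k
pi = np.full(n, 1.0 / n)                      # stationary distribution (uniform)

# Eigentime identity: tau_0 = sum over non-principal eigenvalues of 1/(1 - lambda_j).
eigs = np.sort(np.linalg.eigvalsh(P))
tau0 = sum(1.0 / (1.0 - lam) for lam in eigs[:-1])

# Fundamental matrix Z and hitting times E_v T_x = (Z_xx - Z_vx)/pi_x.
Z = np.linalg.inv(np.eye(n) - P + np.outer(np.ones(n), pi))
ET = (np.diag(Z)[None, :] - Z) / pi[None, :]  # ET[v, x] = E_v T_x

assert n - 1 <= tau0 <= ET.max() + 1e-9
print(tau0, ET.max())
```

Here $\tau_0 = 7.25$ and the maximal hitting time is $10$ (vertex to antipodal vertex), so the sandwich $n-1 \le \tau_0 \le \max_{v,x} E_vT_x$ reads $7 \le 7.25 \le 10$.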
In many ways the important aspect of Theorem 10.1 is that $\tau_1$-type mixing times are of order $\tau_2 \log n$. We spell out some implications below. These hold for arbitrary regular graphs, though the virtue of expanders is that $\tau_2$ is bounded.
Corollary 10.2 There exist constants $K_1$ and $K_2$ such that the following inequalities hold on any regular graph.
(i) For each vertex $v$ there exists a stopping time $T_v$ such that $X(T_v)$ is uniform and $E_vT_v \le K_1 \tau_2 \log n$.
(ii) For lazy random walk (with hold-probability $1/2$), the distribution at any deterministic time $t \ge K_2 \tau_2 \log n$ is close to uniform in variation distance.
Proof. Part (i) is just the stopping-time definition of $\tau_1$, combined with (10.8) and the fact that the several variants of $\tau_1$ agree up to universal constants (Chapter 4 Section 3).
yyy relate (ii) to Chapter 4 section 3.3
Repeated use of (i) shows that we can get independent samples from the uniform distribution $\pi$ by sampling at random times, successive sampling times being on average $O(\tau_2 \log n)$ apart. Alternatively, repeated use of (ii) shows that we can get almost independent samples from $\pi$ by examining the lazy chain at deterministic times, as follows.
Corollary 10.3 Fix $m \ge 1$ and $t \ge K_2 \tau_2 \log n$. Write $(V_1, \ldots, V_m) = (X(t), X(2t), \ldots, X(mt))$ for the lazy chain examined at multiples of $t$. Then the distribution of $(V_1, \ldots, V_m)$ is close, in variation distance, to that of $m$ independent uniform samples, with error growing only linearly in $m$.
Examining the lazy chain at deterministic times means sampling the original walk at random times, but at bounded random times. Thus we can get $m$ precisely independent samples using (i) in mean number of steps $O(m \tau_2 \log n)$, but without a deterministic upper bound on the number of steps. Using Corollary 10.3 we get $m$ almost independent samples (up to small variation distance) in a number of steps deterministically bounded by $O(m \tau_2 \log n)$.
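A simulation sketch of the deterministic-times scheme on the 3-cube (the constants $t$ and $m$ below are ad hoc choices for this tiny graph, not the $K_2$ of Corollary 10.2): run the lazy walk, record its position every $t$ steps, and check that the recorded positions are nearly uniform.

```python
# Sampling the lazy walk on the 3-cube at deterministic times t, 2t, 3t, ...
import numpy as np

rng = np.random.default_rng(1)
k, n = 3, 8
t = 30            # comfortably larger than tau_2 * log n for this graph
m = 2000          # number of samples

v = 0
samples = []
for _ in range(m):
    for _ in range(t):
        if rng.random() < 0.5:        # hold-probability 1/2 (lazy step)
            continue
        v ^= 1 << rng.integers(k)     # otherwise flip a uniform random coordinate
    samples.append(v)

counts = np.bincount(samples, minlength=n)
# Each vertex should receive about m/n = 250 of the samples.
assert abs(counts / m - 1.0 / n).max() < 0.05
print(counts)
```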
Constructions with expanders are often useful in providing counter-examples to conjectures suggested by inspecting properties of random walk on the elementary examples of graphs in Chapter 5. For example, consider upper bounds on $\max_{v,x} E_vT_x$ in terms of $n$ and $\tau_2$, in our setting of regular graphs. From general results for reversible chains in Chapter 4 (10/11/94 version: Lemma 24 and below (9)),
$$\max_{v,x} E_vT_x = O(n \tau_2).$$
The examples in Chapter 5 are consistent with a conjecture
(10.9) $\max_{v,x} E_vT_x = O(\tau_2 + n \log n)$
where the $n \log n$ term is needed for the two-dimensional torus. We now outline a counter-example.
Take $m$ copies of the complete graph on $m$ vertices, so $n = m^2$. Distinguish one vertex $v_i$ from each copy $i$. Add edges to make the distinguished vertices $(v_i)$ the vertices of a $d$-regular expander. For this graph we have, as $m \to \infty$ with $d$ fixed,
$$\tau_2 = \Theta(m^2), \qquad \max_{v,x} E_vT_x = \Theta(m^3) = \Theta(n^{3/2}),$$
contradicting conjecture (10.9). We leave the details to the reader: the key point is that random walk on the graph may be decomposed as random walk on the expander, with successive steps in the expander separated by sojourns of duration $\Theta(m^2)$ within a clique.
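The sojourn-time claim can be checked exactly for a single copy. In the toy model below (ours), a walk inside $K_m$ whose distinguished vertex carries $d$ extra edges to an absorbing "outside" must repeatedly return to that vertex, and each visit exits only with probability about $d/m$; the exact mean exit time works out to $1 + m(m-1)/d = \Theta(m^2/d)$.

```python
# Mean time for the walk to leave one clique, computed exactly.
import numpy as np

def mean_exit_time(m, d):
    """Expected steps to leave K_m, started at the distinguished vertex 0,
    which carries d extra edges to the (absorbing) outside."""
    # States 0..m-1 are the clique's vertices; P is substochastic because
    # the d/(m-1+d) chance of stepping outside from vertex 0 is absorption.
    P = np.zeros((m, m))
    P[0, 1:] = 1.0 / (m - 1 + d)               # vertex 0 to its clique neighbours
    for v in range(1, m):
        P[v, [u for u in range(m) if u != v]] = 1.0 / (m - 1)
    # Expected absorption times solve (I - P) h = 1.
    h = np.linalg.solve(np.eye(m) - P, np.ones(m))
    return h[0]

for m in (10, 20, 40):
    d = 3
    print(m, mean_exit_time(m, d), m * m / d)  # same order of magnitude
```

With $d$ fixed, each visit to a clique thus costs order $m^2$ steps before the walk next moves along the expander.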