13 Continuous State, Infinite State and Random Environment (June 23, 2001)

13.3 Random Walks in Random Environments

In talking about random walk on a weighted graph, we have been assuming the graph is fixed. It is conceptually only a minor modification to consider the case where the “environment” (the graph or the edge-weights) is itself first given in some specified random manner. This has been studied in several rather different contexts, and we will give a brief description of known results without going into many details.

Quantities like our mixing time parameters $\tau$ from Chapter 4 are now random quantities $\boldsymbol{\tau}$. In general we shall use boldface for quantities depending on the realization of the environment but not depending on a realization of the walk.

13.3.1 Mixing times for some random regular graphs

There is a body of work on estimating mixing times for various models of random regular graph. We shall prove two simple results which illustrate two basic techniques, and record some of the history in the Notes.

The first result is Proposition 1.2.1 of Lubotzky [243]. This illustrates the technique of proving expansion (i.e., upper-bounding the Cheeger time constant $\tau_c$) by direct counting arguments in the random graph.

Proposition 13.11

Let $G_{k,n}$ be the $2k$-regular random graph on vertices $\{1,2,\ldots,n\}$ with edges $\{(i,\pi_j(i)):1\le i\le n,\ 1\le j\le k\}$, where $(\pi_j,\ 1\le j\le k)$ are independent uniform random permutations of $\{1,2,\ldots,n\}$. Write $\boldsymbol{\tau}_c(k,n)$ for the Cheeger time constant for random walk on $G_{k,n}$. Then for fixed $k\ge 7$,

$$P(\boldsymbol{\tau}_c(k,n)>2k)\to 0 \mbox{ as } n\to\infty.$$

Note that a realization of $G_{k,n}$ may be disconnected (in which case $\tau_c=\infty$) and may have self-loops and multiple edges.

Outline of proof. Suppose a realization of the graph has the property

$$|A|\le n/2\ \Rightarrow\ |\partial A|\ge |A|/2 \qquad (13.59)$$

where $\partial A:=\{\mbox{edges }(i,j):i\in A,\ j\in A^c\}$. Then

$$\tau_c=\sup_{A:1\le|A|\le n/2}\frac{k|A|(n-|A|)}{n|\partial A|}\le\sup_{A:1\le|A|\le n/2}\frac{k|A|(n-|A|)}{n|A|/2}\le 2k.$$

So we want to show that (13.59) holds with probability $\to 1$ as $n\to\infty$. If (13.59) fails for some $A$ with $|A|=a$, then there exists $B$ with $|B|=\lfloor\frac{3}{2}a\rfloor=b$ such that

$$\pi_j(A)\subseteq B,\ 1\le j\le k \qquad (13.60)$$

(just take $B=\cup_j\pi_j(A)$ plus, if necessary, arbitrary extra vertices). For given $A$ and $B$, the chance of (13.60) equals $\left((b)_a/(n)_a\right)^k$, where $(n)_a:=\prod_{r=0}^{a-1}(n-r)$. So the chance that (13.59) fails is at most

$$\sum_{1\le a\le n/2}q(a),\ \mbox{ where } q(a)=\binom{n}{a}\binom{n}{b}\left((b)_a/(n)_a\right)^k.$$

So it suffices to verify $\sum_{1\le a\le n/2}q(a)\to 0$. And this is a routine but tedious verification (see Notes). $\Box$
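The expansion property (13.59) is easy to probe numerically. The following sketch builds a realization of $G_{k,n}$ from $k$ random permutations and spot-checks (13.59) on randomly sampled subsets $A$; an exhaustive check is exponential in $n$, so this is an illustration, not a verification.

```python
import random

def random_2k_regular(n, k, rng):
    """Edge list of G_{k,n}: edges (i, pi_j(i)) for k independent
    uniform random permutations pi_1, ..., pi_k of {0, ..., n-1}."""
    edges = []
    for _ in range(k):
        pi = list(range(n))
        rng.shuffle(pi)
        edges.extend((i, pi[i]) for i in range(n))
    return edges

def boundary_size(edges, A):
    """|bd A|: number of edges with exactly one endpoint in A."""
    A = set(A)
    return sum((i in A) != (j in A) for i, j in edges)

rng = random.Random(0)
n, k = 200, 7
edges = random_2k_regular(n, k, rng)

# Spot-check (13.59): |A| <= n/2  implies  |bd A| >= |A|/2.
ok = True
for _ in range(200):
    a = rng.randrange(1, n // 2 + 1)
    A = rng.sample(range(n), a)
    ok = ok and boundary_size(edges, A) >= a / 2
print(ok)
```

For uniformly random subsets the boundary is far larger than $|A|/2$ with overwhelming probability; the counting argument in the proof is needed precisely to control the exceptional sets.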

Of course the bound on $\tau_c$ gives, via Cheeger's inequality, a bound on $\tau_2$, and thence a bound on $\tau_1$ via $\tau_1=O(\tau_2\log n)$. But Proposition 13.11 is unsatisfactory in that these bounds get worse as $k$ increases, whereas intuitively they should get better. For bounds on $\tau_1$ which improve with $k$ we turn to the second technique, which uses the "$L^1\le L^2$" inequality to bound the variation threshold time $\tau_1$. Specifically, recall (Chapter 3 Lemma 8b) that for an $n$-state reversible chain with uniform stationary distribution, the variation distance $d(t)$ satisfies

$$d(t)\le 2\max_i\left(np_{ii}(2t)-1\right)^{1/2}. \qquad (13.61)$$

This is simplest to use for random walk on a group, as illustrated by the following result of Roichman [296].

Proposition 13.12

Fix $\alpha>1$. Given a group $G$, let $S$ be a random set of $k=\lfloor\log^\alpha|G|\rfloor$ distinct elements of $G$, and consider random walk on the associated Cayley graph with edges $\{(g,gs):g\in G,\ s\in S\cup S^{-1}\}$. For any sequence of groups with $|G|\to\infty$,

$$P(\boldsymbol{\tau}_1>t_1)\to 0,\ \mbox{ where } t_1=\left\lceil\frac{\alpha}{\alpha-1}\,\frac{\log|G|}{\log k}\right\rceil.$$

Proof. We first give a construction of the random walk jointly with the random set $S$. Write $A=\{a,b,\ldots\}$ for a set of $k$ symbols, and write $\bar A=\{a,a^{-1},b,b^{-1},\ldots\}$. Fix $t\ge 1$ and let $(\xi_s,\ 1\le s\le t)$ be independent uniform on $\bar A$. Choose $(g(a),\ a\in A)$ by uniform sampling without replacement from $G$, and set $g(a^{-1})=(g(a))^{-1}$. Then the process $(X_s;\ 1\le s\le t)$ constructed via $X_s=g(\xi_1)g(\xi_2)\cdots g(\xi_s)$ is distributed as the random walk on the random Cayley graph, started at the identity $\iota$. So $P(X_t=\iota)=E\,{\bf p}_{\iota\iota}(t)$, where ${\bf p}_{\iota\iota}(t)$ is the $t$-step transition probability in the random environment, and by (13.61) it suffices to take $t=t_1$ (for $t_1$ defined in the statement of the Proposition) and show

$$|G|\,P(X_{2t}=\iota)-1\to 0. \qquad (13.62)$$

To start the argument, let $J(2t)$ be the number of distinct values taken by $(\langle\xi_s\rangle,\ 1\le s\le 2t)$, where we define $\langle a\rangle=\langle a^{-1}\rangle=a$. Fix $j\le t$ and $1\le s_1<s_2<\cdots<s_j\le 2t$. Then

$$P(J(2t)=j\mid\langle\xi_{s_i}\rangle\mbox{ distinct for }1\le i\le j)=(j/k)^{2t-j}\le(t/k)^t.$$

By considering the possible choices of $(s_i)$,

$$P(J(2t)=j)\le\binom{2t}{j}(t/k)^t.$$

Since $\sum_j\binom{2t}{j}=2^{2t}$ we deduce

$$P(J(2t)\le t)\le(4t/k)^t. \qquad (13.63)$$

Now consider the construction of $X_{2t}$ given above. We claim

$$P(X_{2t}=\iota\mid\xi_s,\ 1\le s\le 2t)\le\frac{1}{|G|-2t}\ \mbox{ on }\{J(2t)>t\}. \qquad (13.64)$$

For if $J(2t)>t$ then there exists some $b\in A$ such that $\langle\xi_s\rangle=b$ for exactly one value of $s$ in $1\le s\le 2t$. So if we condition also on $\{g(a);\ a\in A,\ a\ne b\}$, then $X_{2t}=g_1g(b)g_2$ or $g_1g(b)^{-1}g_2$, where $g_1$ and $g_2$ are determined by the conditioning; the conditional probability that $X_{2t}=\iota$ is then the conditional probability of $g(b)$ taking a particular value, which is at most $1/(|G|-2t)$.

Combining (13.64) and (13.63),

$$P(X_{2t}=\iota)\le(4t/k)^t+\frac{1}{|G|-2t}\le(4t/k)^t+\frac{1}{|G|}+O\!\left(\frac{t}{|G|^2}\right).$$

So proving (13.62) reduces to proving

$$|G|(4t/k)^t+t/|G|\to 0$$

and the definition of $t$ was made to ensure this.
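The joint construction in the proof is easy to run. The sketch below takes the abelian group ${\bf Z}_m$ (our illustrative choice, written additively, so $g(a^{-1})=-g(a)$), draws $k=\lfloor\log^\alpha|G|\rfloor$ generators without replacement, and estimates the return probability after $2t_1$ steps, which should be close to $1/|G|$ as (13.62) predicts.

```python
import math
import random

rng = random.Random(1)
m = 10007                      # group G = Z_m under addition; identity is 0
alpha = 2.0
k = int(math.log(m) ** alpha)  # k = floor(log^alpha |G|) generators
t1 = math.ceil((alpha / (alpha - 1)) * math.log(m) / math.log(k))

# g(a) for the k symbols: distinct nonzero group elements.
g = rng.sample(range(1, m), k)

def walk_end(steps):
    """X_s = g(xi_1) + ... + g(xi_s) mod m, with xi_s uniform on A-bar
    (i.e., a uniform generator with a uniform +/- sign)."""
    x = 0
    for _ in range(steps):
        s = rng.choice(g)
        x = (x + (s if rng.random() < 0.5 else -s)) % m
    return x

N = 20000
returns = sum(walk_end(2 * t1) == 0 for _ in range(N))
print(k, t1, returns)  # expect roughly N/m returns, here about 2
```

With $m=10007$ and $\alpha=2$ this gives $k=84$ and $t_1=5$; after only $2t_1=10$ steps the walk is already nearly uniform on a group of size $10^4$.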

13.3.2 Randomizing infinite trees

Simple random walk on the infinite regular tree is a fundamental process, already discussed in section 13.2.6. There are several natural ways to randomize the environment: we could take an infinite regular tree and attach random edge-weights, or we could consider a Galton–Watson tree, in which the numbers of children are random. Let us start by considering these possibilities simultaneously. Fix a distribution $(\xi;W_1,W_2,\ldots,W_\xi)$ where

$$\xi\ge 1;\quad P(\xi\ge 2)>0;\quad W_i>0,\ i\le\xi. \qquad (13.65)$$

Note the $(W_i)$ may be dependent. Construct a tree via:

the root $\phi$ has $\xi$ children, and the edge $(\phi,i)$ to the $i$th child has weight $W_i$; repeat recursively for each child, taking independent realizations of the distribution (13.65).

So the case $\xi\equiv r-1$ gives the randomly-weighted $r$-ary tree (precisely, the modification where the root has degree $r-1$ instead of $r$), and the case $W_i\equiv 1$ gives a Galton–Watson tree. As in Chapter 3 section 2, to each realization of a weighted graph we associate a random walk with transition probabilities proportional to edge-weights. Since random walk on the unweighted $r$-ary tree is transient, a natural first issue is to prove transience in this "random environment" setting. In terms of the electrical network analogy (see the comment below Theorem 13.5), interpreting $W$ as conductance, we want to know whether the (random) resistance ${\bf R}$ between $\phi$ and $\infty$ is a.s. finite. By considering the children of $\phi$, it is clear that the distribution of ${\bf R}$ satisfies

$${\bf R}\ \stackrel{d}{=}\ \left(\sum_{i=1}^{\xi}({\bf R}_i+W_i^{-1})^{-1}\right)^{-1} \qquad (13.66)$$

where the $({\bf R}_i)$ are independent of each other and of $(\xi;W_1,W_2,\ldots,W_\xi)$, and ${\bf R}_i\stackrel{d}{=}{\bf R}$. But $\hat{\bf R}\equiv\infty$ is a solution of (13.66), so we need some work to actually prove that ${\bf R}<\infty$.

Proposition 13.13

The resistance ${\bf R}$ between $\phi$ and $\infty$ satisfies ${\bf R}<\infty$ a.s.

Proof. Write ${\bf R}^{(k)}$ for the resistance between $\phi$ and height $k$ (i.e., the height-$k$ vertices, all shorted together). Clearly ${\bf R}^{(k)}\uparrow{\bf R}$ as $k\to\infty$, and analogously to (13.66)

$${\bf R}^{(k+1)}\ \stackrel{d}{=}\ \left(\sum_{i=1}^{\xi}({\bf R}^{(k)}_i+W_i^{-1})^{-1}\right)^{-1}$$

where the $({\bf R}^{(k)}_i)$ are independent of each other and of $(\xi;W_1,W_2,\ldots,W_\xi)$, and ${\bf R}^{(k)}_i\stackrel{d}{=}{\bf R}^{(k)}$.

Consider first the special case $\xi\equiv 3$. Choose $x$ such that $P(W_i^{-1}>x\mbox{ for some }i)\le 1/16$. Suppose inductively that $P({\bf R}^{(k)}>x)\le 1/4$ (which holds for $k=0$ since ${\bf R}^{(0)}=0$). Then

$$P({\bf R}^{(k)}_i+W_i^{-1}>2x\mbox{ for at least 2 $i$'s})\le\frac{1}{16}+3\left(\frac{1}{4}\right)^2\le\frac{1}{4}.$$

This implies $P({\bf R}^{(k+1)}>x)\le 1/4$ (if at least two of the three branches have resistance at most $2x$, their parallel combination has resistance at most $x$), and the induction goes through. Thus $P({\bf R}>x)\le 1/4$. By (13.66), $p:=P({\bf R}=\infty)$ satisfies $p=p^3$, so $p=0$ or $1$, and we just eliminated the possibility $p=1$. So ${\bf R}<\infty$ a.s.

Reducing the general case to the special case involves a comparison idea, illustrated by the figure.

[Figure: two networks, edge-weights being resistances. Left: $\phi$ is joined to $\{a,b,c\}$ by a tree with edge-resistances $r(1),\ldots,r(5)$, and $a,b,c$ are joined to $\infty$ through resistances $s(1),s(2),s(3)$. Right: the tree is replaced by direct edges $\phi$–$a$, $\phi$–$b$, $\phi$–$c$, each of resistance $r$, with the same $s(1),s(2),s(3)$.]

Here the edge-weights are resistances. In the left network, $\phi$ is linked to $\{a,b,c\}$ via an arbitrary tree, and in the right network this tree is replaced by three direct edges, each with resistance $r=3(r(1)+r(2)+\cdots+r(5))$. We claim that this replacement can only increase the resistance between $\phi$ and $\infty$. This is a nice illustration of Thompson's principle (Chapter 3 section 7.1), which says that in a realization of either graph, writing $r^*(e)$ for resistance and summing over undirected edges $e$,

$$R_{\phi\infty}=\inf_{\bf f}\sum_e r^*(e)f^2(e)$$

where ${\bf f}=(f(e))$ is a unit flow from $\phi$ to $\infty$. Let ${\bf f}$ be the minimizing flow in the right network; use ${\bf f}$ to define a flow ${\bf g}$ in the left network by specifying that the flow through $a$ (resp. $b$, $c$) is the same in the left network and the right network. It is easy to check

$$\mbox{(left network) }\sum_e r^*(e)g^2(e)\ \le\ \mbox{(right network) }\sum_e r^*(e)f^2(e)$$

and hence the resistance $R_{\phi\infty}$ can indeed only be smaller in the left network.

In the general case, the fact $P(\xi\ge 2)>0$ implies that the number of individuals in generation $g$ tends to $\infty$ a.s. as $g\to\infty$. So in particular we can find $3$ distinct individuals $\{A,B,C\}$ in some generation $G$. Retain the edges linking $\phi$ with these $3$ individuals, and cut all other edges within the first $G$ generations. Repeat recursively for the descendants of $\{A,B,C\}$. This procedure constructs an infinite subtree, and it suffices to show that the resistance between $\phi$ and $\infty$ in the subtree is a.s. finite. By the comparison argument above, we may replace the network linking $\phi$ to $\{A,B,C\}$ by three direct edges with the same (random) resistance, and similarly for each stage of the construction of the subtree; this gives another tree ${\cal T}$, and it suffices to show its resistance is a.s. finite. But ${\cal T}$ fits the special case $\xi\equiv 3$. $\Box$
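The special-case induction can be probed by iterating the recursion (13.66) over a Monte Carlo population. The sketch below takes $\xi\equiv 3$ and $W\sim$ Exponential(1) (our illustrative choice, for which $EW^{-1}=\infty$, so finiteness of ${\bf R}$ is not obvious); note that $x=48$ satisfies the proof's condition $P(W_i^{-1}>x\mbox{ for some }i)\le 1/16$ here.

```python
import random

rng = random.Random(2)

def iterate_R(gens=15, pop_size=20000):
    """Monte Carlo iteration of (13.66) in the special case xi = 3, with
    W ~ Exponential(1): a population approximating the law of R^(k)."""
    pop = [0.0] * pop_size            # R^(0) = 0: height 0 is the root
    for _ in range(gens):
        new = []
        for _ in range(pop_size):
            # three branches: child's resistance plus edge resistance 1/W
            b = [rng.choice(pop) + 1.0 / rng.expovariate(1.0)
                 for _ in range(3)]
            # resistances in parallel
            new.append(1.0 / sum(1.0 / v for v in b))
        pop = new
    return pop

pop = sorted(iterate_R())
median = pop[len(pop) // 2]
frac_big = sum(v > 48.0 for v in pop) / len(pop)
print(median, frac_big)  # mass stays on finite values: R < infinity
```

Consistently with the proof's conclusion $P({\bf R}>x)\le 1/4$, the fraction of the population above $48$ stays well below $1/4$ and the median settles at a small finite value.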

It is not difficult (we won't give details) to show that the distribution of ${\bf R}$ is the unique distribution on $(0,\infty)$ satisfying (13.66). It does seem difficult to say anything explicit about the distribution of ${\bf R}$ in Proposition 13.13. One can get a little from comparison arguments. On the binary tree ($\xi\equiv 2$), by using the exact potential function and the exact flows from the unweighted case as "test functions" in the Dirichlet principle and Thompson's principle, one obtains

$$E{\bf R}\le EW^{-1};\qquad E{\bf R}^{-1}\le EW.$$

13.3.3 Bias and speed

Lyons et al. [248, 245, 246], summarized in [249] Chapter 10, have studied in detail questions concerning a certain model of biased random walk on deterministic and random infinite trees. Much of their focus is on topics too sophisticated (boundary theory, dimension) to recount here, but let us give one simple result.

Consider the unweighted Galton–Watson tree with offspring distribution $\mu=\mbox{dist}(\xi)$, i.e., the case $W_i\equiv 1$ of (13.65). Fix a parameter $0\le\lambda<\infty$. In the biased random walk $X_t$, from a vertex with $r$ children the walker goes to any particular child with probability $1/(\lambda+r)$, and to the parent with probability $\lambda/(\lambda+r)$. It turns out [246] that the biased random walk is recurrent for $\lambda\ge E\xi$ and transient for $\lambda<E\xi$. We will just prove one half of that result.

Proposition 13.14

The biased random walk is a.s. recurrent for $\lambda\ge E\xi$.

Proof. We use a "method of fictitious roots". That is, to the root $\phi$ of the Galton–Watson tree we append an extra edge to a "fictitious" root $\phi^*$, and we consider random walk on this extended tree (rooted at $\phi^*$). Write ${\bf q}$ for the probability (conditional on the realization of the tree) that the walk started at $\phi$ never hits $\phi^*$. It will suffice to prove $P({\bf q}=0)=1$. Fix a realization of the tree, in which $\phi$ has $z$ children. Then

$$q=\sum_{i=1}^{z}\frac{1}{\lambda+z}\left(q_i+(1-q_i)q\right)$$

where $q_i$ is the probability (on this realization) that the walk started at the $i$th child of $\phi$ never hits $\phi$. Rearrange to see $q=(\sum_i q_i)/(\lambda+\sum_i q_i)$. So on the random tree we have

$${\bf q}\ \stackrel{d}{=}\ \frac{\sum_{i=1}^{\xi}{\bf q}_i}{\lambda+\sum_{i=1}^{\xi}{\bf q}_i}$$

where the $({\bf q}_i)$ are independent of each other and of $\xi$, and ${\bf q}_i\stackrel{d}{=}{\bf q}$. Applying Jensen's inequality to the concave function $x\to\frac{x}{x+\lambda}$ shows

$$E{\bf q}\le\frac{(E\xi)(E{\bf q})}{\lambda+(E\xi)(E{\bf q})}.$$

By considering the relevant quadratic equation, one sees that for $\lambda\ge E\xi$ this inequality has no solution with $E{\bf q}>0$. So $E{\bf q}=0$, as required. $\Box$
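The distributional recursion for ${\bf q}$ can be iterated numerically. The sketch below uses Poisson($\mu$) offspring (our illustrative choice; extinction just contributes ${\bf q}=0$), starts the population at $q\equiv 1$, and compares a recurrent case $\lambda>E\xi$ with a transient case $\lambda<E\xi$.

```python
import math
import random

rng = random.Random(3)

def poisson(mu):
    """Knuth's method: count uniforms until their product drops below e^-mu."""
    L, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def mean_q(lam, mu, pop_size=10000, gens=50):
    """Iterate q = S / (lam + S), S = sum of xi i.i.d. copies of q, with
    offspring xi ~ Poisson(mu), over a Monte Carlo population."""
    pop = [1.0] * pop_size
    for _ in range(gens):
        new = []
        for _ in range(pop_size):
            s = sum(rng.choice(pop) for _ in range(poisson(mu)))
            new.append(s / (lam + s))   # equals 0 when xi = 0 (extinction)
        pop = new
    return sum(pop) / pop_size

mu = 2.0                                # E xi = 2
q_rec = mean_q(3.0, mu)                 # lam >= E xi: expect E q -> 0
q_tr = mean_q(1.0, mu)                  # lam <  E xi: expect E q > 0
print(q_rec, q_tr)
```

Starting from $q\equiv 1$ the iteration is monotone decreasing, so it converges to the largest solution of the recursion; it collapses to $0$ exactly in the recurrent regime.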

In the transient case, we expect there to exist a non-random speed $s(\lambda,\mu)\le 1$ such that

$$t^{-1}d(X_t,\phi)\to s(\lambda,\mu)\ \mbox{ a.s. as }t\to\infty. \qquad (13.67)$$

Lyons et al. [246] show that, when $E\xi<\infty$, (13.67) is indeed true and that $s(\lambda,\mu)>0$ for all $1\le\lambda<E\xi$. Moreover in the unbiased ($\lambda=1$) case there is a simple formula [245]

$$s(1,\mu)=E\,\frac{\xi-1}{\xi+1}.$$

There is apparently no such simple formula for s(λ,μ)s(\lambda,\mu) in general. See Lyons et al [247] for several open problems in this area.
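For deterministic offspring $\xi\equiv r$ the speed formula gives $s(1,\mu)=(r-1)/(r+1)$, and this case is easy to simulate: away from the root, the depth of the unbiased walk is simply a biased walk on ${\bf Z}$, stepping up with probability $r/(r+1)$. A seeded sketch:

```python
import random

rng = random.Random(4)

def speed(r, steps):
    """Depth process of the lambda = 1 walk on the infinite tree with
    deterministic offspring xi = r: up w.p. r/(r+1), down w.p. 1/(r+1),
    reflecting at the root."""
    depth = 0
    for _ in range(steps):
        if depth == 0:
            depth = 1          # at the root the walk must move to a child
        elif rng.random() < r / (r + 1):
            depth += 1
        else:
            depth -= 1
    return depth / steps

est = {r: speed(r, 200_000) for r in (2, 3)}
print(est)   # compare with (r-1)/(r+1): 1/3 and 1/2
```

The simulated speeds match $(r-1)/(r+1)$ to Monte Carlo accuracy; for genuinely random $\xi$ the depth process is no longer Markov, which is where the work in [245] lies.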

13.3.4 Finite random trees

Cayley's formula ([313] p. 25) says there are $n^{n-2}$ different trees on $n\ge 2$ labeled vertices $\{1,2,\ldots,n\}$. Assuming each such tree to be equally likely gives one tractable definition (there are others) of the random $n$-tree ${\bf T}_n$. One can combine the formulas from Chapter 5 section 3 for random walks on general trees with known distributional properties of ${\bf T}_n$ to get a variety of formulas for random walk on ${\bf T}_n$, an idea going back to Moon [264].

As an illustration, it is known [264] that the distance ${\bf d}(1,2)$ between vertex $1$ and vertex $2$ in ${\bf T}_n$ has distribution

$$P({\bf d}(1,2)=k)=(k+1)\,n^{-k}(n-2)_{k-1},\qquad 1\le k\le n-1$$

where $(m)_s=m(m-1)\cdots(m-s+1)$. Routine calculus gives

$$E{\bf d}(1,2)\sim\sqrt{\pi/2}\ n^{1/2}. \qquad (13.68)$$

Now on any $n$-vertex tree, the mean hitting times $t(i,j)=E_iT_j$ satisfy

$$t(i,j)+t(j,i)=2(n-1)d(i,j) \qquad (13.69)$$

(Chapter 5 (84)), and so

$$E{\bf t}(1,2)=(n-1)\,E{\bf d}(1,2).$$

Combining with (13.68),

$$E{\bf t}(1,2)\sim\sqrt{\pi/2}\ n^{3/2}. \qquad (13.70)$$

Instead of deriving more formulas of this type for random walk on ${\bf T}_n$, let's jump to the bottom line. It turns out that all the mixing and hitting time parameters $\boldsymbol{\tau}^{(n)}_u$ of Chapter 4, and the analogous "mean cover time" parameters of Chapter 6, are of order $n^{3/2}$ but are random to first order: that is,

$$n^{-3/2}\boldsymbol{\tau}^{(n)}_u\ \stackrel{d}{\rightarrow}\ \boldsymbol{\tau}^{(\infty)}_u\ \mbox{ as }n\to\infty \qquad (13.71)$$

for non-deterministic limits $\boldsymbol{\tau}^{(\infty)}_u$. The fact that all these parameters have the same order is of course reminiscent of the cases of the $n$-cycle and $n$-path (Chapter 5 Examples 7 and 8), where all the parameters are $\Theta(n^2)$. And the sophisticated explanation is the same: one can use the weak convergence paradigm (section 13.1.1). In the present context, the random tree ${\bf T}_n$ rescales to a limiting continuum random tree ${\bf T}_\infty$, the random walk converges (with time rescaled by $n^{3/2}$ and space rescaled by $n^{1/2}$) to Brownian motion on ${\bf T}_\infty$, and (analogously to section 13.1.1) the rescaled limits of the parameters are just the corresponding parameters for the Brownian motion. See the Notes for further comments.

13.3.5 Randomly-weighted random graphs

Fix a distribution $W$ on $(0,\infty)$ with $EW<\infty$. For each $n$ consider the random graph $G(n,p(n))$, that is, the graph on $n$ vertices in which each possible edge is present with probability $p(n)$, independently. Attach independent random conductances, distributed as $W$, to the edges. Aspects of this model were studied by Grimmett and Kesten [175]. As they observe, much of the behavior is intuitively rather clear but technically difficult to prove; we shall just give the intuition.

Case (i): $p(n)=\mu/n$ for fixed $1<\mu<\infty$. Here the number of edges at vertex $1$ is asymptotically Poisson($\mu$), and the part of the graph within a fixed distance $d$ of vertex $1$ is asymptotically like the first $d$ generations of the random family tree ${\cal T}^\infty$ of a Galton–Watson branching process with Poisson($\mu$) offspring distribution, with independent edge-weights attached. This tree essentially fits the setting of Proposition 13.13, except that the number of offspring may be zero and so the tree may be finite; but it is not hard to show (modifying the proof of Proposition 13.13) that the resistance ${\bf R}$ in ${\cal T}^\infty$ between the root and $\infty$ satisfies $\{{\bf R}<\infty\}=\{{\cal T}^\infty\mbox{ is infinite}\}$, and its distribution is characterized by the analog of (13.66). The asymptotic approximation implies that, for $d(n)\to\infty$ slowly, the resistance ${\bf R}_{n,d(n)}$ between vertex $1$ and the depth-$d(n)$ vertices of $G(n,p(n))$ satisfies ${\bf R}_{n,d(n)}\stackrel{d}{\rightarrow}{\bf R}^{(1)}\stackrel{d}{=}{\bf R}$. We claim that the resistance ${\bf R}^{(n)}_{1,2}$ between vertices $1$ and $2$ of $G(n,p(n))$ satisfies

$${\bf R}^{(n)}_{1,2}\ \stackrel{d}{\rightarrow}\ {\bf R}^{(1)}+{\bf R}^{(2)},\ \mbox{ where }{\bf R}^{(1)}\mbox{ and }{\bf R}^{(2)}\mbox{ are independent copies of }{\bf R}.$$

The lower bound is clear by shorting, but the upper bound requires a complicated construction to connect the two sets of vertices at distances d(n)d(n) from vertices 11 and 22 in such a way that the effective resistance of this connecting network tends to zero.

The number of edges of the random graph is asymptotic to $n\mu/2$. So the total edge weight $\sum_i\sum_j W_{ij}$ is asymptotic to $n\mu EW$, and by the commute interpretation of resistance, the mean commute time ${\bf C}^{(n)}_{1,2}$ for random walk on a realization of the graph satisfies

$$n^{-1}{\bf C}^{(n)}_{1,2}\ \stackrel{d}{\rightarrow}\ \mu EW\left({\bf R}^{(1)}+{\bf R}^{(2)}\right).$$
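The commute interpretation of resistance used here is exact on every finite realization, and can be checked directly. The sketch below builds $G(n,\mu/n)$ with Exponential(1) conductances (an illustrative choice of $W$), restricts to the largest component (resistance across components is infinite), computes an effective resistance from the Laplacian pseudo-inverse, and compares the commute time obtained from hitting-time equations with (total edge weight, counted both ways) $\times$ resistance.

```python
from collections import deque

import numpy as np

rng = np.random.default_rng(5)
n, mu = 400, 3.0

# G(n, mu/n) with i.i.d. Exponential(1) edge conductances.
C = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        if rng.random() < mu / n:
            C[i, j] = C[j, i] = rng.exponential(1.0)

# Keep the largest connected component.
seen, best = set(), []
for s in range(n):
    if s in seen:
        continue
    comp, q = {s}, deque([s])
    while q:
        v = q.popleft()
        for u in map(int, np.nonzero(C[v])[0]):
            if u not in comp:
                comp.add(u)
                q.append(u)
    seen |= comp
    if len(comp) > len(best):
        best = sorted(comp)
C = C[np.ix_(best, best)]
m = len(best)

L = np.diag(C.sum(axis=1)) - C                 # weighted Laplacian
Lp = np.linalg.pinv(L)
R01 = Lp[0, 0] + Lp[1, 1] - 2 * Lp[0, 1]       # effective resistance

# Commute identity: E_a T_b + E_b T_a = C.sum() * R_ab,
# where C.sum() counts every edge weight twice.
P = C / C.sum(axis=1, keepdims=True)
def mean_hit(start, target):
    idx = [v for v in range(m) if v != target]
    A = np.eye(m - 1) - P[np.ix_(idx, idx)]
    return np.linalg.solve(A, np.ones(m - 1))[idx.index(start)]

commute = mean_hit(0, 1) + mean_hit(1, 0)
print(m, R01, commute / (C.sum() * R01))       # ratio should be ~1
```

The hard part of the text's claim is not this identity but the limit law for ${\bf R}^{(n)}_{1,2}$, which requires the connecting-network construction mentioned above.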

Case (ii): $p(n)=o(1)$ with $p(n)=\Omega(n^{\varepsilon-1})$ for some $\varepsilon>0$. Here the degree of vertex $1$ tends to $\infty$, and it is easy to see that the (random) transition probabilities and the (random) stationary probability $\boldsymbol{\pi}_1$ of the random walk satisfy

$$\max_j{\bf p}_{1,j}\ \stackrel{p}{\rightarrow}\ 0,\qquad n\boldsymbol{\pi}_1\ \stackrel{p}{\rightarrow}\ 1\ \mbox{ as }n\to\infty.$$

So for fixed $k\ge 1$, the $k$-step transition probabilities satisfy $p^{(k)}_{11}\stackrel{p}{\rightarrow}0$ as $n\to\infty$. This suggests, but it is technically hard to prove, that the (random) fundamental matrix ${\bf Z}$ satisfies

$${\bf Z}_{11}\ \stackrel{p}{\rightarrow}\ 1\ \mbox{ as }n\to\infty. \qquad (13.72)$$

Granted (13.72), we can apply Lemma 11 of Chapter 2 and deduce that the mean hitting time ${\bf t}(\pi,1)=E_\pi T_1$ on a realization of the random graph satisfies

$$n^{-1}{\bf t}(\pi,1)=\frac{{\bf Z}_{11}}{n\boldsymbol{\pi}_1}\ \stackrel{p}{\rightarrow}\ 1\ \mbox{ as }n\to\infty. \qquad (13.73)$$

13.3.6 Random environments in dd dimensions

The phrase random walk in random environment (RWRE) is mostly used to denote variations of the classical “random flight in dd dimensions” model. Such variations have been studied extensively in mathematical physics as well as theoretical probability, and the monograph of Hughes [184] provides thorough coverage. To give the flavor of the subject we quote one result, due to Boivin [52].

Theorem 13.15

Assign random conductances $(w_e)$ to the edges of the two-dimensional lattice ${\bf Z}^2$, where

(i) the process $(w_e)$ is stationary ergodic;

(ii) $c_1\le w_e\le c_2$ a.s., for some constants $0<c_1<c_2<\infty$.

Let $(X_t;\ t\ge 0)$ be the associated random walk on this weighted graph, with $X_0=0$. Then $t^{-1/2}X_t\stackrel{d}{\rightarrow}Z$, where $Z$ is a certain two-dimensional Normal distribution; moreover this convergence holds for the conditional distribution of $X_t$ given the environment, for almost all environments.