# 13.3 Random Walks in Random Environments

In talking about random walk on a weighted graph, we have been assuming the graph is fixed. It is conceptually only a minor modification to consider the case where the “environment” (the graph or the edge-weights) is itself first given in some specified random manner. This has been studied in several rather different contexts, and we will give a brief description of known results without going into many details.

Quantities like our mixing time parameters $\tau$ from Chapter 4 now become random quantities $\mbox{\boldmath\tau}$. In general we shall use boldface for quantities that depend on the realization of the environment but not on a realization of the walk.

## 13.3.1 Mixing times for some random regular graphs

There is a body of work on estimating mixing times for various models of random regular graph. We shall prove two simple results which illustrate two basic techniques, and record some of the history in the Notes.

The first result is Proposition 1.2.1 of Lubotzky [243]. This illustrates the technique of proving expansion (i.e., upper-bounding the Cheeger time constant $\tau_{c}$) by direct counting arguments in the random graph.

###### Proposition 13.11

Let $G_{k,n}$ be the $2k$-regular random graph on vertices $\{1,2,\ldots,n\}$ with edges $\{(i,\pi_{j}(i)):1\leq i\leq n,1\leq j\leq k\}$, where $(\pi_{j},1\leq j\leq k)$ are independent uniform random permutations of $\{1,2,\ldots,n\}$. Write $\mbox{\boldmath\tau}_{c}(k,n)$ for the Cheeger time constant for random walk on $G_{k,n}$. Then for fixed $k\geq 7$,

 $P(\mbox{\boldmath\tau}_{c}(k,n)>2k)\to 0\mbox{ as }n\to\infty.$

Note that a realization of $G_{k,n}$ may be disconnected (in which case $\mbox{\boldmath\tau}_{c}=\infty$) and may have self-loops and multiple edges.

Outline of proof. Suppose a realization of the graph has the property

 $|A|\leq n/2\Rightarrow|\partial A|\geq|A|/2$ (13.59)

where $\partial A:=\{\mbox{edges}(i,j):i\in A,j\in A^{c}\}$. Then

 $\tau_{c}=\sup_{A:1\leq|A|\leq n/2}\frac{k|A|(n-|A|)}{n|\partial A|}\leq\sup_{A:1\leq|A|\leq n/2}\frac{k|A|(n-|A|)}{n|A|/2}\leq 2k.$

So we want to show that (13.59) holds with probability $\to 1$ as $n\to\infty$. If (13.59) fails for some $A$ with $|A|=a$, then there exists $B$ with $|B|=\lfloor{\textstyle\frac{3}{2}}a\rfloor=b$ such that

 $\pi_{j}(A)\subseteq B,\ 1\leq j\leq k$ (13.60)

(just take $B=\cup_{j}\pi_{j}(A)$ plus, if necessary, arbitrary extra vertices). For given $A$ and $B$, the chance of (13.60) equals $\left((b)_{a}/(n)_{a}\right)^{k}$, where $(n)_{a}:=\prod_{r=0}^{a-1}(n-r)$. So the chance that (13.59) fails is at most

 $\sum_{1\leq a\leq n/2}q(a),\mbox{ where }q(a)={n\choose a}\ {n\choose b}\ \left((b)_{a}/(n)_{a}\right)^{k}.$

So it suffices to verify $\sum_{1\leq a\leq n/2}q(a)\to 0$. And this is a routine but tedious verification (see Notes). $\Box$
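
Though tedious by hand, the verification is easy to carry out numerically. The sketch below (illustrative values of $n$; the function names are ours) evaluates the bound $\sum_{1\leq a\leq n/2}q(a)$ for $k=7$ in log-space via `math.lgamma`, to avoid overflow in the binomial coefficients and falling factorials:

```python
import math

def log_falling(n, a):
    # log of the falling factorial (n)_a = n(n-1)...(n-a+1)
    return math.lgamma(n + 1) - math.lgamma(n - a + 1)

def log_binom(n, a):
    # log of the binomial coefficient C(n, a)
    return log_falling(n, a) - math.lgamma(a + 1)

def log_q(n, k, a):
    # log of q(a) = C(n,a) C(n,b) ((b)_a / (n)_a)^k, with b = floor(3a/2)
    b = (3 * a) // 2
    return (log_binom(n, a) + log_binom(n, b)
            + k * (log_falling(b, a) - log_falling(n, a)))

def failure_bound(n, k=7):
    # upper bound on the probability that expansion property (13.59) fails
    return sum(math.exp(log_q(n, k, a)) for a in range(1, n // 2 + 1))

bounds = {n: failure_bound(n) for n in (50, 100, 200)}
```

For $k=7$ the computed bound is already small at $n=50$ and decreases rapidly with $n$, consistent with the claim.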

Of course the bound on $\tau_{c}$ gives, via Cheeger’s inequality, a bound on $\tau_{2}$, and thence a bound on $\tau_{1}$ via $\tau_{1}=O(\tau_{2}\log n)$. But Proposition 13.11 is unsatisfactory in that these bounds get worse as $k$ increases, whereas intuitively they should get better. For bounds on $\tau_{1}$ which improve with $k$ we turn to the second technique, which uses the “$L^{1}\leq L^{2}$” inequality to bound the variation threshold time $\tau_{1}$. Specifically, recall (Chapter 3 Lemma 8b) that for an $n$-state reversible chain with uniform stationary distribution, the variation distance $d(t)$ satisfies

 $d(t)\leq 2\max_{i}(np_{ii}(2t)-1)^{1/2}.$ (13.61)

This is simplest to use for random walk on a group, as illustrated by the following result of Roichman [296].

###### Proposition 13.12

Fix $\alpha>1$. Given a group $G$, let $S$ be a random set of $k=\lfloor\log^{\alpha}|G|\rfloor$ distinct elements of $G$, and consider random walk on the associated Cayley graph with edges $\{(g,gs):g\in G,s\in S\cup S^{-1}\}$. For any sequence of groups with $|G|\to\infty$,

 $P(\mbox{\boldmath\tau}_{1}>t_{1})\to 0,\mbox{ where }t_{1}=\lceil{\textstyle\frac{\alpha}{\alpha-1}}{\textstyle\frac{\log|G|}{\log k}}\rceil.$

Proof. We first give a construction of the random walk jointly with the random set $S$. Write $A=\{a,b,\ldots\}$ for a set of $k$ symbols, and write $\bar{A}=\{a,a^{-1},b,b^{-1},\ldots\}$. Fix $t\geq 1$ and let $(\xi_{s},1\leq s\leq t)$ be independent uniform on $\bar{A}$. Choose $(g(a),a\in A)$ by uniform sampling without replacement from $G$, and set $g(a^{-1})=(g(a))^{-1}$. Then the process $(X_{s};1\leq s\leq t)$ constructed via $X_{s}=g(\xi_{1})g(\xi_{2})\ldots g(\xi_{s})$ is distributed as the random walk on the random Cayley graph, started at the identity $\iota$. So $P(X_{t}=\iota)=E{\bf p}_{\iota\iota}(t)$ where ${\bf p}_{\iota\iota}(t)$ is the $t$-step transition probability in the random environment, and by (13.61) it suffices to take $t=t_{1}$ (for $t_{1}$ defined in the statement of the Proposition) and show

 $|G|P(X_{2t}=\iota)-1\to 0.$ (13.62)

To start the argument, let $J(2t)$ be the number of distinct values taken by $(\langle\xi_{s}\rangle,1\leq s\leq 2t)$, where we define $\langle a\rangle=\langle a^{-1}\rangle=a$. Fix $j\leq t$ and $1\leq s_{1}<s_{2}<\cdots<s_{j}\leq 2t$. Then

 $P(J(2t)=j|\langle\xi_{s_{i}}\rangle\mbox{ distinct for }1\leq i\leq j)=(j/k)^{2t-j}\leq(t/k)^{t}.$

By considering the possible choices of $(s_{i})$,

 $P(J(2t)=j)\leq{2t\choose j}(t/k)^{t}.$

Since $\sum_{j}{2t\choose j}=2^{2t}$ we deduce

 $P(J(2t)\leq t)\leq(4t/k)^{t}.$ (13.63)

Now consider the construction of $X_{2t}$ given above. We claim

 $P(X_{2t}=\iota|\xi_{s},1\leq s\leq 2t)\leq\frac{1}{|G|-2t}\mbox{ on }\{J(2t)>t\}.$ (13.64)

For if $J(2t)>t$ then there exists some $b\in A$ such that $\langle\xi_{s}\rangle=b$ for exactly one value of $s$ in $1\leq s\leq 2t$. So if we condition also on $\{g(a);a\in A,a\neq b\}$, then $X_{2t}=g_{1}g(b)g_{2}$ or $g_{1}g(b)^{-1}g_{2}$ where $g_{1}$ and $g_{2}$ are determined by the conditioning, and then the conditional probability that $X_{2t}=\iota$ is the conditional probability of $g(b)$ taking a particular value, which is at most $1/(|G|-2t)$.

Combining (13.64) and (13.63),

 $P(X_{2t}=\iota)\leq(4t/k)^{t}+{\textstyle\frac{1}{|G|-2t}}\leq(4t/k)^{t}+{\textstyle\frac{1}{|G|}}+O({\textstyle\frac{t}{|G|^{2}}}).$

So proving (13.62) reduces to proving

 $|G|(4t/k)^{t}+t/|G|\to 0$

and the definition of $t_{1}$ was made to ensure this. $\Box$
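
The combinatorial estimate (13.63) driving the proof is easy to check by simulation: since $\langle a\rangle=\langle a^{-1}\rangle=a$, the sequence $(\langle\xi_{s}\rangle)$ is just $2t$ uniform draws from $k$ symbols. A sketch, with illustrative parameters $t=3$, $k=20$ of our choosing:

```python
import random

random.seed(1)

def J(two_t, k):
    # J(2t): number of distinct symbols among 2t uniform draws from k symbols
    return len({random.randrange(k) for _ in range(two_t)})

def check(t, k, trials=20000):
    # empirical P(J(2t) <= t) versus the bound (4t/k)^t of (13.63)
    hits = sum(J(2 * t, k) <= t for _ in range(trials))
    return hits / trials, (4 * t / k) ** t

emp, bound = check(t=3, k=20)
```

The empirical frequency sits well below the bound, which is crude but sufficient for the proof.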

## 13.3.2 Randomizing infinite trees

Simple random walk on the infinite regular tree is a fundamental process, already discussed in section 13.2.6. There are several natural ways to randomize the environment; we could take an infinite regular tree and attach random edge-weights; or we could consider a Galton–Watson tree, in which numbers of children are random. Let us start by considering these possibilities simultaneously. Fix a distribution $(\xi;W_{1},W_{2},\ldots,W_{\xi})$ where

 $\xi\geq 1;\ P(\xi\geq 2)>0;\ W_{i}>0,i\leq\xi.$ (13.65)

Note the $(W_{i})$ may be dependent. Construct a tree via:

the root $\phi$ has $\xi$ children, and the edge $(\phi,i)$ to the $i$th child has weight $W_{i}$; repeat recursively for each child, taking independent realizations of the distribution (13.65).

So the case $\xi\equiv r-1$ gives the randomly-weighted $r$-ary tree (precisely, the modification where the root has degree $r-1$ instead of $r$), and the case $W_{i}\equiv 1$ gives a Galton–Watson tree. As in Chapter 3 section 2, to each realization of a weighted graph we associate a random walk with transition probabilities proportional to edge-weights. Since random walk on the unweighted $r$-ary tree is transient, a natural first issue is to prove transience in this “random environment” setting. In terms of the electrical network analogy (see comment below Theorem 13.5), interpreting $W$ as conductance, we want to know whether the (random) resistance ${\bf R}$ between $\phi$ and $\infty$ is a.s. finite. By considering the children of $\phi$, it is clear that the distribution of ${\bf R}$ satisfies

 ${\bf R}\ \stackrel{d}{=}\ \left(\sum_{i=1}^{\xi}({\bf R}_{i}+W_{i}^{-1})^{-1}\right)^{-1}$ (13.66)

where the $({\bf R}_{i})$ are independent of each other and of $(\xi;W_{1},W_{2},\ldots,W_{\xi})$, and ${\bf R}_{i}\ \stackrel{d}{=}\ {\bf R}$. But $\hat{{\bf R}}\equiv\infty$ is a solution of (13.66), so we need some work to actually prove that ${\bf R}<\infty$.

###### Proposition 13.13

The resistance ${\bf R}$ between $\phi$ and $\infty$ satisfies ${\bf R}<\infty$ a.s..

Proof. Write ${\bf R}^{(k)}$ for the resistance between $\phi$ and height $k$ (i.e. the height-$k$ vertices, all shorted together). Clearly ${\bf R}^{(k)}\uparrow{\bf R}$ as $k\to\infty$, and analogously to (13.66)

 ${\bf R}^{(k+1)}\ \stackrel{d}{=}\ \left(\sum_{i=1}^{\xi}({\bf R}^{(k)}_{i}+W_{i}^{-1})^{-1}\right)^{-1}$

where the $({\bf R}^{(k)}_{i})$ are independent of each other and of $(\xi;W_{1},W_{2},\ldots,W_{\xi})$, and ${\bf R}^{(k)}_{i}\ \stackrel{d}{=}\ {\bf R}^{(k)}$.

Consider first the special case $\xi\equiv 3$. Choose $x$ such that $P(W_{i}^{-1}>x\mbox{ for some }i)\leq 1/16$. Suppose inductively that $P({\bf R}^{(k)}>x)\leq 1/4$ (which holds for $k=0$ since ${\bf R}^{(0)}=0$). Then

 $P({\bf R}^{(k+1)}>x)\leq P(W_{i}^{-1}>x\mbox{ for some }i)+P({\bf R}^{(k)}_{i}>x\mbox{ for at least two }i)\leq{\textstyle\frac{1}{16}}+3({\textstyle\frac{1}{4}})^{2}$

(for if each $W_{i}^{-1}\leq x$ and at most one ${\bf R}^{(k)}_{i}$ exceeds $x$, then at least two parallel branches have resistance ${\bf R}^{(k)}_{i}+W_{i}^{-1}\leq 2x$, whence ${\bf R}^{(k+1)}\leq x$).

This implies $P({\bf R}^{(k+1)}>x)\leq 1/4$, and the induction goes through. Thus $P({\bf R}>x)\leq 1/4$. By (13.66) $p:=P({\bf R}=\infty)$ satisfies $p=p^{3}$, so $p=0$ or $1$, and we just eliminated the possibility $p=1$. So ${\bf R}<\infty$ a.s..
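
The recursion for ${\bf R}^{(k)}$ can also be iterated numerically. In the unweighted case $\xi\equiv 3$, $W\equiv 1$ it is deterministic, ${\bf R}^{(k+1)}=({\bf R}^{(k)}+1)/3$, with fixed point $1/2$; for random $W$ one can run “population dynamics” on a pool of Monte Carlo samples. A sketch (the choice of $W$ uniform on $[1,2]$ is ours, purely for illustration):

```python
import random

random.seed(3)

def step(pool, xi=3, sample_w=lambda: 1.0):
    # one level of recursion (13.66): R' = ( sum_{i<=xi} (R_i + 1/W_i)^{-1} )^{-1},
    # applied to a pool of Monte Carlo samples of R^(k)
    return [1.0 / sum(1.0 / (random.choice(pool) + 1.0 / sample_w())
                      for _ in range(xi))
            for _ in range(len(pool))]

# Unweighted ternary case xi = 3, W = 1: the recursion is deterministic,
# R^(k+1) = (R^(k) + 1)/3, and converges to the fixed point 1/2.
r = 0.0
for _ in range(50):
    r = (r + 1.0) / 3.0

# Population dynamics with W uniform on [1,2] (illustrative choice);
# the stationary values then lie between the W = 2 and W = 1 fixed points.
pool = [0.0] * 2000
for _ in range(30):
    pool = step(pool, sample_w=lambda: random.uniform(1.0, 2.0))
mean_R = sum(pool) / len(pool)
```

Starting the pool at ${\bf R}^{(0)}=0$ mirrors the proof's induction from the shorted tree.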

Reducing the general case to the special case involves a comparison idea, illustrated by the figure.

Here the edge-weights are resistances. In the left network, $\phi$ is linked to $\{a,b,c\}$ via an arbitrary tree, and in the right network this tree is replaced by three direct edges, each with resistance $r=3(r(1)+r(2)+\ldots+r(5))$. We claim that this replacement can only increase the resistance between $\phi$ and $\infty$. This is a nice illustration of Thompson’s principle (Chapter 3 section 7.1), which says that in a realization of either network, writing $r^{*}(e)$ for resistance and summing over undirected edges $e$,

 $R_{\phi\infty}=\inf_{\bf f}\sum_{e}r^{*}(e)f^{2}(e)$

where ${\bf f}=(f(e))$ is a unit flow from $\phi$ to $\infty$. Let ${\bf f}$ be the minimizing flow in the right network; use ${\bf f}$ to define a flow ${\bf g}$ in the left network by specifying that the flow through $a$ (resp. $b$, $c$) is the same in the left network and the right network. It is easy to check

 $\mbox{(left network)}\sum_{e}r(e)g^{2}(e)\leq\mbox{(right network)}\sum_{e}r(e)f^{2}(e)$

and hence the resistance $R_{\phi\infty}$ can indeed only be less in the left network.

In the general case, the fact $P(\xi\geq 2)>0$ implies that the number of individuals in generation $g$ tends to $\infty$ a.s. as $g\to\infty$. So in particular we can find $3$ distinct individuals $\{A,B,C\}$ in some generation $G$. Retain the edges linking $\phi$ with these $3$ individuals, and cut all other edges within the first $G$ generations. Repeat recursively for descendants of $\{A,B,C\}$. This procedure constructs an infinite subtree, and it suffices to show that the resistance between $\phi$ and $\infty$ in the subtree is a.s. finite. By the comparison argument above, we may replace the network linking $\phi$ to $\{A,B,C\}$ by three direct edges with the same (random) resistance, and similarly for each stage of the construction of the subtree; this gives another tree ${\cal T}$, and it suffices to show its resistance is a.s. finite. But ${\cal T}$ fits the special case $\xi\equiv 3$. $\Box$

It is not difficult (we won’t give details) to show that the distribution of ${\bf R}$ is the unique distribution on $(0,\infty)$ satisfying (13.66). It does seem difficult to say anything explicit about the distribution of ${\bf R}$ in Proposition 13.13. One can get a little from comparison arguments. On the binary tree ($\xi\equiv 2$), by using the exact potential function and the exact flows from the unweighted case as “test functions” in the Dirichlet principle and Thompson’s principle, one obtains

 $E{\bf R}\leq EW^{-1};\ \ E{\bf R}^{-1}\leq EW.$

## 13.3.3 Bias and speed

Lyons et al [248, 245, 246], summarized in [249] Chapter 10, have studied in detail questions concerning a certain model of biased random walk on deterministic and random infinite trees. Much of their focus is on topics too sophisticated (boundary theory, dimension) to recount here, but let us give one simple result.

Consider the unweighted Galton–Watson tree with offspring distribution $\mu=\mbox{dist}(\xi)$, i.e., the case $W_{i}\equiv 1$ of (13.65). Fix a parameter $0\leq\lambda<\infty$. In the biased random walk $X_{t}$, from a vertex with $r$ children the walker goes to any particular child with probability $1/(\lambda+r)$, and to the parent with probability $\lambda/(\lambda+r)$. It turns out [246] that the biased random walk is recurrent for $\lambda\geq E\xi$ and transient for $\lambda<E\xi$. We will just prove one half of that result.

###### Proposition 13.14

The biased random walk is a.s. recurrent for $\lambda\geq E\xi$.

Proof. We use a “method of fictitious roots”. That is, to the root $\phi$ of the Galton-Watson tree we append an extra edge to a “fictitious” root $\phi^{*}$, and we consider random walk on this extended tree (rooted at $\phi^{*}$). Write ${\bf q}$ for the probability (conditional on the realization of the tree) that the walk started at $\phi$ never hits $\phi^{*}$. It will suffice to prove $P({\bf q}=0)=1$. Fix a realization of the tree, in which $\phi$ has $z$ children. Then

 $q=\sum_{i=1}^{z}\frac{1}{\lambda+z}(q_{i}+(1-q_{i})q)$

where $q_{i}$ is the probability (on this realization) that the walk started at the $i$’th child of $\phi$ never hits $\phi$. Rearrange to see $q=(\sum_{i}q_{i})/(\lambda+\sum_{i}q_{i})$. So on the random tree we have

 ${\bf q}\ \stackrel{d}{=}\ \frac{\sum_{i=1}^{\xi}{\bf q}_{i}}{\lambda+\sum_{i=1}^{\xi}{\bf q}_{i}}$

where the $({\bf q}_{i})$ are independent of each other and $\xi$, and ${\bf q}_{i}\ \stackrel{d}{=}\ {\bf q}$. Applying Jensen’s inequality to the concave function $x\to\frac{x}{x+\lambda}$ shows

 $E{\bf q}\leq\frac{(E\xi)(E{\bf q})}{\lambda+(E\xi)(E{\bf q})}.$

By considering the relevant quadratic equation, one sees that for $\lambda\geq E\xi$ this inequality has no solution with $E{\bf q}>0$. So $E{\bf q}=0$, as required. $\Box$
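
The fixed-point behavior is transparent numerically in the deterministic case $\xi\equiv 2$ (so $E\xi=2$), where the recursion for ${\bf q}$ reduces to $q\mapsto 2q/(\lambda+2q)$ and iterating from $q=1$ converges to the escape probability. A sketch:

```python
def iterate_q(lam, xi=2, steps=200):
    # deterministic offspring xi: the recursion for the escape probability
    # reduces to q -> (xi * q)/(lam + xi * q); iterate from q = 1
    q = 1.0
    for _ in range(steps):
        q = (xi * q) / (lam + xi * q)
    return q

q_transient = iterate_q(1.0)   # lam = 1 < E xi = 2: expect q = 1/2
q_recurrent = iterate_q(3.0)   # lam = 3 >= E xi = 2: expect q = 0
```

In the transient case the limit $1/2$ agrees with the classical escape probability of the biased walk on the binary tree; for $\lambda\geq E\xi$ the iteration collapses to $0$, matching the Proposition.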

In the transient case, we expect there to exist a non-random speed $s(\lambda,\mu)\leq 1$ such that

 $t^{-1}d(X_{t},\phi)\to s(\lambda,\mu)\mbox{ a.s. as }t\to\infty.$ (13.67)

Lyons et al [246] show that, when $E\xi<\infty$, (13.67) is indeed true and that $s(\lambda,\mu)>0$ for all $1\leq\lambda<E\xi$. Moreover in the unbiased ($\lambda=1$) case there is a simple formula [245]

 $s(1,\mu)=E{\textstyle\frac{\xi-1}{\xi+1}}.$

There is apparently no such simple formula for $s(\lambda,\mu)$ in general. See Lyons et al [247] for several open problems in this area.
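
The speed formula is straightforward to test by simulating the unbiased walk on a lazily generated Galton–Watson tree. Here (our illustrative choice) $\xi$ is uniform on $\{1,2,3\}$, so $E\xi=2$ and the formula predicts $s(1,\mu)=E{\textstyle\frac{\xi-1}{\xi+1}}={\textstyle\frac{5}{18}}\approx 0.278$:

```python
import random

random.seed(5)

def offspring():
    # xi uniform on {1,2,3}: E xi = 2, predicted speed (0 + 1/3 + 1/2)/3 = 5/18
    return random.randint(1, 3)

parent = [None]    # parent[v]; None for the root
children = [None]  # children[v]: list of child ids, generated lazily

def kids(v):
    # generate and cache the children of v on first visit
    if children[v] is None:
        ids = []
        for _ in range(offspring()):
            parent.append(v)
            children.append(None)
            ids.append(len(parent) - 1)
        children[v] = ids
    return children[v]

def empirical_speed(T):
    # unbiased (lambda = 1) walk: uniform over children plus parent
    v, depth = 0, 0
    for _ in range(T):
        nbrs = kids(v) + ([parent[v]] if parent[v] is not None else [])
        nxt = random.choice(nbrs)
        depth += -1 if nxt == parent[v] else 1
        v = nxt
    return depth / T

s = empirical_speed(100000)
```

Since $\xi\geq 1$ here, the tree is a.s. infinite and a single long run suffices for the a.s. limit.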

## 13.3.4 Finite random trees

Cayley’s formula ([313] p. 25) says there are $n^{n-2}$ different trees on $n\geq 2$ labeled vertices $\{1,2,\ldots,n\}$. Assuming each such tree to be equally likely gives one tractable definition (there are others) of random $n$-tree ${\bf T}_{n}$. One can combine the formulas from Chapter 5 section 3 for random walks on general trees with known distributional properties of ${\bf T}_{n}$ to get a variety of formulas for random walk on ${\bf T}_{n}$, an idea going back to Moon [264].

As an illustration it is known [264] that the distance ${\bf d}(1,2)$ between vertex $1$ and vertex $2$ in ${\bf T}_{n}$ has distribution

 $P({\bf d}(1,2)=k)=(k+1)n^{-k}(n-2)_{k-1},\ 1\leq k\leq n-1$

where $(m)_{s}=m(m-1)\cdots(m-s+1)$. Routine calculus gives

 $E{\bf d}(1,2)\sim\sqrt{\pi/2}\ n^{1/2}.$ (13.68)
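
Both the distribution of ${\bf d}(1,2)$ and the asymptotics (13.68) are easy to confirm numerically; the sketch below accumulates the terms $(k+1)n^{-k}(n-2)_{k-1}$ via the numerically stable ratio $t_{k+1}/t_{k}=(n-k-1)/n$:

```python
import math

def dist_d12(n):
    # P(d(1,2) = k) = (k+1) n^{-k} (n-2)_{k-1}, k = 1, ..., n-1,
    # accumulated via the term ratio t_{k+1}/t_k = (n-k-1)/n
    probs = []
    t = 1.0 / n                    # t_1 = n^{-1} (n-2)_0 = 1/n
    for k in range(1, n):
        probs.append((k + 1) * t)
        t *= (n - k - 1) / n
    return probs

n = 10000
p = dist_d12(n)
total = sum(p)                                 # should be 1
mean_d = sum(k * pk for k, pk in enumerate(p, start=1))
ratio = mean_d / math.sqrt(math.pi * n / 2)    # should approach 1
```

At $n=10^{4}$ the probabilities sum to $1$ to machine precision and the mean is within a few percent of $\sqrt{\pi n/2}$.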

Now on any $n$-vertex tree, the mean hitting time $t(i,j)=E_{i}T_{j}$ satisfies

 $t(i,j)+t(j,i)=2(n-1)d(i,j)$ (13.69)

(Chapter 5 (84)), and so

 $E{\bf t}(1,2)=(n-1)E{\bf d}(1,2).$

Combining with (13.68),

 $E{\bf t}(1,2)\sim\sqrt{\pi/2}\ n^{3/2}.$ (13.70)
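
Identity (13.69) can also be checked directly: compute $t(i,j)$ and $t(j,i)$ on a fixed tree by solving the first-step equations $h(v)=1+\mbox{mean of }h\mbox{ over neighbors}$, $h(j)=0$, and compare with $2(n-1)d(i,j)$. A sketch (the tree-generation rule here is just a convenient test tree, not the uniform tree ${\bf T}_{n}$):

```python
import random
from collections import deque

random.seed(11)

def random_tree(n):
    # attach each vertex to a uniform earlier vertex: a labeled tree on {0,...,n-1}
    adj = {v: [] for v in range(n)}
    for v in range(1, n):
        u = random.randrange(v)
        adj[u].append(v)
        adj[v].append(u)
    return adj

def hitting_time(adj, i, j):
    # solve h(v) = 1 + mean of h over neighbors, h(j) = 0, by Gaussian elimination
    idx = {v: r for r, v in enumerate(v for v in adj if v != j)}
    m = len(idx)
    A = [[0.0] * (m + 1) for _ in range(m)]
    for v, r in idx.items():
        A[r][r] = 1.0
        A[r][m] = 1.0
        for w in adj[v]:
            if w != j:
                A[r][idx[w]] -= 1.0 / len(adj[v])
    for c in range(m):                       # elimination with partial pivoting
        piv = max(range(c, m), key=lambda r: abs(A[r][c]))
        A[c], A[piv] = A[piv], A[c]
        for r in range(c + 1, m):
            f = A[r][c] / A[c][c]
            for s in range(c, m + 1):
                A[r][s] -= f * A[c][s]
    h = [0.0] * m
    for r in range(m - 1, -1, -1):           # back substitution
        h[r] = (A[r][m] - sum(A[r][s] * h[s] for s in range(r + 1, m))) / A[r][r]
    return h[idx[i]]

def dist(adj, i, j):
    # BFS distance between i and j
    seen, q = {i: 0}, deque([i])
    while q:
        v = q.popleft()
        if v == j:
            return seen[v]
        for w in adj[v]:
            if w not in seen:
                seen[w] = seen[v] + 1
                q.append(w)

n = 9
adj = random_tree(n)
lhs = hitting_time(adj, 0, 5) + hitting_time(adj, 5, 0)
rhs = 2 * (n - 1) * dist(adj, 0, 5)
```

The two sides agree to numerical precision on any tree, as (13.69) asserts.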

Instead of deriving more formulas of this type for random walk on ${\bf T}_{n}$, let’s jump to the bottom line. It turns out that all the mixing and hitting time parameters $\mbox{\boldmath\tau}^{(n)}_{u}$ of Chapter 4, and the analogous “mean cover time” parameters of Chapter 6, are of order $n^{3/2}$ but are random to first order: that is,

 $n^{-3/2}\mbox{\boldmath\tau}^{(n)}_{u}\ \stackrel{d}{\rightarrow}\ \mbox{\boldmath\tau}^{(\infty)}_{u}\mbox{ as }n\to\infty$ (13.71)

for non-deterministic limits $\mbox{\boldmath\tau}^{(\infty)}_{u}$. The fact that all these parameters have the same order is of course reminiscent of the cases of the $n$-cycle and $n$-path (Chapter 5 Examples 7 and 8), where all the parameters are $\Theta(n^{2})$. And the sophisticated explanation is the same: one can use the weak convergence paradigm (section 13.1.1). In the present context, the random tree ${\bf T}_{n}$ rescales to a limit continuum random tree ${\bf T}_{\infty}$, and the random walk converges (with time rescaled by $n^{3/2}$ and space rescaled by $n^{1/2}$) to Brownian motion on ${\bf T}_{\infty}$, and (analogously to section 13.1.1) the rescaled limits of the parameters are just the corresponding parameters for the Brownian motion. See the Notes for further comments.

## 13.3.5 Randomly-weighted random graphs

Fix a distribution $W$ on $(0,\infty)$ with $EW<\infty$. For each $n$ consider the random graph $G(n,p(n))$, that is, the graph on $n$ vertices in which each possible edge is present with probability $p(n)$. Attach independent random conductances, distributed as $W$, to the edges. Aspects of this model were studied by Grimmett and Kesten [175]. As they observe, much of the behavior is intuitively rather clear, but technically difficult to prove: we shall just give the intuition.

Case (i): $p(n)=\mu/n$ for fixed $1<\mu<\infty$. Here the number of edges at vertex $1$ is asymptotically Poisson($\mu$), and the part of the graph within a fixed distance $d$ of vertex $1$ is asymptotically like the first $d$ generations in the random family tree ${\cal T}^{\infty}$ of a Galton–Watson branching process with Poisson($\mu$) offspring distribution, with independent edge-weights attached. This tree essentially fits the setting of Proposition 13.13, except that the number of offspring may be zero and so the tree may be finite; but it is not hard to show (modifying the proof of Proposition 13.13) that the resistance ${\bf R}$ in ${\cal T}^{\infty}$ between the root and $\infty$ satisfies $\{{\bf R}<\infty\}=\{{\cal T}^{\infty}\mbox{ is infinite}\}$ and its distribution is characterized by the analog of (13.66). The asymptotic approximation implies that, for $d(n)\to\infty$ slowly, the resistance ${\bf R}_{n,d(n)}$ between vertex $1$ and the depth-$d(n)$ vertices of $G(n,p(n))$ satisfies ${\bf R}_{n,d(n)}\ \stackrel{d}{\rightarrow}\ {\bf R}^{(1)}\ \stackrel{d}{=}\ {\bf R}$. We claim that the resistance ${\bf R}_{1,2}^{(n)}$ between vertices $1$ and $2$ of $G(n,p(n))$ satisfies

 ${\bf R}_{1,2}^{(n)}\ \stackrel{d}{\rightarrow}\ {\bf R}^{(1)}+{\bf R}^{(2)}$

where ${\bf R}^{(1)}$ and ${\bf R}^{(2)}$ are independent copies of ${\bf R}$.

The lower bound is clear by shorting, but the upper bound requires a complicated construction to connect the two sets of vertices at distances $d(n)$ from vertices $1$ and $2$ in such a way that the effective resistance of this connecting network tends to zero.

The number of edges of the random graph is asymptotic to $n\mu/2$. So the total edge weight $\sum_{i}\sum_{j}W_{ij}$ is asymptotic to $n\mu EW$, and by the commute interpretation of resistance the mean commute time ${\bf C}^{(n)}_{1,2}$ for random walk on a realization of the graph satisfies

 $n^{-1}{\bf C}^{(n)}_{1,2}\ \stackrel{d}{\rightarrow}\ \mu EW({\bf R}^{(1)}+{\bf R}^{(2)}).$

Case (ii): $p(n)=o(1)$ with $p(n)=\Omega(n^{\varepsilon-1})$ for some $\varepsilon>0$. Here the degree of vertex $1$ tends to $\infty$, and it is easy to see that the (random) transition probabilities and (random) stationary distribution of the walk satisfy

 $\max_{j}{\bf p}_{1,j}\ \stackrel{p}{\rightarrow}\ 0,\ \ n\mbox{\boldmath\pi}_{1}\ \stackrel{p}{\rightarrow}\ 1\mbox{ as }n\to\infty.$

So for fixed $k\geq 1$, the $k$-step transition probabilities satisfy ${\bf p}^{(k)}_{11}\ \stackrel{p}{\rightarrow}\ 0$ as $n\to\infty$. This suggests, though it is technically hard to prove, that the (random) fundamental matrix ${\bf Z}$ satisfies

 ${\bf Z}_{11}\ \stackrel{p}{\rightarrow}\ 1\mbox{ as }n\to\infty.$ (13.72)

Granted (13.72), we can apply Lemma 11 of Chapter 2 and deduce that the mean hitting time ${\bf t}(\pi,1)=E_{\pi}T_{1}$ on a realization of the random graph satisfies

 $n^{-1}{\bf t}(\pi,1)={\textstyle\frac{{\bf Z}_{11}}{n\mbox{\boldmath\pi}_{1}}}\ \stackrel{p}{\rightarrow}\ 1\mbox{ as }n\to\infty.$ (13.73)
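
The first ingredient, $n\mbox{\boldmath\pi}_{1}\ \stackrel{p}{\rightarrow}\ 1$, is easy to check by simulation, since $\mbox{\boldmath\pi}_{1}$ is just (weight at vertex $1$)/(total weight, each edge counted twice). A sketch, with illustrative parameters $n=1000$, $p=0.2$ and (our choice) $W$ uniform on $[0.5,1.5]$:

```python
import random

random.seed(13)

def n_pi_1(n, p):
    # sample G(n, p) with independent edge conductances W uniform on [0.5, 1.5]
    # and return n * pi_1, where pi_1 = (weight at vertex 0) / (total weight)
    w1, total = 0.0, 0.0
    for i in range(n):
        for j in range(i + 1, n):
            if random.random() < p:
                w = random.uniform(0.5, 1.5)
                total += 2 * w          # each edge counted in both directions
                if i == 0:
                    w1 += w             # edges incident to vertex 0
    return n * w1 / total

value = n_pi_1(1000, 0.2)
```

With roughly $np=200$ edges at vertex $0$, the law of large numbers makes the concentration around $1$ visible already at this size.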

## 13.3.6 Random environments in $d$ dimensions

The phrase random walk in random environment (RWRE) is mostly used to denote variations of the classical “random flight in $d$ dimensions” model. Such variations have been studied extensively in mathematical physics as well as theoretical probability, and the monograph of Hughes [184] provides thorough coverage. To give the flavor of the subject we quote one result, due to Boivin [52].

###### Theorem 13.15

Assign random conductances $(w_{e})$ to the edges of the two-dimensional lattice ${\bf Z}^{2}$, where

(i) the process $(w_{e})$ is stationary ergodic.

(ii) $c_{1}\leq w_{e}\leq c_{2}$ a.s., for some constants $0<c_{1}\leq c_{2}<\infty$.

Let $(X_{t};t\geq 0)$ be the associated random walk on this weighted graph, $X_{0}=0$. Then $t^{-1/2}X_{t}\ \stackrel{d}{\rightarrow}\ Z$ where $Z$ is a certain two-dimensional Normal distribution, and moreover this convergence holds for the conditional distribution of $X_{t}$ given the environment, for almost all environments.