6.4 Short-time bounds
It turns out that the bound of Corollary 6.9 (for random walk on a regular graph)
can be used to obtain bounds concerning the short-time behavior of random walks.
Such bounds, and their applications, are the focus of this section.
We haven’t attempted to optimize numerical constants
(e.g. Theorem 6.15 implies that on regular graphs).
More elaborate arguments (see Notes) can be used to improve constants
and to deal with the irregular case,
but we’ll restrict attention to the regular case for simplicity.
Write for the number of visits to before time , i.e.
during .
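The symbol for the visit count was lost in transcription; call it N_v(t) here, the number of visits to v during times 0, 1, ..., t-1. As a concrete illustration (the function name and parameters are ours), the count can be computed for a walk on the n-cycle:

```python
import random

def visits_before(n, v, t, start=0, seed=1):
    """Count N_v(t): visits of a simple random walk on the n-cycle
    to vertex v during times 0, 1, ..., t-1."""
    rng = random.Random(seed)
    x = start
    count = 0
    for _ in range(t):
        if x == v:
            count += 1
        x = (x + rng.choice((-1, 1))) % n
    return count

print(visits_before(n=20, v=0, t=1000))
```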
Proposition 6.16
Consider random walk on an -vertex regular graph .
Let be a proper subset of vertices and let .
(i) .
(ii) .
(iii) .
(iv) .
Remarks.
For part (i) we give a slightly fussy argument repeating ingredients of the proof
of Corollary 6.9,
since these are needed for (ii).
The point of (iv) is to get a bound for .
On the -cycle, it can be shown that the probability in question really is
,
uniformly in and .
Proof of Proposition 6.16.
Choose a vertex at minimum distance from ,
and let
be a minimum-length path.
Let be the subgraph on vertex-set ,
and let be the subgraph on vertex-set together with all the
neighbors of .
Write superscripts and for the random walks on
and . Then
[displayed inequality and equality]
The inequality holds because we can specify the walk on in terms
of the walk on with possibly extra chances of jumping to at each step
(this is a routine stochastic comparison argument, written out as an example in Chapter 14 yyy).
The equality holds because the only routes in from to
are via ,
by the minimum-length assumption.
Now write for the edge-sets.
Using the commute interpretation of resistance,
[displayed equation (6.20)]
Writing for the number of neighbors of in ,
the effective resistance in between and is , so
the commute interpretation of resistance
gives the first equality in
[displayed equation]
The neighbors of are all in ,
so the proof of Lemma 6.10 implies
[displayed equation (6.21)]
where is the degree of .
Since , the bound in (6.20) is at most
, and part (i) follows.
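The proof leans repeatedly on the commute interpretation of effective resistance (Chapter 3): for random walk on an unweighted graph, E_a T_b + E_b T_a = 2|E| r(a,b), where r(a,b) is the effective resistance between a and b with each edge a unit resistor. A minimal sketch checking this identity on a 3-vertex path (function names are ours, not the book's):

```python
from fractions import Fraction

def hitting_time(adj, target, start):
    """Expected hitting time of `target` for simple random walk on an
    undirected graph given as adjacency lists, by solving the system
    h[v] = 1 + mean(h[w] for w ~ v), with h[target] = 0."""
    verts = [v for v in adj if v != target]
    idx = {v: i for i, v in enumerate(verts)}
    m = len(verts)
    # Augmented matrix for (I - P) h = 1 restricted to non-target states.
    A = [[Fraction(0)] * (m + 1) for _ in range(m)]
    for v in verts:
        i = idx[v]
        A[i][i] += 1
        d = Fraction(len(adj[v]))
        for w in adj[v]:
            if w != target:
                A[i][idx[w]] -= 1 / d
        A[i][m] = Fraction(1)
    # Gaussian elimination with partial pivoting, in exact arithmetic.
    for c in range(m):
        p = max(range(c, m), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        for r in range(m):
            if r != c and A[r][c] != 0:
                f = A[r][c] / A[c][c]
                for k in range(c, m + 1):
                    A[r][k] -= f * A[c][k]
    h = {v: A[idx[v]][m] / A[idx[v]][idx[v]] for v in verts}
    return h[start]

# Path 0-1-2: effective resistance r(0,2) = 2 and |E| = 2,
# so the commute time should equal 2 * |E| * r = 8.
path = {0: [1], 1: [0, 2], 2: [1]}
commute = hitting_time(path, 2, 0) + hitting_time(path, 0, 2)
print(commute)  # prints 8
```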
For part (ii), by the electrical network analogy (Chapter 3 yyy)
the quantity in question equals
[displayed equation (6.22)]
where is the effective resistance in between and .
Clearly this effective resistance is at most the distance (, in the
argument above) from to , which by (6.21) is at most
.
Thus the quantity (6.22) is at most , establishing
the desired result in the case .
If then there are at least edges from to ,
so and the quantity (6.22) is
at most
.
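The step "effective resistance is at most the distance" can be checked concretely on the n-cycle: between two vertices at graph distance k there are exactly two edge-disjoint paths, of lengths k and n-k, in parallel, so

```latex
r(0,k) \;=\; \frac{k(n-k)}{k+(n-k)} \;=\; \frac{k(n-k)}{n} \;\le\; k,
```

which is indeed at most the graph distance k.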
For part (iii), fix a state and an integer time .
Write for the number of visits to before time ,
i.e. during times .
Then
[displayed equation (6.23)]
the inequality by conditioning on .
Now choose real such that .
Since , the set
[displayed equation]
has ,
so part (ii) implies
[displayed equation (6.24)]
Now by regularity we can rewrite as
,
and so by conditioning on
[displayed equation]
Setting and combining with (6.24) gives (iii).
The bound in (iv) now follows from (iii) and (6.23).
6.4.1 Covering by multiple walks
The first application is a variant of work of Broder et al. [64], discussed
further in section 6.8.2.
Proposition 6.17
On a regular -vertex graph, consider independent random walks,
each started at a uniform random vertex.
Let be the time until every vertex has been hit by some walk.
Then
[displayed bound]
Remarks.
The point is the dependence on .
On the -cycle, for it can be shown that initially the largest gap between adjacent
walkers is and that ,
so in this respect the bound is sharp.
Of course, for the bound would be no improvement over Theorem 6.4.
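Even with the displayed bound lost, the content of Proposition 6.17 is qualitative: many independent walkers cover the graph much faster than one. A hedged simulation sketch on the n-cycle (the parameters and function names are ours, not the book's):

```python
import random

def cover_time_multi(n, k, seed):
    """Time until every vertex of the n-cycle has been hit by at least
    one of k independent walkers started at uniform random vertices."""
    rng = random.Random(seed)
    walkers = [rng.randrange(n) for _ in range(k)]
    unvisited = set(range(n)) - set(walkers)
    t = 0
    while unvisited:
        t += 1
        walkers = [(x + rng.choice((-1, 1))) % n for x in walkers]
        unvisited.difference_update(walkers)
    return t

def mean_cover(n, k, reps=30):
    """Empirical mean cover time over `reps` independent runs."""
    return sum(cover_time_multi(n, k, seed) for seed in range(reps)) / reps

# More walkers should cover the cycle substantially faster.
for k in (1, 4, 16):
    print(k, mean_cover(60, k))
```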
Proof.
As usual write for the hitting time on for a single walk,
and write for the first time is visited by some walk.
Then
[chain of displayed inequalities]
by Proposition 6.16 (iii), provided .
So
[displayed equation (6.25)]
The bound becomes for
.
So
[chain of displayed inequalities]
and the issue is to show that and are .
To handle , split the set of walks into subsets
of sizes and . By independence,
for we have
.
Then
[chain of displayed inequalities]
To bound we start with a calculus exercise:
for
[displayed calculus bound]
The sum is bounded by the corresponding integral over
and the obvious calculation, whose details we omit, bounds this integral
by .
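The summand in the calculus exercise was lost above; the comparison being invoked is the standard one for a nonincreasing function f:

```latex
\sum_{j=m+1}^{M} f(j) \;\le\; \int_{m}^{M} f(x)\,dx \;\le\; \sum_{j=m}^{M-1} f(j),
```

so a sum of nonincreasing terms is bounded by the corresponding integral, up to the first term.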
6.4.2 Bounding point probabilities
Our second application is to universal bounds on point probabilities.
A quite different universal bound will be given in Chapter yyy.
Proposition 6.18
For continuous-time random walk on a regular -vertex graph,
[displayed bounds]
where and are absolute constants.
In discrete time one can get essentially the same result, but
with the bounds multiplied by , though we shall not give details
(see Notes).
Proof.
is decreasing in , so
[displayed equation]
where the last inequality is Proposition 6.16 (iii), whose proof is unchanged
in continuous time, and which holds for .
This gives the first inequality when , and the general case follows
from Chapter 3 yyy.
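The first step uses the fact that the return probability P_v(X_t = v) is nonincreasing in t for continuous-time reversible walk. A sketch checking this numerically on the 8-cycle via uniformization (the function name and truncation level are ours):

```python
import math

def p_vv(n, t, terms=200):
    """Continuous-time return probability P_v(X_t = v) for random walk
    on the n-cycle, via uniformization:
    p(t) = sum_k e^{-t} t^k / k! * (P^k)_{vv}, P the one-step matrix."""
    # Distribution over positions relative to v, started at v.
    dist = [0.0] * n
    dist[0] = 1.0
    weight = math.exp(-t)       # Poisson(t) mass at k = 0
    ret = weight * dist[0]
    for k in range(1, terms):
        # One step of simple random walk on the cycle.
        dist = [(dist[(i - 1) % n] + dist[(i + 1) % n]) / 2 for i in range(n)]
        weight *= t / k         # Poisson(t) mass at k
        ret += weight * dist[0]
    return ret

for t in (0.5, 1.0, 2.0, 4.0, 8.0):
    print(t, p_vv(8, t))
```

The printed values decrease with t, approaching the stationary probability 1/8 from above.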
For the second inequality, recall the definition of separation from Chapter 4 yyy.
Given a vertex and a time , there exists a probability distribution
such that
[displayed equation]
Then for ,
[displayed equation]
Thus, defining
,
we have proved
[displayed equation (6.26)]
Now by the first inequality of the Proposition,
and by definition of
in Chapter 4 yyy,
so by iterating (6.26) we have
[displayed equation (6.27)]
But by Chapter 4 yyy we have
for an absolute constant ,
and then by Corollary 6.9 we have
.
The desired inequality now follows from (6.27).
6.4.3 A cat and mouse game
Here we reconsider the cat and mouse game discussed in Chapter 4 section yyy.
Recall that the cat performs continuous-time random walk on a
-vertex graph, and the mouse moves according to some arbitrary deterministic strategy.
Let be the first meeting time, and let be the maximum of
over all pairs of initial vertices and all strategies for the mouse.
Proposition 6.19
On a regular graph,
for some absolute constant .
Proof.
The proof relies on Proposition 6.18, whose conclusion implies
there exists a constant such that
[displayed equation]
Consider running the process forever.
The point is that, regardless of the initial positions, the chance
that the cat and mouse are “together” (i.e. at the same vertex) at time is at most
.
So in the case where the cat starts with the (uniform) stationary
distribution,
[displayed equations, involving the density function of the meeting time]
So
[chain of displayed inequalities]
Rearranging,
.
Writing , Markov’s inequality gives
.
This inequality assumes the cat starts with the stationary distribution.
When it starts at some arbitrary vertex,
we may use the definition of separation
(recall Chapter 4 yyy) to see
.
Then by iteration,
.
So appealing to the definition of ,
[displayed equation]
But results from Chapter 4 and this chapter show
,
establishing the Proposition.
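The constants in the argument above are lost, but the game itself is easy to experiment with. A toy discrete-time sketch on a cycle with two simple deterministic mouse strategies (everything here, names and parameters, is our illustration, not the book's construction):

```python
import random

def meeting_time(n, mouse_step, cat_start, mouse_start, seed):
    """Toy discrete-time cat-and-mouse on the n-cycle: the cat performs
    simple random walk; the mouse follows the deterministic rule
    mouse_step(t, position).  Returns the first time they coincide."""
    rng = random.Random(seed)
    cat, mouse = cat_start, mouse_start
    t = 0
    while cat != mouse:
        t += 1
        cat = (cat + rng.choice((-1, 1))) % n
        mouse = mouse_step(t, mouse) % n
    return t

still = lambda t, pos: pos        # mouse stands still
runner = lambda t, pos: pos + 1   # mouse walks steadily clockwise

# Odd n avoids a parity trap in which cat and runner never coincide.
n = 31
for name, strat in (("still", still), ("runner", runner)):
    avg = sum(meeting_time(n, strat, 0, n // 2, s) for s in range(200)) / 200
    print(name, round(avg, 1))
```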