It is useful to record some elementary facts about simple symmetric random walk $(X_{t})$ on the (infinite) set of all integers. As we shall observe, these may be derived in several different ways.

A fundamental formula gives exit probabilities:

$P_{b}(T_{c}<T_{a})=\frac{b-a}{c-a},\ a<b<c.$ (5.5)

An elementary argument is that $g(i)\equiv P_{i}(T_{c}<T_{a})$ satisfies the $1$-step recurrence

$g(i)={\textstyle\frac{1}{2}}g(i+1)+{\textstyle\frac{1}{2}}g(i-1),\ a<i<c$

$g(a)=0,\ \ g(c)=1,$

whose solution is $g(i)=(i-a)/(c-a)$. At a more sophisticated level, (5.5) is a martingale result. The quantity $p\equiv P_{b}(T_{c}<T_{a})$ must satisfy

$b=E_{b}\,X(T_{a}\wedge T_{c})=pc+(1-p)a,$

where the first equality is the optional sampling theorem for the martingale $X$, and solving this equation gives (5.5).
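
As an illustrative sketch (not part of the text), (5.5) can be checked by simulating the walk; the function and parameters below are made up for the check:

```python
import random

def exit_prob_mc(a, b, c, trials=20000, seed=1):
    """Estimate P_b(T_c < T_a) for simple symmetric random walk."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        x = b
        while a < x < c:          # run until exiting (a, c)
            x += rng.choice((-1, 1))
        hits += (x == c)
    return hits / trials

a, b, c = 0, 3, 10
est = exit_prob_mc(a, b, c)
exact = (b - a) / (c - a)         # formula (5.5)
print(est, exact)
```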

For $a<c$, note that $T_{a}\wedge T_{c}$ is the “exit time” from the open interval $(a,c)$. We can use (5.5) to calculate the “exit before return” probability

$P_{b}(T^{+}_{b}>T_{a}\wedge T_{c})={\textstyle\frac{1}{2}}P_{b+1}(T_{c}<T_{b})+{\textstyle\frac{1}{2}}P_{b-1}(T_{a}<T_{b})=\frac{1}{2}\,\frac{1}{c-b}+\frac{1}{2}\,\frac{1}{b-a}=\frac{c-a}{2(c-b)(b-a)}.$ (5.6)

For the walk started at $b$, let $m(b,x;a,c)$ be the mean number of visits to $x$ before the exit time $T_{a}\wedge T_{c}$. (Recall from Chapter 2 our convention that “before time $t$” includes time $0$ but excludes time $t$). The number of returns to $b$ clearly has a Geometric distribution, so by (5.6)

$m(b,b;a,c)=\frac{2(c-b)(b-a)}{c-a},\ \ a\leq b\leq c.$ (5.7)

To get the analog for visits to $x$ we consider whether or not $x$ is hit at all before exiting; this gives

$m(b,x;a,c)=P_{b}(T_{x}<T_{a}\wedge T_{c})\ m(x,x;a,c).$

Appealing to (5.5) and (5.7) gives the famous mean occupation time formula

$m(b,x;a,c)=\begin{cases}\frac{2(x-a)(c-b)}{c-a},& a\leq x\leq b\leq c\\ \frac{2(c-x)(b-a)}{c-a},& a\leq b\leq x\leq c.\end{cases}$ (5.8)

Now the (random) time to exit must equal the sum of the (random) times spent at each state. So, taking expectations,

$E_{b}(T_{a}\wedge T_{c})=\sum_{x=a}^{c}m(b,x;a,c),$

and after a little algebra we obtain

$E_{b}(T_{a}\wedge T_{c})=(b-a)(c-b),\ a<b<c.$
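
The "little algebra" can be verified exactly: summing the occupation formula (5.8) over $x$ recovers $(b-a)(c-b)$. Here is an illustrative check in rational arithmetic (the interval $[0,9]$ is arbitrary):

```python
from fractions import Fraction

def m(b, x, a, c):
    """Mean occupation time formula (5.8), exact in rational arithmetic."""
    if x <= b:
        return Fraction(2 * (x - a) * (c - b), c - a)
    return Fraction(2 * (c - x) * (b - a), c - a)

a, c = 0, 9
# for each start b, sum mean occupation times over all states x in [a, c]
checks = [(b, sum(m(b, x, a, c) for x in range(a, c + 1)))
          for b in range(a + 1, c)]
print(checks)
```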

This derivation of Lemma 5.2 from (5.8) has the advantage of giving the mean occupation time formula (5.8) on the way. There are two alternative ways to prove Lemma 5.2. An elementary proof is to set up and solve the $1$-step recurrence for $h(i)\equiv E_{i}(T_{a}\wedge T_{c})$:

$h(i)=1+{\textstyle\frac{1}{2}}h(i+1)+{\textstyle\frac{1}{2}}h(i-1),\ a<i<c$

$h(a)=h(c)=0.$

The more elegant proof uses a martingale argument. Taking $b=0$ without loss of generality, the first equality below is the optional sampling theorem for the martingale $(X^{2}(t)-t)$:

$E_{0}(T_{a}\wedge T_{c})=E_{0}X^{2}(T_{a}\wedge T_{c})=a^{2}P_{0}(T_{a}<T_{c})+c^{2}P_{0}(T_{c}<T_{a})=a^{2}\frac{c}{c-a}+c^{2}\frac{-a}{c-a}=-ac,$

where the third equality is by (5.5).

The preceding discussion works in discrete or continuous time. Exact distributions at time $t$ will of course differ in the two cases. In discrete time we appeal to the Binomial distribution for the number of $+1$ steps, to get

$P_{0}(X_{2t}=2j)=\frac{(2t)!}{(t+j)!\,(t-j)!}2^{-2t},\ -t\leq j\leq t$ (5.9)

and a similar expression for odd times $t$. In continuous time, the numbers of $+1$ and of $-1$ steps in time $t$ are independent Poisson$(t)$ variables, so

$P_{0}(X_{t}=-j)=P_{0}(X_{t}=j)=e^{-2t}\sum_{i=0}^{\infty}\frac{t^{2i+j}}{i!\,(i+j)!},\ j\geq 0.$ (5.10)
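
As a sketch, (5.9) can be checked against the exact law of the discrete-time walk, computed by direct convolution of the step distribution (an illustrative verification, not from the text):

```python
from math import comb

def walk_dist(t):
    """Exact law of X_t for the discrete-time walk started at 0."""
    dist = {0: 1.0}
    for _ in range(t):
        nxt = {}
        for x, p in dist.items():
            nxt[x + 1] = nxt.get(x + 1, 0.0) + 0.5 * p
            nxt[x - 1] = nxt.get(x - 1, 0.0) + 0.5 * p
        dist = nxt
    return dist

t = 6
dist = walk_dist(2 * t)
# compare each atom to the binomial formula (5.9)
errs = [abs(dist[2 * j] - comb(2 * t, t + j) * 2.0 ** (-2 * t))
        for j in range(-t, t + 1)]
print(max(errs))
```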

Consider the $n$-vertex linear graph $0$ – $1$ – $2$ – $\cdots$ – $(n-1)$ with arbitrary edge-weights $(w_{1},\ldots,w_{n-1})$, where $w_{i}>0$ is the weight on edge $(i-1,i)$. Set $w_{0}=w_{n}=0$ to make some later formulas cleaner. The corresponding discrete-time random walk has transition probabilities

$p_{i,i+1}=\frac{w_{i+1}}{w_{i}+w_{i+1}},\ \ p_{i,i-1}=\frac{w_{i}}{w_{i}+w_{i+1}},\ \ 0\leq i\leq n-1$

and stationary distribution

$\pi_{i}=\frac{w_{i}+w_{i+1}}{w},\ \ 0\leq i\leq n-1$

where $w=2\sum_{i}w_{i}$. In probabilistic terminology, this is a birth-and-death process, meaning that a transition cannot alter the state by more than $1$. It is elementary that such processes are automatically reversible (xxx spells out the more general result for trees), so as discussed in Chapter 3 yyy the set-up above with weighted graphs gives the general discrete-time birth-and-death process with $p_{ii}\equiv 0$. But note that the continuization does not give the general continuous-time birth-and-death process, which has $2(n-1)$ parameters $(q_{i,i-1},q_{i,i+1})$ instead of just $n-1$ parameters $(w_{i})$. The formulas below could all be extended to this general case (the analog of Proposition 5.3 can be found in undergraduate textbooks, e.g., Karlin and Taylor [208] Chapter 4) but our focus is on the simplifications which occur in the “weighted graphs” case.
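
As a sketch, the construction can be coded and the stationarity claim $\pi P=\pi$ checked directly; the constructor and the weights below are made up for illustration:

```python
def bd_chain(w):
    """Transition matrix P and stationary distribution pi for the
    weighted linear graph; w[i] is the weight of edge (i-1, i)."""
    n = len(w) + 1                  # vertices 0 .. n-1
    wp = [0.0] + list(w) + [0.0]    # pad so that w_0 = w_n = 0
    total = 2.0 * sum(w)            # this is w = 2 * sum_i w_i
    P = [[0.0] * n for _ in range(n)]
    pi = []
    for i in range(n):
        s = wp[i] + wp[i + 1]
        if i + 1 < n:
            P[i][i + 1] = wp[i + 1] / s
        if i > 0:
            P[i][i - 1] = wp[i] / s
        pi.append(s / total)
    return P, pi

P, pi = bd_chain([1.0, 2.0, 3.0, 2.0])
n = len(pi)
piP = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
print(pi)
```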

Proposition 5.3. (a) For $a<b<c$,

$P_{b}(T_{c}<T_{a})=\frac{\sum_{i=a+1}^{b}w^{-1}_{i}}{\sum_{i=a+1}^{c}w^{-1}_{i}}.$

(b) For $b<c$,

$E_{b}T_{c}=c-b+2\sum_{j=b+1}^{c}\sum_{i=1}^{j-1}w_{i}w^{-1}_{j}.$

(c) For $b<c$,

$E_{b}T_{c}+E_{c}T_{b}=w\sum_{i=b+1}^{c}w_{i}^{-1}.$

Note that we can obtain an expression for $E_{c}T_{b}$, $b<c$, by reflecting the weighted graph about its center.
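
Before the proof, here is an illustrative exact-arithmetic check of (b) and (c) on a hypothetical $6$-vertex path, with $E_{c}T_{b}$ computed via the reflection just described (so that $E_{i}T_{i-1}=1+2\sum_{k>i}w_{k}/w_{i}$):

```python
from fractions import Fraction as F

w = [0, 3, 1, 4, 1, 5]   # hypothetical weights w_1..w_5; w[0] = 0 is padding
n = len(w)               # vertices are 0 .. n-1
wtot = 2 * sum(w)

def up(b, c):
    """E_b T_c for b < c, from formula (b)."""
    return (c - b) + 2 * sum(F(w[i], w[j])
                             for j in range(b + 1, c + 1) for i in range(1, j))

def down(c, b):
    """E_c T_b for b < c, from (b) applied to the reflected graph."""
    return (c - b) + 2 * sum(F(w[k], w[i])
                             for i in range(b + 1, c + 1) for k in range(i + 1, n))

b, c = 1, 4
lhs = up(b, c) + down(c, b)
rhs = wtot * sum(F(1, w[i]) for i in range(b + 1, c + 1))  # formula (c)
print(lhs, rhs)
```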

Proof. These are extensions of (5.5), (5.1), and (5.2) and recycle some of the previous arguments. Writing $h(j)=\sum_{i=1}^{j}w^{-1}_{i}$, we have that $(h(X_{t}))$ is a martingale, so

$h(b)=E_{b}h(X(T_{a}\wedge T_{c}))=ph(c)+(1-p)h(a)$

for $p\equiv P_{b}(T_{c}<T_{a})$. Solving this equation gives $p=\frac{h(b)-h(a)}{h(c)-h(a)}$, which is (a).
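
The martingale property of $(h(X_{t}))$ amounts to the one-step identity $p_{i,i+1}h(i+1)+p_{i,i-1}h(i-1)=h(i)$ at interior states, which can be checked in exact arithmetic (the weights below are made up):

```python
from fractions import Fraction as F

w = [0, 2, 5, 1, 3]          # hypothetical weights w_1..w_4; w[0] = 0 is padding
# h(j) = sum_{i<=j} 1/w_i
h = [sum(F(1, wi) for wi in w[1:j + 1]) for j in range(len(w))]

gaps = []
for i in range(1, len(w) - 1):               # interior states
    p_up = F(w[i + 1], w[i] + w[i + 1])
    p_dn = 1 - p_up
    gaps.append(p_up * h[i + 1] + p_dn * h[i - 1] - h[i])
print(gaps)
```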

The mean hitting time formula (b) has four different proofs! Two that we will not give are those described below Lemma 5.2: set up and solve a recurrence equation, or use a well-chosen martingale. The slick argument is to use the essential edge lemma (Lemma 5.1) to show

$E_{j-1}T_{j}=1+2\frac{\sum_{i=1}^{j-1}w_{i}}{w_{j}}.$

Then

$E_{b}T_{c}=\sum_{j=b+1}^{c}E_{j-1}T_{j},$

establishing (b). Let us also write out the non-slick argument, using mean occupation times. By considering mean time spent at $i$,

$E_{b}T_{c}=\sum_{i=0}^{b-1}P_{b}(T_{i}<T_{c})\,m(i,i,c)+\sum_{i=b}^{c-1}m(i,i,c),$ (5.11)

where $m(i,i,c)$ is the expectation, starting at $i$, of the number of visits to $i$ before $T_{c}$. But

$m(i,i,c)=\frac{1}{P_{i}(T_{c}<T^{+}_{i})}=\frac{1}{p_{i,i+1}P_{i+1}(T_{c}<T_{i})}=(w_{i}+w_{i+1})\sum_{j=i+1}^{c}w^{-1}_{j},$

the last equality using (a).

Substituting this and (a) into (5.11) leads to the formula stated in (b).

Finally, (c) can be deduced from (b), but it is more elegant to use the essential edge lemma to get

$E_{i-1}T_{i}+E_{i}T_{i-1}=w/w_{i}$ (5.12)

and then use

$E_{b}T_{c}+E_{c}T_{b}=\sum_{i=b+1}^{c}(E_{i-1}T_{i}+E_{i}T_{i-1}).\ \ \rule{4.3pt}{4.3pt}$

We now start some little calculations relating to the parameters discussed in Chapter 4. Plainly, from Proposition 5.3(c),

$\tau^{*}=w\sum_{i=1}^{n-1}w^{-1}_{i}.$ (5.13)

Next, consider calculating $E_{\pi}T_{b}$. We could use Proposition 5.3(b), but instead let us apply Theorem yyy of Chapter 3, which expresses $E_{\pi}T_{b}$ in terms of unit flows from $b$ to $\pi$. In a linear graph there is only one such flow: for $i\geq b$ it has $f_{i,i+1}=\pi[i+1,n-1]\equiv\sum_{j=i+1}^{n-1}\pi_{j}$, and for $i\leq b-1$ it has $f_{i,i+1}=-\pi[0,i]$. The theorem then gives

$E_{\pi}T_{b}=w\sum_{i=b+1}^{n-1}\frac{\pi^{2}[i,n-1]}{w_{i}}+w\sum_{i=1}^{b}\frac{\pi^{2}[0,i-1]}{w_{i}}.$ (5.14)

There are several ways to use the preceding results to compute the average hitting time parameter $\tau_{0}$. Perhaps the most elegant is

$\begin{aligned}\tau_{0}&=\sum_{i}\sum_{j>i}\pi_{i}\pi_{j}(E_{i}T_{j}+E_{j}T_{i})\\ &=\sum_{k=1}^{n-1}\pi[0,k-1]\,\pi[k,n-1]\,(E_{k-1}T_{k}+E_{k}T_{k-1})\\ &=\sum_{k=1}^{n-1}\pi[0,k-1]\,\pi[k,n-1]\,w/w_{k}\quad\mbox{by (5.12)}\\ &=w^{-1}\sum_{k=1}^{n-1}w^{-1}_{k}\Big(w_{k}+2\sum_{j=1}^{k-1}w_{j}\Big)\Big(w_{k}+2\sum_{j=k+1}^{n-1}w_{j}\Big).\end{aligned}$ (5.15)
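
As a sketch (with made-up weights), the first and last expressions in (5.15) can be compared in exact arithmetic, using Proposition 5.3(c) for $E_{i}T_{j}+E_{j}T_{i}$:

```python
from fractions import Fraction as F

w = [0, 2, 1, 3, 1]     # hypothetical weights w_1..w_4; w[0] = 0 is padding
n = len(w)              # number of vertices
wp = w + [0]            # pad so that w_n = 0
wtot = 2 * sum(w)
pi = [F(wp[i] + wp[i + 1], wtot) for i in range(n)]

# pair sum, using Proposition 5.3(c): E_iT_j + E_jT_i = w * sum_{k=i+1}^j 1/w_k
tau0_pairs = sum(pi[i] * pi[j] * wtot * sum(F(1, w[k]) for k in range(i + 1, j + 1))
                 for i in range(n) for j in range(i + 1, n))

# closed form, the last line of (5.15)
tau0_closed = F(1, wtot) * sum(
    F(1, w[k]) * (w[k] + 2 * sum(w[1:k])) * (w[k] + 2 * sum(w[k + 1:]))
    for k in range(1, n))
print(tau0_pairs, tau0_closed)
```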

There are sophisticated methods (see Notes) of studying $\tau_{1}$, but let us just point out that Proposition 5.23 later (proved in the more general context of trees) holds in the present setting, giving

$\frac{1}{K_{1}}\min_{x}\max(E_{0}T_{x},E_{n-1}T_{x})\leq\tau_{1}\leq K_{2}\min_{x}\max(E_{0}T_{x},E_{n-1}T_{x}).$ (5.16)

We do not know an explicit formula for $\tau_{2}$, but we can get an upper bound easily from the “distinguished paths” result Chapter 4 yyy. For $x<y$ the path $\gamma_{xy}$ has $r(\gamma_{xy})=\sum_{u=x+1}^{y}1/w_{u}$ and hence the bound is

$\tau_{2}\leq\frac{1}{w}\max_{j}\sum_{x=0}^{j-1}\sum_{y=j}^{n-1}\sum_{u=x+1}^{y}\frac{(w_{x}+w_{x+1})(w_{y}+w_{y+1})}{w_{u}}.$ (5.17)

jjj This uses the Diaconis–Stroock version. The Sinclair version is

$\tau_{2}\leq\frac{1}{w}\max_{j}\frac{1}{w_{j}}\sum_{x=0}^{j-1}\sum_{y=j}^{n-1}(w_{x}+w_{x+1})(w_{y}+w_{y+1})(y-x).$

xxx literature on $\tau_{2}$ (van Doorn, etc.)

jjj Also relevant is work of N. Kahale (and others) on how optimal choice of weights in use of Cauchy–Schwarz inequality for Diaconis–Stroock–Sinclair leads to equality in case of birth-and-death chains.

jjj See also Diaconis and Saloff-Coste Metropolis paper, which mentions work of Diaconis students on Metropolizing birth-and-death chains.

xxx examples of particular w. jjj might just bring up as needed?

xxx contraction principle and lower bounds on $\tau_{2}$ (relating to current Section 6 of Chapter 4)

By Chapter 4 Lemma yyy,

$\tau_{c}=\max_{1\leq i\leq n-1}\frac{w\,\pi[0,i-1]\,\pi[i,n-1]}{w_{i}}.$ (5.18)

The two-state chain.

This is the birth-and-death chain on $\{0,1\}$ with $p_{01}=1-p_{00}=p$ and $p_{10}=1-p_{11}=q$, where $0<p<1$ and $0<q<1$ are arbitrarily specified. Since $p_{00}$ and $p_{11}$ are positive, this does not quite fit into the framework of Section 5.1.2, but everything is nonetheless easy to calculate. The stationary distribution is given by

$\pi_{0}=q/(p+q),\ \ \pi_{1}=p/(p+q).$

In discrete time, the eigenvalues are $\lambda_{1}=1$ and $\lambda_{2}=1-p-q$, and in the notation of Chapter 3, Section yyy for the spectral representation, the matrix $S$ has $s_{11}=1-p$, $s_{22}=1-q$, and $s_{12}=s_{21}=(pq)^{1/2}$ with normalized right eigenvectors

$u_{1}=[(q/(p+q))^{1/2},\,(p/(p+q))^{1/2}]^{T},\ \ \ u_{2}=[(p/(p+q))^{1/2},\,-(q/(p+q))^{1/2}]^{T}.$

The transition probabilities are given by

$P_{0}(X_{t}=1)=1-P_{0}(X_{t}=0)=\frac{p}{p+q}\left[1-(1-p-q)^{t}\right],$

$P_{1}(X_{t}=0)=1-P_{1}(X_{t}=1)=\frac{q}{p+q}\left[1-(1-p-q)^{t}\right]$

in discrete time and by

$P_{0}(X_{t}=1)=1-P_{0}(X_{t}=0)=\frac{p}{p+q}\left[1-e^{-(p+q)t}\right],$

$P_{1}(X_{t}=0)=1-P_{1}(X_{t}=1)=\frac{q}{p+q}\left[1-e^{-(p+q)t}\right]$

in continuous time. It is routine to calculate $E_{0}T_{1}=1/p$, $E_{1}T_{0}=1/q$, and

$\bar{d}(t)=e^{-(p+q)t},\ \ d(t)=\max(p/(p+q),q/(p+q))\,e^{-(p+q)t},$

and then

$\max_{ij}E_{i}T_{j}=\max(E_{0}T_{1},E_{1}T_{0})=\frac{1}{\min(p,q)},\ \ \tau^{*}=E_{0}T_{1}+E_{1}T_{0}=\frac{p+q}{pq},$

and

$\tau_{0}=\tau_{1}=\tau_{2}=\tau_{c}=1/(p+q).$
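
A minimal numerical check of the discrete-time transition formulas, with arbitrarily chosen parameters (illustrative only):

```python
p, q = 0.3, 0.5
P = [[1 - p, p], [q, 1 - q]]

def power(P, t):
    """t-step transition matrix by repeated multiplication."""
    M = [[1.0, 0.0], [0.0, 1.0]]
    for _ in range(t):
        M = [[sum(M[i][k] * P[k][j] for k in range(2)) for j in range(2)]
             for i in range(2)]
    return M

t = 7
M = power(P, t)
pred01 = p / (p + q) * (1 - (1 - p - q) ** t)   # claimed P_0(X_t = 1)
pred10 = q / (p + q) * (1 - (1 - p - q) ** t)   # claimed P_1(X_t = 0)
print(M[0][1], pred01)
```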

Biased random walk with reflecting barriers.

We consider the chain on $\{0,1,\ldots,n-1\}$ with reflecting barriers at $0$ and $n-1$ that at each unit of time moves distance $1$ rightward with probability $p$ and distance $1$ leftward with probability $q=1-p$. Formally, the setting is that of Section 5.1.2 with

$w_{i}=\rho^{i-1},\ \ w=\frac{2(1-\rho^{n-1})}{1-\rho}\rightarrow\frac{2}{1-\rho},$

where we assume $\rho\equiv p/q<1$ and all asymptotics developed for this example are for fixed $\rho$ and large $n$. If $\rho\neq 1$, there is by symmetry no loss of generality in assuming $\rho<1$, and the case $\rho=1$ will be treated later in Example 5.8.

Specializing the results of Section 5.1.2 to the present example, one can easily derive the asymptotic results

$\max_{ij}E_{i}T_{j}\sim\tau^{*}\sim E_{\pi}T_{n-1}\sim 2\rho^{-(n-2)}/(1-\rho)^{2}$ (5.19)

and, by use of (5.15),

$\tau_{0}\sim\frac{1+\rho}{1-\rho}n.$ (5.20)

For $\tau_{c}$, the maximizing $i$ in (5.18) equals $(1+o(1))n/2$, and this leads to

$\tau_{c}\rightarrow(1+\rho)/(1-\rho).$ (5.21)

The spectral representation can be obtained using the orthogonal polynomial techniques described in Karlin and Taylor [209] Chapter 10; see especially Section 5(b) there. The reader may verify that the eigenvalues of ${\bf P}$ in discrete time are $1$, $-1$, and, for $m=1,\ldots,n-2$,

$\frac{2\rho^{1/2}}{1+\rho}\cos\theta_{m},\mbox{\ where\ }\theta_{m}\equiv\frac{m\pi}{n-1}$

with (unnormalized) right eigenvector

$\rho^{-i/2}\left[2\cos(i\theta_{m})-(1-\rho)\frac{\sin((i+1)\theta_{m})}{\sin(\theta_{m})}\right],\ \ i=0,\ldots,n-1.$

In particular,

$\tau_{2}=\left[1-\frac{2\rho^{1/2}}{1+\rho}\cos\left(\frac{\pi}{n-1}\right)\right]^{-1}\rightarrow\frac{1+\rho}{(1-\rho^{1/2})^{2}}.$ (5.22)
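
The stated eigenvalue–eigenvector pair can be verified directly, since checking $Pv=\lambda v$ requires no eigensolver; the parameters $\rho$, $n$, $m$ below are chosen arbitrarily for illustration:

```python
import math

rho = 0.25
p = rho / (1 + rho)      # rightward step probability, so p/q = rho
q = 1 - p
n = 8                    # vertices 0 .. n-1, reflecting at both ends

P = [[0.0] * n for _ in range(n)]
P[0][1] = 1.0            # reflection at 0
P[n - 1][n - 2] = 1.0    # reflection at n-1
for i in range(1, n - 1):
    P[i][i + 1] = p
    P[i][i - 1] = q

m = 1
theta = m * math.pi / (n - 1)
lam = 2 * math.sqrt(rho) / (1 + rho) * math.cos(theta)
v = [rho ** (-i / 2) * (2 * math.cos(i * theta)
     - (1 - rho) * math.sin((i + 1) * theta) / math.sin(theta))
     for i in range(n)]
Pv = [sum(P[i][j] * v[j] for j in range(n)) for i in range(n)]
resid = max(abs(Pv[i] - lam * v[i]) for i in range(n))
print(resid)
```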

The random walk has drift $p-q=-(1-\rho)/(1+\rho)\equiv-\mu$. It is not hard to show for fixed $t>0$ that the distances $\bar{d}_{n}(tn)$ and $d_{n}(tn)$ of Chapter 4 yyy converge to $1$ if $t<1/\mu$ and to $0$ if $t>1/\mu$.

jjj include details? In fact, the cutoff occurs at $n/\mu+c_{\rho}n^{1/2}$: cf. (e.g.) Example 4.46 in [113]. Continue same paragraph:

In particular,

$\tau_{1}\sim\frac{1+\rho}{1-\rho}n.$ (5.23)

The $\mbox{M}/\mbox{M}/1$ queue.

We consider the $\mbox{M}/\mbox{M}/1/(n-1)$ queue. Customers queue up at a facility to wait for a single server (hence the “$1$”) and are handled according to a “first come, first served” queuing discipline. The first “M” specifies that the arrival point process is Markovian, i.e., a Poisson process with intensity parameter $\lambda$ (say); likewise, the second “M” reflects our assumption that the service times are exponential with parameter $\mu$ (say). The parameter $n-1$ is the queue size limit; customers arriving when the queue is full are turned away.

We have described a continuous-time birth-and-death process with constant birth and death rates $\lambda$ and $\mu$, respectively. If $\lambda+\mu=1$, this is nearly the continuized biased random walk of Example 5.5, the only difference being in the boundary behavior. In particular, one can check that the asymptotics in (5.19)–(5.23) remain unchanged, where $\rho\equiv\lambda/\mu$, called the traffic intensity, remains fixed and $n$ becomes large. For the $\mbox{M}/\mbox{M}/1/(n-1)$ queue, the stationary distribution is the conditional distribution of $G-1$ given $G\leq n$, where $G$ has the Geometric($1-\rho$) distribution. The eigenvalues are $1$ and, for $m=1,\ldots,n-1$,

$\frac{2\rho^{1/2}}{1+\rho}\cos\theta_{m},\mbox{\ where now\ }\theta_{m}\equiv\frac{m\pi}{n}$

with (unnormalized) right eigenvector

$\frac{2\rho^{-i/2}}{1+\rho}\left[\cos(i\theta_{m})+(\rho^{1/2}\cos\theta_{m}-1)\frac{\sin((i+1)\theta_{m})}{\sin(\theta_{m})}\right],\ \ i=0,\ldots,n-1.$
