5 Examples: Special Graphs and Trees (April 23 1996)

5.1 One-dimensional chains

5.1.1 Simple symmetric random walk on the integers

It is useful to record some elementary facts about simple symmetric random walk $(X_t)$ on the (infinite) set of all integers. As we shall observe, these may be derived in several different ways.

A fundamental formula gives exit probabilities:

\[
P_b(T_c < T_a) = \frac{b-a}{c-a}, \quad a < b < c. \tag{5.5}
\]

An elementary argument is that $g(i) \equiv P_i(T_c < T_a)$ satisfies the one-step recurrence
\[
g(i) = \tfrac{1}{2} g(i+1) + \tfrac{1}{2} g(i-1), \quad a < i < c,
\]
\[
g(a) = 0, \quad g(c) = 1,
\]
whose solution is $g(i) = (i-a)/(c-a)$. At a more sophisticated level, (5.5) is a martingale result. The quantity $p \equiv P_b(T_c < T_a)$ must satisfy

\[
b = E_b\, X(T_a \wedge T_c) = pc + (1-p)a,
\]

where the first equality is the optional sampling theorem for the martingale $X$, and solving this equation gives (5.5).
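Formula (5.5) is also easy to test by direct simulation. The following Python sketch (standard library only; the function name and parameter values are ours, purely illustrative) estimates $P_b(T_c < T_a)$ by Monte Carlo:

```python
import random

def exit_prob_estimate(a, b, c, trials=20000, seed=1):
    """Monte Carlo estimate of P_b(T_c < T_a) for simple symmetric
    random walk started at b, where a < b < c."""
    random.seed(seed)
    hits_c = 0
    for _ in range(trials):
        x = b
        while a < x < c:
            x += random.choice((-1, 1))
        hits_c += (x == c)
    return hits_c / trials

# (5.5) predicts P_2(T_5 < T_0) = (2 - 0)/(5 - 0) = 0.4
est = exit_prob_estimate(0, 2, 5)
assert abs(est - 0.4) < 0.02
```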

For $a < c$, note that $T_a \wedge T_c$ is the “exit time” from the open interval $(a,c)$. We can use (5.5) to calculate the “exit before return” probability

\begin{align}
P_b(T^+_b > T_a \wedge T_c) &= \tfrac{1}{2} P_{b+1}(T_c < T_b) + \tfrac{1}{2} P_{b-1}(T_a < T_b) \tag{5.6}\\
&= \frac{1}{2}\cdot\frac{1}{c-b} + \frac{1}{2}\cdot\frac{1}{b-a} \notag\\
&= \frac{c-a}{2(c-b)(b-a)}. \notag
\end{align}

For the walk started at $b$, let $m(b,x;a,c)$ be the mean number of visits to $x$ before the exit time $T_a \wedge T_c$. (Recall from Chapter 2 our convention that “before time $t$” includes time $0$ but excludes time $t$.) The number of returns to $b$ clearly has a Geometric distribution, so by (5.6)

\[
m(b,b;a,c) = \frac{2(c-b)(b-a)}{c-a}, \quad a \le b \le c. \tag{5.7}
\]

To get the analog for visits to $x$ we consider whether or not $x$ is hit at all before exiting; this gives

\[
m(b,x;a,c) = P_b(T_x < T_a \wedge T_c)\; m(x,x;a,c).
\]

Appealing to (5.5) and (5.7) gives the famous mean occupation time formula

\[
m(b,x;a,c) = \begin{cases} \dfrac{2(x-a)(c-b)}{c-a}, & a \le x \le b \le c \\[2ex] \dfrac{2(c-x)(b-a)}{c-a}, & a \le b \le x \le c. \end{cases} \tag{5.8}
\]
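Formula (5.8) can be confirmed by exact computation: for the walk killed on exiting $(a,c)$, the matrix of mean occupation times is $(I-Q)^{-1}$, where $Q$ is the transition matrix restricted to the interior. The following Python sketch (function and variable names are ours, not from the text) inverts it in exact rational arithmetic and compares entry by entry with (5.8):

```python
from fractions import Fraction

def visits_matrix(a, c):
    """Mean occupation times N[b][x] (visits to x before T_a ∧ T_c,
    counting time 0) for SSRW on the interior states a+1, ..., c-1,
    computed exactly as N = (I - Q)^{-1}."""
    states = list(range(a + 1, c))
    k = len(states)
    half = Fraction(1, 2)
    # build I - Q for the walk killed on exiting (a, c)
    A = [[Fraction(int(i == j)) for j in range(k)] for i in range(k)]
    for i in range(k):
        if i + 1 < k:
            A[i][i + 1] -= half
        if i - 1 >= 0:
            A[i][i - 1] -= half
    # invert by Gauss-Jordan elimination in exact rational arithmetic
    N = [[Fraction(int(i == j)) for j in range(k)] for i in range(k)]
    for col in range(k):
        piv = next(r for r in range(col, k) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        N[col], N[piv] = N[piv], N[col]
        pivval = A[col][col]
        A[col] = [v / pivval for v in A[col]]
        N[col] = [v / pivval for v in N[col]]
        for r in range(k):
            if r != col and A[r][col] != 0:
                f = A[r][col]
                A[r] = [u - f * v for u, v in zip(A[r], A[col])]
                N[r] = [u - f * v for u, v in zip(N[r], N[col])]
    return states, N

a, c = 0, 6
states, N = visits_matrix(a, c)
for bi, b in enumerate(states):
    for xi, x in enumerate(states):
        if x <= b:  # first case of (5.8)
            predicted = Fraction(2 * (x - a) * (c - b), c - a)
        else:       # second case of (5.8)
            predicted = Fraction(2 * (c - x) * (b - a), c - a)
        assert N[bi][xi] == predicted
```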

Now the (random) time to exit must equal the sum of the (random) times spent at each state. So, taking expectations,

\[
E_b(T_a \wedge T_c) = \sum_{x=a}^{c} m(b,x;a,c),
\]

and after a little algebra we obtain

Lemma 5.2

\[
E_b(T_a \wedge T_c) = (b-a)(c-b), \quad a < b < c.
\]

This derivation of Lemma 5.2 from (5.8) has the advantage of giving the mean occupation time formula (5.8) on the way. There are two alternative ways to prove Lemma 5.2. An elementary proof is to set up and solve the one-step recurrence for $h(i) \equiv E_i(T_a \wedge T_c)$:

\[
h(i) = 1 + \tfrac{1}{2} h(i+1) + \tfrac{1}{2} h(i-1), \quad a < i < c,
\]
\[
h(a) = h(c) = 0.
\]
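Since this recurrence has a unique solution, it suffices to check that $h(i) = (i-a)(c-i)$ satisfies it together with the boundary conditions; a short Python sketch in exact arithmetic (parameter values are illustrative):

```python
from fractions import Fraction

a, c = 0, 9
h = lambda i: Fraction((i - a) * (c - i))  # candidate solution from Lemma 5.2
# interior one-step equations h(i) = 1 + (h(i+1) + h(i-1))/2
for i in range(a + 1, c):
    assert h(i) == 1 + Fraction(h(i + 1) + h(i - 1), 2)
# boundary conditions
assert h(a) == 0 and h(c) == 0
```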

The more elegant proof uses a martingale argument. Taking $b = 0$ without loss of generality, the first equality below is the optional sampling theorem for the martingale $(X^2(t) - t)$:

\begin{align*}
E_0(T_a \wedge T_c) &= E_0\, X^2(T_a \wedge T_c)\\
&= a^2 P_0(T_a < T_c) + c^2 P_0(T_c < T_a)\\
&= a^2 \frac{c}{c-a} + c^2 \frac{-a}{c-a} \quad \mbox{by (5.5)}\\
&= -ac.
\end{align*}

The preceding discussion works in discrete or continuous time. Exact distributions at time $t$ will of course differ in the two cases. In discrete time we appeal to the Binomial distribution for the number of $+1$ steps, to get

\[
P_0(X_{2t} = 2j) = \frac{(2t)!}{(t+j)!\,(t-j)!}\, 2^{-2t}, \quad -t \le j \le t, \tag{5.9}
\]

and a similar expression for odd times $t$. In continuous time, the numbers of $+1$ and of $-1$ steps in time $t$ are independent Poisson($t$) variables, so

\[
P_0(X_t = -j) = P_0(X_t = j) = e^{-2t} \sum_{i=0}^{\infty} \frac{t^{2i+j}}{i!\,(i+j)!}, \quad j \ge 0. \tag{5.10}
\]
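The discrete-time formula (5.9) can be checked against the exact law of the walk, obtained by repeatedly convolving the $\pm 1$ step distribution; a Python sketch (standard library only, parameter value illustrative):

```python
from fractions import Fraction
from math import comb

t = 6  # check the law of X_{2t} after 2t = 12 steps
dist = {0: Fraction(1)}
for _ in range(2 * t):
    new = {}
    for x, p in dist.items():
        for y in (x - 1, x + 1):  # one +-1 step, probability 1/2 each
            new[y] = new.get(y, Fraction(0)) + p / 2
    dist = new
# compare with (5.9): P_0(X_{2t} = 2j) = (2t)! / ((t+j)!(t-j)!) * 2^{-2t}
for j in range(-t, t + 1):
    assert dist[2 * j] == Fraction(comb(2 * t, t + j), 4 ** t)
```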

5.1.2 Weighted linear graphs

Consider the $n$-vertex linear graph $0 - 1 - 2 - \cdots - (n-1)$ with arbitrary edge-weights $(w_1, \ldots, w_{n-1})$, where $w_i > 0$ is the weight on edge $(i-1,i)$. Set $w_0 = w_n = 0$ to make some later formulas cleaner. The corresponding discrete-time random walk has transition probabilities

\[
p_{i,i+1} = \frac{w_{i+1}}{w_i + w_{i+1}}, \qquad p_{i,i-1} = \frac{w_i}{w_i + w_{i+1}}, \qquad 0 \le i \le n-1,
\]

and stationary distribution

\[
\pi_i = \frac{w_i + w_{i+1}}{w}, \qquad 0 \le i \le n-1,
\]

where $w = 2\sum_i w_i$. In probabilistic terminology, this is a birth-and-death process, meaning that a transition cannot alter the state by more than $1$. It is elementary that such processes are automatically reversible (xxx spells out the more general result for trees), so as discussed in Chapter 3 yyy the set-up above with weighted graphs gives the general discrete-time birth-and-death process with $p_{ii} \equiv 0$. But note that the continuization does not give the general continuous-time birth-and-death process, which has $2(n-1)$ parameters $(q_{i,i-1}, q_{i,i+1})$ instead of just $n-1$ parameters $(w_i)$. The formulas below could all be extended to this general case (the analog of Proposition 5.3 can be found in undergraduate textbooks, e.g., Karlin and Taylor [208] Chapter 4), but our focus is on the simplifications which occur in the “weighted graphs” case.

Proposition 5.3

(a) For $a < b < c$,

\[
P_b(T_c < T_a) = \frac{\sum_{i=a+1}^{b} w_i^{-1}}{\sum_{i=a+1}^{c} w_i^{-1}}.
\]

(b) For $b < c$,

\[
E_b T_c = c - b + 2 \sum_{j=b+1}^{c} \sum_{i=1}^{j-1} w_i w_j^{-1}.
\]

(c) For $b < c$,

\[
E_b T_c + E_c T_b = w \sum_{i=b+1}^{c} w_i^{-1}.
\]

Note that we can obtain an expression for $E_c T_b$, $b < c$, by reflecting the weighted graph about its center.
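The three formulas are easy to confirm on a small example before reading the proof. The Python sketch below (exact rational arithmetic; the weights and variable names are ours, purely illustrative) checks that the formula in (a) satisfies the one-step equations, that first-step analysis reproduces the per-edge hitting times which telescope to (b), and that the commute-time identity (c) follows via (5.12):

```python
from fractions import Fraction

# weights w_1, ..., w_{n-1} on edges (with w_0 = w_n = 0 as in the text)
w = [Fraction(x) for x in (0, 3, 1, 4, 2, 0)]
n = len(w) - 1          # states 0, 1, ..., n-1
wtot = 2 * sum(w)       # the quantity w = 2 * sum_i w_i

def p(i, j):
    """Transition probabilities of the weighted walk."""
    if j == i + 1:
        return w[i + 1] / (w[i] + w[i + 1])
    if j == i - 1:
        return w[i] / (w[i] + w[i + 1])
    return Fraction(0)

# (a): the stated g(i) = P_i(T_c < T_a) solves the one-step equations
a, c = 0, n - 1
S = lambda i: sum(Fraction(1, 1) / w[k] for k in range(a + 1, i + 1))
g = lambda i: S(i) / S(c)
for i in range(a + 1, c):
    assert g(i) == p(i, i + 1) * g(i + 1) + p(i, i - 1) * g(i - 1)
assert g(a) == 0 and g(c) == 1

# (b): first-step analysis gives e_j = E_{j-1} T_j via
#   e_j = (1 + p(j-1, j-2) * e_{j-1}) / p(j-1, j),   e_1 = 1,
# which matches e_j = 1 + 2 (w_1 + ... + w_{j-1}) / w_j
e = {1: Fraction(1)}
for j in range(2, n):
    e[j] = (1 + p(j - 1, j - 2) * e[j - 1]) / p(j - 1, j)
for j in range(1, n):
    assert e[j] == 1 + 2 * sum(w[1:j]) / w[j]

# (c): downward times d_j = E_j T_{j-1} satisfy (5.12), and the
# commute time telescopes along the path from b = 0 to c = n-1
d = {n - 1: Fraction(1)}
for j in range(n - 2, 0, -1):
    d[j] = (1 + p(j, j + 1) * d[j + 1]) / p(j, j - 1)
for j in range(1, n):
    assert e[j] + d[j] == wtot / w[j]
EbTc = sum(e[j] for j in range(1, n))
EcTb = sum(d[j] for j in range(1, n))
assert EbTc + EcTb == wtot * sum(Fraction(1, 1) / w[i] for i in range(1, n))
```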

Proof. These are extensions of (5.5), (5.7), and Lemma 5.2, and recycle some of the previous arguments. Writing $h(j) = \sum_{i=1}^{j} w_i^{-1}$, we have that $(h(X_t))$ is a martingale, so

\[
h(b) = E_b\, h(X(T_a \wedge T_c)) = p\, h(c) + (1-p)\, h(a)
\]

for $p \equiv P_b(T_c < T_a)$. Solving this equation gives $p = \frac{h(b) - h(a)}{h(c) - h(a)}$, which is (a).

The mean hitting time formula (b) has four different proofs! Two that we will not give are as described below Lemma 5.2: Set up and solve a recurrence equation, or use a well-chosen martingale. The slick argument is to use the essential edge lemma (Lemma 5.1) to show

\[
E_{j-1} T_j = 1 + \frac{2 \sum_{i=1}^{j-1} w_i}{w_j}.
\]

Then

\[
E_b T_c = \sum_{j=b+1}^{c} E_{j-1} T_j,
\]

establishing (b). Let us also write out the non-slick argument, using mean occupation times. By considering the mean time spent at $i$,

\[
E_b T_c = \sum_{i=0}^{b-1} P_b(T_i < T_c)\, m(i,i,c) + \sum_{i=b}^{c-1} m(i,i,c), \tag{5.11}
\]

where $m(i,i,c)$ is the expectation, starting at $i$, of the number of visits to $i$ before $T_c$. But

\begin{align*}
m(i,i,c) &= \frac{1}{P_i(T_c < T^+_i)}\\
&= \frac{1}{p_{i,i+1}\, P_{i+1}(T_c < T_i)}\\
&= (w_i + w_{i+1}) \sum_{j=i+1}^{c} w_j^{-1} \quad \mbox{using (a).}
\end{align*}

Substituting this and (a) into (5.11) leads to the formula stated in (b).

Finally, (c) can be deduced from (b), but it is more elegant to use the essential edge lemma to get

\[
E_{i-1} T_i + E_i T_{i-1} = w / w_i \tag{5.12}
\]

and then use

\[
E_b T_c + E_c T_b = \sum_{i=b+1}^{c} (E_{i-1} T_i + E_i T_{i-1}). \qquad \square
\]

We now start some little calculations relating to the parameters discussed in Chapter 4. Plainly, from Proposition 5.3

\[
\tau^* = w \sum_{i=1}^{n-1} w_i^{-1}. \tag{5.13}
\]

Next, consider calculating $E_\pi T_b$. We could use Proposition 5.3(b), but instead let us apply Theorem yyy of Chapter 3, giving $E_\pi T_b$ in terms of unit flows from $b$ to $\pi$. In a linear graph there is only one such flow, which for $i \ge b$ has $f_{i,i+1} = \pi[i+1,n-1] = \sum_{j=i+1}^{n-1} \pi_j$, and for $i \le b-1$ has $f_{i,i+1} = -\pi[0,i]$, and so the Theorem implies

\[
E_\pi T_b = w \sum_{i=b+1}^{n-1} \frac{\pi^2[i,n-1]}{w_i} + w \sum_{i=1}^{b} \frac{\pi^2[0,i-1]}{w_i}. \tag{5.14}
\]

There are several ways to use the preceding results to compute the average hitting time parameter $\tau_0$. Perhaps the most elegant is

\begin{align}
\tau_0 &= \sum_i \sum_{j>i} \pi_i \pi_j (E_i T_j + E_j T_i) \tag{5.15}\\
&= \sum_{k=1}^{n-1} \pi[0,k-1]\,\pi[k,n-1]\,(E_{k-1} T_k + E_k T_{k-1}) \notag\\
&= \sum_{k=1}^{n-1} \pi[0,k-1]\,\pi[k,n-1]\, w/w_k \quad \mbox{by (5.12)} \notag\\
&= w^{-1} \sum_{k=1}^{n-1} w_k^{-1} \Bigl( w_k + 2\sum_{j=1}^{k-1} w_j \Bigr) \Bigl( w_k + 2\sum_{j=k+1}^{n-1} w_j \Bigr). \notag
\end{align}
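The algebra in (5.15) can be checked in exact arithmetic: starting from the commute times given by Proposition 5.3(c), the double sum over pairs of states must equal the closed form on the last line. A Python sketch (weights chosen arbitrarily for illustration):

```python
from fractions import Fraction

w = [Fraction(x) for x in (0, 2, 5, 1, 3, 0)]  # w_0 = w_n = 0
n = len(w) - 1
wtot = 2 * sum(w)
pi = [(w[i] + w[i + 1]) / wtot for i in range(n)]

def commute(i, j):
    """E_i T_j + E_j T_i = w * sum_{k=i+1}^{j} 1/w_k, Proposition 5.3(c)."""
    return wtot * sum(Fraction(1, 1) / w[k] for k in range(i + 1, j + 1))

# first line of (5.15)
tau0 = sum(pi[i] * pi[j] * commute(i, j)
           for i in range(n) for j in range(i + 1, n))

# last line of (5.15)
closed = sum((w[k] + 2 * sum(w[1:k])) * (w[k] + 2 * sum(w[k + 1:n]))
             / (wtot * w[k]) for k in range(1, n))

assert tau0 == closed
```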

There are sophisticated methods (see Notes) of studying $\tau_1$, but let us just point out that Proposition 5.23 later (proved in the more general context of trees) holds in the present setting, giving

\[
\frac{1}{K_1} \min_x \max(E_0 T_x, E_{n-1} T_x) \le \tau_1 \le K_2 \min_x \max(E_0 T_x, E_{n-1} T_x). \tag{5.16}
\]

We do not know an explicit formula for $\tau_2$, but we can get an upper bound easily from the “distinguished paths” result Chapter 4 yyy. For $x < y$ the path $\gamma_{xy}$ has $r(\gamma_{xy}) = \sum_{u=x+1}^{y} 1/w_u$, and hence the bound is

\[
\tau_2 \le \frac{1}{w} \max_j \sum_{x=0}^{j-1} \sum_{y=j}^{n-1} \sum_{u=x+1}^{y} \frac{(w_x + w_{x+1})(w_y + w_{y+1})}{w_u}. \tag{5.17}
\]

jjj This uses the Diaconis–Stroock version. The Sinclair version is

\[
\tau_2 \le \frac{1}{w} \max_j \frac{1}{w_j} \sum_{x=0}^{j-1} \sum_{y=j}^{n-1} (w_x + w_{x+1})(w_y + w_{y+1})(y - x).
\]

xxx literature on $\tau_2$ (van Doorn, etc.)

jjj Also relevant is work of N. Kahale (and others) on how optimal choice of weights in use of Cauchy–Schwarz inequality for Diaconis–Stroock–Sinclair leads to equality in case of birth-and-death chains.

jjj See also Diaconis and Saloff-Coste Metropolis paper, which mentions work of Diaconis students on Metropolizing birth-and-death chains.

xxx examples of particular w.  jjj might just bring up as needed?

xxx contraction principle and lower bounds on $\tau_2$ (relating to current Section 6 of Chapter 4)

By Chapter 4 Lemma yyy,

\[
\tau_c = \max_{1 \le i \le n-1} \frac{w\,\pi[0,i-1]\,\pi[i,n-1]}{w_i}. \tag{5.18}
\]

5.1.3 Useful examples of one-dimensional chains

Example 5.4

The two-state chain.

This is the birth-and-death chain on $\{0,1\}$ with $p_{01} = 1 - p_{00} = p$ and $p_{10} = 1 - p_{11} = q$, where $0 < p < 1$ and $0 < q < 1$ are arbitrarily specified. Since $p_{00}$ and $p_{11}$ are positive, this does not quite fit into the framework of Section 5.1.2, but everything is nonetheless easy to calculate. The stationary distribution is given by

\[
\pi_0 = q/(p+q), \qquad \pi_1 = p/(p+q).
\]

In discrete time, the eigenvalues are $\lambda_1 = 1$ and $\lambda_2 = 1 - p - q$, and in the notation of Chapter 3, Section yyy for the spectral representation, the matrix $S$ has $s_{11} = 1-p$, $s_{22} = 1-q$, and $s_{12} = s_{21} = (pq)^{1/2}$, with normalized right eigenvectors

\[
u_1 = [(q/(p+q))^{1/2},\ (p/(p+q))^{1/2}]^T, \qquad u_2 = [(p/(p+q))^{1/2},\ -(q/(p+q))^{1/2}]^T.
\]

The transition probabilities are given by

\begin{align*}
P_0(X_t = 1) &= 1 - P_0(X_t = 0) = \frac{p}{p+q}\,[1 - (1-p-q)^t],\\
P_1(X_t = 0) &= 1 - P_1(X_t = 1) = \frac{q}{p+q}\,[1 - (1-p-q)^t]
\end{align*}

in discrete time and by

\begin{align*}
P_0(X_t = 1) &= 1 - P_0(X_t = 0) = \frac{p}{p+q}\,[1 - e^{-(p+q)t}],\\
P_1(X_t = 0) &= 1 - P_1(X_t = 1) = \frac{q}{p+q}\,[1 - e^{-(p+q)t}]
\end{align*}

in continuous time. It is routine to calculate $E_0 T_1 = 1/p$, $E_1 T_0 = 1/q$, and

\[
\bar{d}(t) = e^{-(p+q)t}, \qquad d(t) = \max(p/(p+q),\, q/(p+q))\, e^{-(p+q)t},
\]

and then

\[
\max_{i,j} E_i T_j = \max(E_0 T_1, E_1 T_0) = \frac{1}{\min(p,q)}, \qquad \tau^* = E_0 T_1 + E_1 T_0 = \frac{p+q}{pq},
\]

and

\[
\tau_0 = \tau_1 = \tau_2 = \tau_c = 1/(p+q).
\]
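The discrete-time transition formulas are exact and easy to verify by iterating the chain in rational arithmetic; a Python sketch (parameter values and names are ours, purely illustrative):

```python
from fractions import Fraction

p, q = Fraction(1, 3), Fraction(1, 5)

def step(d0, d1):
    """One step of the two-state chain applied to a distribution (d0, d1)."""
    return (d0 * (1 - p) + d1 * q, d0 * p + d1 * (1 - q))

d = (Fraction(1), Fraction(0))  # start at state 0
for t in range(1, 8):
    d = step(*d)
    # compare with the displayed formula for P_0(X_t = 1)
    assert d[1] == p / (p + q) * (1 - (1 - p - q) ** t)

# stationarity of (pi_0, pi_1) = (q/(p+q), p/(p+q))
pi = (q / (p + q), p / (p + q))
assert step(*pi) == pi
```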
Example 5.5

Biased random walk with reflecting barriers.

We consider the chain on $\{0, 1, \ldots, n-1\}$ with reflecting barriers at $0$ and $n-1$ that at each unit of time moves distance $1$ rightward with probability $p$ and distance $1$ leftward with probability $q = 1-p$. Formally, the setting is that of Section 5.1.2 with

\[
w_i = \rho^{i-1}, \qquad w = \frac{2(1-\rho^{n-1})}{1-\rho} \to \frac{2}{1-\rho},
\]

where we assume $\rho \equiv p/q < 1$, and all asymptotics developed for this example are for fixed $\rho$ and large $n$. If $\rho \neq 1$, there is by symmetry no loss of generality in assuming $\rho < 1$, and the case $\rho = 1$ will be treated later in Example 5.8.

Specializing the results of Section 5.1.2 to the present example, one can easily derive the asymptotic results

\[
\max_{i,j} E_i T_j \sim \tau^* \sim E_\pi T_{n-1} \sim 2\rho^{-(n-2)}/(1-\rho)^2 \tag{5.19}
\]

and, by use of (5.15),

\[
\tau_0 \sim \frac{1+\rho}{1-\rho}\, n. \tag{5.20}
\]

For $\tau_c$, the maximizing $i$ in (5.18) equals $(1+o(1))\, n/2$, and this leads to

\[
\tau_c \to (1+\rho)/(1-\rho). \tag{5.21}
\]

The spectral representation can be obtained using the orthogonal polynomial techniques described in Karlin and Taylor [209] Chapter 10; see especially Section 5(b) there. The reader may verify that the eigenvalues of ${\bf P}$ in discrete time are $1$, $-1$, and, for $m = 1, \ldots, n-2$,

\[
\frac{2\rho^{1/2}}{1+\rho} \cos\theta_m, \quad \mbox{where } \theta_m \equiv \frac{m\pi}{n-1},
\]

with (unnormalized) right eigenvector

\[
\rho^{-i/2} \left[ 2\cos(i\theta_m) - (1-\rho)\, \frac{\sin((i+1)\theta_m)}{\sin\theta_m} \right], \quad i = 0, \ldots, n-1.
\]

In particular,

\[
\tau_2 = \left[ 1 - \frac{2\rho^{1/2}}{1+\rho} \cos\left( \frac{\pi}{n-1} \right) \right]^{-1} \to \frac{1+\rho}{(1-\rho^{1/2})^2}. \tag{5.22}
\]
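The stated eigenvalue and eigenvector pairs can be verified numerically: build ${\bf P}$ for the weights $w_i = \rho^{i-1}$ and check ${\bf P}v = \lambda v$ coordinate by coordinate. A Python sketch (parameter values are illustrative):

```python
import math

rho, n = 0.5, 8
# transition matrix of the biased reflecting walk (weights w_i = rho**(i-1)):
# interior states move up w.p. rho/(1+rho), down w.p. 1/(1+rho)
P = [[0.0] * n for _ in range(n)]
P[0][1] = 1.0
P[n - 1][n - 2] = 1.0
for i in range(1, n - 1):
    P[i][i + 1] = rho / (1 + rho)
    P[i][i - 1] = 1 / (1 + rho)

for m in range(1, n - 1):
    theta = m * math.pi / (n - 1)
    lam = 2 * math.sqrt(rho) / (1 + rho) * math.cos(theta)
    v = [rho ** (-i / 2) * (2 * math.cos(i * theta)
         - (1 - rho) * math.sin((i + 1) * theta) / math.sin(theta))
         for i in range(n)]
    Pv = [sum(P[i][j] * v[j] for j in range(n)) for i in range(n)]
    assert all(abs(Pv[i] - lam * v[i]) < 1e-9 for i in range(n))
```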

The random walk has drift $p - q = -(1-\rho)/(1+\rho) \equiv -\mu$. It is not hard to show for fixed $t > 0$ that the distances $\bar{d}_n(tn)$ and $d_n(tn)$ of Chapter 4 yyy converge to $1$ if $t < 1/\mu$ and to $0$ if $t > 1/\mu$.

jjj include details? In fact, the cutoff occurs at $\mu^{-1} n + c_\rho n^{1/2}$: cf. (e.g.) Example 4.46 in [113]. Continue same paragraph:

In particular,

\[
\tau_1 \sim \frac{1+\rho}{1-\rho}\, n. \tag{5.23}
\]
Example 5.6

The M/M/1 queue.

We consider the M/M/1/$(n-1)$ queue. Customers queue up at a facility to wait for a single server (hence the “1”) and are handled according to a “first come, first served” queuing discipline. The first “M” specifies that the arrival point process is Markovian, i.e., a Poisson process with intensity parameter $\lambda$ (say); likewise, the second “M” reflects our assumption that the service times are exponential with parameter $\mu$ (say). The parameter $n-1$ is the queue size limit; customers arriving when the queue is full are turned away.

We have described a continuous-time birth-and-death process with constant birth and death rates $\lambda$ and $\mu$, respectively. If $\lambda + \mu = 1$, this is nearly the continuized biased random walk of Example 5.5, the only difference being in the boundary behavior. In particular, one can check that the asymptotics in (5.19)–(5.23) remain unchanged, where $\rho \equiv \lambda/\mu$, called the traffic intensity, remains fixed and $n$ becomes large. For the M/M/1/$(n-1)$ queue, the stationary distribution is the conditional distribution of $G - 1$ given $G \le n$, where $G$ has the Geometric($1-\rho$) distribution. The eigenvalues are $1$ and, for $m = 1, \ldots, n-1$,

\[
\frac{2\rho^{1/2}}{1+\rho} \cos\theta_m, \quad \mbox{where now } \theta_m \equiv \frac{m\pi}{n},
\]

with (unnormalized) right eigenvector

\[
\frac{2\rho^{-i/2}}{1+\rho} \left[ \cos(i\theta_m) + (\rho^{1/2}\cos\theta_m - 1)\, \frac{\sin((i+1)\theta_m)}{\sin\theta_m} \right], \quad i = 0, \ldots, n-1.
\]
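The claim about the stationary distribution can be checked in exact arithmetic: detailed balance across each edge forces $\pi_i \propto \rho^i$, which is precisely the conditional law of $G-1$ given $G \le n$. A Python sketch (parameter values are illustrative):

```python
from fractions import Fraction

lam, mu, n = Fraction(1, 3), Fraction(1, 2), 6
rho = lam / mu  # traffic intensity

# stationary distribution of the M/M/1/(n-1) queue from detailed balance
pi = [rho ** i for i in range(n)]
Z = sum(pi)
pi = [x / Z for x in pi]
for i in range(n - 1):
    assert pi[i] * lam == pi[i + 1] * mu  # flow balance across edge (i, i+1)

# conditional law of G - 1 given G <= n, with G ~ Geometric(1 - rho)
geom = [(1 - rho) * rho ** (k - 1) for k in range(1, n + 1)]  # P(G = k)
total = sum(geom)
assert all(pi[k - 1] == geom[k - 1] / total for k in range(1, n + 1))
```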