11 Markov Chain Monte Carlo (January 8 2001)

11.6 Other theory

11.6.1 Sampling from log-concave densities

As mentioned in Chapter 9 section 5.1 (yyy version 9/1/99) there has been intense theoretical study of the problem of sampling uniformly from a convex set in $R^{d}$, in the $d\to\infty$ limit. This problem turns out to be essentially equivalent to the problem of sampling from a log-concave density $f$, that is, a density of the form $f(x)\propto\exp(-H(x))$ for convex $H$. The results are not easy to state; see Bubley et al [78] for discussion.
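To make the setting concrete, here is a minimal sketch of a one-dimensional random-walk Metropolis sampler targeting a log-concave density $f(x)\propto\exp(-H(x))$. This is only an illustration of the sampling problem, not the algorithms analyzed in the theoretical literature cited above; the function names and the Gaussian example ($H(x)=x^2/2$) are our own choices.

```python
import math
import random

def metropolis_logconcave(H, x0, steps, step_size=1.0, rng=random):
    """Random-walk Metropolis chain targeting f(x) proportional to exp(-H(x)).

    H is a convex potential, so f is log-concave; e.g. H(x) = x*x/2
    gives the standard Gaussian.  Returns the list of visited states.
    """
    x = x0
    chain = [x]
    for _ in range(steps):
        y = x + rng.uniform(-step_size, step_size)   # symmetric proposal
        # Accept with probability min(1, f(y)/f(x)) = min(1, exp(H(x)-H(y)))
        if rng.random() < math.exp(min(0.0, H(x) - H(y))):
            x = y
        chain.append(x)
    return chain

# Toy example: sample from the standard Gaussian, H(x) = x^2 / 2.
random.seed(0)
samples = metropolis_logconcave(lambda x: x * x / 2, x0=0.0, steps=20000)
```

For this toy target the empirical mean and variance of the chain should settle near 0 and 1; in the high-dimensional regime the theory discusses, naive random-walk proposals like this one mix slowly, which is precisely why the $d\to\infty$ analysis is delicate.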

11.6.2 Combining MCMC with slow exact sampling

Here is a special setting in which one can make rigorous inferences from MCMC without rigorous bounds on mixing times. Suppose we have a guess $\hat{\tau}$ at the relaxation time of a Markov sampler from a target distribution $\pi$; suppose we have some separate method of sampling exactly from $\pi$, but where the cost of one exact sample is larger than the cost of $\hat{\tau}$ steps of the Markov sampler. In this setting it is natural to take $m$ exact samples and use them as initial states of $m$ multiple runs of the Markov sampler. It turns out (see [5] for a precise statement) that one can obtain confidence intervals for a mean $\bar{g}$ which are always rigorously correct (without assumptions on $\tau_{2}$) and which, if $\hat{\tau}$ is indeed approximately $\tau_{2}$, will have optimal length, that is, the length which would be implied by this value of $\tau_{2}$.
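The scheme above can be sketched as follows. This is only a heuristic illustration, not the precise construction of [5]: we form a plain normal-approximation interval from the $m$ independent run averages, and the Gaussian toy target, the Metropolis kernel, and all function names are our own assumptions. The key point survives in the sketch: because each run starts from an exact draw from $\pi$, the $m$ run averages are i.i.d., so the interval is valid whatever the chain's mixing time.

```python
import math
import random

def metropolis_step(H, x, step_size, rng):
    """One random-walk Metropolis step targeting f(x) proportional to exp(-H(x))."""
    y = x + rng.uniform(-step_size, step_size)
    if rng.random() < math.exp(min(0.0, H(x) - H(y))):
        return y
    return x

def multistart_ci(exact_sampler, H, g, m, run_length, step_size=1.0,
                  z=1.96, rng=random):
    """Draw m exact samples from pi, run the Markov sampler from each,
    and form a normal-approximation confidence interval for E_pi[g]
    from the m independent run averages."""
    run_means = []
    for _ in range(m):
        x = exact_sampler(rng)            # one costly exact draw from pi
        total = 0.0
        for _ in range(run_length):       # many cheap Markov-sampler steps
            x = metropolis_step(H, x, step_size, rng)
            total += g(x)
        run_means.append(total / run_length)
    mean = sum(run_means) / m
    var = sum((r - mean) ** 2 for r in run_means) / (m - 1)
    half = z * math.sqrt(var / m)
    return mean - half, mean + half

# Toy example: pi = standard Gaussian, where exact sampling happens to be easy.
random.seed(1)
lo, hi = multistart_ci(exact_sampler=lambda rng: rng.gauss(0.0, 1.0),
                       H=lambda x: x * x / 2, g=lambda x: x,
                       m=20, run_length=500)
```

If the guess $\hat{\tau}$ is accurate, `run_length` of order a few multiples of $\hat{\tau}$ makes each run average nearly as informative as an independent batch, which is the sense in which the interval length becomes optimal.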