Positivity of hit-and-run and related algorithms

We prove positivity of the Markov operators corresponding to the hit-and-run algorithm, the random scan Gibbs sampler, the slice sampler and the Metropolis algorithm with positive proposal. In all of these cases the positivity is independent of the state space and the stationary distribution. In particular, the results show that it is not necessary to consider the lazy versions of these Markov chains. The proof relies on a well known lemma which relates the positivity of a product M T M^*, for suitable operators M and T, to the positivity of T. The task then reduces to finding such a representation of the Markov operator with a positive operator T.


Introduction
In many applications, for example volume computation [1,6,9] or integration of functions [7,11,15,16], it is essential that one can sample approximately from a distribution on a convex body. The dimension d might be very large. One approach that is feasible for a general class of problems is to run a Markov chain that has the desired distribution as its limit distribution. In the following we explain why positivity of the Markov operator is helpful for proving efficiency results for such sampling procedures.
We assume that we have a Markov chain on a set K ⊂ R^d which is reversible with respect to (w.r.t.) the distribution π. Let L^2(π) be the space of all (w.r.t. π) square integrable functions f : K → R and let P : L^2(π) → L^2(π) be the corresponding Markov operator. We assume that P is ergodic, which means that P f = f implies that f is constant. Then let gap(P) = 1 − β be the absolute spectral gap, where β denotes the largest absolute value of the elements of the spectrum of P other than 1. In formulas,
\[
\beta = \sup\{\, |\alpha| : \alpha \in \operatorname{spec}(P) \setminus \{1\} \,\},
\]
where spec(P) denotes the spectrum of P. For example, a lower bound on gap(P) implies an upper bound on the total variation distance [6] and on the mean square error of Markov chain Monte Carlo algorithms for the approximation of expectations with respect to π, see e.g. [16].
Maybe the most successful technique to bound gap(P) is the conductance technique [3,6]. Unfortunately, bounds on the conductance only yield bounds on the second largest element of the spectrum of the Markov operator; this is known as Cheeger's inequality [3]. To handle the total variation distance and the absolute spectral gap it is necessary to also consider the smallest element of the spectrum, which describes some kind of periodicity of the Markov chain. Usually, this problem is avoided by considering the lazy version of a Markov chain: in each step, the Markov chain remains at the current state with probability 1/2. Such a lazy version induces a Markov operator with non-negative spectrum, so that the smallest element of the spectrum does not matter. This strategy has almost no influence on the computational cost, since, compared to the overall cost of one step of the chain, one additional random number is mostly negligible. However, it is desirable to avoid any slowdown whenever possible.
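To make the effect of the lazy modification explicit (a standard observation, recorded here only for completeness): the lazy chain corresponds to the Markov operator (I + P)/2, and since spec(P) ⊆ [−1, 1] for a self-adjoint Markov operator, the spectral mapping theorem gives
\[
\operatorname{spec}\Bigl(\tfrac{1}{2}(I + P)\Bigr)
= \Bigl\{\tfrac{1+\alpha}{2} : \alpha \in \operatorname{spec}(P)\Bigr\}
\subseteq [0,1].
\]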
In particular, the best known bounds on the total variation distance of the hit-and-run algorithm, see [5,7,8,10], rely on the conductance and on Corollary 1.5 resp. Corollary 1.6 of [6]. These corollaries give upper bounds on the total variation distance in terms of the conductance resp. s-conductance, but it has to be assumed that the corresponding Markov operator is positive, cf. Section 1.B of [6]. (More precisely, the assumption that the smallest element of the spectrum is smaller in absolute value than the second largest one is sufficient.) Thus, there is a small gap in the proofs of [7] and [8], which might be easily fixed by considering the lazy version of the hit-and-run algorithm. We prove, among other things, that hit-and-run is positive. Thereby, we close this small gap and show in addition that the results of [5] and [10] also hold for the non-lazy hit-and-run algorithm as originally proposed in [17].
The technique that we will use to prove that the spectrum of a Markov operator is non-negative is based on a simple and well known lemma from functional analysis. It was already successfully applied in a discrete setting to prove positivity (and comparison results) for the Swendsen-Wang process from statistical physics, see [18,19]. Here, we show that the hit-and-run algorithm, the random scan Gibbs sampler, the slice sampler and the Metropolis algorithm with positive proposal are positive. In particular, this implies that the independent Metropolis algorithm is positive. The result is new for the hit-and-run algorithm and the Metropolis algorithm with positive proposal, whereas for the random scan Gibbs sampler and the slice sampler it is known [4,13].

The procedure
We consider a time-homogeneous Markov chain (X_i)_{i∈N}, where the X_i are random variables on a common probability space (Ω, F, P) that map into R^d, equipped with the Borel σ-algebra B, and satisfy the Markov property. Namely, for all n ≥ 1 and all B-measurable sets A_0, A_1, ..., A_n with the property P(X_{n−1} ∈ A_{n−1}, ..., X_0 ∈ A_0) > 0 it holds that
\[
\mathbb{P}(X_n \in A_n \mid X_{n-1} \in A_{n-1}, \ldots, X_0 \in A_0)
= \mathbb{P}(X_n \in A_n \mid X_{n-1} \in A_{n-1}).
\]
We assume that the Markov chain has a unique stationary distribution π and that it is reversible with respect to this measure. For a more comprehensive introduction to the theory of Markov chains we refer to [12,14].
To every Markov chain (X_i)_{i∈N} corresponds a Markov kernel P : R^d × B → [0, 1], i.e. a mapping such that P(x, ·) is a probability measure on B for each x ∈ R^d and P(·, A) is B-measurable for each A ∈ B. This Markov kernel is given by
\[
P(x, A) = \mathbb{P}(X_{i+1} \in A \mid X_i = x)
\]
and describes the probability that the Markov chain reaches the set A in one step from x. Using this Markov kernel we define the Markov operator P (for notational convenience we use the same letter as for the Markov kernel) by
\[
Pf(x) = \int f(y)\, P(x, \mathrm{d}y).
\]
By reversibility of the Markov chain we know that P is a self-adjoint operator on L^2(π), where the inner product is ⟨f, g⟩_π = ∫_K f(x) g(x) dπ(x). A self-adjoint operator P is called positive if
\[
\langle Pf, f\rangle_\pi \ge 0 \quad \text{for all } f \in L^2(\pi).
\]
It is well known that positive operators have only non-negative spectrum, for further details see for example [2].
Our aim is to show that several Markov chains that are used to sample from distributions on R^d induce positive Markov operators. In this case, we say that the Markov chain is positive. We will basically utilize the following lemma.

Lemma 2.1. Let H_1 and H_2 be Hilbert spaces, let M : H_2 → H_1 be a bounded linear operator with adjoint M^* : H_1 → H_2 and let T : H_2 → H_2 be a bounded, self-adjoint operator. If T is positive, then M T M^* : H_1 → H_1 is a self-adjoint and positive operator.

Thus, if we can write a Markov operator P in the form P = M T M^* and show, additionally, that T is a positive operator, we obtain by the lemma above that P is also positive. Hence, the proof of positivity of the Markov chains under consideration is done by constructing a suitable second Hilbert space such that the corresponding Markov operator can be written in the above mentioned form.
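For completeness, the computation behind Lemma 2.1 is a single line (a standard argument): for every f ∈ H_1,
\[
\langle M T M^{*} f, f \rangle_{H_1}
= \langle T M^{*} f, M^{*} f \rangle_{H_2} \ge 0,
\]
where the equality uses the definition of the adjoint and the inequality is the positivity of T applied to M^{*} f; self-adjointness follows from (M T M^{*})^{*} = M T^{*} M^{*} = M T M^{*}.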

Applications
Throughout this section we consider Markov chains on a subset K of R^d with nonempty interior. Additionally, we denote by B^d the d-dimensional unit ball and by S^{d−1} its boundary. Let ρ : K → [0, ∞) be a (not necessarily normalized) density, i.e. a nonnegative Lebesgue-integrable function. We define the measure π with density ρ by
\[
\pi(A) = \frac{\int_A \rho(x)\,\mathrm{d}x}{\int_K \rho(x)\,\mathrm{d}x}
\]
for all measurable sets A ⊂ K. For example, if ρ(x) = 1_K(x) then π is simply the uniform distribution on K. In what follows we present some Markov chains that can be used to sample approximately from π, that is, π is their stationary distribution. We will see that each of them is positive, independently of the choice of the density ρ.
We will define only the Markov operators of the corresponding Markov chains, since the Markov kernels can be recovered by applying the operators to indicator functions, i.e. P(x, A) = P 1_A(x).

Hit-and-run
The hit-and-run algorithm consists of two steps: starting from x ∈ K, choose a random direction θ ∈ S^{d−1} and then choose the next state of the Markov chain with respect to the density ρ restricted to the chord determined by x and θ. For x ∈ K and θ ∈ S^{d−1} we denote by L(x, θ) the chord in K through x and x + θ, i.e.
\[
L(x,\theta) = \{\, x + s\,\theta \in K : s \in \mathbb{R} \,\}.
\]
Let ν denote the uniform distribution on S^{d−1}. The Markov operator of the hit-and-run algorithm is given by
\[
Hf(x) = \int_{S^{d-1}} \frac{\int_{L(x,\theta)} f(y)\,\rho(y)\,\mathrm{d}y}{\int_{L(x,\theta)} \rho(y)\,\mathrm{d}y}\; \mathrm{d}\nu(\theta)
\]
for all f ∈ L^2(π). To rewrite H in the desired form let µ = π ⊗ ν be the product measure of π and the uniform distribution ν on S^{d−1}, and let L^2(µ) be the Hilbert space of functions g : K × S^{d−1} → R with inner product
\[
\langle g_1, g_2\rangle_\mu
= \int_K \int_{S^{d-1}} g_1(x,\theta)\, g_2(x,\theta)\, \mathrm{d}\nu(\theta)\, \mathrm{d}\pi(x)
\]
for g_1, g_2 ∈ L^2(µ). We define the operators M : L^2(µ) → L^2(π) and T : L^2(µ) → L^2(µ) by
\[
Mg(x) = \int_{S^{d-1}} g(x,\theta)\, \mathrm{d}\nu(\theta)
\qquad\text{and}\qquad
Tg(x,\theta) = \frac{1}{\int_{L(x,\theta)} \rho(y)\,\mathrm{d}y} \int_{L(x,\theta)} g(y,\theta)\, \rho(y)\, \mathrm{d}y.
\]
Recall that the adjoint operator of M is the unique operator M^* that satisfies ⟨f, M g⟩_π = ⟨M^* f, g⟩_µ for all f ∈ L^2(π), g ∈ L^2(µ), see [2, Thm. 3.9-2]. Since
\[
\langle f, Mg\rangle_\pi = \int_K \int_{S^{d-1}} f(x)\, g(x,\theta)\, \mathrm{d}\nu(\theta)\, \mathrm{d}\pi(x),
\]
we obtain that, for all θ ∈ S^{d−1} and x ∈ K,
\[
M^* f(x,\theta) = f(x).
\]
This implies H = M T M^* and thus, that M and T are the desired "building blocks" for Lemma 2.1. First of all, note that by Fubini's theorem the operator T is self-adjoint on L^2(µ). It remains to show that T is positive. We know that L(x, θ) = L(y, θ) for all y ∈ L(x, θ), so that Tg(y, θ) = Tg(x, θ) for all y ∈ L(x, θ). It follows that
\[
T(Tg)(x,\theta)
= \frac{1}{\int_{L(x,\theta)} \rho(y)\,\mathrm{d}y} \int_{L(x,\theta)} Tg(y,\theta)\, \rho(y)\, \mathrm{d}y
= Tg(x,\theta).
\]
Thus, T is a self-adjoint and idempotent operator on L^2(µ), which implies that T is a projection and, in particular, that it is positive, see e.g. [2]. Finally, Lemma 2.1 shows that H is positive.
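As an illustration (not part of the proof), here is a minimal sketch of one hit-and-run step in Python for the uniform density ρ = 1_K, assuming a hypothetical membership oracle in_K for a bounded convex body K (the Euclidean unit ball is used as a stand-in); the chord endpoints are located by bisection:

```python
import numpy as np

def in_K(x):
    """Hypothetical membership oracle; here K is the Euclidean unit ball."""
    return np.dot(x, x) <= 1.0

def chord_end(x, theta, hi=1e6, iters=60):
    """Largest s >= 0 with x + s*theta in K, by bisection (K convex and bounded, x in K)."""
    lo = 0.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if in_K(x + mid * theta):
            lo = mid
        else:
            hi = mid
    return lo

def hit_and_run_step(x, rng):
    """One hit-and-run step for rho = 1_K: uniform direction on S^{d-1},
    then a uniform point on the chord L(x, theta)."""
    theta = rng.normal(size=x.shape)
    theta /= np.linalg.norm(theta)
    s = rng.uniform(-chord_end(x, -theta), chord_end(x, theta))
    return x + s * theta

rng = np.random.default_rng(0)
x = np.zeros(3)
for _ in range(1000):
    x = hit_and_run_step(x, rng)
```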

Gibbs sampler
The Gibbs sampler, or specifically the random scan Gibbs sampler, is conceptually very similar to the hit-and-run algorithm. In each step, we choose a direction and sample with respect to ρ restricted to the chord in this direction. But, in contrast to the hit-and-run, we choose the direction from the d possible directions of the coordinate axes.
The corresponding Markov operator is given by
\[
Gf(x) = \frac{1}{d} \sum_{j=1}^{d} \frac{\int_{L(x,e_j)} f(y)\,\rho(y)\,\mathrm{d}y}{\int_{L(x,e_j)} \rho(y)\,\mathrm{d}y}
\]
for all f ∈ L^2(π), where e_j denotes the j-th canonical unit vector and [d] = {1, ..., d}. Let µ be the product measure of π and the uniform distribution on [d], and define the operators M : L^2(µ) → L^2(π) and T : L^2(µ) → L^2(µ) by
\[
Mg(x) = \frac{1}{d} \sum_{j=1}^{d} g(x,j)
\qquad\text{and}\qquad
Tg(x,j) = \frac{\int_{L(x,e_j)} g(y,j)\,\rho(y)\,\mathrm{d}y}{\int_{L(x,e_j)} \rho(y)\,\mathrm{d}y}.
\]
By the same calculations as in Subsection 3.1 we obtain, for all f ∈ L^2(π), x ∈ K and j ∈ [d], that M^* f(x, j) = f(x) and that G = M T M^*. It is easily seen that T is self-adjoint and idempotent. Hence, T is a projection and thus positive, which proves the assertion by Lemma 2.1.
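Again only as an illustration, a minimal sketch of one random scan Gibbs step for the uniform density on the unit ball (a hypothetical choice of K for which the coordinate chords are available in closed form):

```python
import numpy as np

def gibbs_step_unit_ball(x, rng):
    """One random scan Gibbs step for rho = 1_K with K the unit ball:
    pick a coordinate axis uniformly and resample that coordinate
    uniformly on the chord L(x, e_j)."""
    j = rng.integers(len(x))
    r = np.sqrt(1.0 - (np.dot(x, x) - x[j] ** 2))  # half-length of the chord
    y = x.copy()
    y[j] = rng.uniform(-r, r)
    return y

rng = np.random.default_rng(0)
x = np.zeros(3)
for _ in range(1000):
    x = gibbs_step_unit_ball(x, rng)
```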

Slice sampler
For any t > 0 assume that R_t is the transition kernel of a Markov chain on the level set K(t) of ρ, i.e. on
\[
K(t) = \{\, x \in K : \rho(x) \ge t \,\}.
\]
Note that vol_d(K(t)) < ∞ for each t > 0 by the integrability of ρ. Further let R_t be reversible with respect to U_t, the uniform distribution on K(t), i.e.
\[
U_t(A) = \frac{\operatorname{vol}_d\bigl(A \cap K(t)\bigr)}{\operatorname{vol}_d\bigl(K(t)\bigr)},
\]
where vol_d denotes the d-dimensional Lebesgue measure. Note that if K = K(0) is bounded, then also U_0 is well-defined and denotes the uniform distribution on K. The slice sampler, starting from a state x ∈ K, works as follows: first choose a level t uniformly distributed in (0, ρ(x)] and then sample the next state with respect to R_t(x, ·) in the level set K(t). If R_t(x, ·) = U_t(·) then the slice sampler is called the simple slice sampler [13].
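To illustrate the simple slice sampler (an illustrative sketch only), consider the hypothetical one-dimensional density ρ(x) = exp(−|x|) on K = R, for which the level sets K(t) are intervals that can be written down explicitly:

```python
import numpy as np

def simple_slice_step(x, rng):
    """One step of the simple slice sampler for rho(x) = exp(-|x|) on K = R.
    The level set is K(t) = {y : exp(-|y|) >= t} = [log t, -log t] for 0 < t <= 1."""
    t = np.exp(-abs(x)) * (1.0 - rng.random())   # level t, uniform in (0, rho(x)]
    half_width = -np.log(t)                      # K(t) = [-half_width, half_width]
    return rng.uniform(-half_width, half_width)  # next state uniform in K(t)

rng = np.random.default_rng(0)
x = 0.0
for _ in range(1000):
    x = simple_slice_step(x, rng)
```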
The corresponding Markov operator is defined by
\[
Rf(x) = \frac{1}{\rho(x)} \int_0^{\rho(x)} \int_{K(t)} f(y)\, R_t(x, \mathrm{d}y)\, \mathrm{d}t
\]
for all f ∈ L^2(π). For any t > 0 we assume that R_t is a positive operator on L^2(U_t), the space of all real functions on K(t) which are square integrable with respect to U_t, i.e. we assume that
\[
\langle R_t g, g\rangle_{U_t} \ge 0 \quad \text{for all } g \in L^2(U_t).
\]
To show that R is positive, let
\[
K_\rho = \{\, (x,t) \in K \times (0,\infty) : \rho(x) \ge t \,\} \subset \mathbb{R}^{d+1}
\]
and let µ be the uniform distribution on K_ρ; note that vol_{d+1}(K_ρ) = ∫_K ρ(x) dx < ∞ by the integrability of ρ. Let L^2(µ) be the Hilbert space of functions g : K_ρ → R with inner product
\[
\langle g_1, g_2\rangle_\mu = \frac{1}{\operatorname{vol}_{d+1}(K_\rho)} \int_{K_\rho} g_1(x,t)\, g_2(x,t)\, \mathrm{d}(x,t)
\]
for g_1, g_2 ∈ L^2(µ). We define the operators M : L^2(µ) → L^2(π) and T : L^2(µ) → L^2(µ) by
\[
Mg(x) = \frac{1}{\rho(x)} \int_0^{\rho(x)} g(x,t)\, \mathrm{d}t
\qquad\text{and}\qquad
Tg(x,t) = \int_{K(t)} g(y,t)\, R_t(x, \mathrm{d}y)
\]
for g ∈ L^2(µ). The adjoint operator M^* : L^2(π) → L^2(µ) is given by M^* f(x,t) = f(x). The operator T is self-adjoint, since R_t is reversible with respect to U_t. For the positivity define g_t(x) = g(x,t) for (x,t) ∈ K_ρ. We have
\[
\langle Tg, g\rangle_\mu = \int_0^\infty \langle R_t g_t, g_t\rangle_{U_t}\, \frac{\operatorname{vol}_d\bigl(K(t)\bigr)}{\operatorname{vol}_{d+1}(K_\rho)}\, \mathrm{d}t,
\]
which implies positivity of T by the positivity of R_t. By the fact that R = M T M^* and by Lemma 2.1 it follows that R is positive.

Metropolis algorithm
For simplicity we additionally assume that K ⊂ R^d is bounded; with some extra work one could avoid this assumption. Let B be a positive proposal kernel which is reversible with respect to U_0, the uniform distribution on K. Then the Markov operator of the Metropolis algorithm is given by
\[
P_B f(x) = \int_K f(y)\, \alpha(x,y)\, B(x, \mathrm{d}y)
+ f(x) \Bigl( 1 - \int_K \alpha(x,y)\, B(x, \mathrm{d}y) \Bigr),
\]
where α(x, y) = 1 ∧ ρ(y)/ρ(x) and f ∈ L^2(π). We interpret the Metropolis algorithm as a slice sampler. For t ≥ 0, x ∈ K(t) and measurable A ⊂ K define
\[
R_t(x, A) = B\bigl(x, A \cap K(t)\bigr) + \bigl(1 - B(x, K(t))\bigr)\, 1_A(x).
\]

Recall that
\[
Rf(x) = \frac{1}{\rho(x)} \int_0^{\rho(x)} \int_{K(t)} f(y)\, R_t(x, \mathrm{d}y)\, \mathrm{d}t
\]
is the Markov operator of the slice sampler and that U_t is the uniform distribution on K(t).
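Finally, purely as an illustration of the algorithm itself, a minimal sketch of one Metropolis step with the independent uniform proposal B(x, ·) = U_0 (which is reversible with respect to U_0 and induces a positive operator); K is again hypothetically taken to be the unit ball and ρ is an arbitrary unnormalized density on K:

```python
import numpy as np

def uniform_in_ball(d, rng):
    """Uniform point in the d-dimensional unit ball (hypothetical choice of K)."""
    z = rng.normal(size=d)
    z /= np.linalg.norm(z)
    return z * rng.uniform() ** (1.0 / d)

def metropolis_step(x, rho, rng):
    """One Metropolis step with proposal B(x, .) = U_0, accepted with
    probability alpha(x, y) = min(1, rho(y) / rho(x))."""
    y = uniform_in_ball(len(x), rng)
    alpha = min(1.0, rho(y) / rho(x))
    return y if rng.uniform() < alpha else x

rho = lambda x: np.exp(-np.dot(x, x))   # some unnormalized density on K
rng = np.random.default_rng(0)
x = np.zeros(3)
for _ in range(1000):
    x = metropolis_step(x, rho, rng)
```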