Optimal linear drift for the speed of convergence of a hypoelliptic diffusion

Among all generalized Ornstein-Uhlenbeck processes which sample the same invariant measure and for which the same amount of randomness (an $N$-dimensional Brownian motion) is injected in the system, we prove that the asymptotic rate of convergence is maximized by a non-reversible hypoelliptic one.


Introduction
For a potential V : R^N → R such that ∫ e^{-V} < ∞, consider µ the associated Gibbs law, namely the probability measure whose density with respect to the Lebesgue measure is proportional to e^{-V}. In order to compute expectations with respect to µ, Markov Chain Monte Carlo (MCMC) algorithms are widely used. Such an algorithm is based on an ergodic Markov process (X_t)_{t≥0} whose unique invariant law is µ, so that for T large enough X_T is approximately distributed according to µ. The efficiency of the algorithm is directly linked to the rate of convergence of X toward its equilibrium, which is why a fair amount of work has been devoted to accelerating this convergence (see [11] and references therein). In particular, since there are many possible Markov processes sampling the same equilibrium µ, the question arises of choosing the fastest, if any.
Along with those obtained from a Metropolis-Hastings procedure (see e.g. [15] and references therein), one of the most classical Gibbs samplers is the Fokker-Planck diffusion that solves the SDE

	dX_t = -∇V(X_t) dt + √2 dB_t	(1)

where B is a standard N-dimensional Brownian motion. Its generator is L = ∆ - ∇V·∇, where we recall that the generator of a Markov process X is formally defined by Lf(x) = lim_{t→0} t^{-1}(E[f(X_t) | X_0 = x] - f(x)). The Fokker-Planck diffusion is a reversible process in the sense that its generator is self-adjoint in L²(µ). This property is of theoretical interest but, from a practical point of view, reversible processes are usually not optimal with regard to their speed of convergence. A particular technique for improving the convergence of X to µ is to add a divergence-free (with respect to µ) drift b, namely to consider the SDE

	dX_t = (-∇V(X_t) + b(X_t)) dt + √2 dB_t	(2)

with b such that ∇·(b e^{-V}) = 0, where ∇· stands for the divergence operator. That way, the equilibrium is not affected, but the process is no longer reversible and the convergence is improved (cf. [8,9,1,10]). This can easily be seen if we consider for example convergence in L²(µ), where the spectral gap of the reversible dynamics is a lower bound for the speed of convergence of the non-reversible one (just by comparison of Dirichlet forms), see [9]. Another way to improve the convergence is to consider a kinetic process (X, Y) where X is the position and Y = dX/dt is the velocity, which acts as an instantaneous memory (see [6,15,5]). For instance the Langevin diffusion

	dX_t = Y_t dt,	dY_t = (-∇V(X_t) - Y_t) dt + √2 dB_t	(3)

admits e^{-H} as an invariant measure, where the Hamiltonian is H(x, y) = V(x) + ½|y|². In particular, the first marginal of this equilibrium is µ. The Langevin diffusion is non-reversible and moreover it is hypoelliptic. It has been observed in [15] that it may converge faster than the reversible Fokker-Planck diffusion in some applied problems. It has recently regained much interest under the name of Hamiltonian Monte Carlo methods [7].
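As a quick sanity check of this mechanism, the following sketch simulates (2) by an Euler-Maruyama scheme for the quadratic potential V(x) = |x|²/2 and a divergence-free drift b = J∇V with J antisymmetric (the matrix J, the seed and the step size are illustrative choices, not taken from the text): the empirical covariance should stay close to that of µ = N(0, I₂).

```python
import numpy as np

rng = np.random.default_rng(0)

# Quadratic potential V(x) = |x|^2/2, so mu = N(0, I_2).
# For antisymmetric J, b(x) = J grad V(x) satisfies div(b e^{-V}) = 0:
# Tr(J Hess V) = 0 and (J grad V) . grad V = 0, so mu stays invariant.
J = np.array([[0.0, 4.0], [-4.0, 0.0]])

def drift(x):
    grad_V = x                       # grad V(x) = x here
    return -grad_V + J @ grad_V      # drift of (2): -grad V + b

# Euler-Maruyama discretization of dX = drift(X) dt + sqrt(2) dB
dt, n_steps = 1e-3, 500_000
x = np.zeros(2)
samples = np.empty((n_steps, 2))
for k in range(n_steps):
    x = x + drift(x) * dt + np.sqrt(2.0 * dt) * rng.standard_normal(2)
    samples[k] = x

emp_cov = np.cov(samples[n_steps // 10:].T)  # discard a burn-in
print(np.round(emp_cov, 2))                  # close to the identity
```

The same loop with J = 0 recovers the reversible dynamics (1): the invariant covariance is unchanged, only the speed at which it is reached differs.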
For both dynamics (2) and (3) it is difficult, for a general potential V, to obtain sharp theoretical bounds on the rates of convergence (see [13] for considerations on this matter in the metastable case, namely the regime ε → 0 with the potential V_ε = (1/ε)V where V has several local minima). A particularly simple situation is the case where V is quadratic or, in other words, µ is a Gaussian measure. Of course MCMC algorithms are not really relevant in practice for sampling according to Gaussian measures, but then the exact rates of convergence for (2) and (3) are tractable (see [2,12] and below).
In this context, the purpose of the present work is to answer the problem raised in [10], namely: for a given Gaussian law µ, is it possible to find the optimal divergence-free linear drift one can add in (2) in order to obtain the largest rate of convergence? More generally, what is the largest rate of convergence one can get when sampling according to µ using a (possibly hypoelliptic) Markov diffusion with linear drift and constant diffusion coefficients?
Obviously this question is ill-posed as stated, since the invariant measure of (W_t)_{t≥0} = (X_{λt})_{t≥0} is still µ for any λ > 0, and W converges to equilibrium λ times faster than X. Following [6] we will thus work under the additional assumption that the total amount of randomness instantaneously injected in the system (that is, the trace of the diffusion matrix) is prescribed.
In the following, we first introduce the main notations and recall basic facts about generalized Ornstein-Uhlenbeck processes. In Section 2, we present our main results, giving a positive and definite answer to the problem raised in [10]. Section 3 is dedicated to the proofs of our main results, whereas Section 4 presents numerical illustrations of our results and some thoughts on the general case we wish to tackle in the future.

Notations
In this whole work, M_N(R) is the set of N × N real matrices, S_N^{>0}(R) (resp. S_N^{≥0}(R)) the set of positive definite (resp. semi-definite) symmetric ones, and A_N(R) is the set of antisymmetric ones. The spectrum of a matrix A is σ(A), its trace is Tr(A), its transpose is A^T, and vectors are considered as column matrices, so that the scalar product x·y is x^T y. Finally ℜ(λ) stands for the real part of λ ∈ C and diag(a_1, …, a_N) stands for the diagonal matrix with coefficients a_i.

Basic facts about Ornstein-Uhlenbeck processes
We recall here some facts whose proofs and details may be found for instance in [2]. A generalized Ornstein-Uhlenbeck process (OUP) is any diffusion with a linear drift and a constant diffusion matrix. In other words, in dimension N it is the solution of an SDE of the form

	dX_t = A X_t dt + Σ_j σ_j dB_t^j,

where A is a constant matrix, the σ_j's are constant vectors and the B^j's are 1-dimensional independent Brownian motions. A Markov process is an OUP if and only if its generator is of the form

	L_{A,D} f(x) = (Ax)·∇f(x) + ∇·(D∇f)(x),

where D = ½ Σ_j σ_j σ_j^T is a positive semi-definite matrix and ∇· stands for the divergence operator. Recall that a measure µ is said to be invariant for X (or equivalently for L_{A,D}) if Law(X_0) = µ implies Law(X_t) = µ for all t ≥ 0. For an OUP, an invariant measure is necessarily a (possibly degenerate) Gaussian distribution.
On the other hand the process is hypoelliptic if and only if Ker D does not contain any nontrivial subspace which is invariant under A^T, and in that case an invariant measure is necessarily unique and non-degenerate (it has a positive density with respect to the Lebesgue measure on R^N). In the case where the invariant measure exists and is unique, its density ψ_∞ is the unique solution of L*_{A,D} ψ = 0, where L*_{A,D} f = -∇·(Ax f) + ∇·(D∇f) is the dual in the Lebesgue sense of L_{A,D}.
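The invariant-subspace condition above is equivalent to the Kalman rank condition rank[D, AD, …, A^{N-1}D] = N, which is easy to test numerically. The following sketch (matrices chosen purely for illustration) checks it for a kinetic-type drift whose noise acts only on the velocities, and for a degenerate diffusion with no coupling.

```python
import numpy as np

def is_hypoelliptic(A, D):
    """Kalman rank condition: rank [D, AD, ..., A^{N-1}D] = N, equivalent to
    Ker D containing no nontrivial subspace invariant under A^T."""
    N = A.shape[0]
    blocks = [np.linalg.matrix_power(A, k) @ D for k in range(N)]
    return bool(np.linalg.matrix_rank(np.hstack(blocks)) == N)

I2, Z2 = np.eye(2), np.zeros((2, 2))

# Kinetic-type drift: noise only on the velocity block, spread by the coupling.
A_kin = np.block([[Z2, I2], [-I2, -I2]])
D_kin = np.block([[Z2, Z2], [Z2, I2]])
print(is_hypoelliptic(A_kin, D_kin))   # True

# Degenerate noise with no coupling: Ker D is invariant under A^T.
A_diag = -np.eye(2)
D_bad = np.diag([1.0, 0.0])
print(is_hypoelliptic(A_diag, D_bad))  # False
```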
We will focus mainly on generalized Ornstein-Uhlenbeck processes which have a non-degenerate Gaussian distribution as invariant probability measure. Let N ≥ 1, S ∈ S_N^{>0}(R) and let

	ψ_∞(x) = (2π)^{-N/2} (det S)^{1/2} e^{-½ x^T S x}

be the density of the (non-degenerate) Gaussian distribution with covariance matrix S^{-1}.
Starting from an initial distribution ψ_0, the law ψ_t and the density with respect to equilibrium ψ_t/ψ_∞ at time t of an OUP generated by L_{A,D} are (weak) solutions of the Fokker-Planck equation ∂_t ψ_t = L*_{A,D} ψ_t and of the corresponding evolution equation for the adjoint of L_{A,D} in L²(ψ_∞), respectively.

Main results
Let

	I(S) = {(A, D) ∈ M_N(R) × S_N^{≥0}(R) : ψ_∞ is invariant for L_{A,D} and Tr(D) ≤ N}

be the set of drift/diffusion matrices such that ψ_∞ is invariant for the corresponding OUP and with at most the same amount of randomness injected in the system as the reversible dynamics with generator L_{-S,I_N}. For a drift matrix A, denote ρ(A) = -max{ℜ(λ) : λ ∈ σ(A)}. As we will see later, if (A, D) ∈ I(S) with ρ(A) > 0 then ψ_t → ψ_∞ as t → ∞ for all ψ_0, which implies that ψ_∞ is the unique invariant measure of L_{A,D}, which is consequently necessarily hypoelliptic.
Our main result identifies the maximum of ρ(A) under the constraint (A, D) ∈ I(S).

Theorem 1. max{ρ(A) : (A, D) ∈ I(S)} = max σ(S).
For an OUP with drift matrix A the rate of convergence to equilibrium is ρ(A) (see [12] and below). Hence Theorem 1 states that it is possible to sample an OUP that converges at rate max σ(S) to ψ_∞ using the same amount of randomness as the classical reversible dynamics with generator L_{-S,I_N}, while the latter converges at rate min σ(S). This should be compared to the results of Hwang, Hwang-Ma and Sheu [8] or of Lelièvre, Nier and Pavliotis [10] (the first one being anterior, and the second one giving more explicit bounds for the convergence), which read

	min σ(S) ≤ Tr(S)/N ≤ max σ(S),

the middle term being the best rate achievable with an elliptic non-reversible dynamics with D = I_N, and the equalities hold only when S is a homogeneous dilation, in which case no non-reversible dynamics can yield any improvement of the rate of convergence to equilibrium. On the other hand, when the eigenvalues have different orders of magnitude (which means the problem is multi-scale; cf. [10, Fig. 4] where S has uniformly distributed coefficients in [0, 1]), the improvement is already clear from the reversible diffusion (with D = I_N) to the non-reversible (but still elliptic) ones, and it seems even more drastic when hypoelliptic dynamics are allowed. Obviously, the price to pay for an optimal asymptotic speed of convergence is an initial delay for small times. Moreover, due to possibly large coefficients in the drift, a theoretically optimal continuous-time diffusion may lead, after discretization, to a not-so-efficient algorithm in practice. We will leave aside this consideration, and concentrate only on the continuous-time problem. We will focus here on convergence in the entropy sense, which ensures for example convergence in total variation via Pinsker's inequality. However, the same line of reasoning also works for L² convergence or Φ-entropies (see [4]). More precisely, for a measure µ, denote by

	Ent_µ(h) = ∫ h ln h dµ - (∫ h dµ) ln(∫ h dµ)

the entropy of a positive function h with respect to µ.
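The three rates compared above are immediate to evaluate numerically. In the sketch below the matrix S is an illustrative multiscale example, and the value Tr(S)/N for the best elliptic non-reversible rate (with D = I_N) is the one from Hwang-Ma-Sheu [8] and Lelièvre-Nier-Pavliotis [10] recalled above.

```python
import numpy as np

def rho(A):
    """Spectral rate of convergence: rho(A) = -max Re(sigma(A))."""
    return -np.max(np.linalg.eigvals(A).real)

# Multiscale example: eigenvalues of S with different orders of magnitude.
S = np.diag([0.01, 0.1, 1.0, 10.0])
N = S.shape[0]

rate_reversible = rho(-S)           # reversible dynamics L_{-S, I_N}: min sigma(S)
rate_elliptic = np.trace(S) / N     # best elliptic non-reversible rate, D = I_N
rate_hypo = np.max(np.diag(S))      # max sigma(S), reached by Theorem 1

print(rate_reversible, rate_elliptic, rate_hypo)
```

On this example the hypoelliptic optimum is three orders of magnitude faster than the reversible rate.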
For the reversible elliptic OUP with generator L_{-S,I_N}, it is well known that for all h > 0,

	Ent_{ψ_∞}(e^{tL_{-S,I_N}} h) ≤ e^{-2 min σ(S) t} Ent_{ψ_∞}(h).

This is nothing else than an equivalent formulation of the Gaussian logarithmic Sobolev inequality of Nelson, see for example [3] and references therein. For a general OUP, if (A, D) ∈ I(S), according to [12, Corollary 12] there exists a constant c ≥ 1 such that for all h > 0,

	Ent_{ψ_∞}(e^{tL_{A,D}} h) ≤ c e^{-2ρ(A)t} Ent_{ψ_∞}(h),

at least if A is diagonalizable (when it is not the case, a prefactor polynomial in t should be added). For a non-reversible yet elliptic OUP, c may be strictly greater than 1 due to a change of norm, exactly as when we consider the transport semi-group e^{tL_{A,0}} f = f(e^{tA}·) alone. When the process is both non-reversible and non-elliptic, there are two reasons for c to be greater than 1: the change of norm for e^{tA}, and the initial regularization, which is genuinely slower than in the elliptic case. Indeed, the part of c which is due to slow regularization may behave badly with N. More precisely, since the optimal (A, D) ∈ I(S) we will consider is very degenerate (the rank of D being 1), [12, Remark p. 16] yields a constant c of order N^{40N²}, which is, at the very least, absolutely awful. Of course this estimate is the result of a succession of rough bounds and a more careful (and involved) analysis could certainly refine it, but it is unclear whether the optimal bound is less than exponential with respect to N. Fortunately this problem disappears if we start the dynamics with an elliptic one and then switch to the hypoelliptic optimal one, ensuring first a quick regularization.
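The change-of-norm part of the prefactor c can be visualized directly: for a diagonalizable drift A, the quantity sup_t ‖e^{tA}‖ e^{ρ(A)t} equals 1 when A is symmetric and exceeds 1 otherwise. The sketch below (matrices chosen for illustration, the non-normal one in the spirit of the two-dimensional example of Section 4) estimates it on a time grid.

```python
import numpy as np

def rho(A):
    return -np.max(np.linalg.eigvals(A).real)

def expm_t(A, t):
    """e^{tA} for a diagonalizable matrix A, via its eigendecomposition."""
    w, V = np.linalg.eig(A)
    return (V @ np.diag(np.exp(t * w)) @ np.linalg.inv(V)).real

def norm_prefactor(A, t_max=20.0, n=2001):
    """Estimate c = sup_t ||e^{tA}|| e^{rho(A) t} on a time grid:
    c = 1 for a symmetric drift, c > 1 for a non-normal one."""
    ts = np.linspace(0.0, t_max, n)
    return max(np.linalg.norm(expm_t(A, t), 2) * np.exp(rho(A) * t) for t in ts)

A_sym = np.diag([-1.0, -2.0])               # reversible: no transient growth
eps = 0.05
h = 2.0 / np.sqrt(eps)
A_nonnormal = np.array([[0.0, h],
                        [-h * eps, -2.0]])  # non-normal, eigenvalues -1 +- i*sqrt(3)

print(norm_prefactor(A_sym))        # = 1: decay is monotone in norm
print(norm_prefactor(A_nonnormal))  # > 1: transient growth before the decay
```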
Theorem 2. For any C > 1 we can construct (A, D) ∈ I(S) such that, for all h > 0 with finite entropy and for all t ≥ t_0 > 0, the entropy decay estimate (5) holds; moreover (denoting by ‖·‖ the Frobenius norm) the constants involved are explicit. It is thus possible to completely quantify the initial loss, which, due to the role of the initial warm-up via the reversible diffusion, boils down to the change of norm in the energy.

Remarks:
• As a function of t_0, the rhs of (5) is minimal for t_0^{-1} = 2 max σ(S).
• The proof furnishes an explicit, algorithmic construction of an optimal (A, D), with D of rank 1. More generally, it is not hard to see from the proof that D can be chosen with rank at most the dimension of the eigenspace associated to max σ(S).
• In order to sample according to an N-dimensional Gaussian measure ψ_∞, we can always add an (N+1)-th auxiliary variable and sample, with a given amount of randomness and at an arbitrarily large rate, according to an (N+1)-dimensional Gaussian measure whose first N-dimensional marginal is ψ_∞. Of course, in practice this gain may be offset by numerical problems due to the discretization of the dynamics.
• In the same vein, we can also consider the question of the kinetic process (3). More precisely, given S ∈ S_N^{>0}(R), the equilibrium of the 2N-dimensional process (X, Y) that solves

	dX_t = Y_t dt,	dY_t = (-νS X_t - (1/ν) Y_t) dt + √2 dB_t	(6)

is the Gaussian distribution whose first N-dimensional marginal is ψ_∞ and whose velocity marginal has covariance matrix ν I_N. If the aim is to sample according to ψ_∞, then the choice for the second N-dimensional marginal is open. Hence, we have to choose ν in order to maximize the rate of convergence to equilibrium of the whole process.
If v is an eigenvector of S associated to an eigenvalue λ, then (v, -rv) is an eigenvector of

	A = [ 0    I_N  ]
	    [ -νS  -(1/ν) I_N ]

associated to the eigenvalue -r if and only if r² - (1/ν) r + λν = 0, namely

	r = (1 ± √(1 - 4λν³)) / (2ν).

When 4ν³ > max σ(S^{-1}), the rate of convergence of (X, Y) is thus (2ν)^{-1}, which goes to zero as ν goes to ∞. On the other hand, when ν goes to 0, this rate is equivalent to ν² min σ(S), which again goes to 0. This proves that it is not possible to reach an arbitrarily large rate of convergence with the dynamics (6), and that the best choice for ν, the variance of the velocity, is to be found neither at infinity nor at zero (which answers a question raised in [13]). Of course this would be a completely different story if we were to consider a dynamics which also allows sampling according to ψ_∞ but is no longer a kinetic process, in the sense that Y is no longer the velocity dX/dt. The optimal value of ν in (6) is explicit if S = λI_N is a homothety. Indeed, in that case, for 4λν³ > 1 the rate r(ν) is (2ν)^{-1} and thus decreasing, while for 4λν³ < 1 it is r(ν) = (1 - √(1 - 4λν³))/(2ν) and thus increasing. Hence, for a homothety S = λI_N, the rate of convergence to equilibrium in (6) is maximal for ν = (4λ)^{-1/3}, and in that case this optimal rate is (λ/2)^{1/3}.
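This trade-off is easy to check numerically. The sketch below evaluates the rate ρ(A) of the kinetic drift A = [[0, I_N], [-νS, -(1/ν) I_N]] over a grid of values of ν for a homothety S = λ I_N (λ = 2 is an illustrative choice), and compares the grid maximum with the optimizer ν = (4λ)^{-1/3} and the optimal rate (λ/2)^{1/3} stated above.

```python
import numpy as np

def kinetic_rate(nu, S):
    """Rate -max Re sigma(A) for the kinetic drift of dynamics (6)."""
    N = S.shape[0]
    A = np.block([[np.zeros((N, N)), np.eye(N)],
                  [-nu * S, -(1.0 / nu) * np.eye(N)]])
    return -np.max(np.linalg.eigvals(A).real)

lam = 2.0
S = lam * np.eye(2)                    # homothety S = lambda I_N
nu_star = (4.0 * lam) ** (-1.0 / 3.0)  # claimed optimal velocity variance

nus = np.linspace(0.05, 2.0, 400)
rates = [kinetic_rate(nu, S) for nu in nus]

print(kinetic_rate(nu_star, S), (lam / 2.0) ** (1.0 / 3.0))  # both equal here
print(nus[int(np.argmax(rates))])                            # grid max near nu_star
```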

Proofs
Let us start with an easy lemma, which will enable us to characterize A in the couple (A, D) ∈ I(S) once D is fixed: ψ_∞ is invariant for L_{A,D} if and only if A = (J - D)S for some J ∈ A_N(R). One may object that [10, Propositions 1 and 4] are written for an invertible matrix, which is not necessarily the case for D. It can be seen that this restriction is not necessary in the proof; or, to save the reader a careful check of these proofs, we may note that D + εI_N falls within the scope of [10], which yields the same result. Hence, for (A, D) ∈ I(S),

	ρ(A) ≤ -Tr(A)/N = Tr(DS)/N ≤ max σ(S) Tr(D)/N ≤ max σ(S),

and to prove that equality holds we need to find D ∈ S_N^{≥0}(R) with Tr D ≤ N such that Tr(DS) = N max σ(S). Let Q be an orthogonal matrix such that S = Q^T diag(λ_1, …, λ_N) Q, let i ∈ {1, …, N} be such that λ_i = max σ(S), let E_{i,i} be the N × N matrix with all coefficients being zero except the coefficient (i, i) being equal to 1, and set D = N Q^T E_{i,i} Q.
Proof of Theorem 2. Without loss of generality (up to an orthogonal change of variables) we can assume that the vector v with all coordinates equal to 1 is an eigenvector of S associated to λ = max σ(S). Let D = vv^T, which is the matrix with all coefficients equal to 1. Let 0 < ν_1 < ν_2 < ⋯ < ν_N (to be specified later) and Q = diag(ν_1, …, ν_N). Note that the eigenvectors of Q are obviously the canonical basis vectors (e_1, …, e_N), which satisfy e_i^T D e_i = 1 = Tr D/N for all i. Let J be the antisymmetric matrix defined by (7). Let α be the function r > 0 ↦ α(r) = r ln r (or r ∈ R ↦ α(r) = ½r² if one wishes to deal with L² decay rather than entropy), so that α''(r) is r^{-1} (or 1). According to [12, Lemma 8], for all M ∈ S_N^{>0}(R) and for all h > 0, denoting h_t = e^{tL_{C,D}} h, the entropy dissipation estimate holds, where the first equality comes from the fact that Q and D are symmetric and J is antisymmetric, and the last equality from (7). The log-Sobolev inequality for the standard Gaussian distribution γ(dx) = (2π)^{-N/2} e^{-½|x|²} dx reads

	Ent_γ(f) ≤ ½ ∫ |∇f|²/f dγ

for all f > 0 such that the r.h.s. is finite; by the change of variable z = S^{1/2} x it yields the corresponding inequality for ψ_∞. Finally the elliptic reversible generator L_{-S,I_N} satisfies the Bakry-Émery criterion Γ_2 ≥ 0 (see [3] for instance, more precisely [3, Equation 1.16.5, p. 72] together with [3, Theorem 5.5.2.(v), p. 259] with ρ = 0 and integrated with respect to ψ_∞), so that the dissipation is non-increasing along h_t = e^{tL_{-S,I_N}} h. We can then take ν_N arbitrarily close to ν_1 to get the first part of Theorem 2. On the other hand, following [10, Remark 8], the choice ν_k = N + k (so that ν_N ≤ 2ν_1) yields the second part. Note that an optimal (A, D) is explicitly constructed in this proof. The procedure may be decomposed in two steps: first, exhibit a normalized eigenvector v of S associated to max σ(S), and set D = N vv^T. The second step answers the following general question: given any D ∈ S_N^{≥0}(R), find an antisymmetric J such that the drift A = (J - D)S achieves ρ(A) = Tr(DS)/N.
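The algebra underlying both proofs can be checked numerically: with Σ = S^{-1}, the invariance of ψ_∞ for L_{A,D} is the Lyapunov equation AΣ + ΣA^T = -2D, satisfied by every A = (J - D)S with J antisymmetric (the characterization from [10] recalled above), and the upper bound of Theorem 1 follows from Tr(A) = -Tr(DS). A sketch with randomly generated matrices (size and seed are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 4

M = rng.standard_normal((N, N))
S = M @ M.T + N * np.eye(N)          # a positive definite precision matrix

D = rng.standard_normal((N, N))
D = D @ D.T
D *= N / np.trace(D)                 # normalize the injected randomness: Tr D = N
K = rng.standard_normal((N, N))
J = K - K.T                          # an arbitrary antisymmetric matrix
A = (J - D) @ S                      # drift keeping psi_infty invariant

Sinv = np.linalg.inv(S)
residual = A @ Sinv + Sinv @ A.T + 2.0 * D   # Lyapunov equation: ~ 0
rho_A = -np.max(np.linalg.eigvals(A).real)

print(np.max(np.abs(residual)))              # machine precision
print(rho_A, np.trace(D @ S) / N)            # rho(A) <= Tr(DS)/N
print(np.max(np.linalg.eigvalsh(S)))         # Tr(DS)/N <= max sigma(S)
```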

Numerical illustrations
In dimension 2, consider S = diag(ε, 1). For any h ∈ R, the corresponding ψ_∞ is the unique equilibrium of

	dX_t = -(I_2 + hJ) S X_t dt + √2 dB_t,	J = [0 1; -1 0],	(8)

where B = (B_1, B_2) is a 2-dimensional Brownian motion. It is also an equilibrium for

	dZ_1 = hZ_2 dt,	dZ_2 = -(hεZ_1 + 2Z_2) dt + 2 dB_2,	(9)

which is hypoelliptic as soon as h ≠ 0. When h is large enough, away from the origin (so that the random forces are small with respect to the deterministic drift), the behaviours of X and Z are similar, mostly driven by a fast rotation, while the first coordinate of the reversible process solving (8) with h = 0 moves more slowly, and thus covers the space less efficiently (see Fig. 1, where the parameter p is the step size of the Euler scheme). Note that in the case of Z, even if this rotation is randomly perturbed, since dZ_1 = hZ_2 dt, the process always goes from left to right in the lower half-plane {(x, y), y < 0} and from right to left in the upper one.
In (8), the optimal rate (1+ε)/2 is obtained for h² ≥ (1+ε)²/(4ε) - 1, while in (9) the optimal rate 1 is obtained for h² ≥ 1/ε. For instance if we choose h = 2/ε, then both conditions are fulfilled and in both cases the drift matrix is diagonalizable with two distinct complex-conjugate eigenvalues. This is represented in Figure 2 (with ε = 0.05 for every curve, h = 2/ε for the second and third ones and h = 1/ε for the last one). A large h seems to improve the prefactor; indeed, normalizing the eigenvectors v_j = (-h, λ_j) for j = 1, 2, we can compute the conditioning M(h) of the corresponding change of basis, which decreases with h², together with the prefactor. As h goes to infinity, M(h) goes to its minimum, which is 1/ε (if ε ≤ 1).
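The rates and thresholds of this paragraph can be verified directly on the drift matrices. The sketch below assumes the concrete forms -(I_2 + hJ)S for (8) and (hJ - D)S with D = diag(0, 2) for (9), with J the standard rotation; these explicit matrices are a reconstruction consistent with the conditions stated above, not taken verbatim from the text.

```python
import numpy as np

eps = 0.05
S = np.diag([eps, 1.0])
Jrot = np.array([[0.0, 1.0], [-1.0, 0.0]])

def rate(A):
    return -np.max(np.linalg.eigvals(A).real)

def A_elliptic(h):
    """Drift of (8): -(I + h J) S, elliptic non-reversible."""
    return -(np.eye(2) + h * Jrot) @ S

def A_hypo(h):
    """Drift of (9): (h J - D) S with rank-one noise D = diag(0, 2)."""
    return (h * Jrot - np.diag([0.0, 2.0])) @ S

h = 2.0 / eps                  # fulfils both threshold conditions
print(rate(A_elliptic(h)))     # (1 + eps) / 2 = 0.525
print(rate(A_hypo(h)))         # 1 = max sigma(S)
```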
Figure 2: Norms of the drift matrix exponentials

As remarked in the Introduction, how to boost the speed of convergence of a Markov process to sample a Gibbs measure is perhaps less relevant for quadratic potentials. There are thus two distinct problems that we have in mind for the future. The first one deals with the direct generalization of our result when we replace the Gaussian measure ψ_∞ by e^{-V} where V satisfies Hess(V) ≥ S > 0, with S constant positive symmetric. Is it possible to add a divergence-free drift and a potentially degenerate (constant) diffusion matrix so that the rate of convergence to equilibrium is max σ(S)? The question is also of great interest when the dynamics is metastable, namely when V has several local minima. A toy problem for this phenomenon is to consider in dimension 1 (even if MCMC algorithms usually outperform deterministic algorithms only in large dimension) the two-well potential V(x) = a x⁴ - b x² with a, b > 0. Then V has two minima -b²/(4a) attained at x = ±√(b/(2a)), separated by a local maximum 0 at x = 0. Depending on the energy barrier V(0) - V(√(b/(2a))) = b²/(4a) to overcome in order to go from one catchment area to the other, the reversible Fokker-Planck diffusion (1) will take a long time to achieve such a crossing. In Figures 3 and 4 are represented two such trajectories over different periods, along with trajectories of the first coordinate of a kinetic Langevin diffusion (3) (in each case both the reversible and the kinetic diffusions are driven by the same Brownian motion). As discussed at the end of Section 2, for the Langevin process there is another parameter to tune, the variance ν of the velocity at equilibrium. Since the optimal choice of ν is already non-trivial in the quadratic case, we won't address it here in this metastable context: for now, our considerations are only qualitative.
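To reproduce the qualitative comparison of Figures 3 and 4, the sketch below drives the reversible diffusion (1) and a kinetic Langevin diffusion (3) with the same Brownian increments on the two-well potential, and counts well-to-well transitions with a hysteresis threshold so that chattering around x = 0 is not counted (all parameter values and the threshold are illustrative choices; the counts depend on them).

```python
import numpy as np

rng = np.random.default_rng(3)

a, b = 1.0, 2.0                          # V(x) = a x^4 - b x^2, barrier b^2/(4a) = 1
grad_V = lambda x: 4 * a * x**3 - 2 * b * x
well = np.sqrt(b / (2 * a))              # minima at +- well

def count_transitions(xs, threshold=0.8):
    """Count well-to-well transitions with hysteresis (ignores chattering at 0)."""
    state, count = np.sign(xs[0]), 0
    for x in xs:
        if x > threshold and state < 0:
            state, count = 1, count + 1
        elif x < -threshold and state > 0:
            state, count = -1, count + 1
    return count

dt, n = 1e-3, 300_000
dB = np.sqrt(2 * dt) * rng.standard_normal(n)  # the same noise drives both dynamics

x = well
xs_rev = np.empty(n)                     # reversible Fokker-Planck diffusion (1)
for k in range(n):
    x += -grad_V(x) * dt + dB[k]
    xs_rev[k] = x

x, y = well, 0.0
xs_kin = np.empty(n)                     # kinetic Langevin diffusion (3)
for k in range(n):
    x += y * dt
    y += (-grad_V(x) - y) * dt + dB[k]
    xs_kin[k] = x

print(count_transitions(xs_rev), count_transitions(xs_kin))
```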
The first point to comment on in Figure 3 is that the trajectory is smoother in the kinetic case than in the reversible one, which is expected since in the first case it has C^{3/2} regularity (the velocity being 1/2-Hölder continuous) while in the second case it is only 1/2-Hölder continuous. Second, due to its inertia, the trajectory in the kinetic case shows large oscillations in which kinetic and potential energy successively convert into one another. In particular from times t ≈ 6.5 to 10 we can see that the process has a high level of total energy, and these large oscillations thus cross the energy barrier at x = 0 without difficulty. At some point the total energy decreases sufficiently for the process to stay trapped in the vicinity of one of the two minima, which then has a reasonable chance to be different from the one it started from before the energy level got high.
In that light we interpret Figure 4 as an illustration of the fact that the Langevin dynamics deals more efficiently with metastability (or at least with energy barriers) than the reversible Fokker-Planck diffusion. With Theorem 2 in mind, we could also extrapolate from these figures the behaviour of a process that switches at random times from Equation (8) to (9) (or anything else in that spirit). However it is difficult to export an intuition based on a toy model in dimension 1, with a fixed set of parameters (especially the variance of the velocity in (3)), to a more general case.