Entropy decay for interacting systems via the Bochner--Bakry--Emery approach

We obtain estimates on the exponential rate of decay of the relative entropy from equilibrium for Markov processes with a non-local infinitesimal generator. We adapt some of the ideas coming from the Bakry-Emery approach to this setting. In particular, we obtain volume-independent lower bounds for the Glauber dynamics of interacting point particles and for various classes of hardcore models.


Introduction
The study of contractivity and hypercontractivity of Markov semigroups has received a tremendous impulse from the seminal paper [1], which introduced the so-called Γ2-approach and originated a number of developments in different directions (see e.g. [9,12,14]). In particular, for Brownian diffusions in a convex potential, the Γ2-approach provides a short and elegant proof of the fact that lower bounds on the Hessian of the potential translate into lower bounds for both the spectral gap and the Logarithmic Sobolev constant. How far these ideas can be adapted to non-local operators, such as generators of discrete Markov chains, is not yet fully understood. Although Γ2-type computations had been performed for specific examples (see e.g. [11]), the first attempt to approach this problem systematically appeared in [3], where lower bounds on the spectral gap of various classes of generators were given. In [4] and [5] we addressed the problem of going beyond spectral gap estimates for non-local operators, looking for estimates on the exponential rate of decay of the relative entropy from equilibrium. Note that, in the case of diffusion operators, a strictly positive exponential rate is equivalent to the validity of a Logarithmic Sobolev Inequality. In the non-local case, exponential entropy decay corresponds to a weaker inequality, which we will refer to as the Entropy Inequality (often also called the Modified Logarithmic Sobolev Inequality or L^1-Logarithmic Sobolev Inequality, [19]). We showed in [4] that estimates on the best constant in the Entropy Inequality can also be obtained from a Γ2-approach; however, when looking for explicit estimates, we encountered technical difficulties that will be illustrated in the next section. More specifically, our results were restricted to particle systems where the only allowed interactions were the exclusion rule ([4]) or a zero-range interaction ([5]).
This paper substantially improves the results mentioned above: we obtain high temperature estimates on the best constant in the Entropy Inequality for Glauber-type dynamics of interacting systems. The main example concerns interacting point particles, for which estimates on the spectral gap, as well as constants for other functional inequalities, have been obtained with various techniques [2,11,16]. To our knowledge, however, ours is the first estimate concerning the Entropy Inequality; we obtain it under the classical Dobrushin Uniqueness Condition.
It should be made clear that the aim of this paper is to extend to non-local operators those implications of the Bakry–Emery results which concern the rate of convergence to equilibrium of Markov processes. The Bakry–Emery theory has many different, although related, applications, in particular to differential geometry. In this context, extensions to the discrete setting have also been considered recently, see e.g. [8,17].
The paper is organized as follows. In Section 2 we recall the approach to the spectral gap and the entropy decay rate that we introduced in [4], to which we add the main original ingredient of this paper, consisting of a bivariate real inequality. The rest of the paper is devoted to specific examples.

Generalities
2.1. The Entropy Inequality. We begin by recalling the basic functional inequality we will be concerned with. Consider a time-homogeneous Markov process (X_t)_{t≥0}, with values in a measurable space (S, S), having an invariant measure π. We assume the semigroup is strongly right-continuous, so that the infinitesimal generator L exists, i.e. T_t = e^{tL}. We also assume, for what follows, reversibility of the process, i.e. L is self-adjoint in L^2(π). Writing π[f] for ∫ f dπ, we define the Dirichlet form of L as the non-negative quadratic form on D(L) × D(L), D(L) being the domain of L,

E(f, g) := −π[g Lf].

Given a probability measure µ on (S, S), we denote by µT_t the distribution of X_t assuming X_0 is distributed according to µ, i.e. (µT_t)[f] = µ[T_t f] for every bounded measurable f : S → R.
An ergodic Markov process, in particular a countable-state, irreducible and positive recurrent one, has a unique invariant measure π, and the rate of convergence of µT_t to π is a major topic of research. Quantitative estimates on this rate of convergence can be obtained by analyzing functional inequalities. To set up the necessary notation, define the relative entropy h(µ|π) of the probability µ with respect to π by

h(µ|π) := π[(dµ/dπ) log(dµ/dπ)],

where h(µ|π) is meant to be infinite whenever µ is not absolutely continuous with respect to π, or (dµ/dπ) log(dµ/dπ) ∉ L^1(π). Although h(·|·) is not a metric in the usual sense, its use as a "pseudo-distance" is motivated by a number of relevant properties, the most basic ones being

h(µ|π) ≥ 0, with equality if and only if µ = π,

and the Pinsker-type inequality (see [7])

‖µ − π‖_{TV}^2 ≤ 2 h(µ|π), (2.1)

where ‖·‖_{TV} denotes the total variation norm. For a generic measurable function f ≥ 0 it is common to write

Ent_π(f) := π[f log f] − π[f] log π[f],

so that h(µ|π) = Ent_π(dµ/dπ). Ignoring technical problems concerning the domains of Dirichlet forms, a simple formal computation shows that

(d/dt) h(µT_t|π) = −E(T_t f, log T_t f), (2.2)

where f := dµ/dπ. Therefore, assuming that, for each f ≥ 0, the following Entropy Inequality (EI) holds:

α Ent_π(f) ≤ E(f, log f) (2.3)

with α > 0 (independent of f), then (2.2) can be closed to get a differential inequality, obtaining h(µT_t|π) ≤ e^{−αt} h(µ|π).
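As a quick illustration of the decay h(µT_t|π) ≤ e^{−αt} h(µ|π), the following sketch computes the relative entropy along the semigroup of a two-state chain. The chain, its rates and the initial law are our own toy choices, not taken from the examples below; the explicit formula for µT_t uses the fact that µ − π relaxes at rate a + b for this chain.

```python
import math

# Toy two-state chain (illustrative): rate a for 0 -> 1, rate b for 1 -> 0.
a, b = 1.0, 2.0
pi = (b / (a + b), a / (a + b))                 # reversible invariant measure

def mu_t(mu0, t):
    # Explicit semigroup action: mu - pi relaxes at rate a + b.
    e = math.exp(-(a + b) * t)
    return tuple(pi[i] + e * (mu0[i] - pi[i]) for i in range(2))

def rel_entropy(mu):
    # h(mu|pi) = sum_i mu_i log(mu_i / pi_i), with 0 log 0 = 0.
    return sum(m * math.log(m / p) for m, p in zip(mu, pi) if m > 0)

mu0 = (0.9, 0.1)
hs = [rel_entropy(mu_t(mu0, t)) for t in (0.0, 0.5, 1.0, 2.0)]
# hs is strictly decreasing: the relative entropy decays along the flow.
```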
In other words, estimates on the best constant α for which (EI) holds provide estimates for the rate of exponential convergence to equilibrium of the process, in the relative entropy sense. It is known (see [7], even though (EI) is never explicitly mentioned there) that α ≤ 2γ, where γ is the spectral gap of L:

γ := inf{E(f, f) : π[f] = 0, π[f^2] = 1}. (2.4)

2.2. Convex decay of entropy. We now introduce a strengthened version of (EI). Again at a formal level, we compute the second derivative of the entropy along the semigroup:

(d^2/dt^2) h(µT_t|π) = π[LT_t f · L log T_t f] + π[(LT_t f)^2 / T_t f]. (2.5)

Assume now that the inequality

π[Lf · L log f] + π[(Lf)^2 / f] ≥ κ E(f, log f) (2.6)

holds for some κ > 0 and every f > 0. Then, as for the first derivative with (EI), (2.5) can be closed to get the differential inequality E(T_t f, log T_t f) ≤ e^{−κt} E(f, log f), and integrating from 0 to +∞ we get

κ Ent_π(f) ≤ E(f, log f).

So (2.6) implies (EI) for every α ≤ κ. This result is well known; however, when one tries to make the above arguments rigorous, some difficulties arise from the fact that generators are only defined on suitable domains. For this reason we give here the following precise statement: although the assumptions we make are likely not optimal, they are sufficient to cover the applications presented in this paper.
Proposition 2.1. Assume L is self-adjoint in L^2(π), and denote by D(L) its domain of self-adjointness. We write E(f, g) = −π[g Lf] whenever f ∈ D(L) and g ∈ L^2(π). For each M ∈ N define

A_M := {f ∈ D(L) : Lf ∈ D(L), |log f| ≤ M},

and set A := ∪_{M∈N} A_M. Then the following statements hold.
(1) If (EI) holds for every f ∈ A, then

h(µT_t|π) ≤ e^{−αt} h(µ|π) (2.8)

whenever f := dµ/dπ ∈ A. Conversely, if (2.8) holds for every measurable f ≥ 0 with Ent_π(f) < +∞, then (EI) holds for every f ∈ A.
(2) If (2.6) holds for every f ∈ A, then E(T_t f, log T_t f) ≤ e^{−κt} E(f, log f) for every f ∈ A, and conversely.
(3) If (2.6) holds for some κ and every f ∈ A, then (EI) holds with every α ≤ κ and every f ∈ A.
The proof is postponed to the Appendix. Note that (2.6) gives estimates on the second derivative of the entropy along the flow of the semigroup T_t. In particular, since E(f, log f) ≥ 0, it implies convexity in time of the entropy. There are cases (see [4], Section 4.2) where (EI) holds but the entropy is not convex in time. Therefore, (2.6) is strictly stronger than (EI).
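The identity behind (2.6), namely that the second time derivative of the entropy equals π[Lf_t · L log f_t] + π[(Lf_t)^2/f_t], can be checked numerically on a small reversible chain. In the sketch below the three-state chain and the test function are illustrative choices of ours; a finite-difference second derivative of Ent_π(T_t f) at t = 0 is compared with that expression.

```python
import numpy as np

# Illustrative 3-state reversible chain: conductances c01, c12 on a path,
# generator L_ij = c_ij / pi_i off the diagonal, rows summing to zero.
pi = np.array([0.5, 0.3, 0.2])
c01, c12 = 0.2, 0.1
L = np.zeros((3, 3))
L[0, 1] = c01 / pi[0]; L[1, 0] = c01 / pi[1]
L[1, 2] = c12 / pi[1]; L[2, 1] = c12 / pi[2]
L -= np.diag(L.sum(axis=1))

# Semigroup via the symmetrization A = D^{1/2} L D^{-1/2}, D = diag(pi).
D, Dinv = np.diag(np.sqrt(pi)), np.diag(1 / np.sqrt(pi))
lam, U = np.linalg.eigh(D @ L @ Dinv)
def T(t):
    return Dinv @ U @ np.diag(np.exp(t * lam)) @ U.T @ D

def ent(f):
    m = pi @ f
    return pi @ (f * np.log(f)) - m * np.log(m)

f, h = np.array([0.5, 1.2, 2.0]), 1e-3
second_diff = (ent(T(h) @ f) - 2 * ent(f) + ent(T(-h) @ f)) / h**2
Lf = L @ f
formula = pi @ (Lf * (L @ np.log(f))) + pi @ (Lf**2 / f)
# second_diff and formula agree up to discretization error.
```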

Remark 2.2.
By a similar proof one shows that the spectral gap γ is the best constant k in the inequality

k E(f, f) ≤ π[(Lf)^2], (2.9)

which is therefore equivalent to the Poincaré inequality, whose best constant is, by definition, the spectral gap of L. Inequality (2.9) is related to the convex decay of the variance along the flow of the semigroup. Unlike the entropy, the variance always decays convexly in time.
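Inequality (2.9) can also be verified numerically: for a reversible generator on a finite state space, the ratio π[(Lf)^2]/E(f, f) is bounded below by the spectral gap, with equality on the gap eigenfunction. The sketch below (again with an illustrative three-state chain of our own choosing) checks this over random test functions.

```python
import numpy as np

# Same style of illustrative 3-state reversible chain as in Section 2.2.
pi = np.array([0.5, 0.3, 0.2])
c01, c12 = 0.2, 0.1
L = np.zeros((3, 3))
L[0, 1] = c01 / pi[0]; L[1, 0] = c01 / pi[1]
L[1, 2] = c12 / pi[1]; L[2, 1] = c12 / pi[2]
L -= np.diag(L.sum(axis=1))

# Spectral gap from the symmetrized generator (eigenvalues ascending, top one 0).
A = np.diag(np.sqrt(pi)) @ L @ np.diag(1 / np.sqrt(pi))
gamma = -np.linalg.eigvalsh(A)[-2]

rng = np.random.default_rng(0)
ratios = []
for _ in range(200):
    f = rng.normal(size=3)
    Lf = L @ f
    E = -pi @ (f * Lf)                 # Dirichlet form E(f, f)
    if E > 1e-12:
        ratios.append((pi @ Lf**2) / E)
# Every ratio is >= gamma, the best constant in (2.9).
```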

2.3. A class of non-local dynamics. Suppose the probability space (S, S, π) is given, together with a set G of measurable functions γ : S → S, which we call moves. We also assume G is provided with a measurable structure, i.e. a σ-algebra G of subsets of G. In this paper we deal with Markov generators that can be written in the form

Lf(η) = ∫_G ∇_γ f(η) c(η, dγ), (2.10)

where
• the discrete gradient ∇_γ is defined by ∇_γ f(η) := f(γ(η)) − f(η);
• for η ∈ S, c(η, dγ) is a positive, finite measure on (G, G), such that for each A ∈ G the map η → c(η, A) is measurable, and π[c^2(η, G)] < +∞.
It should be stressed that an expression as in (2.10) does not necessarily define a Markov generator; we assume this is the case. We make the following additional assumption on the generator L.
Admissible measures guarantee the following Bochner-type identities. A proof of these identities is in [4]; we include it here for completeness.
Proposition 2.4. Identities (2.14) and (2.15) below hold for all bounded measurable functions f, g : S → R.

Proof. We begin by proving (2.14). By i) of Definition 2.3, the relation γ(δ(η)) = δ(γ(η)) holds R-almost everywhere. Thus, R-almost everywhere, (2.16) holds. We show that the R-integral of each summand of (2.16) equals ∫ ∇_γ f(η) ∇_δ g(η) R(dη, dγ, dδ), from which (2.14) follows. For the fourth summand there is nothing to prove. In the steps that follow we use the admissibility of R, in particular first ii), then i), then ii) and i) again of Definition 2.3, together with a simple algebraic identity. Note that (2.17) takes care of the third (and, by symmetry, the second) summand, while (2.18) takes care of the first summand. This completes the proof of (2.14). We now prove (2.15). By the admissibility of R (used twice), we obtain an expression that, by a simple calculation, is shown to equal the right hand side of (2.15).
The use of admissible measures in establishing convex entropy decay is illustrated in what follows. Consider inequality (2.6); for generators of the form (2.10), its two sides take the forms (2.19) and (2.20). Admissible measures allow us to modify the term (2.20), the purpose being to make it comparable with (2.19).
Proposition 2.5. Let R be an admissible measure. Then, for every measurable f > 0 with log f bounded, (2.21) holds.

Proof. Using (2.14) with g = log f, we get (2.22). Thus, using also (2.15), we obtain (2.23). The fact that (2.23) is nonnegative follows from the nonnegativity of its integrand for every η, γ, δ. Indeed, setting a := f(η), b := f(δ(η)), c := f(γ(η)), d := f(δγ(η)), one checks that this last expression equals the sum of four expressions, each of which is nonnegative for every measurable f > 0 with log f bounded.

To illustrate the treatment of (2.24), we consider the corresponding inequality for the spectral gap studied in [3], namely (2.25). The strategy to obtain (2.25) can be described in two steps.
i) Determine an admissible function r(η, γ, δ) and a "nearly diagonal" set D ⊆ G × G such that (2.26) holds for some u > 0.
ii) The remaining integral on D^c is estimated from below using the elementary inequality 2ab − a^2 − b^2 ≤ 0 which, by symmetry, yields (2.27).
If h < u, we thus obtain (2.25) with k := 2(u − h).
The feasibility of steps i) and ii) above depends on a suitable choice of the admissible function r, and we do not have a general procedure to determine it. It turns out, for example, that equation (2.13) does not determine r uniquely (up to constant factors). Condition (2.13) is, for instance, satisfied by (2.29), which is well defined whenever the Radon–Nikodym derivative c(γ(η), dδ)/c(η, dδ) exists. However, (2.29) does not necessarily define an admissible function; in particular, it is not necessarily supported on the set {(η, γ, δ) : γ(δ(η)) = δ(γ(η))}. The admissible functions in the examples in [3] and [4], as well as those in this paper, are all obtained by suitable modifications of (2.29).
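To see the objects of Section 2.3 in the simplest possible discrete setting, the following sketch builds a generator of the form (2.10) on S = {0,1}^2, with moves given by single-site flips and heat-bath rates. All choices here are illustrative and are not the dynamics studied below; the sketch only checks that L annihilates constants and is reversible with respect to the product measure.

```python
import math
from itertools import product

# State space S = {0,1}^2; moves are the single-site flips gamma_i; rates are
# heat-bath rates, reversible for the product Bernoulli(p) measure pi.
S = list(product([0, 1], repeat=2))
p = 0.3

def pi(eta):
    return math.prod(p if x == 1 else 1 - p for x in eta)

def flip(eta, i):                       # the move gamma_i
    return tuple(1 - x if j == i else x for j, x in enumerate(eta))

def c(eta, i):                          # rate c(eta, gamma_i): heat-bath choice
    return p if eta[i] == 0 else 1 - p

def L(f, eta):                          # (Lf)(eta) in the form (2.10)
    return sum(c(eta, i) * (f(flip(eta, i)) - f(eta)) for i in range(2))

# L annihilates constants, as any Markov generator must.
kills_constants = all(abs(L(lambda e: 1.0, eta)) < 1e-14 for eta in S)

# Reversibility: pi[g Lf] = pi[f Lg] for test functions f, g.
f = lambda e: 1.0 + e[0] + 0.5 * e[1]
g = lambda e: math.exp(e[0] - e[1])
lhs = sum(pi(eta) * g(eta) * L(f, eta) for eta in S)
rhs = sum(pi(eta) * f(eta) * L(g, eta) for eta in S)
```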
The main purpose of this paper is to extend the above procedure to inequality (2.24). The main difficulty consists in the comparison of the "off-diagonal" terms with the corresponding diagonal terms (i.e. those with δ = γ). The simple inequality 2ab ≤ a^2 + b^2 is then replaced by the following inequality.
Lemma 2.6. The following inequality holds for every a, b > 0: (2.30).

Proof. Inequality (2.30) can be rewritten in terms of the elementary symmetric functions of a and b. Letting z := a + b and w := ab, we are left to show the corresponding inequality for z, w > 0. Case z ≥ 2: use the inequality log(1 + x) ≤ x, valid for every x > −1. Case z < 2: use again log(1 + x) ≤ x for every x > −1.

Interacting point particles.
Let Λ be a bounded measurable subset of R^d, let Ω denote the set of locally finite subsets of R^d, and let S := {η ∈ Ω : η ⊆ Λ} be the set of finite particle configurations in Λ. Consider a nonnegative measurable and even function ϕ : R^d → [0, +∞) (everything works with minor modifications for ϕ : R^d → [0, +∞], allowing "hardcore repulsion"). We fix a boundary condition τ ∈ Ω_{Λ^c} := {η ∈ Ω : η ⊆ Λ^c}, and define the Hamiltonian

H^τ_Λ(η) := Σ_{{x,y}⊆η} ϕ(x − y) + Σ_{x∈η} Σ_{y∈τ} ϕ(x − y).

The dependence of H^τ_Λ on Λ and τ is omitted in the sequel. We assume the nonnegative pair potential ϕ and the inverse temperature β to satisfy the condition

ε(β) := ∫_{R^d} (1 − e^{−βϕ(x)}) dx < +∞.

For N ∈ N we let S_N := {η ∈ S : |η| = N} denote the subset of S consisting of all possible configurations of N particles in Λ. Note that a measurable function f : S_N → R may be identified with a symmetric function from Λ^N to R. With this identification, we assume, for every N ∈ N, that the boundary condition τ is such that H(η) < +∞ on a subset of Λ^N having positive Lebesgue measure. Functions from S to R may be identified with symmetric functions from ∪_n Λ^n to R.
With this identification, we define the finite volume grand canonical Gibbs measure π with inverse temperature β > 0 and activity z > 0 by

π[f] := (1/Z) Σ_{N≥0} (z^N / N!) ∫_{Λ^N} f(x_1, …, x_N) e^{−βH(x_1,…,x_N)} dx_1 ⋯ dx_N,

where Z is the normalization. We define the creation and annihilation maps on S:

γ^+_x η := η ∪ {x},  γ^−_x η := η \ {x}.

In the sequel we write ∇^+_x and ∇^−_x rather than ∇_{γ^+_x} and ∇_{γ^−_x}. Note that ∇^−_x f(η) = 0 unless x ∈ η. We consider the following Markov generator:

Lf(η) = Σ_{x∈η} ∇^−_x f(η) + z ∫_Λ e^{−β(H(γ^+_x η) − H(η))} ∇^+_x f(η) dx.

It is shown in [2], Proposition 2.1, that L generates a Markov semigroup. This generator is of the form (2.10) if we define c(η, dγ) as the measure giving mass 1 to each annihilation map γ^−_x with x ∈ η, and having density z e^{−β(H(γ^+_x η) − H(η))} dx on the creation maps. In particular, it is easy to show that the reversibility condition (2.11) holds, after having observed that the birth and death rates above are in detailed balance with respect to π. Moreover c(η, G) ≤ |η| + C|Λ|, where |Λ| is the Lebesgue measure of Λ; therefore π[c^2(η, G)] < +∞. Now we define the function r by a suitable modification of (2.29).

Lemma 3.1. The function r is admissible.
Theorem 3.2. Inequality (2.6) holds with κ = 1 − zε(β).

Thus, for zε(β) < 1, the entropy decays exponentially, with a rate which is uniformly positive in Λ and in the boundary condition τ.
Remark 3.3. Theorem 3.2 provides the lower bound α ≥ 1 − zε(β) for the best constant α in the Entropy Inequality. Note that it coincides with the lower bound, obtained e.g. in [3], for the spectral gap γ. The upper bound γ ≤ 1 + zε(β) has also been obtained in [20].
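The Glauber dynamics of this section can be simulated directly. The sketch below is a Gillespie-type implementation under our reading of the rates (each particle dies at rate 1; births are proposed at rate z|Λ| and accepted with probability e^{−βΔH}, a valid thinning since ϕ ≥ 0); the box, the potential and the parameters are illustrative choices, with free boundary condition τ = ∅.

```python
import math, random

# Illustrative choices: unit box in d = 2, smooth repulsive potential,
# z = beta = 1, free boundary condition (tau empty).
random.seed(1)
z, beta, side = 1.0, 1.0, 1.0

def phi(dx, dy):                         # even, nonnegative pair potential
    return math.exp(-10.0 * (dx * dx + dy * dy))

eta = []                                 # current configuration in the box
t, T_end = 0.0, 5.0
while t < T_end:
    birth_rate = z * side * side         # total proposal rate for births
    total = len(eta) + birth_rate        # deaths (rate 1 each) + proposals
    t += random.expovariate(total)
    if random.random() < len(eta) / total:
        eta.pop(random.randrange(len(eta)))          # a particle dies
    else:
        x, y = random.uniform(0, side), random.uniform(0, side)
        dH = sum(phi(x - u, y - v) for (u, v) in eta)
        if random.random() < math.exp(-beta * dH):   # thinning: accept birth
            eta.append((x, y))
```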

Interacting birth and death processes and a simple non-perturbative example.
With no essential change, the arguments in Section 3.1 can be adapted to the following discrete version of the model, which can be viewed as describing a family of interacting birth and death processes. The configuration space is S := {η : Λ_L → N ∪ {0}}, where Λ_L ⊂ Z^d is a box of side L, and the Hamiltonian is of the form

H(η) = Σ_{x,y∈Λ_L} ϕ(x, y, η). (3.11)

We assume the interaction to satisfy, for 0 ≤ β < β_0, the analogue of the condition imposed on ϕ in Section 3.1. The finite volume grand canonical Gibbs measure π with inverse temperature β and activity z is the probability measure defined on S as

π(η) := (1/Z) e^{−βH(η)} Π_{x∈Λ_L} (z^{η(x)} / η(x)!),

where Z is the normalization. Fix x ∈ Λ_L; given any configuration η ∈ S we define η ± δ_x by (η ± δ_x)(y) := η(y) ± 1(x = y). Define also the creation and annihilation maps at x, γ^±_x : S → S, by γ^+_x η := η + δ_x, and γ^−_x η := η − δ_x whenever η(x) ≥ 1, γ^−_x η := η otherwise. We let G := {γ^−_x, γ^+_x : x ∈ Λ_L}. In the sequel we write ∇^+_x and ∇^−_x rather than ∇_{γ^+_x} and ∇_{γ^−_x}. We consider the Markov generator

Lf(η) = Σ_{x∈Λ_L} [η(x) ∇^−_x f(η) + z e^{−β(H(η+δ_x) − H(η))} ∇^+_x f(η)].

It is easy to show that L is self-adjoint in L^2(π) and that it generates a Markov semigroup. It can be written in the form (2.10) by defining c(η, dγ) analogously to Section 3.1; in particular, the condition π[c^2(η, G)] < +∞ is satisfied. By defining the admissible function r analogously, and following the same arguments as in Section 3.1, it can be shown that Theorem 3.2 holds in this case as well. The condition zε(β) < 1, under which the convex exponential decay of entropy has been established in both the continuous and the discrete setting, is a high temperature/low density condition, i.e. a condition stating that the measure π and the associated dynamics generated by L are small perturbations of a system of independent particles, for which (2.3) holds by standard tensorization properties.
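Reversibility of the discrete dynamics can be checked exactly on a truncated state space. The sketch below does this for two interacting birth and death processes with H(η) = K(η_1 + η_2), K(u) = u^2 (the convex example used later), under our reading of the rates: death rate η(x) and birth rate z e^{−β(H(η+δ_x) − H(η))}. The truncation level N is arbitrary and only limits the check, not the model.

```python
import math
from itertools import product

# Two sites, H(eta) = K(eta_1 + eta_2) with K(u) = u^2; state space truncated
# at N particles per site.
z, beta, N = 1.0, 0.5, 6

def H(eta):
    return (eta[0] + eta[1]) ** 2

def weight(eta):                         # unnormalized pi(eta)
    return (z ** sum(eta)) * math.exp(-beta * H(eta)) / (
        math.factorial(eta[0]) * math.factorial(eta[1]))

balanced = True
for eta in product(range(N), repeat=2):
    for x in (0, 1):
        up = list(eta); up[x] += 1; up = tuple(up)
        birth = z * math.exp(-beta * (H(up) - H(eta)))   # rate eta -> eta + delta_x
        death = up[x]                                    # rate eta + delta_x -> eta
        # Detailed balance: pi(eta) * birth == pi(eta + delta_x) * death.
        if abs(weight(eta) * birth - weight(up) * death) > 1e-12:
            balanced = False
```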
It is interesting to observe that the same technique can be applied to cases which are far from the product case, by requiring some convexity of the Hamiltonian H. This is quite natural in the Γ2-approach (see [1]). However, the non-locality of the generators is a source of serious limitations. The main problem is that inequality (2.30) is only bivariate: rather surprisingly, "natural" multivariate extensions of it are false. This forces us to consider systems of only two interacting birth and death processes.
In the notation of the present section, choose d = 1, L = 2, z = 1 and H(η) = K(η_1 + η_2), with K an increasing convex function (e.g. K(u) = u^2). Notice that under these conditions ∇^+_x H(η) = K(η_1 + η_2 + 1) − K(η_1 + η_2) is nondecreasing in the total number of particles, by convexity of K. As in the proof of Theorem 3.2 it can be shown that (2.6) holds for every f > 0 with log f bounded.

Hardcore models.
Let T be a finite set, let ν : T → (0, +∞) be an intensity function, and let A be a decreasing subset of S := {η : T → N ∪ {0}}. Define the probability measure

π(η) := (1/Z) 1(η ∈ A) Π_{x∈T} (ν(x)^{η(x)} / η(x)!),

where Z is the normalization. We are going to define a Markov chain on S reversible with respect to π. Fix x ∈ T; given any configuration η ∈ S we define η + δ_x and η − δ_x by (η ± δ_x)(y) := η(y) ± 1(x = y). Define also the creation and annihilation maps at x, γ^±_x : A → A, by γ^+_x η := η + δ_x if η + δ_x ∈ A, γ^+_x η := η otherwise, and γ^−_x η := η − δ_x if η(x) ≥ 1, γ^−_x η := η otherwise. Consider now the Markov generator

Lf(η) = Σ_{x∈T} [η(x) ∇^−_x f(η) + ν(x) ∇^+_x f(η)]. (3.12)

It is easy to check that L is self-adjoint in L^2(π), and that it can be written in the form (2.10) with π[c^2(η, G)] < +∞. Now we define the function r, again as a modification of (2.29). It is elementary to check that r is admissible. This allows us to prove the following result.
Proof. Observe that, for f > 0 with log f bounded, following the by now usual steps and using reversibility and symmetrization, we obtain a lower bound whose off-diagonal part is, by (2.33),

(1/2) Σ_{x≠y} ν(x)ν(y) π[1(· + δ_x ∈ A) 1(· + δ_y ∈ A) 1(· + δ_x + δ_y ∈ A)];

using reversibility on the last term, all this sums up to the claimed bound.

In the next examples we give some applications of Theorem 3.4.

Hardcore model on a graph. Let G = (V, E) be a finite graph; take T = V, ν ≡ ρ > 0, and let A := {η ∈ S : η(x) ∈ {0, 1}, η(x)η(y) = 0 if {x, y} ∈ E}, i.e. A is the set of independent sets of G. Define the maximum degree of G as ∆ := max_{x∈V} deg(x) = max_{x∈V} Σ_{y∈V} 1({x, y} ∈ E). We have ε_0 = ∆ρ and ε_1 = ρ. This gives κ ≥ 1 − ρ(∆ − 1) for ρ < 1/∆, i.e. the mixing time does not depend on the size of the graph provided that ρ < 1/∆. The hardcore model has been widely studied in the literature (see [13], Section 22.4, and the discussion therein). The best result on the mixing time for this model on general graphs known to the authors is the fast mixing result for ρ < 2/(∆ − 2) contained in [15,18]. We stress that the model considered in [15,18] is a discrete time Markov chain, which can be compared with our result by using Theorem 20.3 of [13].
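For a concrete instance of the bound κ ≥ 1 − ρ(∆ − 1), the sketch below enumerates the independent sets of the 4-cycle (so ∆ = 2), builds the birth and death generator with ν ≡ ρ, and checks that its spectral gap is at least (1 − ρ(∆ − 1))/2, as it must be since κ ≤ α ≤ 2γ. The graph and the value of ρ are illustrative choices of ours.

```python
import numpy as np
from itertools import combinations

# Hardcore model on the 4-cycle: states are independent sets, birth rate rho
# into allowed configurations, death rate 1 per occupied vertex.
V = [0, 1, 2, 3]
E = {(0, 1), (1, 2), (2, 3), (0, 3)}
rho, Delta = 0.2, 2

def independent(s):
    return all(tuple(sorted(e)) not in E for e in combinations(s, 2))

states = [frozenset(s) for r in range(5) for s in combinations(V, r)
          if independent(s)]
idx = {s: i for i, s in enumerate(states)}
n = len(states)
L = np.zeros((n, n))
for s in states:
    for x in V:
        if x in s:
            L[idx[s], idx[s - {x}]] += 1.0               # death at rate 1
        elif frozenset(s | {x}) in idx:
            L[idx[s], idx[frozenset(s | {x})]] += rho    # allowed birth
L -= np.diag(L.sum(axis=1))

# Reversible measure pi(s) proportional to rho^{|s|} on independent sets.
pi = np.array([rho ** len(s) for s in states])
pi /= pi.sum()
A = np.diag(np.sqrt(pi)) @ L @ np.diag(1 / np.sqrt(pi))
gap = -np.linalg.eigvalsh(A)[-2]
bound = (1 - rho * (Delta - 1)) / 2
```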

Loss Networks.
For a complete introduction to loss networks we refer to [10]; here we give only a brief sketch of the model. Consider a finite, connected (symmetric, simple) graph G = (V, E) and a function C : E → N ∪ {+∞}, called the capacity function. A path in G is a sequence (e_1, …, e_n) of edges in E such that e_i ∩ e_{i+1} ≠ ∅ for i = 1, …, n − 1, and e_i ≠ e_j for i ≠ j. Given a path x = (e_1, …, e_n) and an edge e ∈ E, we say that e belongs to x if e = e_i for some i ∈ {1, …, n}; we write e ∈ x in this case. Let T be a collection of paths in G. A configuration η is an element of S := {η : T → N ∪ {0}}. G should be thought of as the graph of a "telecommunication network" in which the vertices v ∈ V are "callers" and the edges e ∈ E are "links"; T represents the set of possible "routes" which a call can use to connect two callers. For η ∈ S and x ∈ T, η(x) is the number of active calls on route x, so the number of calls using the link e ∈ E is Σ_{x∋e} η(x). We impose that there are at most C(e) calls using the link e by requiring that the set of allowed configurations is the decreasing set

A := {η ∈ S : Σ_{x∋e} η(x) ≤ C(e) for every e ∈ E}.

Now fix an intensity function ν : T → (0, +∞). The generator given by (3.12) is the generator of a Markov chain in which calls arrive independently with intensity ν; if a call violating the constraint defined by A arrives, it is rejected, and every call lasts for an exponential time of mean 1. It should be clear that ε_0 is small if max_{x∈T} ν(x) is small enough (depending on the geometry of G and T and on the function C), so we can get lower bounds on κ by taking the intensities small.

Rods in Z^2. Let V be the square box of side L in Z^2. A horizontal rod of length k is a sequence of k + 1 adjacent vertexes of V in "horizontal" direction, {(u_1, u_2), (u_1 + 1, u_2), …, (u_1 + k, u_2)}. Denote by T^+ the set of horizontal rods of length k. Similarly, a vertical rod of length k is a sequence of k + 1 adjacent vertexes of V in "vertical" direction, {(u_1, u_2), (u_1, u_2 + 1), …, (u_1, u_2 + k)}.
Denote by T^− the set of vertical rods of length k. We set T := T^+ ∪ T^−, A := {η ∈ S : η(x) ∈ {0, 1}, η(x)η(y) = 0 if x ≠ y and x ∩ y ≠ ∅} (i.e. rods cannot touch), and ν ≡ ρ > 0. We then obtain ε_0 = ρ(k^2 + 4k + 1) and ε_1 = ρ, so that if ρ ≤ 1/(k^2 + 4k + 1) then κ ≥ 1 − ρ(k^2 + 4k). We recall that, for k sufficiently large (see [6]), the Gibbs measure π exhibits a phase transition in the limit L → +∞ at some point ρ_c which is expected to be of order 1/k^2. We therefore obtain exponential decay of the entropy for ρ up to order 1/k^2, which has the same order in k as the critical value ρ_c.

Appendix: proof of Proposition 2.1.
(1) If f ∈ D(L), then T_t f is differentiable in the L^2 sense. We also claim that, if f ∈ A_M, then log T_t f is L^1-differentiable, and (d/dt) Ent_π(T_t f) = −E(T_t f, log T_t f); together with (EI) this yields (2.8). We now show the converse implication. Assume (2.8) holds for every measurable f ≥ 0 such that Ent_π(f) < +∞; in particular, it holds for f ∈ A. The functions t ↦ Ent_π(T_t f) and t ↦ e^{−αt} Ent_π(f) are both differentiable and coincide at t = 0. Necessarily,

(d/dt) Ent_π(T_t f)|_{t=0} ≤ (d/dt) e^{−αt} Ent_π(f)|_{t=0},

which gives (EI).
(2) Note that, so far, we have not used all the properties of A_M, but only the facts that f ∈ D(L) and |log f| ≤ M. The other properties are used below to take the second derivative of the entropy. Suppose (2.6) holds for all f ∈ A. The point is to justify the differentiation of t ↦ E(T_t f, log T_t f). Similarly to what we have done in point (1), we can differentiate using the product rule, since:
• t ↦ T_t Lf is L^2-differentiable for f ∈ A, since Lf ∈ D(L), and log T_t f is uniformly bounded;
• log T_t f is L^1-differentiable, and T_t Lf is uniformly bounded.
We obtain

(d/dt) E(T_t f, log T_t f) = −π[LT_t f · L log T_t f] − π[(LT_t f)^2 / T_t f],

which, by (2.6) and Gronwall's Lemma, implies E(T_t f, log T_t f) ≤ e^{−κt} E(f, log f). Conversely, (2.6) follows from E(T_t f, log T_t f) ≤ e^{−κt} E(f, log f) by taking derivatives at t = 0, as in point (1).
(3) By points (1) and (2), (d/dt) Ent_π(T_t f) = −E(T_t f, log T_t f) ≥ −e^{−κt} E(f, log f), which, integrated from 0 to ∞, yields κ Ent_π(f) ≤ E(f, log f) and completes the proof.