Eyring-Kramers law for Fokker-Planck type differential operators

We consider Fokker-Planck type differential operators associated with general Langevin processes admitting a Gibbs stationary distribution. Under assumptions ensuring suitable resolvent estimates, we prove Eyring-Kramers formulas for the bottom of the spectrum of these operators in the low temperature regime. Our approach is based on the construction of sharp Gaussian quasimodes and avoids supersymmetry or PT-symmetry assumptions.

where the matrix field A, the vector field b, and the function c depend smoothly on x ∈ R^d, and where h > 0 is a small parameter. We assume that the matrix field A is pointwise symmetric and positive semidefinite and that the function c is nonnegative. In stochastic analysis, such operators arise naturally as the generators of time-homogeneous Langevin processes, where (B_t) denotes the Brownian motion on R^d, the vector field ξ is the drift coefficient, the matrix field σ is the diffusion coefficient, and the parameter h is proportional to the temperature of the system. Given any test function ϕ, the expectation u(t, x) = E(ϕ(X_t) | X_0 = x) is a solution of the Fokker-Planck equation, with A = (a_{i,j}) = σσ^t. Up to the multiplicative factor h, this operator has the form (1.1) for suitable b and c. Denoting by L† the formal L²(dx) adjoint of L, (1.3) is equivalent to saying that the probability density of the process (X_t) solves the adjoint equation. Among the many examples of such operators, let us mention two cases of particular interest.
Taking ξ = −∇f for some smooth function f on R^d and σ = Id_{R^d}, the generator L of the overdamped Langevin process (1.4) dX_t = −∇f(X_t) dt + √(2h) dB_t reads (1.5) L = L_KS = −h∆ + ∇f · ∇, which is sometimes called the Kramers-Smoluchowski operator. Depending on the field of research, this operator is also known as the weighted Laplacian or the Bakry-Émery Laplacian, and is unitarily equivalent to the Witten Laplacian.
Another famous example is given by the case where σ : R^{2d} → R^{2d} is the projection onto the subspace 0 ⊕ R^d, σ(x, v) = (0, v), and ξ : R^{2d} → R^{2d}, defined by ξ(x, v) = (v, −∇_x V − v), is related to the energy function f(x, v) = ½|v|² + V(x) for a smooth potential V on R^d. The associated Langevin equation reads (1.6) dx_t = v_t dt, dv_t = −(∇V(x_t) + v_t) dt + √(2h) dB_t, where (B_t) is the Brownian motion on R^d. The associated generator is the Kramers-Fokker-Planck operator (1.7) L_KFP = −h∆_v + v·∇_v − v·∇_x + ∇_x V·∇_v, where ∆_v is the Laplace operator in the v variable only.
The study of the operators L_KS and L_KFP has been the subject of many works over the last decades. It is particularly motivated by applications to computational physics. The above processes are indeed ergodic with respect to their Gibbs measure and can thus be used to sample this distribution. We refer to [21] for details on these topics.
From a theoretical point of view, the study of the qualitative properties (well-posedness, asymptotic behavior) as well as of the quantitative properties (precise spectral asymptotics) of the Fokker-Planck equation (1.3) has recently seen major progress under the impulse of microlocal techniques. When the matrix field A is positive definite, the operator P is elliptic and standard tools apply to prove general properties of the operator P (maximal accretivity, compactness of the resolvent). When A is not invertible, the operator P is no longer elliptic (it is sometimes called a degenerate diffusion) and major progress was recently made using hypoelliptic methods in the spirit of Hörmander. For the Kramers-Fokker-Planck operator L_KFP, exponential convergence to equilibrium was proved in [25] and an explicit rate of decay in the non-semiclassical setting was given in [15]. More generally, the hypocoercive methods developed for various kinetic models now provide robust tools to prove return to equilibrium and spectral gap estimates (see [26] for an overview).
In the semiclassical setting h → 0, computing sharp spectral asymptotics for the low spectrum of P is a classical problem with a long history. In the elliptic self-adjoint case, i.e. when A is uniformly positive definite and b = 0, the low-lying eigenvalues of P are localized near the absolute minimum value of the zeroth order part of P, that is the minimum value of the function c. Moreover, the harmonic and WKB approximations of P near the absolute minima of c yield spectral expansions in powers of h of the low-lying eigenvalues of P (see [7, Chapters 3 and 4] for a detailed study in the case of Schrödinger operators).
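The harmonic approximation can be illustrated numerically in dimension one: for the Witten Laplacian −h²∂² + f'²/4 − hf''/2 (unitarily equivalent to h L_KS, as recalled above), the eigenvalues divided by h approach the arithmetic progression 0, f''(m), 2f''(m), ... above a single non-degenerate minimum m. A minimal sketch; the specific well, grid, and tolerances are ad hoc choices.

```python
import numpy as np

def witten_low_spectrum(df, d2f, h, L=2.0, n=2000, k=4):
    """Low eigenvalues, divided by h, of the 1D Witten Laplacian
    W = -h^2 d^2/dx^2 + f'^2/4 - h f''/2 (finite differences, Dirichlet box)."""
    x = np.linspace(-L, L, n)
    dx = x[1] - x[0]
    main = 2 * h**2 / dx**2 + df(x) ** 2 / 4 - h * d2f(x) / 2
    off = -(h**2) / dx**2 * np.ones(n - 1)
    ev = np.linalg.eigvalsh(np.diag(main) + np.diag(off, 1) + np.diag(off, -1))
    return ev[:k] / h

# slightly anharmonic single well f(x) = x^2 + 0.1 x^4, with f''(0) = 2:
# the harmonic approximation predicts eigenvalues close to {0, 2, 4, 6}
ev = witten_low_spectrum(lambda x: 2 * x + 0.4 * x**3,
                         lambda x: 2 + 1.2 * x**2, h=0.02)
print(ev)
```

The O(h) deviations from the exact progression are precisely the subprincipal corrections captured by the expansions in powers of h discussed above.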
However, in certain situations these expansions are identical and thus do not make it possible to distinguish between the low-lying eigenvalues. This is for instance the case for Witten Laplacians associated with a confining Morse function f (in this case, the corresponding function c also depends on h), for which we know from the early works of Witten [27] and Helffer-Sjöstrand [9] that P admits exponentially small eigenvalues (that is, of order O(e^{−C/h}) for some C > 0, and hence indistinguishable on the basis of their expansions in powers of h), in one-to-one correspondence with the minima of f, and that the rest of its spectrum lies above ε*h for some ε* > 0.
Up to a unitary conjugation, the Witten Laplacian is nothing else but the Kramers-Smoluchowski operator (1.5), and its small eigenvalues govern the time of return to equilibrium of the overdamped Langevin process (1.4). The computation of these eigenvalues is a historical problem which goes back at least to Kramers [17]. In the early 2000s, sharp asymptotics of these small eigenvalues were obtained in [4] and [8]. Known as Eyring-Kramers formulas, such spectral asymptotics were also obtained recently in [2,19] in elliptic non-self-adjoint settings, associated with non-reversible overdamped Langevin processes generalizing (1.4). Concerning the transition times of these processes, Eyring-Kramers formulas have been established in the reversible and non-reversible settings in [3] and [18,20], respectively, under similar assumptions. We also refer to [1] for a nice introduction to these topics.
In the non-elliptic case, major progress in the analysis of the operator P was made by Hérau, Hitrik and Sjöstrand in a series of works. In [12], the authors proved resolvent estimates and established the harmonic approximation of the spectrum under dynamical assumptions on the symbol of the operator P. In [13], they applied these results to operators satisfying additional symmetries (supersymmetry and PT-symmetry). Under these assumptions, the operator P admits a natural Gibbs stationary distribution e^{−f/h}, and the authors proved spectral Eyring-Kramers formulas in this setting, where the small eigenvalues govern the time of return to equilibrium of the Langevin process (1.6).
Though they are satisfied by many interesting examples (such as Kramers-Fokker-Planck operators), the assumptions of supersymmetry and PT-symmetry do not seem necessary to prove sharp spectral asymptotics, as shown by the paper [19]. The aim of the present paper is to prove spectral Eyring-Kramers formulas for general operators P satisfying the assumptions of [12] and admitting an explicit Gibbs stationary distribution, but without the additional symmetry assumptions of [13].
1.2. Framework and results. Let P = P(x, h∂_x, h) be a semiclassical second order differential operator on R^d, d ≥ 1, with smooth real coefficients,

(1.8) P = −h div ∘ A(x, h) ∘ h∇ + (1/2)(b(x, h) · h∇ + h div ∘ b(x, h)) + c(x, h),

where the real functions a_{i,j}, b_j, and c depend smoothly on x ∈ R^d, a_{i,j} = a_{j,i} for all i, j = 1, ..., d, and where h ∈ ]0, 1] denotes the semiclassical parameter. We suppose that the coefficients of P satisfy the following growth condition at infinity:

(1.9) ∂_x^α e(x, h) = O(1) for all α ∈ N^d and e ∈ {a_{i,j}, b_j, c},

uniformly with respect to h. We assume moreover that the above coefficients admit classical expansions a_{i,j}(x, h) ∼ ∑_{k∈N} h^k a^k_{i,j}(x), b_j(x, h) ∼ ∑_{k∈N} h^k b^k_j(x), and c(x, h) ∼ ∑_{k∈N} h^k c^k(x), in the sense that

(1.10) ∂_x^α (e(x, h) − ∑_{k=0}^K h^k e^k(x)) = O(h^{K+1}) uniformly,

for all α ∈ N^d, K ∈ N, and e ∈ {a_{i,j}, b_j, c}. This yields classical expansions for the matrix field A(x, h) = (a_{i,j}(x, h))_{i,j} ∼ ∑_k h^k A^k(x) and the vector field b(x, h) ∼ ∑_k h^k b^k(x). Considering symbols which have a classical expansion allows us to deal with, e.g., Witten Laplacians and Kramers-Fokker-Planck operators, which naturally have subprincipal terms. Finally, we also assume a partial positivity of the symbols of the operator P: for all x ∈ R^d and h ∈ ]0, 1],

(1.11) c^0(x) ≥ 0 and A(x, h) = (a_{i,j}(x, h))_{i,j} is positive semidefinite.
Such operators were studied in detail in [12], where the authors establish resolvent estimates together with spectral asymptotics. In particular, the graph closure of the operator P initially defined on the Schwartz space S(R^d), still denoted by P, is maximal accretive. Let us introduce the symbol p(x, ξ, h) of P in the semiclassical Weyl quantization, where, throughout the paper, x·y denotes the usual scalar product of x and y in R^d (in order to ease the reading, we will also sometimes use the notation ⟨x, y⟩ = x·y). It admits a classical expansion p(x, ξ, h) ∼ ∑_k h^k p_k(x, ξ), and the principal symbol p_0 is given by p_0(x, ξ) = p^0_2(x, ξ) + i p^0_1(x, ξ) + p^0_0(x), with p^0_2(x, ξ) = ⟨A^0(x)ξ, ξ⟩, p^0_1(x, ξ) = ⟨b^0(x), ξ⟩, and p^0_0(x) = c^0(x). In order to lighten the notation, we will drop the superscript 0 when it is unambiguous. Consider the symbol

(1.12) p(x, ξ) = p^0_0(x) + p^0_2(x, ξ).

Thanks to (1.11), one has p^0_0, p^0_2 ≥ 0 and hence p ≥ 0. Given T > 0, let us define the symbol ⟨p⟩_T as the average of p along the flow of H_{p^0_1} = ∂_ξ p^0_1 · ∂_x − ∂_x p^0_1 · ∂_ξ, the Hamilton vector field associated with the symbol p^0_1:

⟨p⟩_T = (1/2T) ∫_{−T}^{T} p ∘ exp(tH_{p^0_1}) dt.

The critical set associated with p is defined by C = {ρ ∈ T*R^d ; ⟨p⟩_T(ρ) = 0}. As in [12], we suppose that the set C is finite, C = {ρ_1, ..., ρ_N}, and that for some fixed T > 0 (see (4.21) and (4.23) in [12]) there exists a constant C > 0 such that

(Harmo) ⟨p⟩_T(ρ) ≥ (1/C)|ρ − ρ_j|² for all ρ near any ρ_j,

and, for any neighborhood U of π_x C, one has

(Hypo) ⟨p⟩_T ≥ 1/C outside π_x^{−1}(U).

Assumption (Harmo) may look difficult to check in applications, but Corollary 2.4 and Remark 2.5 provide a concrete criterion to verify it. Observe also that (Hypo) holds true for instance when c^0 is uniformly positive outside each neighborhood of π_x C. Note that it is not necessary to assume (4.22) of [12] here, since this is a consequence of (Harmo) and (Hypo), as explained in [12, page 223].
Under these assumptions, Hérau, Hitrik and Sjöstrand obtained spectral information that we sum up below. For this purpose, we introduce the fundamental matrix F_{p_0} of the symbol p_0 at a critical point ρ ∈ C (see (1.14)) as the linearization of the Hamilton field H_{p_0} at ρ. Its eigenvalues are of the form ±λ_{ρ,ℓ}, 1 ≤ ℓ ≤ d, with Im λ_{ρ,ℓ} > 0. We use the notation tr(p, ρ) = −i ∑_{ℓ=1}^d λ_{ρ,ℓ} + 2c^1(u) for ρ = (u, 0). Combining Proposition 7.2, Theorem 8.3 and Theorem 8.4 of [12], we get

Theorem 1 (Hérau-Hitrik-Sjöstrand). Assume that (Harmo) and (Hypo) hold true. For any B > 0, there exists C > 0 such that, for h small enough, the operator P has no spectrum in {z ∈ C; Re z < Bh and |Im z| > Ch}. Moreover, for any B > 0, there exists α > 0 such that, for h small enough, the spectrum of P in D(0, Bh) is discrete and consists of eigenvalues (with multiplicity) of the form (µ^0_{ρ,k} + O(h^α)) h, where the (µ^0_{ρ,k})_{ρ∈C,k∈N} are all the possible numbers of the form −i ∑_{ℓ=1}^d (n_ℓ + 1/2) λ_{ρ,ℓ} + c^1(x_ρ), with n ∈ N^d. Finally, for every B > 0, there exists C > 0 such that the resolvent of P satisfies the bounds of [12, Theorem 8.4] in this region.

In addition, they showed that the remainder terms O(h^α) have a classical expansion in fractional powers of h. It is assumed in [12] that the coefficients a_{i,j}, b_j, c of the operator P (see (1.8)) do not depend on h, but a direct adaptation to our setting gives Theorem 1. It turns out that in many situations some leading coefficients µ^0_{ρ,k} vanish, and one then aims at a more accurate description of the spectrum. This is the case for instance for Witten Laplacians or Kramers-Fokker-Planck operators, which both admit an invariant distribution. In the present paper, we consider the situation where the operator P satisfies the assumptions of Theorem 1 and there exists a smooth function f : R^d → R such that

(Gibbs) P(e^{−f/h}) = 0 and P†(e^{−f/h}) = 0,

where P† denotes the formal adjoint of P. In particular, e^{−f/h} ∈ D(P) ∩ D(P*). Moreover, we will assume that

(Morse) f is a Morse function with a finite number of critical points.
In the sequel, we denote by U the set of critical points of the Morse function f and by U^{(j)} the set of its critical points of index j = 0, ..., d (that is, the set of its critical points u such that Hess f(u) has signature (d − j, j)). Moreover, we denote by n_0 := ♯U^{(0)} the number of local minima of f and by H(x) := Hess f(x) the Hessian matrix of f at x ∈ R^d, and we call saddle points the elements of U^{(1)}.
For j ∈ {0, 1, 2}, let us denote by P_j the jth order part of the operator P = P_2 + P_1 + P_0, with P_2 = −h div ∘ A ∘ h∇ formally self-adjoint, P_1 = (1/2)(b · h∇ + h div ∘ b) formally anti-adjoint, and P_0 = c formally self-adjoint. It then follows from (Gibbs) that P_1(e^{−f/h}) = 0 and (P_2 + P_0)(e^{−f/h}) = 0. Using the classical expansions of the coefficients and looking at the terms of order 0 in the expansion, we obtain the following eikonal equations: for all x ∈ R^d,

(1.16) ⟨A^0(x)∇f(x), ∇f(x)⟩ = c^0(x),

(1.17) ⟨b^0(x), ∇f(x)⟩ = 0.

The first consequence of these identities is the following lemma (Lemma 1.1, stating in particular that C = U × {0}), whose proof is postponed to the next section. Using this lemma, we obtain our first localization result on the spectrum of P; its proof will also be given in the next section.

Proposition 1.2. Assume the hypotheses of Theorem 1, (Gibbs) and (Morse). There exist h_0, ε* > 0 such that, for every h ∈ ]0, h_0], P has exactly n_0 eigenvalues in {z ∈ C; Re z < ε*h}. Moreover, these eigenvalues are of order O(h^{1+α}), where α > 0 is given by Theorem 1.
The aim of this paper is to give sharp asymptotics of these n_0 small eigenvalues of P. For this purpose, we recall the general labeling of minima introduced in [8] and generalized in [13]. The presentation below originates from [23] and [19], where additional material can be found. The main ingredient is the notion of separating saddle point, which is defined as follows. Note that, for a saddle point s ∈ U^{(1)} of f and r > 0 small enough, the set {x ∈ D(s, r); f(x) < f(s)} has exactly two connected components C_j(s, r), j = 1, 2. Observe that this set is empty when s ∈ U^{(0)} and connected when s ∈ R^d ∖ (U^{(0)} ∪ U^{(1)}). The following definition comes from [13, Definition 4.1].
Definition 1.3. We say that s ∈ U^{(1)} is a separating saddle point of f if, for every r > 0 small enough, C_1(s, r) and C_2(s, r) are contained in two different connected components of {x ∈ R^d; f(x) < f(s)}. We denote by V^{(1)} the set of these points.
We say that σ ∈ R is a separating saddle value of f if it has the form σ = f (s) for some s ∈ V (1) .
We say that E ⊂ R d is a critical component of f if there exists σ ∈ f (V (1) ) such that E is a connected component of {f < σ} and ∂E ∩ V (1) ≠ ∅.
Let us now describe the labeling procedure of [13]. Assume that f(x) → +∞ as |x| → +∞ and that f satisfies (Morse). The set f(V^{(1)}) is then finite. We denote by σ_2 > σ_3 > ⋯ > σ_N its elements and, for convenience, we also introduce a fictitious infinite saddle value σ_1 = +∞. Starting from σ_1, we recursively associate to each σ_i a finite family of local minima (m_{i,j})_j and a finite family of critical components (E_{i,j})_j: ⋆ We let m_{1,1} be any global minimum of f (not necessarily unique) and E_{1,1} = R^d. In the following, we will write m̄ = m_{1,1}.
⋆ Next, we consider the connected components of {f < σ_2}; among these finitely many components, we label E_{2,j}, j = 1, ..., N_2, those which do not contain m̄ and, in each E_{2,j}, we pick a point m_{2,j} which is a global minimum of f on E_{2,j}. ⋆ Suppose now that the families (m_{k,j})_j and (E_{k,j})_j have been constructed up to rank k = i − 1. The set {f < σ_i} has again finitely many connected components, and we label E_{i,j}, j = 1, ..., N_i, those of these components that do not contain any m_{k,ℓ} with k < i. They are all critical and, in each E_{i,j}, we pick a point m_{i,j} which is a global minimum of f on E_{i,j}.
At the end of this procedure, all the minima have been labeled.
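In dimension one every saddle point is separating, and the labeling procedure reduces to a merge of sublevel-set components that can be carried out with a union-find structure. The sketch below (an illustration on a grid, with hypothetical helper names; the dictionary keys assume the minima take distinct values) computes the relative depths f(saddle) − f(m), the quantities denoted S(m) below.

```python
import numpy as np

def relative_depths(fvals):
    """Labeling sketch on a 1D grid: sweep grid points by increasing f and
    merge sublevel-set components with union-find.  When two components merge
    at a separating saddle value sigma, the component whose minimum is higher
    dies, and its minimum m receives S(m) = sigma - f(m); the surviving
    global minimum receives S = +inf.  The dict is keyed by the values f(m)."""
    order = np.argsort(fvals)
    parent, comp_min, S = {}, {}, {}

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in order:
        parent[i], comp_min[i] = i, fvals[i]
        for j in (i - 1, i + 1):            # neighbors already below this level
            if j in parent:
                ri, rj = find(i), find(j)
                if ri == rj:
                    continue
                lo, hi = (ri, rj) if comp_min[ri] <= comp_min[rj] else (rj, ri)
                depth = fvals[i] - comp_min[hi]
                if depth > 1e-12:           # genuine separating saddle
                    S[comp_min[hi]] = depth
                parent[hi] = lo
    S[comp_min[find(order[0])]] = np.inf
    return S

# triple well interpolating the extreme values 0, 1.0, 0.2, 1.5, 0.5
knots = [3.0, 0.0, 1.0, 0.2, 1.5, 0.5, 3.0]
fvals = np.concatenate([np.linspace(a, b, 50, endpoint=False)
                        for a, b in zip(knots[:-1], knots[1:])])
S = relative_depths(fvals)
print(sorted(S.items()))
```

Here the minimum at height 0.2 merges at the saddle value 1.0 (depth 0.8), the one at height 0.5 merges at 1.5 (depth 1.0), and the global minimum survives with infinite depth.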
We now recall some constructions of [23] and [19] that will be useful in the sequel. Throughout, we denote by s_1 a fictitious saddle point such that f(s_1) = σ_1 = +∞ and, for any set A, P(A) denotes the power set of A. From the above labeling, we define two mappings E : U^{(0)} → P(R^d) and j : U^{(0)} → P(V^{(1)} ∪ {s_1}) as follows: for every i ∈ {1, ..., N} and j ∈ {1, ..., N_i}, E(m_{i,j}) = E_{i,j} and j(m_{i,j}) = ∂E_{i,j} ∩ V^{(1)}, with the convention j(m̄) = {s_1}.
We then define the mappings σ(m) = f(j(m)) and S(m) = σ(m) − f(m), where, with a slight abuse of notation, we have identified the set f(j(m)) with its unique element. Note that S(m) = +∞ if and only if m = m̄.
We are now in position to introduce our last assumption. In addition to (Gibbs), (Confin), and (Morse), we assume the following: (Gener) ⋆ for any m ∈ U^{(0)}, m is the unique global minimum of f restricted to E(m). In particular, (Gener) implies that f attains its global minimum uniquely at m̄ ∈ U^{(0)}. This assumption is a generalization of [13, Assumption 5.1], which was already used in [19]. In Section 6, we discuss how to remove this hypothesis and deal with the general case, in the spirit of [23].
In order to state our main result, we need the following lemma, which is proved in Section 2. Throughout the paper, we denote C_± = {z ∈ C; ±Re z > 0} and by M^t the transpose of any matrix M.

Lemma 1.4. Assume (Harmo), (Gibbs), and (Morse), and let k ∈ {0, ..., d}. Let u ∈ U^{(k)} be a critical point of f of index k. Denote B(u) = db^0(u) and recall that H(u) is the Hessian matrix of f at u. Then, i) the matrix Λ(u) := 2H(u)A^0(u) + B^t(u) admits exactly k eigenvalues in C_− and d − k eigenvalues in C_+; ii) if k = 1, its unique eigenvalue µ(u) in C_− is real (and thus µ(u) < 0).
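Lemma 1.4 i) can be checked numerically on random data satisfying the structural constraints: H symmetric and invertible with prescribed index k, A^0 positive definite (so that the kernel condition of Lemma 2.1 ii) holds trivially), and B^t = HJ with J antisymmetric, which encodes the constraint coming from (1.17) (see Lemma 2.3 below). This is only an illustration, not a proof.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 6

def count_stable(k):
    """Random instance of Lemma 1.4 i): H symmetric invertible of index k,
    A0 positive definite, B^t = H J with J antisymmetric; return the number
    of eigenvalues of Lambda = 2 H A0 + B^t with negative real part."""
    Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
    signs = np.r_[-np.ones(k), np.ones(d - k)]
    H = Q @ np.diag(signs * rng.uniform(0.5, 2.0, d)) @ Q.T
    M = rng.standard_normal((d, d))
    A0 = M @ M.T + 0.1 * np.eye(d)                # positive definite
    J = rng.standard_normal((d, d))
    J = J - J.T                                   # antisymmetric
    ev = np.linalg.eigvals(2 * H @ A0 + H @ J)    # Lambda, with B^t = H J
    assert np.min(np.abs(ev.real)) > 1e-8         # no eigenvalue on i R
    return int(np.sum(ev.real < 0))

print([count_stable(k) for k in range(d + 1)])    # -> [0, 1, 2, 3, 4, 5, 6]
```

The count matches the index k for every k, in agreement with the homotopy argument used in the proof of Lemma 1.4.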
We are now in position to state our main result.
Let us make some comments on the above result. In [13], the authors studied the case where the operator satisfies a supersymmetry property. More precisely, they assumed the existence of a smooth matrix-valued function G(x) such that P = ∆_{f,G}, where ∆_{f,G} is built from G and from d_f = e^{−f/h} ∘ (h d) ∘ e^{f/h}, the semiclassical Hodge derivative twisted "à la Witten", for some smooth function f. Under suitable assumptions on f and G, ∆_{f,G} satisfies the general hypotheses of Theorem 1. Moreover, one obviously has ∆_{f,G}(e^{−f/h}) = 0. Assuming additionally that f is a Morse function, the authors proved that the smallest eigenvalues of ∆_{f,G} are exponentially small with respect to h, and established Eyring-Kramers type formulas under suitable topological assumptions (see [13, Theorem 5.10, Proposition 6.7, and Formula (6.71)]). In their paper, the supersymmetry assumption is fundamental since, combined with the PT-symmetry property, it permits to follow the strategy used by Helffer, Klein and Nier [8] in the supersymmetric framework of Witten Laplacians. More recently, the last two authors [19] studied the case of the non-reversible diffusions generated by (1.24) P = ∆_f + b · h∇, where ∆_f = −h²∆ + |∇f|² − h∆f denotes the Witten Laplacian and b is a vector field satisfying div b = 0 and b · ∇f = 0. In this setting, which is not supersymmetric in general, the authors built accurate quasimodes and used them to prove Eyring-Kramers asymptotics.
The interest of the approach developed in the present paper is to deal simultaneously with all these models without assuming additional symmetry. In particular, Theorem 2 applies to both (1.23) and (1.24). Moreover, we give in Appendix B examples of operators P satisfying our assumptions but which do not enjoy a nice supersymmetric structure (1.23). Compared to the results of [13], our approach also has the advantage of giving formulas which are completely explicit in terms of the coefficients of the operator and of the function f. Moreover, compared to the results of [19], we would like to emphasize that we obtain a full asymptotic expansion of the prefactor z(m)a(h). The proof relies on the resolvent estimates of [12] and on the construction of accurate quasimodes near the saddle points of f. These constructions, inspired by [3,6,19], are the main novelty of this paper. To be more precise, we generalize the use of the Gaussian cut-off functions introduced in [19] by means of geometric constructions, which lead to complete asymptotic expansions.
Theorem 2 also allows us to describe the long time behavior of the solutions of the evolution equation associated with P, (1.25) h∂_t u + Pu = 0, u|_{t=0} = u_0.
Since the operator P is maximal accretive, this Cauchy problem has, for every u_0 ∈ L²(R^d), a unique solution u(t) = e^{−tP/h}u_0. We first state the spectral expansion of the propagator.
Corollary 1.5. In the setting of Theorem 2, there exist C, ε > 0 such that, for all u_0 ∈ L²(R^d) and h small enough, there exist (u_{m,n})_{m,n} ⊂ C such that the solution u(t) of (1.25) satisfies the expansion (1.26). Moreover, for all N ∈ N, there exists C_N > 0 such that, for all u_0 ∈ L²(R^d) and h small enough, the solution u(t) of (1.25) satisfies the estimate (1.27). The double sum appearing in (1.26) is nothing but e^{−tP/h}Π_h u_0, where Π_h denotes the spectral projector of P associated with its n_0 exponentially small eigenvalues. In particular, when the λ(m, h) are pairwise distinct, (1.26) reads as a sum of the contributions e^{−tλ(m,h)/h}Π_m u_0, where Π_m is the rank-one spectral projector of P associated with the eigenvalue λ(m, h).
Estimate (1.27) implies that u_{m̄,0} = Π_{m̄}u_0 = ∥e^{−f/h}∥^{−2}⟨e^{−f/h}, u_0⟩ e^{−f/h} and that u_{m̄,n} = 0 for every n ≥ 1 in (1.26), where Π_{m̄} is the (orthogonal) spectral projector of P associated with the eigenvalue 0, with corresponding eigenspace Ce^{−f/h}. Equation (1.27) is a return to equilibrium formula generalizing [19, Theorem 1.11]. We see that the spectral gap (that is, min_{m≠m̄} Re(λ(m, h))) indeed gives the rate of convergence to the equilibrium state modulo O(h^∞). In addition, when there exists precisely one minimum m⋆ ≠ m̄ achieving this gap, the eigenvalue λ(m⋆, h) is simple and real, and we can replace the min by λ(m⋆, h) in (1.27).
One can also show the metastable behavior of the solutions of (1.25). More precisely:

Corollary 1.6 (Metastability). In the setting of Theorem 2, let S_1 < ⋯ < S_K = +∞ denote the increasing sequence of the S(m)'s defined in (1.20), and let Π_{≤k} be the spectral projector of P associated with its eigenvalues of modulus less than e^{−2S_k/h}. For two suitable positive functions g_−(h) and g_+(h), we define the corresponding transition time windows. Then, for every h small enough, the solution u(t) of (1.25) satisfies (1.29) uniformly with respect to t, 1 ≤ k ≤ K, and u_0 ∈ L²(R^d). In other words, e^{−tP/h} ≈ Π_{≤k} in the corresponding time interval, whereas transitions occur around the times t_k = e^{2S_k/h}. In this result, one can take for instance g_−(h) = e^{−δ/h}, with δ > 0 small enough. The projector Π_{≤1} is the spectral projector of P associated with its n_0 exponentially small eigenvalues, and Π_{≤K} = Π_{m̄} is the orthogonal projector on Ce^{−f/h}. The proofs of Corollaries 1.5 and 1.6 are done at the end of Section 5.
We conclude this introduction by applying Theorem 2 to a generalization of the Kramers-Fokker-Planck operator defined in (1.7). For two smooth functions V(x) and W(v) and a friction coefficient γ > 0, the generator associated with the corresponding kinetic SDE is built from a transport term in (x, v) and from γ∆_{W/2,v}, where ∆_{W/2,v} is the Witten Laplacian in the variable v associated with the function W(v)/2; thus, P has the form (1.8), with Gibbs function f(x, v) = (V(x) + W(v))/2. Of course, f satisfies (Morse) if and only if V and W do. In that case, Corollary 2.4 and Remark 2.5 show that (Harmo) is satisfied. Under some additional growth assumptions on V and W at infinity, one can verify that (Confin) and (Hypo) hold true. At a critical point u, the matrix Λ(u) of Lemma 1.4 can be computed explicitly. The setting of (1.7) corresponds to W(v) = |v|²/2. In that case, the saddle points of f are of the form s = (x̄, 0), where x̄ is a saddle point of V, and the unique eigenvalue with negative real part of Λ(s) is µ(s) = (γ − √(γ² − 4λ_1))/2, where λ_1 is the unique negative eigenvalue of Hess V(x̄). This yields an explicit expression of the prefactor z(m) in (1.22), and we observe that it has the same form as the one obtained in equation (6.67) of [13]. Observe also that |µ(s)| < |λ_1| if and only if γ > 1 + λ_1. Thus, the rate of convergence to equilibrium for the Langevin process (1.6) (whose generator is the Kramers-Fokker-Planck operator, see (1.7)) is smaller than for the overdamped Langevin process (1.4) (generated by the Witten Laplacian, see the lines above (1.5)) if and only if this condition holds. Note that this discussion remains valid, more generally, whenever W admits a unique critical point at v = v̄ with Hess W(v̄) = Id. In particular, it is easy to choose W such that P is not PT-symmetric, which provides a setting covered by Theorem 2 but which cannot be treated using [13].
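For the kinetic example with W(v) = |v|²/2, the matrix Λ(s) at a saddle s = (x̄, 0) takes, in the normalization assumed here (a convention derived from the Gibbs function (V + W)/2, not stated explicitly in this excerpt), the block form [[0, −Hess V], [Id, γ Id]]. The sketch below checks Lemma 1.4 ii), the explicit negative root of z² − γz + λ_1 = 0, and the crossover between the kinetic and overdamped rates.

```python
import numpy as np

def kfp_Lambda(hessV, gamma):
    """Assumed normalization of Lambda(s) for the kinetic model with
    W(v) = |v|^2/2: Lambda = [[0, -Hess V], [Id, gamma*Id]]."""
    d = hessV.shape[0]
    return np.block([[np.zeros((d, d)), -hessV],
                     [np.eye(d), gamma * np.eye(d)]])

lam1 = -0.5                                   # negative curvature at the saddle of V
for gamma in (0.3, 2.0):
    ev = np.linalg.eigvals(kfp_Lambda(np.diag([lam1, 2.0]), gamma))
    stable = ev[ev.real < -1e-12]
    mu = stable[0]
    assert stable.size == 1 and abs(mu.imag) < 1e-10    # Lemma 1.4 ii): real, unique
    # mu is the negative root of z^2 - gamma*z + lam1 = 0
    assert np.isclose(mu.real, (gamma - np.sqrt(gamma**2 - 4 * lam1)) / 2)
    # crossover: |mu| < |lam1| iff gamma > 1 + lam1; the two booleans agree
    print(gamma, abs(mu.real) < abs(lam1), gamma > 1 + lam1)
```

With lam1 = −0.5 the threshold is γ = 0.5: the kinetic rate is smaller than the overdamped one for γ = 2.0 but not for γ = 0.3.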
More sophisticated Langevin equations have been considered in the literature, such as the generalized Langevin equation in [24] (see also references therein). Our results permit to compute, in the low temperature regime, the low-lying eigenvalues of the different generators treated in [24].
The rest of this paper is organized as follows. In Section 2, we derive algebraic and geometric consequences of our assumptions. This leads to the proofs of Lemmas 1.1 and 1.4, and of the rough localization of the spectrum of P stated in Proposition 1.2. Section 3 is devoted to the construction of new ansätze for the eigenmodes of P near the saddle points of f. These ansätze are then used in Section 4 to construct global quasimodes. Afterwards, in Section 5, we prove our main results, Theorem 2 and Corollaries 1.5 and 1.6. The aim of Section 6 is then to show that Theorem 2 can be generalized to an arbitrary Morse function f, that is, without assuming (Gener). A vague statement is given in Theorem 3, while precise ones are given in Theorems 5 and 6. Let us also recall here that we do not work with symmetry assumptions such as supersymmetry or PT-symmetry. This prevents us from concluding as in [8] (and in many later works such as, e.g., [13,23]) after reducing the problem to the computation of the eigenvalues of a square matrix of size n_0. To handle this computation, we crucially use the Schur complement method, as in [19], and refine [19, Theorem A.4] in Theorem 4. We believe that Theorem 4 has its own interest and may be used in other contexts. The authors are supported by the ANR project QuAMProcs 19-CE40-0010-01.

Preliminary results
In this section, we prove some preparatory geometric results and use them to show the rough localization of the spectrum of P stated in Proposition 1.2. For u a critical point of f, we use the shortcuts A^0 = A^0(u), B = B(u) = db^0(u), and H = H(u) = Hess f(u); we also denote by Ũ the set of non-degenerate critical points of f.

2.1. Analysis of the critical set. The aim of this section is to prove Lemma 1.1 and to discuss the assumptions of Section 1.2. We start with a microlocal observation.

Lemma 2.1. i) Under (Harmo), one has (2.1): the matrix ∫_{−T}^{T} e^{−tB} A^0 e^{−tB^t} dt is positive definite. ii) When (2.1) holds true, we have the identity ker(A^0) ∩ ker(B^t − z) = {0} for every z ∈ C.
Proof. Since the Hamilton vector field of p^0_1 is H_{p^0_1} = ∂_ξ p^0_1 · ∂_x − ∂_x p^0_1 · ∂_ξ, a direct computation gives (2.2), where we used that (u, 0) ∈ C, and thus c(u) = 0, to obtain the second equality. Since (Harmo) implies ⟨p⟩_T(u, ξ) ≥ (1/C)|ξ|² for some constant C > 0 and every ξ small enough, the first part of Lemma 2.1 is then an immediate consequence of (2.2).
To prove the second part of Lemma 2.1, let η ∈ C^d belong to ker(A^0) ∩ ker(B^t − z) for some z ∈ C. Then e^{−tB^t}η = e^{−tz}η, and thus η ∈ ker ∫_{−T}^{T} e^{−tB} A^0 e^{−tB^t} dt = {0}.
Proof of Lemma 1.1. We deduce Lemma 1.1 from Lemma 2.2. Indeed, the sets Ũ and U coincide when f is a Morse function. Then, Lemma 2.2 i) provides the first statement of Lemma 1.1. On the other hand, (Harmo) and (Gibbs) imply (2.1) by Lemma 2.1 i), and then (2.4). Thus, Lemma 2.2 gives C = U × {0} in that case, finishing the proof of Lemma 1.1.
Lemma 2.3. Assume (Gibbs) and let u ∈ U with b(u) = 0. Then the matrix B^t H is antisymmetric. If moreover the critical point u is non-degenerate (i.e. u ∈ Ũ), the matrix J := H^{−1}B^t is also antisymmetric.
Proof. As before, we assume for simplicity that u = 0. Performing a Taylor expansion of the identity b(x) · ∇f(x) = 0 (see (1.17)), we deduce Bx · Hx = 0 for every x ∈ R^d. Therefore, x · B^t H x = 0 for every x ∈ R^d, which implies that the matrix B^t H is antisymmetric.
On the other hand, since 0 is a non-degenerate critical point, the matrix H is invertible, and J = H^{−1}B^t satisfies J + J^t = H^{−1}(B^t H + HB)H^{−1} = 0, so that J is antisymmetric as well.

2.2. The spectrum of the matrix Λ. The aim of this section is to provide information on the spectrum of the matrix Λ = 2HA^0 + B^t and to prove Lemma 1.4. We start with the following result, where J = H^{−1}B^t is antisymmetric according to Lemma 2.3.
Proof. Suppose that there exists v ∈ C^d such that Λ_r v = zv with Re z = 0. Then, pairing the eigenvalue equation with v, we obtain (2.7). On the other hand, we have (2.8). For r ∈ [0, 1[, (2.7) and (2.8) imply v = 0, and the lemma follows in that case. Assume now that r = 1. The relations (2.7) and (2.8) yield ⟨A^0 v, v⟩ = 0, and then A^0 v = 0 since A^0 is positive semidefinite. Thus, the eigenvalue equation Λv = zv reads B^t v = zv. Hence, v ∈ ker(A^0) ∩ ker(B^t − z), and thus v = 0 according to Lemma 2.1, which completes the proof in the case r = 1.
We now give the proof of Lemma 1.4. Let k ∈ {0, ..., d} and let u ∈ U^{(k)}. For r = 0, Λ_0 = H has exactly k eigenvalues in {Re z < 0} since u is a critical point of index k. Since the eigenvalues of Λ_r are continuous with respect to r and cannot cross the imaginary axis by Lemma 2.6, Λ_1 = Λ has exactly k eigenvalues in {Re z < 0} and no eigenvalue on the imaginary axis. This proves i) of the lemma. Suppose now that k = 1 and let µ(u) denote the unique eigenvalue of Λ in {Re z < 0}. Since Λ is a real matrix, its set of eigenvalues is stable under complex conjugation, and hence µ(u) ∈ R. This proves ii).
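The structural input of this subsection, the antisymmetry of B^t H from Lemma 2.3, can also be illustrated numerically. In the sketch below, f and b are hypothetical choices with b = J∇f for a constant antisymmetric J, so that ⟨b, ∇f⟩ = 0 holds identically as in (1.17), and the Jacobians at the critical point u = 0 are computed by central finite differences.

```python
import numpy as np

d = 3
J = np.array([[0.0, 1.0, -2.0],
              [-1.0, 0.0, 0.5],
              [2.0, -0.5, 0.0]])               # constant antisymmetric matrix
S = np.array([[2.0, 0.3, 0.0],
              [0.3, -1.0, 0.2],
              [0.0, 0.2, 1.5]])                # Hess f(0), non-degenerate

grad_f = lambda x: S @ x + 4.0 * np.dot(x, x) * x   # f = <Sx, x>/2 + |x|^4
b = lambda x: J @ grad_f(x)                         # <b, grad f> = 0 everywhere

eps = 1e-6
E = np.eye(d)
B = np.column_stack([(b(eps * e) - b(-eps * e)) / (2 * eps) for e in E])
H = np.column_stack([(grad_f(eps * e) - grad_f(-eps * e)) / (2 * eps) for e in E])
K = B.T @ H
print(np.max(np.abs(K + K.T)))                 # ~ 0: B^t H is antisymmetric
```

Indeed, B = JH at the critical point, so that B^t H = HJ^t H = −HJH, which is manifestly antisymmetric.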

2.3. Rough localization of the spectrum. In this section, we prove Proposition 1.2. The arguments are close to those of [13, Section 2.2.2]. Thanks to Lemma 1.1, one has C = U × {0}, and hence it suffices to show that, for any ρ = (u, 0) ∈ U × {0}, the numbers µ^0_{ρ,0} of Theorem 1 satisfy the following property:

(2.9) µ^0_{ρ,0} = 0 if u ∈ U^{(0)}, and Re µ^0_{ρ,0} > 0 otherwise,

where we recall that µ^0_{ρ,0} = (1/2) tr(p, ρ) with tr(p, ρ) = −i ∑_{ℓ=1}^d λ_{ρ,ℓ} + 2c^1(u), and ±λ_{ρ,ℓ}, ℓ ∈ {1, ..., d}, denote the eigenvalues of the fundamental matrix F_{p_0} of p_0 at ρ = (u, 0), with the convention Im λ_{ρ,ℓ} > 0 (see the paragraph above Theorem 1). From now on, we take u ∈ U^{(k)}, k ≥ 0, and we suppose again that u = 0 in order to lighten the notation. Thanks to (1.16), one has near (0, 0) an explicit quadratic approximation of the symbol, from which (2.10) follows. Using the identity B^t H = −HB, which follows from Lemma 2.3, a direct computation shows that L_± = {(X, ±iHX); X ∈ C^d} are vector spaces stable under F_{p_0}, and that F_{p_0} restricted to L_± acts as in (2.11). On the other hand, recall that (Gibbs) implies (−h div ∘A ∘ h∇ + c)(e^{−f/h}) = 0 (see the lines above (1.16)). Using the classical expansions of the coefficients and looking at the terms of order 1 then shows (2.12). Combining (2.11) and (2.12) yields the value of µ^0_{ρ,0}. By definition, this latter quantity vanishes when k = 0 and has positive real part when k ≥ 1. This proves (2.9).

Quasimodal constructions near the saddle points
In this part, we assume the hypotheses of Theorem 1, (Gibbs) and (Morse).

3.1. A first step in the construction of quasimodes. Given s ∈ U^{(1)}, we look for an approximate solution to the equation Pu = 0 in a neighborhood V of s of the form u = v e^{−f/h}, with a function v built from a cut-off in a phase function ℓ = ℓ(x, h). Here, ζ denotes a fixed smooth even function equal to 1 on [−1, 1] and supported in [−2, 2], and τ > 0 is a small parameter which will be fixed later. The object of this section is to construct the function ℓ.
Lemma 3.1. One has P(v e^{−f/h}) = (w + r) e^{−f/h}, where the function r and all its derivatives are (locally) bounded, uniformly with respect to h, and supp(r) ⊂ {ℓ ≥ τ}. Here, we recall that all the functions above depend on x and h. Moreover, w admits a classical expansion w ∼ ∑_{j≥1} h^j w_j with explicit coefficients w_j.

Proof. Since P_0 = c is a multiplication operator, [P_0, v] = 0. Using (3.1), one gets a first remainder term r_1; in particular, r_1 and all its derivatives are (locally) bounded, uniformly with respect to h, and supp(r_1) ⊂ {ℓ ≥ τ}. On the other hand, since the matrix A is symmetric, one has, for any smooth function ψ, an integration-by-parts identity; using again (3.1), this yields a second remainder term r_2. In particular, r_2 and all its derivatives are (locally) bounded, uniformly with respect to h, and supp(r_2) ⊂ {ℓ ≥ τ}.
Combining this identity with (3.2) and (3.3), and using the relation P(v e^{−f/h}) = [P, v](e^{−f/h}) implied by (Gibbs), we obtain the first part of the statement. Moreover, since the coefficients of P and the function ℓ have classical expansions, so does w. Plugging the expansions of A, b, c, and ℓ into the expression of w and identifying the powers of h, we obtain the formulas for the w_j.
In order to construct accurate quasimodes, we have to find smooth functions ℓ_j, j ≥ 0, such that the above w_{j+1} vanish. The equation on ℓ_0 is (3.4) and the equations on the ℓ_j, j ≥ 1, are (3.5). By analogy with the usual WKB method, we call (3.4) the eikonal equation and (3.5) the transport equations.
Lemma 3.2. There exist a neighborhood V of s and a smooth function ℓ_{s,0} ∈ C^∞(V) solving the eikonal equation (3.4). Moreover, we have ∇ℓ_{s,0}(s) ≠ 0.
Proof. This lemma relies on an observation of [12, (11.20)], and we follow that approach.
Let Λ_f = {(x, ∇f(x))} ⊂ T^*R^d denote the Lagrangian manifold associated with the phase function f. The eikonal equations (1.16) and (1.17) imply q(x, ∇f(x)) = 0, so Λ_f is stable under the flow of H_q. In particular, its tangent space T_{ρ_s}Λ_f at ρ_s is stable under F_q, the linearization of H_q at ρ_s. Moreover, a direct computation and (2.10) show the displayed identity. In particular, the discussion below (2.10) implies that F_q has no eigenvalue on the imaginary axis. Let k_± be the number of eigenvalues of F_q restricted to T_{ρ_s}Λ_f with positive/negative real part. Then, we have k_+ + k_− = d. Let K_± be the stable outgoing/incoming submanifold of Λ_f given by the Hamiltonian vector field H_q restricted to Λ_f. Then K_± has dimension k_± and K_± projects nicely onto the x-space. Using that ∇φ_± = ∇f on π_x(K_±), we get the corresponding relation. Since s is a saddle point of f, its Hessian has signature (d − 1, 1). Thus, (3.7) and (3.9) imply that k_+ = d − 1 and k_− = 1. Finally, using again (3.9), we have the corresponding identity on T_s π_x(K_−). Thus, in a neighborhood of s, g := φ_+ − f + f(s) is a nonnegative function which vanishes at order 2 on π_x(K_+).
Since the quantity under the square root is positive when evaluated at z = 0, ℓ_{s,0} is a smooth function in a vicinity of s and ∇ℓ_{s,0}(s) ≠ 0.

Lemma 3.4.
There exist an open neighbourhood V of s and smooth functions ℓ_{s,j} ∈ C^∞(V) such that, for all j ≥ 1, ℓ_{s,j} solves (3.5).
Proof. Since, for all j ≥ 1, R_j only depends on the ℓ_{s,k} with 0 ≤ k ≤ j − 1, we can solve the equations (3.5) by induction. It thus suffices to show that there exists an open neighbourhood V of s such that, for any smooth function f, there exists u ∈ C^∞(V) satisfying (3.16) Lu = f, where L is the transport operator defined by (3.17). Assume for simplicity that s = 0. We first look for a formal solution in powers of x. Given m ∈ N, we denote by P^m_hom the set of homogeneous polynomials of degree m and we write f(x) ≃ ∑_{m∈N} f_m with f_m ∈ P^m_hom. We recall that ∇ℓ_{s,0}(s) = η(s) is an eigenvector of Λ(s) = 2H(s)A_0(s) + B^t(s) associated with its sole negative eigenvalue µ(s) (see Lemma 1.4). Then, L decomposes as displayed, where L_>(p) = O(x^{m+1}) for all p ∈ P^m_hom and L_0 : P^m_hom → P^m_hom is given by the displayed formula, which we can rewrite, since µ(s) = −A_0(s)η(s)·η(s) by Lemma 3.3, using the projector Π_η y := ⟨y, η(s)⟩η(s). We shall prove that L_0 is invertible on P^m_hom for any m ≥ 0. Let us denote Υ = 2A_0(s)H(s) + B(s). Choosing vectors e_2, …, e_d such that B = (η(s), e_2, …, e_d) is a basis of C^d and the matrix M of Λ(s) in B is upper triangular, it follows that the matrix M′ of Υ^t in the basis B is also upper triangular, with the same diagonal entries as M, except that the first diagonal entry µ(s) is replaced by −µ(s) (actually, only the first lines of M and M′ differ). Since µ(s) is the only eigenvalue of Λ(s) with nonpositive real part, the spectrum of Υ^t is contained in {Re z > 0}. Thanks to Lemma A.1 in the appendix, this implies that the spectrum of Υx·∇ : P^m_hom → P^m_hom is contained in {Re z > 0} for every m > 0, and hence L_0 = Υx·∇ − µ(s) is invertible on P^m_hom for every m ≥ 0 (note that L_0 = −µ(s) on P^0_hom). Using this property, we can solve the transport equation Lu = f following the method of [7, Chapter 3]. We briefly recall the successive steps. We first find a formal solution ũ to the equation (3.19), where f̃ denotes the formal power expansion of f: f̃ ≃ ∑_k f̃_k with f̃_k ∈ P^k_hom. We look for ũ of the form ũ ≃ ∑_k ũ_k with ũ_k ∈ P^k_hom. Since L_0 is invertible, there exists ũ_0 ∈ P^0_hom solving L_0 ũ_0 = f̃_0. Then we choose ũ_1 ∈ P^1_hom solving L_0 ũ_1 = f̃_1 + r_1, where r_1 denotes the homogeneous part of degree 1 of −L_>(ũ_0). Iterating this procedure, we obtain a formal solution to (3.19). Using this formal solution and a Borel procedure, we construct a smooth function u such that u and ũ have the same Taylor expansion at the origin. As a consequence, Lu = f + O(x^∞). The last step consists in showing that, for every g = O(x^∞), there exists a solution v = O(x^∞) to Lv = g. This can be done by using the method of characteristics and the fact that the spectrum of Υ is contained in {Re z > 0}; we refer to [7, proof of Proposition 3.5] for details. Then, taking u_∞ a solution of this last equation, [7, Proposition 3.5] shows that the neighborhood V of s where u is defined can be chosen independently of f.
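The key spectral fact used above — that the operator p ↦ (Υx)·∇p acts on P^m_hom with spectrum given by sums of m eigenvalues of Υ (Lemma A.1) — can be verified numerically in dimension 2. The matrix Y below is a made-up example with eigenvalues 1 and 2; it is not the Υ of the proof.

```python
import numpy as np
import sympy as sp

x, y = sp.symbols('x y')

def transport_matrix(Y, m):
    # Matrix of the operator p |-> (Y x)·grad(p) in the monomial basis of P^m_hom (2 variables).
    basis = [x**i * y**(m - i) for i in range(m + 1)]
    cols = []
    for p in basis:
        Lp = sp.expand((Y[0][0]*x + Y[0][1]*y) * sp.diff(p, x)
                       + (Y[1][0]*x + Y[1][1]*y) * sp.diff(p, y))
        poly = sp.Poly(Lp, x, y)
        cols.append([poly.coeff_monomial(b) for b in basis])
    return np.array(cols, dtype=float).T

Y = [[1.0, 1.0], [0.0, 2.0]]          # toy matrix with eigenvalues 1 and 2, both with Re > 0
m = 3
eig = np.sort(np.linalg.eigvals(transport_matrix(Y, m)).real)
expected = np.sort([1.0*i + 2.0*(m - i) for i in range(m + 1)])  # sums m1*λ1 + m2*λ2
assert np.allclose(eig, expected)     # spectrum on P^3_hom is {3, 4, 5, 6}
```

Since every such sum has positive real part when the λ_i do, subtracting the negative number µ(s) keeps the operator invertible, which is exactly the invertibility of L_0 used in the proof.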
Using the preceding result and a Borel procedure, we get the following.
Proposition 3.5. For any s ∈ U^(1), there exists a smooth function x ↦ ℓ_s(x, h), defined in a neighborhood V_s of s, such that the properties i)–iii) below hold true. Note that the function x ↦ −ℓ_s(x, h) also satisfies Proposition 3.5. More precisely, i) and ii) hold true without modification, while η(s) has to be replaced by −η(s) = −∇ℓ_{s,0}(s) in iii). At this point, we do not specify which function (ℓ_s or −ℓ_s) will be used later.

Global construction of quasimodes
In this section, which is an adaptation of [19, Section 4A], we assume the hypotheses of Theorem 1, (Gibbs), (Confin), and (Morse). We refer the reader back to the notation following Definition 1.3 and introduce some additional topological objects. Given m ∈ U^(0) ∖ {m̄}, one has σ(m) = σ_i for some i ≥ 2. Moreover, since σ_{i−1} > σ_i, there exists a unique connected component of {f < σ_{i−1}} that contains m (observe that this component is not necessarily critical). We denote that component by E_−(m). Let us now consider some arbitrary m ∈ U^(0) ∖ {m̄}. For every s ∈ j(m), one has f(s) = σ(m). For any τ, δ > 0, we define the sets B_{s,τ,δ} and C_{s,τ,δ} by

and (4.3)
C_{s,τ,δ} is the connected component of B_{s,τ,δ} containing s, where η(s) has been defined in Lemma 3.3. We recall that η(s) is an eigenvector of the matrix Λ(s) = 2H(s)A_0(s) + B^t(s) associated with its only negative eigenvalue µ(s), which has multiplicity one (see Lemma 1.4). Moreover, one has from (3.10) the normalization condition A_0(s)η(s)·η(s) = −µ(s). Observe that this normalization condition is not the same as in [19], where ‖η(s)‖ = 1 is imposed. Let us also define, where E_−(m) is given by (4.1),
According to the geometry of the Morse function f around ∂E(m) and to the lemmas of Section 3.2, we have the following result.
Proof of Lemma 4.1. Without loss of generality, we can assume that s = 0 and f(s) = 0. Thanks to Lemma 3.2 and Proposition 3.5, one has the corresponding expansion near s = 0. This implies the stated estimate for all x in the region determined by η(s). Since Hess φ_+(s) is positive definite by (3.7), the conclusion follows.

Consider now a smooth function θ
Definition 4.3. For τ_0 > 0 and then δ_0, h > 0 small enough, let us define the functions displayed above. We then define, for any m ∈ U^(0), the quasimode ϕ_{m,h} accordingly. Note that, for every τ_0, δ_0, h > 0 such that the above definition makes sense, P ϕ_{m̄,h} = 0 and, for every m ∈ U^(0) ∖ {m̄}, the quasimodes ψ_{m,h} and ϕ_{m,h} belong to C^∞_c(R^d; R_+) with supports included in E_−(m) ∩ {f < σ(m) + 2δ_0}. More precisely, we have the following lemma, resulting from the previous construction. Lemma 4.4. For every m ∈ U^(0) and ε > 0, there exist τ_0 > 0 and then δ_0 > 0 small enough such that, for every h > 0 small enough, the following hold. ii) When m ≠ m̄, there exists a neighborhood V_{τ_0,δ_0} of E(m) satisfying the stated inclusion involving the sets C_{s,3τ_0,3δ_0}.

Proof of the main results
We will use the following notation, here and in the sequel. For two families of numbers a = (a_h)_{h∈]0,h_0]} and b = (b_h)_{h∈]0,h_0]}, we say that a ∈ E_cl(b) if there exists a family (c_h)_{h∈]0,h_0]} admitting a classical expansion such that, for every h ∈ ]0, h_0], a_h = c_h b_h. We also write D_u = |det Hess f(u)|^{1/2} for any u ∈ U.

Computation of interaction coefficients.
Proposition 5.1. Under the assumptions of Theorem 2, but with only the first part of (Gener), we have the following estimates for τ_0 > 0 and then δ_0 > 0 small enough: there exists C > 0 such that, for every m, m′ ∈ U^(0) and h > 0 small enough, the estimates i)–iv) below hold. The results of this proposition are very close to those of [19, Propositions 4.4 to 4.6]. The difference is that we have a classical expansion in ii) and that the multiplicative error in iii) is of order O(h^∞) instead of O(h²). This is due to the fact that our quasimodal constructions are sharper. Remark 5.2. When, in comparison with the assumptions of Proposition 5.1, the first part of (Gener) is not satisfied either, items iii) and iv) of Proposition 5.1 still hold, while item ii) has to be modified. However, the quasi-orthonormality of the family (ϕ_{m,h})_{m∈U^(0)} stated in i) is no longer satisfied when the first part of (Gener) fails. Working with arguments similar to those of the proof below, one can then show that there exists c ∈ ]0, 1[ such that ⟨ϕ_{m,h}, ϕ_{m′,h}⟩ ∼ c as h → 0^+.
Proof. The proof of Proposition 5.1 follows very closely the proofs of Propositions 4.4 to 4.6 in [19]. We only sketch it briefly and drop the index h in order to lighten the notation.
First, we recall that, for any m ∈ U^(0), ϕ_m = ψ_m/‖ψ_m‖ with ψ_m given by (4.9). Moreover, according to the first part of (Gener), f attains its absolute minimum on E(m) uniquely at m, and hence also on supp(ψ_m) for every τ_0, δ_0 > 0 small enough (see i) in Lemma 4.4). By a standard Laplace method, we then easily get, for any m ∈ U^(0), the asymptotics (5.1). Let us now prove i). First, by definition, we have ⟨ϕ_m, ϕ_m⟩ = 1 for every m ∈ U^(0). Moreover, for every m ≠ m′ ∈ U^(0), we are in one of the three following cases.
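The Laplace asymptotics behind (5.1) can be tested numerically in dimension 1: for a potential with a nondegenerate minimum at m, one has ∫ e^{−2(f−f(m))/h} dx ∼ (πh)^{1/2}/|det Hess f(m)|^{1/2} as h → 0^+. The potential below is a made-up example with Hess f(0) = 1, not one of the paper's potentials.

```python
import numpy as np

f = lambda x: 0.5*x**2 + 0.25*x**4           # toy potential, nondegenerate minimum at x = 0
x = np.linspace(-1.0, 1.0, 400001)           # the tails beyond |x|=1 are exponentially negligible
dx = x[1] - x[0]

for h in (1e-1, 1e-2, 1e-3):
    val = np.sum(np.exp(-2.0*f(x)/h)) * dx   # Riemann sum for the Gaussian-type integral
    laplace = np.sqrt(np.pi*h)               # (πh)^{1/2}/sqrt(Hess f(0)), here Hess f(0) = 1
    print(f"h={h:g}  ratio={val/laplace:.6f}")   # ratio -> 1 as h -> 0 (first correction -3h/8 for this f)
```

The printed ratios approach 1 at rate O(h), which is the accuracy the classical expansions in Proposition 5.1 refine to all orders.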
Let us now prove iii). Since P(e^{−f/h}) = 0, one has the displayed identity and, since f − f(m) > S(m) + δ_0 on supp(∇θ_m) and on supp(1 − θ_m) ∩ supp θ_m, this implies the first estimate. On the other hand, on supp θ_m, P(v_m e^{−(f−f(m))/h}) is supported in ⋃_{s∈j(m)} C_{s,3τ_0,3δ_0} by (4.5) and, on any C_{s,3τ_0,3δ_0}, one has (see (4.6)) the expression in terms of the functions w and r given by Lemma 3.1. Since the phase appearing in (5.3) is positive definite at s and r is supported away from s, one has, for some δ > 0, the corresponding bound, where we used (5.4) to obtain the last equality. Moreover, thanks to Proposition 3.5 (and to Lemma 3.1), one also has the companion estimate. These two estimates and (5.5) show the claim. The proof of iv) is similar and left to the reader.
Then, (1.15) and (5.16) give the displayed bound. Since this operator-valued function is holomorphic in {Re z < 2ε_*h/3}, the maximum principle yields (5.17). The solution of (1.25) can be written as in (5.18). Let Q be the operator P restricted to the Hilbert space Im(1 − Π_h). Since P is maximal accretive, so is Q, and thus ‖e^{−tQ/h}‖ ≤ 1. Moreover, (5.17) shows that ‖(Q − z)^{−1}‖ ≤ Ch^{−1} for Re z < 2ε_*h/3. To estimate the last term in (5.18), we use a Gearhart–Prüss type inequality with an explicit bound. More precisely, [11, Theorem 1.4] (see [10, Proposition 2.1] for more details) gives, for some C > 0 and all t ≥ 0, the displayed estimate. Combined with (5.16), this implies the existence of C > 0 such that, for all t ≥ 0, the bound (5.19) holds, where ε = ε_*/2. On the other hand, P restricted to Im Π_h is a matrix of size n_0 whose eigenvalues are the λ(m, h)'s. Then, (5.18), (5.19), and the usual formula for the exponential of a matrix applied to e^{−tP/h}Π_h u_0 provide (1.26). Moreover, using (5.19) instead of the argument of [19, page 43], the proof of (1.27) is similar to that of [19, Theorem 1.11], except that we have here to apply Theorem 4 instead of [19, Theorem A.4]; we omit the details. Summing up, we have just shown Corollary 1.5.
Let us now prove Corollary 1.6. For R > 1, we define the corresponding balls, for k ∈ {1, …, K − 1} and for k = K. For R fixed large enough and every h small enough, each exponentially small eigenvalue of P belongs to exactly one of the disjoint sets D_k from Theorem 2. Moreover, ∂D_k is at distance of order h e^{−2S_k/h} (resp. h e^{−2S_{K−1}/h}) from the spectrum of P for k ∈ {1, …, K − 1} (resp. k = K). Using (6.19) to estimate the resolvent of P on the image of Π_h and (5.17) to control the contribution on the image of 1 − Π_h, we get the displayed resolvent bound. In particular, the spectral projector Π̃_k associated with the eigenvalues of order h e^{−2S_k/h} is well-defined and satisfies ‖Π̃_k‖ ≤ C. We can now decompose the semigroup accordingly. For k ∈ {1, …, K − 1} and 0 ≤ t ≤ t_k^−, we use (5.20) and (5.23); on the contrary, for t ≥ t_k^+, we use (5.20) and (5.24). Lastly, e^{−tP/h}Π̃_K = Π̃_K since Π̃_K is the rank-one spectral projector associated with the eigenvalue 0. On the other hand, since e^{−εt_0^+} = O(h^∞), (5.25) holds for t ≥ t_0^+. Summing up, Corollary 1.6 is a direct consequence of the formulas (5.18) and (5.22), the relation Π̃_{≤k} = ∑_{k≤j≤K} Π̃_j, and the estimates (5.23), (5.24), and (5.25).
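The time-scale separation expressed by Corollary 1.6 can be visualised on a caricature: a diagonal generator with eigenvalues 0, e^{−2S_1/h}, e^{−2S_2/h}, S_1 > S_2 (all values below are illustrative placeholders, not the paper's operator). Each exponentially small eigenvalue produces its own metastable plateau of the semigroup.

```python
import numpy as np

# Toy model of the metastable hierarchy: diagonal generator, illustrative values only.
h, S1, S2 = 0.1, 1.0, 0.4
lam = np.array([0.0, np.exp(-2*S1/h), np.exp(-2*S2/h)])   # 0, e^{-20}, e^{-8}
u0 = np.array([0.3, 0.3, 0.4])

def u(t):
    # e^{-t diag(lam)} u0, the analogue of e^{-tP/h} u0 on the metastable subspace
    return np.exp(-t*lam) * u0

t_mid = np.exp(2*S2/h) * 50    # the e^{-2S2/h}-mode has decayed, the e^{-2S1/h}-mode is frozen
t_long = np.exp(2*S1/h) * 50   # only the kernel component survives
assert abs(u(t_mid)[2]) < 1e-6 and abs(u(t_mid)[1] - 0.3) < 1e-3
assert np.allclose(u(t_long), [0.3, 0.0, 0.0], atol=1e-6)
```

Between the two critical times the solution is essentially constant, which is the plateau behaviour that the projectors Π̃_k isolate.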

Generalization
In this part, we briefly explain how one can drop the assumption (Gener) and treat the general case in the spirit of [23]. This requires introducing some additional material from [23]. Definition 6.1. Let m ∈ U^(0) ∖ {m̄}. We say that m is of type I when f(m̂) < f(m), and that m is of type II when f(m̂) = f(m). We will also denote U^(0),I = {m ∈ U^(0) ∖ {m̄}; m is of type I} and U^(0),II = {m ∈ U^(0) ∖ {m̄}; m is of type II}.
We clearly have the following disjoint union. Definition 6.2. We define an equivalence relation R on U^(0) by mRm′ if and only if the displayed condition holds. We denote by Cl(m) the equivalence class of m for the relation R. Observe that Cl(m̄) = {m̄} since m̄ is the only m ∈ U^(0) such that σ(m) = +∞. Let A denote the (finite) set of equivalence classes for R and, for α ∈ A, let U^(0)_α be the set of elements of the class α. We then clearly have the corresponding partition. In the theorem below, we summarize, in a rather loose way, the description of the small eigenvalues of P. Precise statements are given in Theorems 5 and 6.
Theorem 3. Suppose that the assumptions of Theorem 1 are satisfied and that (Gibbs), (Confin), and (Morse) hold true. Let ε_* be given by Proposition 1.2. Then, for h > 0 small enough, there exists a bijection, taking multiplicities into account, as displayed, for some symmetric positive definite matrices M_{α,j} admitting a classical expansion in powers of h with explicit invertible leading term, and for some labeling (S_j)_{j∈{1,…,p}} of S(U^(0) ∖ {m̄}).
Here, the set h e^{−2S_j/h} σ(M_{α,j}) is empty whenever S_j ∉ S(U^(0)_α). The general strategy to prove Theorem 3 (leading to the explicit expression of the matrices M_{α,j}) is to combine the quasimodal constructions near the saddle points developed in the preceding section with the topological classification of the saddle points introduced in [23]. In the latter work, the construction of the quasimode ϕ_m depends on whether m is of type I or II. In order to lighten the presentation, we assume from now on that every point m is of type I and will prove Theorem 3 under this assumption, i.e. when U^(0),II = ∅. In that case, the leading term of M_{α,j} is given in Theorem 6 below, which makes the statement of Theorem 3 explicit when U^(0),II = ∅. The reader may check that the construction below can be adapted to the case of type II points as in [23]. Remark 6.3. Note that U^(0),II = ∅ if and only if f(m̂) < f(m) for every m ∈ U^(0) ∖ {m̄} (see Definition 6.1). It follows that U^(0),II = ∅ if and only if the first part of (Gener) is satisfied. Indeed, if m is the unique global minimum of f restricted to E(m) for every m ∈ U^(0), then f(m̂) < f(m) for every m ∈ U^(0) ∖ {m̄}. In particular, the statement of Proposition 5.1 is valid when U^(0),II = ∅. This will be used in the sequel.
6.1. Quasimodal constructions for type I minima. Let (ψ_m)_{m∈U^(0)} denote the family of quasimodes of Definition 4.3. We recall that, when m ≠ m̄, ψ_m has the form recalled above, with θ_m and v_m defined by (4.5), (4.6), and (4.8) (here and in what follows, we drop the subscript h to lighten the notation). In particular, near any point s ∈ j(m), v_m is given by a Gaussian integral of the profile e^{−r²/2h}, where ℓ_s is the function defined by Proposition 3.5 for which there exists a neighbourhood V of s such that E(m) ∩ V ⊂ {η(s)·(x − s) > 0} (see the lines below (4.6)). This choice of sign depends of course on m and, in order to avoid any confusion, we shall denote by ℓ_{s,m} the function ℓ_s above.
Proof. Note that the function ℓ_{s,n} only makes sense when n ∈ {m, m′}. We assume that ℓ_{s,m} is given by Proposition 3.5 with the sign condition for m below (4.6). As explained at the end of Section 3.3, −ℓ_{s,m} also satisfies Proposition 3.5, with the opposite sign condition. Thus, this function satisfies the sign condition for m′ and can be chosen as the function ℓ_{s,m′}. Proposition 6.5. Assume that the hypotheses of Theorem 3 hold true and that U^(0),II = ∅.
ii) When U^(0),II ≠ ∅, the family of quasimodes (ϕ_{m,h})_{m∈U^(0)} is not quasi-orthonormal and thus does not satisfy i) of Proposition 5.1 (see Remarks 5.2 and 6.3).
Let us now prove (6.2), and take m, m′ ∈ U^(0). When m′ = m, this is exactly i) of Proposition 5.1, and we can hence assume that m′ ≠ m.
Introduce also the matrix Υ^♯ = (υ^♯_{i,j})_{i,j=1,…,n_0−1} defined by (6.11) υ^♯_{i,j} = ⟨P^♯ϕ_{m_i}, ϕ_{m_j}⟩ = ⟨(P_2 + P_0)ϕ_{m_i}, ϕ_{m_j}⟩. Note that, in these definitions, we do not consider the contribution of the vectors e_{n_0} and ϕ_{m_{n_0}}, which are collinear to e^{−f/h} and hence belong to the kernels of P, P*, and P^♯. If we had added these latter vectors in the definitions of Υ and of Υ^♯, the last line and column of these matrices would have consisted of zeros. In particular, σ(P restricted to Ran Π_h) = σ(Υ) ∪ {0}.
In order to compute the spectrum of the matrix Υ, and then the spectrum of P restricted to Ran Π_h, we recall some tools from [23, Section 5B]. In the sequel, we denote by S_+(E) the set of symmetric positive definite matrices on a vector space E, and by S_+^cl(E) the set of h-dependent matrices M(h) ∈ S_+(E) admitting a classical expansion M(h) ∼ ∑_j h^j M_j with M_0 ∈ S_+(E). We will sometimes omit E and simply write S_+, S_+^cl.
It remains to show that h^{−1}e^{2S_1/h}Υ^♯ ∈ GS_cl(E, τ). Indeed, the fact that h^{−1}e^{2S_1/h}Υ ∈ GAS_cl(E, τ) will then follow from (6.14). Using (6.2), (6.3), and the fact that P^♯ is symmetric, we deduce that h^{−1}e^{2S_1/h}Υ^♯ is a graded matrix, say h^{−1}e^{2S_1/h}Υ^♯ = Ω(τ)M_h Ω(τ), where M_h is a symmetric matrix having a classical expansion with leading term M_0. In order to show that M_0 is positive definite, it suffices to show that it has the form M_0 = L*L, where L is an injective matrix from R^{n_0−1} to R^{♯V^(1)}. To this end, let us define, for every s_k ∈ V^(1), G(s_k) := {m ∈ U^(0) ∖ {m̄}; s_k ∈ j(m)}. For any s_k ∈ V^(1), the set G(s_k) is non-empty. Moreover, from the structure of a Morse function near a (separating) saddle point, it has at most two elements, and it has exactly one element m if and only if the stated condition holds. If there are two minima m_i ≠ m_j ∈ U^(0) ∖ {m̄} in G(s_k), we define the corresponding coefficients. We do not specify which index (i or j) receives a "−", since this choice is irrelevant in the sequel. The other coefficients of L are set to zero. A direct computation gives M_0 = L*L. Moreover, it follows from [23, Lemma 5.1] that L is injective. Let us briefly recall the argument of [23, Lemma 5.1]. Let X = (X_m)_{m∈U^(0)∖{m̄}} ∈ R^{n_0−1} be such that LX = 0. For any s_k ∈ V^(1) such that G(s_k) contains a unique element m(s_k) ∈ U^(0) ∖ {m̄}, it then holds that X_{m(s_k)} = 0. It follows from the structure of L that X_m = 0 for every m ∈ Cl(m(s_k)). But, for every m ∈ U^(0) ∖ {m̄}, there exist s_k ∈ V^(1) and m(s_k) ∈ Cl(m) such that G(s_k) = {m(s_k)}. It thus holds that X_m = 0 for every m ∈ U^(0) ∖ {m̄}, and L is injective. Summing up, M_0 is positive definite and thus h^{−1}e^{2S_1/h}Υ^♯ ∈ GS_cl(E, τ), which concludes the proof of Proposition 6.8.
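The linear-algebra step above (M_0 = L*L is positive definite as soon as L is injective) can be illustrated on a made-up toy configuration with three saddle points (rows) and three local minima other than the global one (columns); the weights ν are placeholders for the actual coefficients of the construction.

```python
import numpy as np

# Hypothetical arrangement: saddle s1 touches only m1, s2 connects m1 and m2,
# s3 connects m2 and m3 (placeholder weights nu(s_k) > 0).
nu = np.array([0.7, 1.3, 0.9])
L = np.array([[nu[0],    0.0,     0.0],
              [nu[1], -nu[1],     0.0],
              [0.0,    nu[2], -nu[2]]])

M0 = L.T @ L                              # the candidate leading term
assert np.allclose(M0, M0.T)              # symmetric by construction
assert np.linalg.eigvalsh(M0).min() > 0   # L injective  =>  M0 positive definite
```

The injectivity argument mirrors the proof: the row of s1 forces X_{m1} = 0, and the zero then propagates through the shared saddles s2 and s3 to X_{m2} and X_{m3}.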
6.3. The spectrum of GAS matrices. The results stated here are variants of those of [23, Section 5] and of [19, Appendix]. For p ∈ N*, a finite dimensional vector space E = E_1 ⊕ ⋯ ⊕ E_p, and j ∈ {1, …, p}, let us write a general matrix M ∈ M(E) by blocks. When the corresponding block is invertible, we define the Schur matrix R_j(M) of M with respect to the vector space E_j ⊕ ⋯ ⊕ E_p, where by convention R_1(M) = M. By the Schur complement method, M is invertible if and only if R_j(M) is invertible. Moreover, the map R_j sends S_+(E) into S_+(E_j ⊕ ⋯ ⊕ E_p) and S_+^cl(E) into S_+^cl(E_j ⊕ ⋯ ⊕ E_p). We will also denote by J : M(⊕_{k=j,…,p} E_k) → M(E_j) the restriction map to the first vector space E_j of ⊕_{k=j,…,p}E_k. More precisely, with the notations of (6.16), we will write J(M) = A when j = 1. Of course, the map J depends on j ∈ {1, …, p}, but we omit this dependence since the set on which J acts will be obvious in the sequel.
Let E be a finite dimensional vector space and M_h ∈ S_+^cl(E). By a standard result of perturbation theory for symmetric matrices (see [16, Theorem 6.1 in Chapter II]), the eigenvalues of M_h, after an appropriate labeling that we assume in the sequel, have asymptotic expansions in powers of h with non-zero leading terms. This justifies the following definition. Definition 6.9. For M_h ∈ S_+^cl(E), we denote by σ_≡(M_h) the set of asymptotic expansions (that is, formal power series in h) of the eigenvalues of M_h. Moreover, m_≡(Λ, M_h) is defined as the multiplicity of Λ ∈ σ_≡(M_h).
By an abuse of notation, if λ ∈ σ(M_h) has an asymptotic expansion Λ ∈ σ_≡(M_h), we will sometimes write m_≡(λ, M_h) instead of m_≡(Λ, M_h). Roughly speaking, m_≡(λ, M_h) can be seen as the multiplicity modulo O(h^∞) of the eigenvalue λ ∈ σ(M_h). Note that if λ, ν ∈ σ(M_h) do not have the same asymptotic expansion, there exists K_0 > 0 such that |λ − ν| ≥ h^{K_0} for h > 0 small enough. Theorem 4. Let M_h = Ω(τ)M̃_h Ω(τ) ∈ GAS_cl(E, τ) and assume that τ_j = τ_j(h) = O(h^∞) for every j = 2, …, p. Then, we have (6.17) for h > 0 small enough, where we recall that M̃_h^s = (1/2)(M̃_h + M̃_h^*) denotes the real part of M̃_h. Moreover, for all j = 1, …, p, K > 0 large enough, and λ ∈ σ(J ∘ R_j(M̃_h^s)), one has (6.18), where n(M_h, D_{j,λ}^K) is the number of eigenvalues of M_h in D_{j,λ}^K = D(ε_j²λ, ε_j²h^K), counted with multiplicity. Finally, for all K > 0 large enough, there exists C > 0 such that the resolvent estimate (6.19) holds for h > 0 small enough.
Proof. For j = 1, …, p, we assume that the spectral parameter z belongs to the annulus C_j. From (6.20), the matrix F can be written, uniformly for z ∈ C_j, as displayed, with J(D − CA^{−1}B) : E_j → E_j and the indicated shortcut. We first obtain an upper bound on the resolvent of M_h away from its expected spectrum: for z as above, F is invertible and (6.24) holds. Combining the first equality in (6.20), (6.22), (6.24), and using ε_k = O(ε_ℓ h^∞) for k > ℓ, we get, for z ∈ C̃_j, the bound (6.25), which implies that there exists C > 0 such that the resolvent estimate holds for z ∈ C̃_j. We now compute the eigenvalues of M_h. For λ ∈ σ(J ∘ R_j(M̃_h^s)), we introduce the corresponding spectral projectors. Using that the circumference of ∂D_{j,λ}^K is 2πε_j²h^K, (6.25) implies the displayed estimate. Since E is a finite dimensional space and the rank of a projector is an integer equal to its trace, we deduce that the ranks agree. Thus, the number of eigenvalues of M_h in D_{j,λ}^K counted with multiplicity is equal to m_≡(λ, J ∘ R_j(M̃_h^s)), and the relations (6.17) and (6.18) of Theorem 4 follow.
To finish the proof of Theorem 4, it remains to obtain the resolvent estimate (6.19). Since it follows from (6.26) in the annuli C̃_j, j = 1, …, p, it remains to prove it in the remaining regions. We only show it in D_j for 2 ≤ j ≤ p, since the two remaining situations can be treated in the same way. Mimicking the proof of (6.20), we have the analogous decomposition for R large enough and z ∈ D_j. Then, F defined in (6.21) satisfies, instead of (6.23), the corresponding bound from (6.22). The last estimate gives (6.19).
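A numerical sanity check of the eigenvalue clustering in Theorem 4, in the symmetric case with two blocks: for M = Ω M̃ Ω with Ω = diag(ε_1 I, ε_2 I) and ε_2 ≪ ε_1, the spectrum splits into ε_1²σ(A) and ε_2²σ(R(M̃)) up to small relative errors. All matrices below are random placeholders, not the interaction matrices of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
B0 = rng.standard_normal((4, 4))
Mt = B0 @ B0.T + 4*np.eye(4)              # a generic matrix in S_+ playing the role of M~

e1, e2 = 1.0, 1e-4                        # graded scales, e2 << e1
Om = np.diag([e1, e1, e2, e2])
M = Om @ Mt @ Om                          # the graded matrix Omega M~ Omega

A, B, C, D = Mt[:2, :2], Mt[:2, 2:], Mt[2:, :2], Mt[2:, 2:]
schur = D - C @ np.linalg.inv(A) @ B      # R(M~), acting on the second block

eig = np.sort(np.linalg.eigvalsh(M))
pred = np.sort(np.concatenate([e1**2*np.linalg.eigvalsh(A),
                               e2**2*np.linalg.eigvalsh(schur)]))
assert np.allclose(eig, pred, rtol=1e-3, atol=0.0)   # two clusters, relative error O((e2/e1)^2)
```

The relative accuracy O((ε_2/ε_1)²) observed here is the two-block analogue of the O(h^∞) errors appearing in (6.25).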
6.4.The spectrum of the interaction matrix.Combining Proposition 6.8 and Theorem 4, we obtain the following result.
Note here that the matrix h^{−1}J ∘ R_j(e^{2S_j/h}Υ^♯) belongs to S_+^cl, which almost gives the statement of Theorem 5. Applying again Proposition 6.8 and Theorem 4, but with the blocks Υ_α and Υ^♯_α, α ∈ A ∖ {Cl(m̄)}, we obtain the following variant of Theorem 5, whose formulation is close to that of Theorem 5.8 in [23]. Theorem 6. Suppose that the assumptions of Theorem 1 are satisfied. Assume also that (Gibbs), (Confin), and (Morse) hold true and that U^(0),II = ∅. Let Υ^♯_α be defined by (6.27). Then the displayed description holds. Moreover, for all α ∈ A, j = 1, …, p, K > 0 large enough, and λ ∈ σ(J ∘ R_j(e^{2S_j/h}Υ^♯_α)), one has n(P, D_{j,λ}^K) = ∑_{β∈A} m_≡(λ, J ∘ R_j(e^{2S_j/h}Υ^♯_β)), where n(P, D_{j,λ}^K) is the number of eigenvalues of P in D_{j,λ}^K = D(e^{−2S_j/h}λ, e^{−2S_j/h}h^K), counted with multiplicity and with the convention that m_≡(λ, M) = 0 if λ ∉ σ_≡(M).
In the previous result, if e^{−S_j/h} does not appear in the graded writing of Υ_α, the matrix J ∘ R_j(e^{2S_j/h}Υ^♯_α) acts on the trivial vector space {0} and its spectrum is empty. In order to emphasize the connection with the formulation of Theorem 5.8 in [23], let us set, with a slight abuse of notation, R^k = R_2 ∘ ⋯ ∘ R_2 (k times), with the convention R^0 = Id. It then follows from Appendix C that R^{j−1} = R_j for every j ∈ {1, …, p}. Thus, for every j ∈ {1, …, p}, the term J ∘ R_j(e^{2S_j/h}Υ^♯_α) is nothing but J ∘ R^{j−1}(e^{2S_j/h}Υ^♯_α), which is the writing appearing in [23, Theorem 5.8]. Remark 6.10. A similar result holds true without the assumption U^(0),II = ∅. This requires constructing adapted quasimodes as in [23, Section 3] (see for example formula (3.14) there) with cut-off functions as above.

Appendix B. About the supersymmetric structure. We now give an example showing that not all the operators considered in this paper have a nice supersymmetric structure. We refer the reader to Hérau, Hitrik, and Sjöstrand [14] and to the last author [22] for general discussions about supersymmetric structures for differential operators of second order. We say that P as in (1.8) admits a temperate supersymmetric structure if there exist a smooth d × d matrix G(x, h) and M > 0 such that (B.1) holds, with ‖G(x, h)‖ ≲ h^{−M} locally in x. Note that (B.1) implies P(e^{−f/h}) = P^†(e^{−f/h}) = 0 and that the present definition of temperate supersymmetric structure is weaker than that of [22].
Proposition B.1.In dimension d = 2, there exists an operator P satisfying the assumptions of Theorem 2 and having no temperate supersymmetric structure.
The counterexample constructed in the proof also shows that determining the supersymmetric structure (that is, the matrix G) of an operator having a temperate supersymmetric structure is an unstable question. On the other hand, since all closed forms on R^d are exact, every operator P satisfying the assumptions of Theorem 2 has a supersymmetric structure, which may however not be temperate (see Theorem 1.2 of [14]).
Proof. First, we consider an operator P_0 satisfying all the assumptions of Theorem 2 and having a temperate supersymmetric structure (for instance, the Witten Laplacian described in (1.23)). Of course, P_0 is of the form (1.8). We assume in addition that the Morse function f is such that there exist two points ρ_1, ρ_2 ∈ R², a simple smooth loop γ around ρ_1 but not ρ_2, and C_0 > 0 such that (B.2) max_γ f < C_0 < min(f(ρ_1), f(ρ_2)). Let χ ∈ C^∞(R²; [0, 1]) be such that χ(ρ_1) = 1 and supp ∇χ is sufficiently close to γ (so, in particular, χ(ρ_2) = 0). We define P = P_0 + P_per with the perturbation operator displayed above. If the support of ∇χ is close enough to γ, the function b_per and all its derivatives are exponentially small by (B.2). In particular, b_per satisfies (1.9) with a null classical expansion, and P_per does not change the principal symbol of P_0. Moreover, a direct computation gives (B.4) P_per(e^{−f/h}) = −(P_per)^†(e^{−f/h}) = e^{f/h} h div(b_per e^{−2f/h}) = 0. Thus, P (like P_0) satisfies the assumptions of Theorem 2. It remains to show that P has no temperate supersymmetric structure. Since P_0 has such a structure, it is equivalent to show that P_per has no temperate supersymmetric structure. We prove this by contradiction, assuming that P_per can be written as in (B.1) for some polynomially locally bounded matrix G(x, h). Since P_per = −(P_per)^†, the matrix G must be antisymmetric, say as displayed, for some constant C(h) ∈ R. Computing g at x = ρ_2, where χ = 0, the relations (B.2), (B.6), and |g(ρ_2, h)| ≲ h^{−M} imply that C(h) must be exponentially small. On the other hand, computing g at x = ρ_1, where χ = 1, leads to |g(ρ_1, h)| ≥ h^{−1}e^{2(f(ρ_1)−C_0)/h} for h small enough. Then, (B.2) shows that g(ρ_1, h) is exponentially large, in contradiction with |g(ρ_1, h)| ≲ h^{−M}. Summing up, P_per, and then P, has no temperate supersymmetric structure, and the proposition follows.

Appendix C. Iteration of the Schur complement
Let us conclude the appendix with a lemma about the Schur complement.We recall that, for a matrix M ∈ M d+d
denoted by u(t) = e^{−tP/h}u_0. Modulo conjugation by e^{−f/h}, (1.25) is the Fokker-Planck equation (1.3) in the case of the general Langevin process (1.2).

Figure 4.1. Representation of the Morse function f near a point s ∈ j(m) ∩ ∂Ê(m) when the latter set is nonempty. Here, η_1(s) denotes an eigenvector of Hess f(s) associated with its negative eigenvalue.

Figure 4.2. The support of the function v_{m,h}.

Proof.
Points i), ii), and iii) of Lemma 4.4 follow from the construction of the quasimodes ϕ_{m,h} in Definition 4.3 for m ∈ U^(0); see (4.5), (4.6), and (4.8). Let us then prove the two last points of Lemma 4.4. When σ(m) = σ(m′) and m ≠ m′, note first that m and m′ differ from m̄, since σ(m) = +∞ if and only if m = m̄. Moreover, σ(m) = σ(m′) and m ≠ m′ imply E(m) ∩ E(m′) = ∅. Indeed, the relation E(m) ∩ E(m′) ≠ ∅ would imply that E(m) and E(m′) are the same connected component of {f < σ(m) = σ(m′)}, in contradiction with the construction of E. When in addition j(m) ∩ j(m′) = ∅, we then have

Figure B.1. The geometrical setting in the proof of Proposition B.1.

′(C), with d, d′ ∈ N*, written in blocks M = [A B; C D] with A ∈ M_d(C) invertible, the Schur complement of the block D ∈ M_{d′}(C) of M is the matrix defined by R(M) = D − CA^{−1}B ∈ M_{d′}(C). Moreover, by the Schur complement method, M is invertible if and only if R(M) is invertible. Lemma C.1. For d_1, d_2, d_3 ∈ N* and matrices M = [A B C; D E F; G H I] ∈ M_{d_1+d_2+d_3}(C) and M′ = [A′ B′; C′ D′] ∈ M_{d_2+d_3}(C), we denote respectively, when they make sense, by R_1(M), R_{1,2}(M), and R_2(M′) the Schur complements of the blocks [E F; H I] ∈ M_{d_2+d_3}(C) of M, I ∈ M_{d_3}(C) of M, and D′ ∈ M_{d_3}(C) of M′. Then, when M has the previous form with A and [A B; D E] invertible, the Schur complements R_{1,2}(M), R_1(M), and R_2(R_1(M)) make sense and satisfy R_2(R_1(M)) = R_{1,2}(M). Proof. First, since A and [A B; D E] are invertible, the respective Schur complements R_1(M) and R_{1,2}(M) of the blocks [E F; H I] and I of M make sense, and, using the block inverse of [A B; D E], one computes (C.1) R_{1,2}(M) = I − GA^{−1}C + (H − GA^{−1}B)(E − DA^{−1}B)^{−1}(DA^{−1}C − F). Moreover, by the Schur complement method, the invertibility of A and [A B; D E] implies that the Schur complement E − DA^{−1}B of the block E of [A B; D E] is invertible, and a straightforward computation shows that R_1(M) = [E − DA^{−1}B, F − DA^{−1}C; H − GA^{−1}B, I − GA^{−1}C]. Since E − DA^{−1}B is invertible, the Schur complement R_2(R_1(M)) of the block I − GA^{−1}C of R_1(M) thus makes sense and satisfies (C.2) R_2(R_1(M)) = I − GA^{−1}C − (H − GA^{−1}B)(E − DA^{−1}B)^{−1}(F − DA^{−1}C). The statement of Lemma C.1 then follows from (C.1) and (C.2).
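Lemma C.1 can also be checked numerically: eliminating the first block and then the second gives the same result as eliminating both at once. The helper below computes the Schur complement of the trailing block; the matrix M is a random positive definite placeholder (so that all the required sub-blocks are invertible).

```python
import numpy as np

rng = np.random.default_rng(1)
d1, d2, d3 = 2, 3, 2
n = d1 + d2 + d3
W = rng.standard_normal((n, n))
M = W @ W.T + np.eye(n)            # positive definite, hence every leading block is invertible

def schur(M, k):
    # Schur complement of the trailing k x k block: D - C A^{-1} B.
    A, B = M[:-k, :-k], M[:-k, -k:]
    C, D = M[-k:, :-k], M[-k:, -k:]
    return D - C @ np.linalg.solve(A, B)

R12  = schur(M, d3)                # eliminate the first two blocks at once: R_{1,2}(M)
R1   = schur(M, d2 + d3)           # eliminate the first block: R_1(M)
R2R1 = schur(R1, d3)               # then eliminate the second block: R_2(R_1(M))
assert np.allclose(R2R1, R12)      # iterated Schur complements agree (Lemma C.1)
```

This iteration property is exactly what justifies the identification R^{j−1} = R_j used after Theorem 6.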