Multivariate Gaussian approximations on Markov chaoses

We prove a version of the multidimensional Fourth Moment Theorem for chaotic random vectors, in the general context of diffusion Markov generators. In addition to the usual componentwise convergence, and unlike in the case of the infinite-dimensional Ornstein-Uhlenbeck generator, a further moment-type condition is required to deduce joint convergence of a given sequence of vectors.


Introduction
The Fourth Moment Theorem (discovered by Nualart and Peccati in [13] and later extended by Nualart and Ortiz-Latorre in [12]) states that, inside a fixed Wiener chaos, a sequence of random variables $F_n$, $n \geq 1$, converges in distribution towards a standard Gaussian random variable if and only if $\mathbb{E}[F_n^2] \to 1$ and $\mathbb{E}[F_n^4] \to 3$. Recently, in the pathbreaking contribution [5], Ledoux approached this Fourth Moment Phenomenon in the more general context of diffusion Markov generators, and was able to provide a new proof of such a result by adopting a purely spectral point of view. Later on, in [1], Azmoodeh, Campese and Poly generalized the concept of chaos originally introduced in [5] and were able not only to obtain a more transparent proof of the classical Fourth Moment Theorem, but also to exhibit many new situations where the Fourth Moment Phenomenon occurs (e.g. the Laguerre or Jacobi chaoses). One should notice that the collection of techniques introduced in [1] has also been successfully applied in other contexts, e.g. for deducing moment conditions in limit theorems (see [2]), or in the study of the so-called real Gaussian product conjecture (see [6]).
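As a quick illustration of the one-dimensional statement (not taken from the references above), consider the simplest second-chaos example: for i.i.d. standard Gaussian $X_1, \ldots, X_n$, the normalized sum $F_n = n^{-1/2}\sum_{k=1}^n (X_k^2-1)/\sqrt{2}$ satisfies $\mathbb{E}[F_n^2] = 1$ and $\mathbb{E}[F_n^4] = 3 + 12/n \to 3$, in accordance with the Fourth Moment Theorem. The following sketch verifies this computation symbolically (variable names are ours):

```python
import sympy as sp

x = sp.symbols('x')
n = sp.symbols('n', positive=True)
gauss = sp.exp(-x**2/2) / sp.sqrt(2*sp.pi)  # standard Gaussian density
Y = (x**2 - 1) / sp.sqrt(2)                 # normalized second-chaos summand

EY2 = sp.integrate(Y**2 * gauss, (x, -sp.oo, sp.oo))  # = 1
EY4 = sp.integrate(Y**4 * gauss, (x, -sp.oo, sp.oo))  # = 15

# for F_n = n^{-1/2}(Y_1 + ... + Y_n) with i.i.d. summands:
# E[F_n^4] = (n*E[Y^4] + 3*n*(n-1)*E[Y^2]^2) / n^2
m4 = (n*EY4 + 3*n*(n - 1)*EY2**2) / n**2

assert sp.simplify(m4 - (3 + 12/n)) == 0  # E[F_n^4] = 3 + 12/n
assert sp.limit(m4, n, sp.oo) == 3
```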
In this paper, we investigate a multidimensional counterpart of the Fourth Moment Theorem by using the aforementioned approach based on Markov semigroups. In the case of Wiener chaos, the multidimensional version of the Fourth Moment Theorem is due to Peccati and Tudor [14], and is given by the following statement. For the rest of the paper, the symbol '$\xrightarrow{d}$' indicates convergence in distribution.

Theorem 1.1 (See [14]). Let $p_1, \ldots, p_d \geq 1$ be fixed integers and let $F_n = (F_{1,n}, \ldots, F_{d,n})$, $n \geq 1$, be a sequence of vectors such that $F_{i,n}$ belongs to the $p_i$th Wiener chaos of some Gaussian field, for all $i = 1, \ldots, d$ and all $n$. Furthermore, assume that $\lim_{n\to\infty} \operatorname{Cov} F_n = C$, and denote by $Z = (Z_1, \ldots, Z_d)$ a centered Gaussian random vector with covariance matrix $C$. Then, the following two assertions are equivalent, as $n \to \infty$:

(i) $F_n \xrightarrow{d} Z$;

(ii) $F_{i,n} \xrightarrow{d} Z_i$, for every $i = 1, \ldots, d$.

In other words, for sequences of random vectors living inside a fixed system of Wiener chaoses, componentwise convergence implies joint convergence. Our main result is the following analogue of Theorem 1.1 in the abstract Markov generator framework (unexplained notation and definitions, in particular the notion of a chaotic vector, will be formally introduced in the sequel).
Theorem 1.2. Let $L$ be a diffusion Markov generator with discrete spectrum $\{-\lambda_k : k \geq 0\}$, where $0 = \lambda_0 < \lambda_1 < \cdots$, fix integers $k_1, \ldots, k_d \geq 1$, and let $F_n = (F_{1,n}, \ldots, F_{d,n})$, $n \geq 1$, be a sequence of chaotic vectors such that $F_{i,n} \in \ker(L + \lambda_{k_i}\operatorname{Id})$, for $1 \leq i \leq d$ and all $n$. Furthermore, assume that $\lim_{n\to\infty} \operatorname{Cov} F_n = C$ and denote by $Z = (Z_1, \ldots, Z_d)$ a centered Gaussian random vector with covariance matrix $C$ (defined on some probability space $(\Omega, \mathcal{F}, P)$). Consider the following asymptotic relations (i) and (ii), for $n \to \infty$:

(i) $F_n \xrightarrow{d} Z$;

(ii) (a) $F_{i,n} \xrightarrow{d} Z_i$ for every $1 \leq i \leq d$, and (b) $\int_E F_{i,n}^2 F_{j,n}^2 \, d\mu \to \mathbb{E}[Z_i^2 Z_j^2] = C_{ii}C_{jj} + 2C_{ij}^2$ for every $1 \leq i, j \leq d$.

Then, (ii) implies (i), and the converse implication (i) $\Rightarrow$ (ii) holds whenever the sequence $\{F_{i,n}^2 F_{j,n}^2 : n \geq 1\}$ is uniformly integrable for every $1 \leq i, j \leq d$.
Remark 1.3. The additional mixed moment condition (ii)$_b$ has no counterpart in the statement of Theorem 1.1. In Section 3, we will explain in detail why such a relation is automatically satisfied whenever the components of the vectors $F_n$ belong to the Wiener chaos of some Gaussian field. We also observe that a sufficient condition, in order for the class $F_{i,n}^2 F_{j,n}^2$, $n \geq 1$, to be uniformly integrable for every $i, j$, is that, for some $\epsilon > 0$,
\[
\sup_{n \geq 1} \int_E |F_{i,n}|^{4+\epsilon}\, d\mu < \infty, \quad 1 \leq i \leq d. \tag{1.1}
\]
Finally, if the sequence $F_n$, $n \geq 1$, lives in a fixed sum of Gaussian Wiener chaoses, then (1.1) is automatically implied by the relation $\lim_{n\to\infty} \operatorname{Cov} F_n = C$, by virtue of a standard hypercontractivity argument; see e.g. [8, Section 2.8.3]. General sufficient conditions in order for the semigroup associated with $L$ to be hypercontractive can be found e.g. in [3].
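The limiting value appearing in the mixed moment condition can be computed by differentiating the Gaussian moment generating function: for a centered Gaussian pair $(Z_i, Z_j)$ with covariance $C$, one has $\mathbb{E}[Z_i^2 Z_j^2] = C_{ii}C_{jj} + 2C_{ij}^2$. A minimal symbolic check of this identity (an illustration, with ad hoc variable names):

```python
import sympy as sp

t1, t2, c11, c12, c22 = sp.symbols('t1 t2 c11 c12 c22')
# moment generating function of a centered Gaussian vector (Z1, Z2)
M = sp.exp((c11*t1**2 + 2*c12*t1*t2 + c22*t2**2) / 2)
# E[Z1^2 Z2^2] is the mixed fourth derivative of M at t = 0
m22 = sp.diff(M, t1, 2, t2, 2).subs({t1: 0, t2: 0})
assert sp.simplify(m22 - (c11*c22 + 2*c12**2)) == 0
```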
As in the one-dimensional case, it is possible in our abstract framework to provide a proof of Theorem 1.2 that is not based on the use of product formulae, and that exploits instead the spectral information embedded into the underlying generator L.
The rest of this paper is organized as follows. In Section 2, we introduce the abstract Markov generator setting and recall the main one-dimensional findings from [1]. In Section 3, we define multidimensional chaos and present the proof of Theorem 1.2; we also provide a careful analysis of the additional condition (ii)$_b$ appearing in Theorem 1.2.

Preliminaries
In this section, we introduce the general diffusion Markov generator setting. For a detailed treatment, we refer to the monograph [4].
Throughout the rest of the paper, we fix a probability space $(E, \mathcal{F}, \mu)$ and a symmetric Markov generator $L$ with state space $E$ and invariant measure $\mu$. We assume that $L$ has discrete spectrum $S = \{-\lambda_k : k \geq 0\}$ and order its eigenvalues by magnitude, i.e. $0 = \lambda_0 < \lambda_1 < \lambda_2 < \cdots$. In the language of functional analysis, $L$ is a self-adjoint linear operator on $L^2(E,\mu)$ with the property that $L\mathbf{1} = 0$. By standard spectral theory, $L$ is diagonalizable and we have the orthogonal decomposition
\[
L^2(E,\mu) = \bigoplus_{k=0}^{\infty} \ker(L + \lambda_k \operatorname{Id}).
\]
We denote by $L^{-1}$ the pseudo-inverse of $L$, defined on $L^2(E,\mu)$ by $L^{-1}\mathbf{1} = 0$ and $L^{-1}F = -\frac{1}{\lambda}F$ for any $F \in \ker(L + \lambda\operatorname{Id})$ such that $\lambda \neq 0$. It is immediate to check that $LL^{-1}F = F - \int_E F\, d\mu$ for every $F \in L^2(E,\mu)$. The associated bilinear carré du champ operator $\Gamma$ is defined by
\[
\Gamma(F,G) = \frac{1}{2}\big( L(FG) - F\,LG - G\,LF \big),
\]
whenever the right-hand side exists. As $L$ is self-adjoint and $L\mathbf{1} = 0$, we immediately deduce the integration by parts formula
\[
\int_E \Gamma(F,G)\, d\mu = -\int_E F\, LG\, d\mu = -\int_E G\, LF\, d\mu. \tag{2.2}
\]
A symmetric Markov generator is called diffusive, if it satisfies the diffusion property
\[
L\,\phi(F) = \phi'(F)\, LF + \phi''(F)\, \Gamma(F,F) \tag{2.3}
\]
for all smooth test functions $\phi : \mathbb{R} \to \mathbb{R}$ and any $F \in L^2(E,\mu)$. Equivalently, $\Gamma$ is a derivation, i.e.
\[
\Gamma(\phi(F), G) = \phi'(F)\, \Gamma(F,G).
\]
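All of these objects can be made explicit for the one-dimensional Ornstein-Uhlenbeck generator $Lf = f'' - xf'$, whose invariant measure is the standard Gaussian and whose carré du champ is $\Gamma(f,g) = f'g'$. The following sketch (an illustration on polynomial test functions, not part of the exposition in [4]) verifies this formula and the integration by parts relation (2.2):

```python
import sympy as sp

x = sp.symbols('x')
gauss = sp.exp(-x**2/2) / sp.sqrt(2*sp.pi)  # invariant (standard Gaussian) measure

def L(f):
    # one-dimensional Ornstein-Uhlenbeck generator: Lf = f'' - x f'
    return sp.diff(f, x, 2) - x*sp.diff(f, x)

def Gamma(f, g):
    # carre du champ: Gamma(f,g) = (L(fg) - f*Lg - g*Lf) / 2
    return sp.expand((L(f*g) - f*L(g) - g*L(f)) / 2)

f, g = x**2, x**3
# for the OU generator, Gamma(f,g) coincides with f'g'
assert sp.simplify(Gamma(f, g) - sp.diff(f, x)*sp.diff(g, x)) == 0

# integration by parts (2.2): int Gamma(f,g) dmu = -int f*Lg dmu
lhs = sp.integrate(Gamma(f, g)*gauss, (x, -sp.oo, sp.oo))
rhs = -sp.integrate(f*L(g)*gauss, (x, -sp.oo, sp.oo))
assert sp.simplify(lhs - rhs) == 0
```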
Considering vectors $F = (F_1, \ldots, F_d)$ and test functions $\varphi : \mathbb{R}^d \to \mathbb{R}$, the diffusion and derivation properties yield that
\[
L\,\varphi(F) = \sum_{i=1}^d \partial_i \varphi(F)\, LF_i + \sum_{i,j=1}^d \partial^2_{ij} \varphi(F)\, \Gamma(F_i, F_j)
\]
and
\[
\Gamma(\varphi(F), G) = \sum_{i=1}^d \partial_i \varphi(F)\, \Gamma(F_i, G),
\]
respectively. In [1], the following definition of chaos was given.
Definition 2.1 (See [1]). Let $L$ be a symmetric Markov generator with discrete spectrum $S$, and let $F \in \ker(L + \lambda_p\operatorname{Id})$ be an eigenfunction of $L$ (with eigenvalue $\lambda_p$). We say that $F$ is chaotic, or a chaos eigenfunction, if
\[
F^2 \in \bigoplus_{k \,:\, \lambda_k \leq 2\lambda_p} \ker(L + \lambda_k \operatorname{Id}). \tag{2.4}
\]
Condition (2.4) means that, if $F$ is a chaos eigenfunction of $L$ with eigenvalue $\lambda_p$ (say), then the (orthogonal) decomposition of its square along the spectrum of $L$ only contains eigenfunctions associated with eigenvalues that are less than or equal to $2\lambda_p$. Note that this property is satisfied by all eigenfunctions of $L$ in many crucial instances, e.g. when $L$ is the generator of the Ornstein-Uhlenbeck semigroup, the Laguerre generator or the Jacobi generator (see [1]). Starting from this definition, and by only using the spectral information embedded into the generator $L$, the authors of [1] were able to deduce Fourth Moment Theorems for sequences of chaotic eigenfunctions and several target distributions, drastically simplifying all known proofs. The analogue in this framework of the classical Fourth Moment Theorem reads as follows.
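In the Ornstein-Uhlenbeck case, the chaos condition can be checked by hand on Hermite polynomials: the probabilists' Hermite polynomial $\mathrm{He}_p$ satisfies $L\,\mathrm{He}_p = -p\,\mathrm{He}_p$, and for instance $\mathrm{He}_2^2 = \mathrm{He}_4 + 4\,\mathrm{He}_2 + 2$, a decomposition involving only the eigenvalues $4, 2, 0 \leq 2\cdot 2$. A small sketch of this verification (illustrative; the recurrence below is the standard one):

```python
import sympy as sp

x = sp.symbols('x')

def He(n):
    # probabilists' Hermite polynomials via He_{n+1} = x*He_n - n*He_{n-1}
    a, b = sp.Integer(1), x
    if n == 0:
        return a
    for k in range(1, n):
        a, b = b, sp.expand(x*b - k*a)
    return b

def L(f):
    # Ornstein-Uhlenbeck generator: Lf = f'' - x f'
    return sp.diff(f, x, 2) - x*sp.diff(f, x)

# eigenfunction property: L He_p = -p He_p
for p in range(5):
    assert sp.expand(L(He(p)) + p*He(p)) == 0

# chaos property: He_2^2 = He_4 + 4*He_2 + 2, eigenvalues 4, 2, 0 <= 2*2
assert sp.expand(He(2)**2 - (He(4) + 4*He(2) + 2)) == 0
```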
Theorem 2.2 (Abstract Fourth Moment Theorem, see [1]). Let $L$ be a symmetric diffusion Markov generator with discrete spectrum $S$ and let $\{F_n : n \geq 1\}$ be a sequence of chaotic eigenfunctions of $L$ with respect to the same (fixed) eigenvalue, such that $\int_E F_n^2\, d\mu \to 1$, $n \to \infty$. Consider the following three conditions, as $n \to \infty$:

(i) $F_n \xrightarrow{d} N(0,1)$;

(ii) $\int_E F_n^4\, d\mu \to 3$;

(iii) $\operatorname{Var}\Gamma(F_n, F_n) \to 0$.

Then, the three conditions (i), (ii) and (iii) are equivalent.

The next section is devoted to the proof of our main results.

Main results
Let us begin by noting the general fact that (under standard regularity assumptions) the distance between the distribution of a random vector $F = (F_1, \ldots, F_d)$ and a multivariate Gaussian law is controlled by the expression $\sum_{i,j=1}^d \operatorname{Var}\Gamma(F_i, -L^{-1}F_j)$. This fact can either be shown by using the characteristic function method (see [12]), quantitatively by Stein's method (see [9]) or by means of the so-called "smart path" technique (see [7]). The proofs carry over to our setting almost verbatim, by replacing the integration by parts formula of Malliavin calculus with the analogous relation (2.2) for the carré du champ operator $\Gamma$.
In order to keep our paper as self-contained as possible, instead of using the above mentioned bounds, in the sequel we shall exploit the following estimate involving characteristic functions; the proof is a Fourier-type variation of the smart path method.

Proposition 3.1. Let $F = (F_1, \ldots, F_d)$ be a random vector with components in $L^2(E,\mu)$ satisfying $\int_E F_j\, d\mu = 0$, and let $C$ be a $d \times d$ covariance matrix. To avoid technicalities, assume furthermore the existence of a finite $N \geq 1$ such that $F_j \in \bigoplus_{k=0}^{N} \ker(L + \lambda_k\operatorname{Id})$ for all $1 \leq j \leq d$. Let $\gamma_d$ be the law of a $d$-dimensional centered Gaussian random variable with covariance matrix $C$. Then, for any $t \in \mathbb{R}^d$ it holds that
\[
\left| \int_E e^{i\langle t, F\rangle}\, d\mu - \int_{\mathbb{R}^d} e^{i\langle t, x\rangle}\, \gamma_d(dx) \right| \leq \frac{1}{2} \sum_{i,j=1}^d |t_i t_j| \left( \int_E \big( C_{ij} - \Gamma(F_i, -L^{-1}F_j) \big)^2\, d\mu \right)^{1/2}.
\]

Proof. For $s \in [0,1]$, set
\[
\Psi(s) = e^{-\frac{s}{2}\langle t, Ct\rangle} \int_E e^{i\sqrt{1-s}\,\langle t, F\rangle}\, d\mu,
\]
so that $\Psi(0) = \int_E e^{i\langle t,F\rangle}\, d\mu$ and $\Psi(1) = e^{-\frac{1}{2}\langle t, Ct\rangle} = \int_{\mathbb{R}^d} e^{i\langle t,x\rangle}\,\gamma_d(dx)$, and the derivative of $\Psi$ is given by
\[
\Psi'(s) = -\frac{1}{2}\langle t, Ct\rangle\, \Psi(s) - \frac{i}{2\sqrt{1-s}}\, e^{-\frac{s}{2}\langle t, Ct\rangle} \int_E \langle t, F\rangle\, e^{i\sqrt{1-s}\,\langle t, F\rangle}\, d\mu.
\]
Using both the integration by parts formula (2.2) and the diffusion property (2.3) for $\Gamma$ (after writing $F_j = LL^{-1}F_j$) yields that
\[
\int_E \langle t, F\rangle\, e^{i\sqrt{1-s}\,\langle t, F\rangle}\, d\mu = i\sqrt{1-s}\, \sum_{j,k=1}^d t_j t_k \int_E e^{i\sqrt{1-s}\,\langle t, F\rangle}\, \Gamma(F_k, -L^{-1}F_j)\, d\mu,
\]
and therefore
\[
\Psi'(s) = \frac{1}{2}\, e^{-\frac{s}{2}\langle t, Ct\rangle} \sum_{j,k=1}^d t_j t_k \int_E e^{i\sqrt{1-s}\,\langle t, F\rangle} \big( \Gamma(F_k, -L^{-1}F_j) - C_{kj} \big)\, d\mu.
\]
The conclusion follows from an application of the Cauchy-Schwarz inequality.
Definition 3.2 (Jointly chaotic eigenfunctions).

1. Let $F_i \in \ker(L + \lambda_{k_i}\operatorname{Id})$ and $F_j \in \ker(L + \lambda_{k_j}\operatorname{Id})$ be two eigenfunctions of $L$. We say that $F_i$ and $F_j$ are jointly chaotic, if
\[
F_iF_j \in \bigoplus_{r \,:\, \lambda_r \leq \lambda_{k_i} + \lambda_{k_j}} \ker(L + \lambda_r\operatorname{Id}).
\]

2. Let $F = (F_1, \ldots, F_d)$ be a vector of eigenfunctions of $L$, such that $F_i \in \ker(L + \lambda_{k_i}\operatorname{Id})$ for $1 \leq i \leq d$. Whenever any two of its components (possibly the same) are jointly chaotic, we say that $F$ is chaotic. In particular, this implies that each component of a chaotic vector is chaotic in the sense of Definition 2.1.
We observe that, in many important examples, all vectors of eigenfunctions are chaotic. In particular, this is the case for the Wiener, Laguerre and Jacobi chaoses (see [1]). A crucial ingredient for the proof of our main result is the following statement, whose short proof can be found in [1].

Theorem 3.3 (See [1, Thm. 2.1]). Fix an eigenvalue $-\lambda_n \in S$ and assume that $F \in \bigoplus_{k=0}^{n} \ker(L + \lambda_k\operatorname{Id})$. Then it holds that, for any $\eta \geq \lambda_n$,
\[
\int_E LF\, (L + \eta\operatorname{Id})F\, d\mu \leq 0.
\]
We are now ready to prove our main Theorem 1.2.
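Theorem 3.3 is a purely spectral statement: writing $F = \sum_k c_k u_k$ with $Lu_k = -\lambda_k u_k$, the quantity in question equals $\sum_k (-\lambda_k)(\eta - \lambda_k)c_k^2$, in which every summand is non-positive as soon as $\eta$ dominates all eigenvalues involved. A toy numerical sketch of this observation (spectrum and coefficients chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(0)
lam = np.array([0.0, 1.0, 2.5, 4.0])   # toy spectrum 0 = lam_0 < lam_1 < ...
c = rng.standard_normal(lam.size)      # coefficients of F in the eigenbasis

def quad_form(eta):
    # int_E LF (L + eta*Id) F dmu = sum_k (-lam_k) * (eta - lam_k) * c_k^2
    return float(np.sum(-lam * (eta - lam) * c**2))

# the form is <= 0 as soon as eta >= max eigenvalue (Theorem 3.3)
for eta in [4.0, 5.0, 10.0]:
    assert quad_form(eta) <= 0.0
```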
Proof of Theorem 1.2. Since the remaining parts of the statement are straightforward, we will only prove the implication (ii) $\Rightarrow$ (i). According to Proposition 3.1, it suffices to show that, if (ii) is satisfied, then, for $1 \leq i, j \leq d$,
\[
\int_E \big( C_{ij} - \Gamma(F_{i,n}, -L^{-1}F_{j,n}) \big)^2\, d\mu \to 0,
\]
as $n \to \infty$. For $i = j$, this follows from the one-dimensional Fourth Moment Theorem 2.2. Let us thus assume that $i \neq j$. For the sake of readability, we temporarily suppress the index $n$ from $F_{i,n}$ and $F_{j,n}$, and we set $C_{ij,n} = \int_E F_{i,n}F_{j,n}\, d\mu$ as well as $\eta = \lambda_{k_i} + \lambda_{k_j}$. It holds that $\Gamma(F_i, -L^{-1}F_j) = \frac{1}{\lambda_{k_j}}\Gamma(F_i, F_j)$ and, by definition of $\Gamma$, we can write
\[
\Gamma(F_i, F_j) = \frac{1}{2}\big( L(F_iF_j) + \eta\, F_iF_j \big),
\]
so that, by the integration by parts formula (2.2), $\int_E \Gamma(F_i, -L^{-1}F_j)\, d\mu = C_{ij,n}$. Inserting the definition of the carré du champ operator, we get that
\[
\int_E \Gamma(F_i, F_j)^2\, d\mu = \frac{1}{4}\int_E L(F_iF_j)\, (L + \eta\operatorname{Id})(F_iF_j)\, d\mu + \frac{\eta}{2}\int_E F_iF_j\, \Gamma(F_i, F_j)\, d\mu.
\]
Since the vector $F$ is chaotic, $F_iF_j \in \bigoplus_{r\,:\,\lambda_r \leq \eta}\ker(L + \lambda_r\operatorname{Id})$, and Theorem 3.3 (applied with this choice of $\eta$) shows that the first summand on the right-hand side is non-positive. Therefore, by Theorem 3.3,
\[
\operatorname{Var}\Gamma(F_i, -L^{-1}F_j) \leq \frac{\eta}{2\lambda_{k_j}^2}\int_E F_iF_j\, \Gamma(F_i, F_j)\, d\mu - C_{ij,n}^2. \tag{3.8}
\]

Now, by the diffusion property (2.3) and the integration by parts formula (2.2), we have
\[
\int_E F_iF_j\, \Gamma(F_i, F_j)\, d\mu = \frac{1}{4}\int_E \Gamma(F_i^2, F_j^2)\, d\mu = -\frac{1}{4}\int_E F_i^2\, LF_j^2\, d\mu = \frac{\lambda_{k_j}}{2}\int_E F_i^2F_j^2\, d\mu - \frac{1}{2}\int_E F_i^2\, \Gamma(F_j, F_j)\, d\mu,
\]
where we used the derivation property in the form $\Gamma(F_i^2, F_j^2) = 4F_iF_j\,\Gamma(F_i, F_j)$, as well as the identity $LF_j^2 = -2\lambda_{k_j}F_j^2 + 2\Gamma(F_j, F_j)$, which follows from (2.3).
Plugging such an estimate into (3.8) and reintroducing the index $n$, we can thus write
\[
\operatorname{Var}\Gamma(F_{i,n}, -L^{-1}F_{j,n}) \leq \frac{\eta}{4\lambda_{k_j}}\left( \int_E F_{i,n}^2F_{j,n}^2\, d\mu - \frac{1}{\lambda_{k_j}}\int_E F_{i,n}^2\, \Gamma(F_{j,n}, F_{j,n})\, d\mu \right) - C_{ij,n}^2 =: R_{ij}(n).
\]
By virtue of Theorem 2.2, we are now left to show that $R_{ij}(n) \to 0$, as $n \to \infty$. Indeed, (ii)$_a$ and Theorem 2.2 imply that $\Gamma(F_{j,n}, F_{j,n}) \to \lambda_{k_j}C_{jj}$ in $L^2(E,\mu)$, while (ii)$_b$ ensures that the sequence $F_{i,n}^2$ is bounded in $L^2(E,\mu)$; by the Cauchy-Schwarz inequality, it follows that $\frac{1}{\lambda_{k_j}}\int_E F_{i,n}^2\,\Gamma(F_{j,n}, F_{j,n})\, d\mu \to C_{ii}C_{jj}$. If $\lambda_{k_i} = \lambda_{k_j}$, we have that $\frac{\eta}{4\lambda_{k_j}} = \frac{1}{2}$ and $C_{ij,n} \to C_{ij}$, so that, due to assumption (ii)$_b$,
\[
R_{ij}(n) \to \frac{1}{2}\big( C_{ii}C_{jj} + 2C_{ij}^2 - C_{ii}C_{jj} \big) - C_{ij}^2 = 0,
\]
as $n \to \infty$. In the case where $\lambda_{k_i} \neq \lambda_{k_j}$, it necessarily holds that $C_{ij,n} = 0$ (by orthogonality of eigenfunctions associated with different eigenvalues), and hence also $C_{ij} = 0$; thus, again due to assumption (ii)$_b$, $R_{ij}(n) \to \frac{\eta}{4\lambda_{k_j}}\big( C_{ii}C_{jj} - C_{ii}C_{jj} \big) = 0$, which vanishes in the limit as well.
Remark 3.4. In the framework of the classical Theorem 1.1, where $L$ is the infinite-dimensional Ornstein-Uhlenbeck generator and its eigenfunctions are multiple Wiener-Itô integrals, condition (ii) actually reduces to (ii)$_a$, since (ii)$_a$ $\Rightarrow$ (ii)$_b$. In other words, componentwise Gaussian convergence in distribution always yields joint Gaussian convergence in this case. From our abstract point of view, we can explain (and generalize) this phenomenon by means of the next result.
Proposition 3.5. In the setting and with the notation of Theorem 1.2, assume in addition that:

1. $L$ is ergodic, i.e. its kernel only consists of constant functions;

2. for $1 \leq i, j \leq d$ such that $\lambda_{k_i} = \lambda_{k_j}$, it holds that
\[
\lim_{n\to\infty} \int_E \pi_{2\lambda_{k_i}}(F_{i,n}^2)\, \pi_{2\lambda_{k_j}}(F_{j,n}^2)\, d\mu = 2C_{ij}^2, \tag{3.9}
\]
where $\pi_\lambda$ denotes the orthogonal projection onto $\ker(L + \lambda\operatorname{Id})$.
Then, the following two assertions are equivalent, as $n \to \infty$:

(i) $F_n \xrightarrow{d} Z$;

(ii) $F_{i,n} \xrightarrow{d} Z_i$, for every $1 \leq i \leq d$.
Proof. In view of Theorem 1.2, we only have to show that condition (ii)$_a$ therein implies that
\[
\int_E F_{i,n}^2 F_{j,n}^2\, d\mu \to \mathbb{E}[Z_i^2 Z_j^2] = C_{ii}C_{jj} + 2C_{ij}^2, \quad 1 \leq i, j \leq d.
\]
To do so, we first note that, by the definition of $\Gamma$ and the chaotic property of $F_{i,n}$, it holds that
\[
\Gamma(F_{i,n}, F_{i,n}) = \frac{1}{2}\sum_{r\,:\,\lambda_r \leq 2\lambda_{k_i}} (2\lambda_{k_i} - \lambda_r)\, \pi_{\lambda_r}(F_{i,n}^2).
\]
Therefore, by orthogonality of the projections corresponding to different eigenvalues and by the one-dimensional Fourth Moment Theorem 2.2 (which yields $\operatorname{Var}\Gamma(F_{i,n}, F_{i,n}) \to 0$),
\[
\int_E \pi_{\lambda_r}(F_{i,n}^2)^2\, d\mu \to 0 \tag{3.10}
\]
for all $r$ such that $0 < \lambda_r < 2\lambda_{k_i}$.
We exploit this fact by writing
\[
\int_E F_{i,n}^2 F_{j,n}^2\, d\mu = \int_E \pi_0(F_{i,n}^2)\, F_{j,n}^2\, d\mu + \sum_{r\,:\,0 < \lambda_r < 2\lambda_{k_i}} \int_E \pi_{\lambda_r}(F_{i,n}^2)\, F_{j,n}^2\, d\mu + \int_E \pi_{2\lambda_{k_i}}(F_{i,n}^2)\, F_{j,n}^2\, d\mu.
\]
Assume without loss of generality that $\lambda_{k_j} \leq \lambda_{k_i}$. The ergodicity assumption on $L$ forces $\pi_0(F_{i,n}^2)$ to be constant and thus
\[
\int_E \pi_0(F_{i,n}^2)\, F_{j,n}^2\, d\mu = \pi_0(F_{i,n}^2) \int_E F_{j,n}^2\, d\mu = \int_E F_{i,n}^2\, d\mu \int_E F_{j,n}^2\, d\mu \to C_{ii}C_{jj}.
\]
By Cauchy-Schwarz and (3.10), all integrals $\int_E \pi_{\lambda_r}(F_{i,n}^2)\, F_{j,n}^2\, d\mu$ inside the sum in the middle vanish in the limit. Finally, assumption (3.9) ensures that the third term (which is zero if $\lambda_{k_j} < \lambda_{k_i}$, since then $\pi_{2\lambda_{k_i}}(F_{j,n}^2) = 0$) exhibits the wanted asymptotic behaviour.

Remark 3.6. As already mentioned, both of these additional assumptions are always verified in the case of the infinite-dimensional Ornstein-Uhlenbeck generator (see for example [8] or [11] for any unexplained notation). While the ergodicity is immediate, more effort is needed to show (3.9). For a multiple integral $I_p(f)$ with $f \in \mathfrak{H}^{\odot p}$, the well known product formula yields that $\pi_{2p}(I_p(f)^2) = I_{2p}(f \widetilde{\otimes} f)$. If $I_p(g)$, $g \in \mathfrak{H}^{\odot p}$, is another multiple integral, one can show (see for example [10, Lemma 2.2(2)]) that
\[
\int_E I_{2p}(f \widetilde{\otimes} f)\, I_{2p}(g \widetilde{\otimes} g)\, d\mu = 2\,(p!)^2\, \langle f, g\rangle_{\mathfrak{H}^{\otimes p}}^2 + \sum_{r=1}^{p-1} c_{p,r}\, \big\langle f \otimes_r f,\; g \otimes_r g \big\rangle_{\mathfrak{H}^{\otimes 2(p-r)}}
\]
for some positive combinatorial constants $c_{p,r}$. Replacing the kernels $f$ and $g$ by two sequences $(f_n)$ and $(g_n)$, the classical Fourth Moment Theorem implies that the scalar products inside the sum vanish in the limit if (at least one of) the two sequences $I_p(f_n)$ and $I_p(g_n)$ converges in distribution towards a Gaussian.