Absolute continuity and convergence of densities for random vectors on Wiener chaos

The aim of this paper is to establish some new results on the absolute continuity and the convergence in total variation for a sequence of d-dimensional vectors whose components belong to a finite sum of Wiener chaoses. First we show that the probability that the determinant of the Malliavin matrix of such a vector vanishes is either zero or one, and that this probability equals one if and only if the vector takes its values in the zero set of a polynomial. We provide a bound for the degree of this annihilating polynomial, improving a result by Kusuoka. On the other hand, we show that convergence in law implies convergence in total variation, extending to the multivariate case a recent result by Nourdin and Poly. This follows from an inequality relating the total variation distance to the Fortet-Mourier distance. Finally, applications to some particular cases are discussed.


Introduction
The purpose of this paper is to establish some new results on the absolute continuity and the convergence of the densities in some L^p(R^d) for a sequence of d-dimensional random vectors whose components belong to a finite sum of Wiener chaoses. These results generalize previous works by Kusuoka [8] and by Nourdin and Poly [11], and are based on a combination of techniques from Malliavin calculus, the Carbery-Wright inequality and some recent work on algebraic dependence for families of polynomials.
Let us describe our main results. Given two d-dimensional random vectors F and G, we denote by d_TV(F, G) the total variation distance between the laws of F and G, defined by

d_TV(F, G) = sup_A |P(F ∈ A) − P(G ∈ A)|,

where the supremum is taken over all Borel sets A of R^d. There is an equivalent formulation of d_TV which is often useful:

d_TV(F, G) = (1/2) sup_φ |E[φ(F)] − E[φ(G)]|,

where the supremum is taken over all measurable functions φ : R^d → R which are bounded by 1. It is also well known (Scheffé's theorem) that, when F and G both have a law which is absolutely continuous with respect to the Lebesgue measure on R^d, then

d_TV(F, G) = (1/2) ∫_{R^d} |f(x) − g(x)| dx,

with f and g the densities of F and G respectively. On the other hand, we denote by d_FM(F, G) the Fortet-Mourier distance, given by

d_FM(F, G) = sup_φ |E[φ(F)] − E[φ(G)]|,

where the supremum is taken over all 1-Lipschitz functions φ : R^d → R which are bounded by 1. It is well known that d_FM metrizes convergence in distribution. Consider a sequence of random vectors F_n = (F_{1,n}, ..., F_{d,n}) whose components belong to ⊕_{k=0}^q H_k, where H_k stands for the kth Wiener chaos, and assume that F_n converges in distribution to a random vector F_∞. Denote by Γ(F_n) the Malliavin matrix of F_n, and assume that E[det Γ(F_n)] is bounded away from zero. Then we prove that there exist constants c, γ > 0 (depending on d and q) such that, for any n ≥ 1,

d_TV(F_n, F_∞) ≤ c d_FM(F_n, F_∞)^γ. (1.1)

So, our result implies that the sequence F_n converges not only in law but also in total variation. In [11] this result has been proved for d = 1. In this case γ = 1/(2q+1), and one only needs that F_∞ is not identically zero, which turns out to be equivalent to the fact that the law of F_∞ is absolutely continuous. This equivalence fails for d ≥ 2. The proof of this result is based on the Carbery-Wright inequality for the law of a polynomial in Gaussian random variables, together with the integration-by-parts formula of Malliavin calculus. In the multidimensional case we make use of the integration-by-parts formula based on the Poisson kernel developed by Bally and Caramellino in [1].
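Scheffé's identity above can be checked numerically in a simple one-dimensional case. The following sketch (our illustration, not part of the paper's argument) compares the integral (1/2)∫|f − g| with the closed-form total variation distance between N(0,1) and N(µ,1), which equals 2Φ(|µ|/2) − 1 because the two densities cross at µ/2:

```python
import numpy as np
from math import erf, sqrt, pi

def gauss_density(x, mu=0.0):
    return np.exp(-(x - mu) ** 2 / 2.0) / sqrt(2.0 * pi)

def tv_scheffe(mu):
    # Scheffe's theorem: d_TV(F, G) = (1/2) * integral of |f - g|
    x = np.linspace(-12.0, 12.0, 200001)
    dx = x[1] - x[0]
    return 0.5 * np.sum(np.abs(gauss_density(x) - gauss_density(x, mu))) * dx

def tv_exact(mu):
    # the two densities cross at mu/2, so d_TV = 2*Phi(|mu|/2) - 1
    Phi = lambda t: 0.5 * (1.0 + erf(t / sqrt(2.0)))
    return 2.0 * Phi(abs(mu) / 2.0) - 1.0

print(tv_scheffe(1.0), tv_exact(1.0))  # both ~ 0.3829
```

The quadrature grid and tolerance here are arbitrary choices of ours; any reasonable discretization reproduces the closed form to high accuracy.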
The convergence in total variation is very strong, and should not be expected from the mere convergence in law without some additional structure. For instance, there is a celebrated theorem of Ibragimov (see, e.g., Reiss [16]) according to which, if F_n, F_∞ are continuous random variables with unimodal densities f_n, f_∞, then F_n → F_∞ in law if and only if d_TV(F_n, F_∞) → 0. In this light, our inequality (1.1) may appear unexpected. Several consequences are detailed in Section 5. Furthermore, bearing in mind that convergence in total variation is equivalent to convergence of the densities in L^1(R^d), we improve this result by proving that, under the above assumptions on the sequence F_n, the densities converge in L^p(R^d) for some explicit p > 1 depending solely on d and q.
Motivated by the above inequality (1.1), in the first part of the paper we discuss the absolute continuity of the law of a d-dimensional random vector F = (F_1, ..., F_d) whose components belong to a finite sum of Wiener chaoses ⊕_{k=1}^q H_k. Our main result says that the following three conditions are equivalent:

1. The law of F is not absolutely continuous with respect to the Lebesgue measure on R^d.

2. There exists a nonzero polynomial H in d variables of degree at most dq^{d−1} such that H(F_1, ..., F_d) = 0 almost surely.

3. E[det Γ(F)] = 0, where Γ(F) denotes the Malliavin matrix of F.

Notice that the criterion of Malliavin calculus for the absolute continuity of the law of a random vector F says that det Γ(F) > 0 almost surely implies the absolute continuity of the law of F. We prove the stronger result that P(det Γ(F) = 0) is zero or one; as a consequence, P(det Γ(F) > 0) = 1 turns out to be equivalent to absolute continuity. The equivalence with condition 2 improves a classical result by Kusuoka ([8]), in the sense that we provide a simple proof of the existence of the annihilating polynomial based on a recent result by Kayal [7], and we give an upper bound for the degree of this polynomial. Also, it is worth noting that, compared to condition 2, condition 3 is often easier to check in practical situations; see also the end of Section 3. The paper is organized as follows. Section 2 contains some preliminary material on Malliavin calculus, the Carbery-Wright inequality and the results on algebraic dependence that will be used in the paper. In Section 3 we provide equivalent conditions for absolute continuity in the case of a random vector in a sum of Wiener chaoses. Section 4 is devoted to establishing the inequality (1.1), together with the convergence of the densities in L^p(R^d) for some p. Section 5 contains applications of these results in some particular cases. Finally, we list two open questions in Section 6.

Preliminaries
This section contains some basic elements on Gaussian analysis that will be used throughout this paper. We refer the reader to the books [10,13] for further details.

Multiple stochastic integrals
Let H be a real separable Hilbert space. We denote by X = {X(h), h ∈ H} an isonormal Gaussian process over H. That means, X is a centered Gaussian family of random variables defined on some probability space (Ω, F, P), with covariance given by

E[X(h)X(g)] = ⟨h, g⟩_H

for any h, g ∈ H. We also assume that F is generated by X.
For every k ≥ 1, we denote by H_k the kth Wiener chaos of X, defined as the closed linear subspace of L^2(Ω) generated by the family of random variables {H_k(X(h)), h ∈ H, ‖h‖_H = 1}, where H_k is the kth Hermite polynomial, given by

H_k(x) = (−1)^k e^{x²/2} (d^k/dx^k) e^{−x²/2}.

We write by convention H_0 = R. For any k ≥ 1, we denote by H^{⊗k} the kth tensor product of H. Then the mapping I_k(h^{⊗k}) = H_k(X(h)) can be extended to a linear isometry between the symmetric tensor product H^{⊙k} (equipped with the modified norm √(k!) ‖·‖_{H^{⊗k}}) and the kth Wiener chaos H_k. For k = 0 we write I_0(c) = c, c ∈ R. In the particular case H = L^2(A, A, µ), where µ is a σ-finite measure without atoms, the space H^{⊙k} coincides with the space L^2_s(µ^k) of symmetric functions which are square integrable with respect to the product measure µ^k, and for any f ∈ H^{⊙k} the random variable I_k(f) is the multiple stochastic integral of f with respect to the centered Gaussian measure generated by X.
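The normalization above gives the orthogonality relations E[H_j(Z) H_k(Z)] = k! δ_{jk} for Z ~ N(0,1), which is exactly why √(k!)‖·‖ appears in the isometry. As a side numerical sanity check (ours, not part of the paper), this can be verified with Gauss quadrature in the probabilists' Hermite convention implemented by NumPy's hermite_e module:

```python
import numpy as np
from numpy.polynomial import hermite_e as He
from math import sqrt, pi

def hermite_inner(j, k, deg=30):
    # E[He_j(Z) He_k(Z)], Z ~ N(0,1), via Gauss quadrature for weight e^{-x^2/2}
    x, w = He.hermegauss(deg)
    cj = [0.0] * j + [1.0]
    ck = [0.0] * k + [1.0]
    return float(np.sum(w * He.hermeval(x, cj) * He.hermeval(x, ck))) / sqrt(2.0 * pi)

print(hermite_inner(3, 3))  # ~ 3! = 6
print(hermite_inner(2, 3))  # ~ 0 (orthogonality)
```

The quadrature is exact for polynomial integrands of degree up to 2·deg − 1, so the check is exact up to rounding.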
Any random variable F ∈ L^2(Ω) admits an orthogonal decomposition of the form

F = Σ_{k=0}^∞ I_k(f_k),

where f_0 = E[F], and the kernels f_k ∈ H^{⊙k} are uniquely determined by F.
Let {e_i, i ≥ 1} be a complete orthonormal system in H. Given f ∈ H^{⊙k} and g ∈ H^{⊙j}, for every r = 0, ..., k ∧ j, the contraction of f and g of order r is the element of H^{⊗(k+j−2r)} defined by

f ⊗_r g = Σ_{i_1,...,i_r=1}^∞ ⟨f, e_{i_1} ⊗ ··· ⊗ e_{i_r}⟩_{H^{⊗r}} ⊗ ⟨g, e_{i_1} ⊗ ··· ⊗ e_{i_r}⟩_{H^{⊗r}}.
The contraction f ⊗_r g is not necessarily symmetric, and we denote by f ⊗̃_r g its symmetrization.
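In coordinates, when H = R^m is finite dimensional, tensors in H^{⊗k} are k-dimensional arrays and the contraction f ⊗_r g amounts to summing over r paired indices (for symmetric f and g it does not matter which indices are paired). A minimal numerical sketch of ours, where the function name contract is our own:

```python
import numpy as np

def contract(f, g, r):
    # contraction f ⊗_r g: pair the last r axes of f with the first r axes of g
    if r == 0:
        return np.multiply.outer(f, g)
    return np.tensordot(f, g, axes=(list(range(f.ndim - r, f.ndim)), list(range(r))))

a = np.array([1.0, 2.0])
b = np.array([3.0, -1.0])
f = np.multiply.outer(a, a)          # a ⊗ a ∈ H^{⊗2}
g = np.multiply.outer(b, b)          # b ⊗ b ∈ H^{⊗2}

# (a ⊗ a) ⊗_1 (b ⊗ b) = <a, b> (a ⊗ b), and the full contraction gives <a, b>^2
print(np.allclose(contract(f, g, 1), np.dot(a, b) * np.multiply.outer(a, b)))
print(np.isclose(contract(f, g, 2), np.dot(a, b) ** 2))
```

Both checks print True; symmetrization of the result would be a further average over permutations of the free indices.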

Malliavin calculus
Let S be the set of all cylindrical random variables of the form

F = g(X(h_1), ..., X(h_n)),

where n ≥ 1, h_i ∈ H, and g : R^n → R is infinitely differentiable and such that all its partial derivatives have polynomial growth. The Malliavin derivative of F is the element of L^2(Ω; H) defined by

DF = Σ_{i=1}^n (∂g/∂x_i)(X(h_1), ..., X(h_n)) h_i.

By iteration, for every m ≥ 2, we define the mth derivative D^m F, which is an element of L^2(Ω; H^{⊙m}). For m ≥ 1 and p ≥ 1, D^{m,p} denotes the closure of S with respect to the norm ‖·‖_{m,p} defined by

‖F‖_{m,p}^p = E[|F|^p] + Σ_{j=1}^m E[‖D^j F‖_{H^{⊗j}}^p].

We also set D^∞ = ∩_{m≥1} ∩_{p≥1} D^{m,p}. As a consequence of the hypercontractivity property of the Ornstein-Uhlenbeck semigroup, all the ‖·‖_{m,p}-norms are equivalent on a finite sum of Wiener chaoses. This is a basic result that will be used throughout the paper.
We denote by δ the adjoint of the operator D, also called the divergence operator. An element u ∈ L^2(Ω; H) belongs to the domain of δ, denoted Dom δ, if |E⟨DF, u⟩_H| ≤ c_u ‖F‖_{L^2(Ω)} for any F ∈ D^{1,2}, where c_u is a constant depending only on u. Then the random variable δ(u) is defined by the duality relationship

E[F δ(u)] = E[⟨DF, u⟩_H], F ∈ D^{1,2}.

Given a random vector F = (F_1, ..., F_d) such that F_i ∈ D^{1,2}, we denote by Γ(F) the Malliavin matrix of F, which is the random nonnegative definite matrix defined by

Γ(F)_{i,j} = ⟨DF_i, DF_j⟩_H, 1 ≤ i, j ≤ d.

If F_i ∈ D^{1,p} for some p > 1 and any i = 1, ..., d, and if det Γ(F) > 0 almost surely, then the law of F is absolutely continuous with respect to the Lebesgue measure on R^d (see, for instance, [13, Theorem 2.1.2]). This is our basic criterion for absolute continuity in this paper.
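When F depends on finitely many i.i.d. standard Gaussians G_k = I_1(e_k), the Malliavin matrix reduces to Γ(F) = J J^T, with J the Jacobian of the defining map evaluated at (G_1, ..., G_n); in particular det Γ(F) ≥ 0 always. A small sketch of ours with a hypothetical pair of polynomial functionals (the choice of F_1, F_2 is purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical example (ours): F = (F1, F2) with F1 = G1^2 + G2, F2 = G1*G2,
# where G1, G2 are independent standard Gaussians, i.e. Gk = I_1(e_k)
def jacobian(g1, g2):
    # rows are the coordinates of DF1 and DF2 in the basis (e_1, e_2)
    return np.array([[2.0 * g1, 1.0],
                     [g2, g1]])

g1, g2 = rng.standard_normal(2)
J = jacobian(g1, g2)
Gamma = J @ J.T                       # Malliavin matrix Γ(F) = J J^T
print(np.linalg.det(Gamma) >= 0.0)    # Γ is nonnegative definite
# here det Γ = (det J)^2 = (2 g1^2 - g2)^2, which vanishes only on a null set
print(np.isclose(np.linalg.det(Gamma), (2.0 * g1**2 - g2) ** 2))
```

For a square Jacobian, det Γ = (det J)², so det Γ > 0 almost surely whenever det J is a nonzero polynomial, consistent with the absolute continuity criterion above.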

Carbery-Wright inequality
Throughout the paper we will make use of the following inequality due to Carbery and Wright [4]: there is a universal constant c > 0 such that, for any polynomial Q : R^n → R of degree at most d and any α > 0, we have

(E[Q(X_1, ..., X_n)^2])^{1/(2d)} P(|Q(X_1, ..., X_n)| ≤ α) ≤ c d α^{1/d}, (2.3)

where X_1, ..., X_n are independent random variables with law N(0, 1).
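The α^{1/d} small-ball scaling in (2.3) can be checked exactly for the monomial Q(x) = x^d in one variable, since P(|X^d| ≤ α) = P(|X| ≤ α^{1/d}) = erf(α^{1/d}/√2). The constant 3 in the sketch below is our own comfortable choice for this particular example, not the (unknown to us) optimal universal constant:

```python
from math import erf, sqrt

def double_factorial(n):
    out = 1
    while n > 1:
        out *= n
        n -= 2
    return out

def small_ball(d, alpha):
    # exact: P(|X^d| <= alpha) = P(|X| <= alpha^(1/d)) for X ~ N(0,1)
    return erf(alpha ** (1.0 / d) / sqrt(2.0))

ok = True
for d in (1, 2, 3, 4):
    m2 = double_factorial(2 * d - 1)            # E[X^(2d)] = (2d-1)!!
    for alpha in (0.01, 0.1, 1.0):
        lhs = m2 ** (1.0 / (2 * d)) * small_ball(d, alpha)
        ok = ok and lhs <= 3.0 * d * alpha ** (1.0 / d)
print(ok)  # the alpha^(1/d) scaling of (2.3) holds with c = 3 here
```

This is of course only a one-dimensional consistency check; the strength of the Carbery-Wright inequality is that the constant does not depend on n or on the polynomial.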

Algebraic dependence
Let F be a field and let f = (f_1, ..., f_k) ⊂ F[x_1, ..., x_n] be a set of k polynomials of degree at most q in n variables with coefficients in F. These polynomials are said to be algebraically dependent if there exists a nonzero k-variate polynomial A(t_1, ..., t_k) ∈ F[t_1, ..., t_k] such that A(f_1, ..., f_k) = 0. The polynomial A is then called an (f_1, ..., f_k)-annihilating polynomial.
Denote by

J_f = ((∂f_i/∂x_j))_{1≤i≤k, 1≤j≤n}

the Jacobian matrix of the set of polynomials in f. A classical result (see, e.g., Ehrenborg and Rota [6] for a proof), valid in characteristic zero, says that f_1, ..., f_k are algebraically independent if and only if the Jacobian matrix J_f has rank k.
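The Jacobian criterion can be illustrated numerically: the pair f_1 = x_1 + x_2, f_2 = (x_1 + x_2)² is algebraically dependent (it is annihilated by A(t_1, t_2) = t_1² − t_2), and correspondingly its Jacobian has rank 1 at every point. This is our own toy example, not one from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# f1 = x1 + x2 and f2 = (x1 + x2)^2 are algebraically dependent:
# A(t1, t2) = t1^2 - t2 annihilates them
def jacobian(x1, x2):
    s = x1 + x2
    return np.array([[1.0, 1.0],            # grad f1
                     [2.0 * s, 2.0 * s]])   # grad f2

ranks = {int(np.linalg.matrix_rank(jacobian(*rng.standard_normal(2))))
         for _ in range(100)}
print(ranks)  # {1}: the Jacobian never reaches full rank 2
```

The rows of the Jacobian are proportional everywhere, which is exactly the rank deficiency that the criterion detects.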
Suppose that the polynomials f = (f_1, ..., f_k) are algebraically dependent. Then the set of f-annihilating polynomials forms an ideal in the polynomial ring F[t_1, ..., t_k]. In a recent work, Kayal (see [7]) has established some properties of this ideal. In particular (see [7], Lemma 7), he has proved that if no proper subset of f is algebraically dependent, then the ideal of f-annihilating polynomials is generated by a single irreducible polynomial. Moreover (see [7], Theorem 11), the degree of this generator is at most kq^{k−1}.

Absolute continuity of the law of a system of multiple stochastic integrals
The purpose of this section is to extend a result by Kusuoka [8] on the characterization of the absolute continuity of a vector whose components are finite sums of multiple stochastic integrals, using techniques of Malliavin calculus. In what follows, the notation R[X 1 , . . . , X d ] stands for the set of d-variate polynomials over R.
Theorem 3.1 Fix q, d ≥ 1, and let F = (F_1, ..., F_d) be a random vector such that F_i ∈ ⊕_{k=1}^q H_k for any i = 1, ..., d. Let Γ := Γ(F) be the Malliavin matrix of F. Then the following assertions are equivalent:

(a) The law of F is not absolutely continuous with respect to the Lebesgue measure on R^d.

(b) There exists H ∈ R[X_1, ..., X_d] \ {0} of degree at most dq^{d−1} such that H(F_1, ..., F_d) = 0 almost surely.

(c) E[det Γ] = 0.
Proof of (a)⇒(c). Let us prove ¬(c) ⇒ ¬(a). Set N = 2d(q − 1) and let {e_k, k ≥ 1} be an orthonormal basis of H. Since det Γ ∈ ⊕_{k=0}^N H_k, there exists a sequence {Q_n, n ≥ 1} of real-valued polynomials of degree at most N such that the random variables Q_n(I_1(e_1), ..., I_1(e_n)) converge in L^2(Ω) and almost surely to det Γ as n tends to infinity (see [11, Theorem 3.1, first step of the proof] for an explicit construction). Assume now that E[det Γ] > 0. Then there exists n_0 such that, for n ≥ n_0, E[|Q_n(I_1(e_1), ..., I_1(e_n))|] > 0. We deduce from the Carbery-Wright inequality (2.3) the existence of a universal constant c > 0 such that, for any n ≥ 1 and λ > 0,

P(|Q_n(I_1(e_1), ..., I_1(e_n))| ≤ λ) ≤ c λ^{1/N} (E[Q_n(I_1(e_1), ..., I_1(e_n))^2])^{−1/(2N)}.

Using the property E[Q_n^2] ≥ (E[|Q_n|])^2 and letting n tend to infinity, we get

P(|det Γ| ≤ λ) ≤ c λ^{1/N} (E[|det Γ|])^{−1/N}.

Letting λ → 0, we get that P(det Γ = 0) = 0. As an immediate consequence of the absolute continuity criterion (see, for instance, [13, Theorem 2.1.1]), we get the absolute continuity of the law of F, and assertion (a) does not hold. It is worth noting that, in passing, we have proved that P(det Γ = 0) is zero or one.
Proof of (b)⇒(a). Assume the existence of H ∈ R[X 1 , · · · , X d ] \ {0} such that, almost surely, H(F 1 , . . . , F d ) = 0. Since H ≡ 0, the zeros of H constitute a closed subset of R d with Lebesgue measure 0. As a result, the vector F cannot have a density with respect to the Lebesgue measure.
Proof of (c)⇒(b). Let {e_k, k ≥ 1} be an orthonormal basis of H, and set G_k = I_1(e_k) for any k ≥ 1. In order to illustrate the method of proof, we first deal with the finite-dimensional case, that is, when

F_i = P_i(G_1, ..., G_n), i = 1, ..., d,

where each P_i is a polynomial of degree at most q. In that case,

DF_i = Σ_{j=1}^n (∂P_i/∂x_j)(G_1, ..., G_n) e_j,

and the Malliavin matrix Γ of F can be written as Γ = AA^T, where

A = ((∂P_i/∂x_j)(G_1, ..., G_n))_{1≤i≤d, 1≤j≤n}.

As a consequence, taking into account that the support of the law of (G_1, ..., G_n) is R^n, if det Γ = 0 almost surely, then the Jacobian matrix ((∂P_i/∂x_j)(y_1, ..., y_n))_{d×n} has rank strictly less than d for all (y_1, ..., y_n) ∈ R^n. Statement (b) is then a consequence of Theorem 2 and Theorem 11 in [7].
Consider now the general case. Any symmetric element f ∈ H^{⊙k} can be written as

f = Σ_{l_1,...,l_k ≥ 1} a_{l_1...l_k} e_{l_1} ⊗ ··· ⊗ e_{l_k}.

Setting k_l = #{j : l_j = l}, the multiple stochastic integral of e_{l_1} ⊗ ··· ⊗ e_{l_k} can be written in terms of Hermite polynomials as

I_k(e_{l_1} ⊗ ··· ⊗ e_{l_k}) = Π_{l≥1} H_{k_l}(G_l),

where the above product is finite. Thus,

I_k(f) = Σ_{l_1,...,l_k ≥ 1} a_{l_1...l_k} Π_{l≥1} H_{k_l}(G_l),

where the series converges in L^2. This implies that we can write

I_k(f) = P(G_1, G_2, ...), (3.5)

where P : R^ℕ → R is a function defined ν^{⊗ℕ}-almost everywhere, with ν the standard normal distribution. In other words, we can consider I_k(f) as a random variable defined on the probability space (R^ℕ, ν^{⊗ℕ}). On the other hand, for any n ≥ 1 and for almost all y_{n+1}, y_{n+2}, ... in R, the function (y_1, ..., y_n) → P(y_1, y_2, ...) is a polynomial of degree at most k. By linearity, from the representation (3.5) we deduce the existence of mappings P_1, ..., P_d : R^ℕ → R, defined ν^{⊗ℕ}-almost everywhere, such that F_i = P_i(G_1, G_2, ...) for all i = 1, ..., d, and such that, for all n ≥ 1 and almost all y_{n+1}, y_{n+2}, ... in R, the mapping (y_1, ..., y_n) → P_i(y_1, y_2, ...) is a polynomial of degree at most q. With this notation, the Malliavin matrix Γ can be expressed as Γ = AA^T, where

A = ((∂P_i/∂y_j)(G_1, G_2, ...))_{1≤i≤d, j≥1}.

Consider the truncated Malliavin matrix Γ_n = A_n A_n^T, where

A_n = ((∂P_i/∂y_j)(G_1, G_2, ...))_{1≤i≤d, 1≤j≤n}.

From the Cauchy-Binet formula we deduce that det Γ_n is increasing in n and converges to det Γ. Therefore, if det Γ = 0 almost surely, then det Γ_n = 0 almost surely for each n ≥ 1.
Suppose that E[det Γ] = 0, which, by the zero-one law proved above, implies that det Γ = 0 almost surely. Then, for all n ≥ 1, det Γ_n = 0 almost surely. We can assume that, for any proper subset {i_1, ..., i_r} ⊂ {1, ..., d}, E[det Γ(F_{i_1}, ..., F_{i_r})] > 0, because otherwise we would work with a proper subset of this family. This implies that, for n ≥ n_0 and for any such proper subset, E[det Γ_n(F_{i_1}, ..., F_{i_r})] > 0, where Γ_n denotes the truncated Malliavin matrix defined above. Then, applying the Carbery-Wright inequality, we can show that the probability P(det Γ_n(F_{i_1}, ..., F_{i_r}) = 0) is zero or one, so we deduce that det Γ_n(F_{i_1}, ..., F_{i_r}) > 0 almost surely.
We can consider these polynomials as elements of the ring of polynomials K[y_1, ..., y_n], where K is the field generated by all multiple stochastic integrals. This field is well defined because, by a result of Shigekawa [17], if F and G are finite sums of multiple stochastic integrals and G ≢ 0, then G is different from zero almost surely and F/G is well defined. The Jacobian of this set of polynomials is

J(y_1, ..., y_n) = ((∂P_i/∂y_j)(y_1, ..., y_n))_{1≤i≤d, 1≤j≤n}.

This leads to the construction of a sequence of annihilating polynomials H_n ∈ K[t_1, ..., t_d] satisfying the following properties:

(i) The coefficients of H_n are random variables measurable with respect to the σ-field σ{G_{n+1}, G_{n+2}, ...}.
(ii) The coefficient of the largest monomial in antilexicographic order occurring in H n is 1.
(iii) For all y_1, ..., y_n ∈ R, H_n(P_1(y_1, ..., y_n, G_{n+1}, ...), ..., P_d(y_1, ..., y_n, G_{n+1}, ...)) = 0.

If we apply property (iii) to n + 1 and substitute y_{n+1} by G_{n+1}, we obtain H_{n+1}(P_1(y_1, ..., y_n, G_{n+1}, ...), ..., P_d(y_1, ..., y_n, G_{n+1}, ...)) = 0.

Notice that when k = 1 the above formula for E[det Γ] reduces to E[det Γ] = det C, where C is the covariance matrix of (F, G).
Example 2. Let (F, G) = (I_2(f), I_k(g)), with k ≥ 2, and let Γ be the Malliavin matrix of (F, G). Let us compute E[det Γ]. We deduce that E[det Γ] > 0 if and only if the right-hand side of (3.7) is strictly positive.
Consider the particular case k = 2, that is, F = (I_2(f), I_2(g)), and let C be the covariance matrix of F. By specializing (3.7) to k = 2, we get that

E[det Γ] ≥ 4 det C. (3.8)

We deduce an interesting result that generalizes a well-known criterion for Gaussian pairs.

Proposition 3.2 Let F = (I_2(f), I_2(g)) and let C be the covariance matrix of F. Then the law of F has a density if and only if det C > 0.
Proof. If det C > 0, then E[det Γ] > 0 by (3.8); we deduce from Theorem 3.1 that the law of F has a density. Conversely, if det C = 0, then I_2(f) and I_2(g) are proportional; this prevents F from having a density.
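Inequality (3.8) can be sanity-checked in a finite-rank model of our own: write I_2(f) = Z^T A Z − tr A and I_2(g) = Z^T B Z − tr B for symmetric matrices A, B and a standard Gaussian vector Z, so that DF = 2AZ and DG = 2BZ. Using the classical identity E[(Z^T M Z)(Z^T N Z)] = tr M tr N + 2 tr(MN) for symmetric M, N, one finds E[det Γ] − 4 det C = 16(tr(A²B²) − tr(ABAB)) = 8 ‖AB − BA‖_F² ≥ 0. The following sketch (ours, under this finite-rank assumption) evaluates both sides and cross-checks E[det Γ] by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
A = rng.standard_normal((n, n)); A = (A + A.T) / 2.0   # symmetric kernel of F
B = rng.standard_normal((n, n)); B = (B + B.T) / 2.0   # symmetric kernel of G
tr = np.trace

# covariance matrix of (F, G): Var F = 2 tr(A^2), Cov(F, G) = 2 tr(AB)
det_C = 4.0 * tr(A @ A) * tr(B @ B) - 4.0 * tr(A @ B) ** 2

# exact E[det Gamma] from E[(Z'MZ)(Z'NZ)] = tr M tr N + 2 tr(MN)
E_det_Gamma = 16.0 * (tr(A @ A) * tr(B @ B) - tr(A @ B) ** 2
                      + tr(A @ A @ B @ B) - tr(A @ B @ A @ B))

# Monte Carlo cross-check using DF = 2AZ, DG = 2BZ
Z = rng.standard_normal((400000, n))
g11 = 4.0 * np.einsum('ij,jk,ik->i', Z, A @ A, Z)
g12 = 4.0 * np.einsum('ij,jk,ik->i', Z, (A @ B + B @ A) / 2.0, Z)
g22 = 4.0 * np.einsum('ij,jk,ik->i', Z, B @ B, Z)
mc = float(np.mean(g11 * g22 - g12 ** 2))

print(E_det_Gamma >= 4.0 * det_C)   # inequality (3.8)
```

The gap 8‖AB − BA‖_F² vanishes exactly when A and B commute, which is consistent with the degenerate (proportional) case in the proof above.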

Convergence in law and total variation distance
In this section we first prove an inequality between the total variation distance and the Fortet-Mourier distance for vectors in a finite sum of Wiener chaoses.
Theorem 4.1 Fix q, d ≥ 2, and let F_n = (F_{1,n}, ..., F_{d,n}) be a sequence of random vectors such that F_{i,n} ∈ ⊕_{k=1}^q H_k for any i = 1, ..., d and n ≥ 1. Let Γ_n := Γ(F_n) be the Malliavin matrix of F_n. Assume that F_n → F_∞ in law as n → ∞ and that there exists β > 0 such that E[det Γ_n] ≥ β for all n. Then F_∞ has a density and, for any γ < 1/((d+1)(4d(q−1)+3)+1), there exists c > 0 such that

d_TV(F_n, F_∞) ≤ c d_FM(F_n, F_∞)^γ. (4.9)

In particular, F_n → F_∞ in total variation as n → ∞.
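For concreteness, the exponents appearing in Theorem 4.1 and in Proposition 4.2 below are easy to evaluate: for pairs of second-chaos elements (d = q = 2), D = (d+1)(4d(q−1)+3)+1 = 34, so any γ < 1/34 works in (4.9), and the densities converge in L^p for p < 10/9. A trivial helper of ours:

```python
def gamma_exponent_bound(d, q):
    # Theorem 4.1: any gamma < 1/D with D = (d+1)(4d(q-1)+3)+1 works in (4.9)
    return 1.0 / ((d + 1) * (4 * d * (q - 1) + 3) + 1)

def p_density_bound(d, q):
    # Proposition 4.2: densities converge in L^p for p < 1 + 1/(2d^2(q-1)+d-1)
    return 1.0 + 1.0 / (2 * d * d * (q - 1) + d - 1)

print(gamma_exponent_bound(2, 2))  # 1/34
print(p_density_bound(2, 2))       # 10/9
```

Both exponents degrade quickly as d and q grow, which reflects the successive losses in the Carbery-Wright and regularization estimates of the proof.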
Proof. The proof is divided into several steps.
Step 1. Since F_{i,n} → F_{i,∞} in law with F_{i,n} ∈ ⊕_{k=1}^q H_k, it follows from [11, Lemma 2.4] that, for any i = 1, ..., d, the sequence (F_{i,n}) satisfies sup_n E|F_{i,n}|^p < ∞ for all p ≥ 1. Let φ : R^d → R be a C^∞ function such that ‖φ‖_∞ ≤ 1. We can write, for any n, m, p, M ≥ 1, a decomposition in which the event that max_{1≤i≤d} |F_{i,n}| exceeds M is treated separately. Therefore, since sup_{n≥1} E[max_{1≤i≤d} |F_{i,n}|^p] is finite, there exists a constant c > 0 (depending on p) satisfying, for all n ≥ 1, the bound (4.10). As in [11], the idea to bound the first term on the right-hand side of (4.10) is to regularize the function φ by means of an approximation of the identity and then to control the error term using the integration by parts of Malliavin calculus. Let φ : R^d → R be a C^∞ function with compact support in [−M, M]^d satisfying ‖φ‖_∞ ≤ 1. Let n, m ≥ 1 be integers, let 0 < α ≤ 1, and let ρ : R^d → R_+ be a C^∞ function with compact support such that ∫_{R^d} ρ(x) dx = 1; set ρ_α(x) = α^{−d} ρ(x/α). By [11, (3.26)], φ * ρ_α is bounded by 1 and is Lipschitz continuous with constant 1/α. We can thus write (4.11), where d_FM is the Fortet-Mourier distance and R_α denotes the error made when replacing φ by its regularization φ * ρ_α. In order to estimate the term R_α we decompose the expectation into two parts, according to whether det Γ_n ≤ ε or det Γ_n > ε.

Step 2. We claim that there exists c > 0 such that, for all ε > 0 and all n ≥ 1,

P(det Γ_n ≤ ε) ≤ c ε^{1/(2(q−1)d+1)}. (4.12)

Indeed, for any λ > 0, by using (3.4) together with the assumption E[det Γ_n] ≥ β, we can bound this probability, and choosing λ = ε^{2(q−1)d/(2(q−1)d+1)} proves the claim (4.12). As a consequence, the estimate (4.12) implies (4.13).

Step 3. In this step we derive the integration-by-parts formula that will be useful for our purposes. The method is based on the representation of the density of a Wiener functional by means of the Poisson kernel, obtained by Malliavin and Thalmaier in [9], and further developed by Bally and Caramellino in the works [1] and [2].
Let h : R^d → R be a C^∞ function with compact support, and consider a random variable W ∈ D^∞. Consider the Poisson kernel Q_d in R^d (d ≥ 2), defined as the solution to the equation ∆Q_d = δ_0. We know that Q_2(x) = c_2 log |x| and that Q_d(x) = c_d |x|^{2−d} for d ≥ 3. Then we have the identity

h(x) = Σ_{i=1}^d ∫_{R^d} (∂_i Q_d)(x − y) (∂_i h)(y) dy. (4.14)

As a consequence, we can write E[W h(F_n)] in terms of the partial derivatives ∂_i Q_d. We claim the integration-by-parts identity (4.15), where δ is the divergence operator and Com(·) stands for the usual comatrix (cofactor matrix) operator. The equality (4.15) follows easily from the chain rule

⟨D(∂_i Q_d)(F_n − y), DF_{a,n}⟩_H = Σ_{b=1}^d (∂_b ∂_i Q_d)(F_n − y) ⟨DF_{b,n}, DF_{a,n}⟩_H,

multiplying by W det Γ_n, taking the mathematical expectation, and applying the duality relationship between the derivative and the divergence operators. The random variable A_{i,n}(W) appearing there satisfies A_{i,n}(W) ∈ D^∞, and we can write (4.16).

Step 4. We apply the identity (4.16) to the function h = φ − φ * ρ_α and to the random variable W = W_{n,ε} = (det Γ_n + ε)^{−1}. In this way we obtain (4.17). We claim that, for any p ≥ 1, there exists a constant c > 0 such that the moment bound (4.18) holds. Indeed, this follows immediately from the fact that the sequence (F_{i,n}) is uniformly bounded in L^p for each i = 1, ..., d. On the other hand, taking into account that |∇Q_d(x)| ≤ k_d |x|^{1−d} for some constant k_d, we can estimate the kernel terms. We can assume that the support of ρ is the unit ball {|z| ≤ 1}. Then, for any (y, z) with y ∈ (B_R)^c and |z| ≤ α, both |y| and |y − z| are bounded from below by R − α, and we obtain the corresponding estimate outside B_R. On the other hand, there exists a constant c > 0 such that, for |y| ≤ R, |y − z| ≤ R and |z| ≤ α, a similar estimate holds inside B_R. Substituting these estimates into (4.17) and assuming that M ≥ 1 yields a bound with some constant c > 0. Choosing R = α^{1/(d+1)} M^{d/(d+1)} and assuming α ≤ 1, we obtain (4.19) for some constant c > 0.
Step 5. From (4.11), (4.13) and (4.19) we obtain a bound on |E[φ(F_n)] − E[φ(F_m)]|. Letting m → ∞, we get (4.20). Finally, by plugging (4.20) into (4.10) we obtain an inequality (4.21), valid for every M ≥ 1, p ≥ 1, n ≥ 1, ε > 0 and 0 < α ≤ 1, where the constant c depends on p. Notice that d_FM(F_n, F_∞) ≤ 1 for n large enough (n ≥ n_0, say). So, assuming that n ≥ n_0 and choosing ε and α appropriately, we arrive at (4.22), where D = (d + 1)(4(q − 1)d + 3) + 1. Notice that α ≤ 1 provided M ≥ 1 and n ≥ n_0. Optimizing with respect to M, and taking into account that p can be chosen arbitrarily large, we have proved that for any γ < 1/D there exists c > 0 such that (4.9) holds true.
Step 6. Finally, let us prove that the law of F_∞ is absolutely continuous with respect to the Lebesgue measure. Let A ⊂ R^d be a Borel set of Lebesgue measure zero. Since E[det Γ_n] ≥ β > 0, the law of F_n is absolutely continuous by Theorem 3.1, and hence P(F_n ∈ A) = 0. Since d_TV(F_n, F_∞) → 0 as n → ∞, we deduce that P(F_∞ ∈ A) = 0, proving that F_∞ has a density by the Radon-Nikodym theorem. The proof of the theorem is now complete.
Under the assumptions of Theorem 4.1, if we denote by ρ_n (resp. ρ_∞) the density of F_n (resp. F_∞), then the convergence in total variation is equivalent to ‖ρ_n − ρ_∞‖_{L^1(R^d)} → 0 as n tends to infinity. We are going to show that this convergence actually holds in L^p(R^d) for any 1 ≤ p < 1 + 1/(2d²(q−1)+d−1).
Proposition 4.2 Suppose that F_n is a sequence of d-dimensional random vectors satisfying the conditions of Theorem 4.1. Denote by ρ_n (resp. ρ_∞) the density of F_n (resp. F_∞). Then, for any 1 ≤ p < 1 + 1/(2d²(q−1)+d−1), we have ‖ρ_n − ρ_∞‖_{L^p(R^d)} → 0.

Proof. The proof will be done in several steps. We set N = 2d(q − 1) and we fix p such that 1 < p < 1 + 1/(2d²(q−1)+d−1).

1) Denote by Γ_n the Malliavin matrix of F_n. Using the Carbery-Wright inequality (2.3), we obtain a uniform estimate valid for any γ < 1/N.

2) Fix a real number M > 0 and let α < 1/(N+1). Applying the identity (4.16) to h = ρ_{n,α}^{p−1} 1_{{|ρ_n(·)| ≤ M}} and W = 1, and taking into account that (4.18) holds, yields (4.25). For any x ∈ R^d and any function f, the integral of |∂_i Q_d(x − y)| |f(y)| can be decomposed into the regions {y : |y| ≤ 1} and {y : |y| > 1}. Then, using Hölder's inequality, for any exponents β > d and γ < d, there exists a constant C_{β,γ} such that the corresponding estimate holds. We apply this estimate to the function f = ρ_{n,α}^{p−1} 1_{{|ρ_n(·)| ≤ M}} and to the exponents β = pα/(p−1) > d and γ = α/(p−1) < d. In this way we obtain (4.26) from (4.25). From (4.24) and (4.26) we deduce the existence of a constant K, independent of M and n, such that (4.27) holds, implying in turn a uniform bound.

3) Let n, m ≥ 1. By applying Hölder's inequality we obtain, for any 0 < ε < 1, the estimate (4.28). We can choose ε > 0 small enough, and then, from Part 2), we deduce that {ρ_n} is a Cauchy sequence in L^p(R^d). As a result, {ρ_n} converges in L^p(R^d), which is the desired conclusion.

Some applications
In this section we present some consequences of Theorems 3.1 and 4.1. We start with a straightforward consequence of Theorem 4.1.
Proposition 5.1 Fix q, d ≥ 2, and let F_n = (F_{1,n}, ..., F_{d,n}) be a sequence of random vectors such that F_{i,n} ∈ ⊕_{k=1}^q H_k for any i = 1, ..., d and n ≥ 1. Let Γ_n := Γ(F_n) be the Malliavin matrix of F_n. As n → ∞, assume that F_n → F_∞ in law and that Γ_n → M_∞ in law, with E[det M_∞] > 0. Then F_n converges to F_∞ in total variation.

Proof. We set N = 2d(q − 1). Since Γ_n(i, j) → M_∞(i, j) in law with Γ_n(i, j) ∈ ⊕_{k=0}^N H_k, it follows from [11, Lemma 2.4] that, for any i, j = 1, ..., d, the sequence Γ_n(i, j), n ≥ 1, satisfies sup_n E|Γ_n(i, j)|^p < ∞ for all p ≥ 1. As a result, E[det Γ_n] → E[det M_∞] > 0, and the desired conclusion follows from Theorem 4.1.
Our first application of Proposition 5.1 strengthens the celebrated Peccati-Tudor [15] criterion of asymptotic normality.
Proof. Since F_{i,n} → F_{i,∞} in law with F_{i,n} ∈ H_{k_i}, it follows from [11, Lemma 2.4] that, for any i = 1, ..., d, the sequence (F_{i,n}) satisfies sup_n E|F_{i,n}|^p < ∞ for all p ≥ 1. In particular, E[F_{i,n} F_{j,n}] → C(i, j) as n → ∞ for any i, j = 1, ..., d. Denote by Γ_n the Malliavin matrix of F_n. As a consequence of the main result in Nualart and Ortiz-Latorre [14], we deduce that Γ_n → C in L^2(Ω) as n → ∞. Finally, the desired conclusion follows from Proposition 5.1.
The next result is a corollary of Proposition 5.1 and Theorem 3.1, and substantially improves Theorem 4 of Breton [3]. It represents a multidimensional version of a result by Davydov and Martynova [5].

1. Suppose that F_n converges in L^2(Ω) to F_∞ and that the law of F_∞ is absolutely continuous with respect to the Lebesgue measure. Then F_n converges to F_∞ in total variation.
Proof. By the isometry of multiple stochastic integrals, for any i = 1, . . . , d, the sequence f i,n ∈ H ⊙k i converges as n tends to infinity to an element f i,∞ ∈ H ⊙k i , and we can write F ∞ = (I k 1 (f 1,∞ ), . . . , I k d (f d,∞ )).
Since the law of F_∞ is absolutely continuous with respect to the Lebesgue measure, we deduce from Theorem 3.1 that E[det Γ(F_∞)] > 0, where Γ(F_∞) is the Malliavin matrix of F_∞. On the other hand, taking into account that all the norms ‖·‖_{m,p} are equivalent on a fixed Wiener chaos, we deduce that Γ(F_n)(i, j) → Γ(F_∞)(i, j) in L^p(Ω) as n tends to infinity, for all 1 ≤ i, j ≤ d and all p ≥ 2. Therefore, we can conclude the proof using Proposition 5.1.
In the case of a sequence of 2-dimensional vectors in the second chaos, it suffices to assume that the covariance of the limit is non singular. In fact, we have the following result.
Corollary 5.4 Let (F_n, G_n) = (I_2(f_n), I_2(g_n)) be a sequence of pairs converging in law to (F_∞, G_∞) as n tends to infinity. Let C_∞ be the covariance matrix of (F_∞, G_∞) and assume that det C_∞ > 0. Then (F_n, G_n) converges to (F_∞, G_∞) in total variation.
Proof. Let Γ_n (resp. C_n) be the Malliavin (resp. covariance) matrix of (F_n, G_n). Taking into account that all p-norms are equivalent on a fixed Wiener chaos, we deduce that both {F_n, n ≥ 1} and {G_n, n ≥ 1} are bounded, uniformly in n, in all the spaces L^p(Ω). Thus, det C_n → det C_∞ as n tends to infinity. On the other hand, we have by (3.8) that E[det Γ_n] ≥ 4 det C_n. Letting n tend to infinity, we deduce that E[det Γ_n] ≥ (1/2) det C_∞ > 0 for n large enough. Theorem 4.1 allows us to conclude.
Another situation where we only need the limit to be nondegenerate in order to obtain convergence in total variation is the case where the limit has pairwise independent components.
Proof. The proof is divided into several steps.
As a consequence, for 0 < α <

Step 2. We claim that E[det Γ(F_n)] − E[‖DF_{1,n}‖² ··· ‖DF_{d,n}‖²] → 0 as n → ∞. To prove the claim, it suffices to show that, for any 1 ≤ i ≠ j ≤ d, one has ⟨DF_{i,n}, DF_{j,n}⟩_H → 0