Stable limit theorems on the Poisson space

We prove limit theorems for functionals of a Poisson point process using the Malliavin calculus on the Poisson space. The target distribution is conditionally either a Gaussian vector or a Poisson random variable. The convergence is stable and our conditions are expressed in terms of the Malliavin operators. For conditionally Gaussian limits, we also obtain quantitative bounds, given in the Monge-Kantorovich transport distance in the univariate case, and in another probabilistic variational distance in higher dimensions. Our work generalizes several limit theorems on the Poisson space, including the seminal works by Peccati, Solé, Taqqu & Utzet for Gaussian approximations and by Peccati for Poisson approximations, as well as the recently established fourth-moment theorem on the Poisson space of Döbler & Peccati. We give an application to stochastic processes.


Introduction
One of the celebrated contributions of Rényi [36,37] is a refinement of the notion of convergence in law, commonly referred to as stable convergence. Stable convergence is tailored for studying conditional limits of sequences of random variables. Thus, a stable limit is, typically, a mixture, that is, in our terminology, a random variable whose law depends on a random parameter; for instance, a centered Gaussian random variable with random variance, or a Poisson random variable with random mean. In the setting of semi-martingales, the book by Jacod & Shiryaev [13] summarizes archetypal stable convergence results involving such mixtures. More recently, results by Nourdin & Nualart [22], Harnett & Nualart [10], and Nourdin, Nualart & Peccati [23] give sufficient conditions and quantitative bounds for the stable convergence of functionals of an isonormal Gaussian process to a Gaussian mixture. Typically, applications of such results study the limit of a sequence of quadratic functionals of a fractional Brownian motion. The three references [22,10,23] make pervasive use of the Malliavin calculus to prove such limit theorems. Earlier works by Nualart & Ortiz-Latorre [26] and by Nourdin & Peccati [25] initiated this approach: they use the Malliavin calculus to prove central limit theorems for iterated Itô integrals initially obtained by Nualart & Peccati [27] with different tools. These far-reaching contributions form a milestone in the theory of limit theorems and inaugurated an independent field of research, known as the Malliavin-Stein approach (see the webpage of Nourdin [21] for a comprehensive list of contributions on the subject).
The trendsetting work of Peccati, Solé, Taqqu & Utzet [28] extends the Malliavin-Stein approach beyond the scope of Gaussian fields to Poisson point processes. Despite this being a very active field of research, the considered limit distributions are, most of the time, Gaussian [15,14,17,35,30,33,38,6,7,3] or, sometimes, Poisson [29] or Gamma [31]; to the best of our knowledge, prior to the present work, mixtures have not been considered as limit distributions. The aim of this paper is to tackle this problem by proving an array of new quantitative and stable limit theorems on the Poisson space, with a target distribution given either by a Gaussian mixture, that is, the distribution of a centered Gaussian variable with random covariance; or a Poisson mixture, that is, the distribution of a Poisson variable with random mean. We rely on two standard techniques to obtain our limit theorems: the characteristic functional method, to obtain qualitative results; and an interpolation approach, known as smart path, for the quantitative results. In both cases, we build upon various tools from stochastic analysis for Poisson point processes, such as the Malliavin calculus, integration by parts for Poisson functionals, and a representation of the carré du champ associated to the generator of the Ornstein-Uhlenbeck semi-group on the Poisson space. Under mild regularity assumptions on the functional under study, our approach allows us to deal, in Theorems 2.1 and 2.3, with any target distribution of the form SN, where S is a matrix-valued random variable (measurable with respect to the underlying Poisson point process) and N is a Gaussian vector independent of the underlying Poisson point process. In the same way, in Theorem 2.2, we can consider any target distribution of the form of a Poisson mixture, whose precise definition is given below.
Let us now give a more detailed sample of the main results. Throughout the paper, we study the asymptotic behaviour of a sequence {F_n = f_n(η)} of square-integrable functionals of a Poisson point process η. Here, η is a Poisson point process on an arbitrary σ-finite measured space (Z, Z, ν) (for the moment, we simply recall that η is a random integer-valued measure on Z satisfying some strong independence properties and such that Eη = ν). Moreover, we assume that the F_n's are of the form F_n = δu_n, where δ is the Kabanov stochastic integral and u_n = {u_n(z); z ∈ Z} is a random function on Z (for the moment, one can think of the slightly abusive definition of δ as the following pathwise stochastic integral δu = ∫ u(z)(η − ν)(dz)). As we will see, assuming that F_n = δu_n is not restrictive since, provided EF_n = 0, this equation always admits infinitely many solutions. An important object in our study is the Malliavin derivative of F_n given by D_z F_n = f_n(η + δ_z) − f_n(η). The crucial tool to establish our results is a duality relation (also referred to as integration by parts) between the operators D and δ: E[F δu] = E[ν(u DF)]. This relation is at the heart of the Malliavin-Stein approach to obtain limit theorems both in a Gaussian [24, Chapter 5] and in a Poisson setting [28]. For instance, we have the following result in our Poisson setting.

Theorem 0.1 ([28, Theorem 3.1]). Let the previous notation prevail, and assume that:

ν(u_n DF_n) → σ² in L¹(P), (0.1)

and

E ∫_Z |u_n(z)| (D_z F_n)² ν(dz) → 0. (0.2)

Then, we have that F_n converges in law, as n → ∞, to N(0, σ²).
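The duality relation E[F δu] = E[ν(u DF)] stated above can be sanity-checked numerically. The following sketch (an illustration we add for intuition, not part of the original argument) takes η a Poisson process on [0, 1] with intensity λ·Lebesgue, the deterministic u = h ≡ 1 (so that δh = η([0,1]) − λ), and F = η([0,1])²; all names and numerical choices are ours.

```python
import math
import random

def sample_poisson(lam, rng):
    # Knuth's algorithm: multiply uniforms until the product drops below e^{-lam}.
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

rng = random.Random(0)
lam = 2.0        # eta is a Poisson process on [0, 1] with intensity lam * Lebesgue
n_mc = 200_000

acc_lhs = 0.0    # Monte Carlo estimate of E[F * delta(h)]
acc_rhs = 0.0    # Monte Carlo estimate of E[nu(h * DF)]
for _ in range(n_mc):
    n_points = sample_poisson(lam, rng)   # N = eta([0, 1])
    F = n_points ** 2                     # F = N^2
    delta_h = n_points - lam              # delta(h) = eta(h) - nu(h) for h = 1
    DF = 2 * n_points + 1                 # D_z F = (N + 1)^2 - N^2, same for all z
    acc_lhs += F * delta_h
    acc_rhs += lam * DF                   # nu(h * DF) = lam * (2N + 1)
lhs = acc_lhs / n_mc
rhs = acc_rhs / n_mc
# Both sides estimate the same exact value lam + 2 * lam^2 (= 10 for lam = 2).
```

Both estimators converge to λ + 2λ², which can be checked by hand from the moments of a Poisson random variable.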
By integration by parts, we see that E[ν(u_n DF_n)] = E[F_n²] and, at the heuristic level, the quantity ν(u_n DF_n) controls the asymptotic variance of F_n. The condition (0.2) arises from the non-diffusive nature of the Poisson process. Following our heuristic, it is very natural to ask what happens to the conclusions of Theorem 0.1 when ν(u_n DF_n) converges to a non-negative random variable S². Theorem 2.1 states that, in this case, provided (0.2) and a condition of asymptotic independence hold, (F_n) converges stably to the Gaussian mixture N(0, S²). In fact, in Theorem 2.1, we are also able to deal with vector-valued random variables. In the same fashion, Theorem 2.2 gives sufficient conditions involving u_n and DF_n to ensure the convergence of (F_n) to a Poisson mixture (thus generalizing a result by Peccati [29] for convergence to Poisson random variables). When targeting Gaussian mixtures, we are also able to provide quantitative bounds in a variational distance between probability laws (Theorem 2.3 for the multivariate case, and Theorem 2.5 for the univariate case).
Following a recent contribution by Döbler & Peccati [6], we derive from our analysis a stable fourth moment theorem: a sequence of iterated Itô-Poisson integrals converges stably to a Gaussian (with deterministic variance) if and only if its second and fourth moments converge to those of a Gaussian (Proposition 3.1). For the limit of a sequence of order 2 Itô-Poisson stochastic integrals to be a Gaussian or Poisson mixture, we obtain sufficient conditions expressed in terms of analytical conditions on the integrands (Theorems 3.2 and 3.3). We also apply our results to study the limit of a sequence of quadratic functionals of a rescaled Poisson process on the line (Theorem 4.2); hence, adapting to the Poisson setting a theorem of Peccati & Yor [32] for a standard Brownian motion (generalized by [23] to the setting of a sufficiently regular fractional Brownian motion using Malliavin-Stein techniques; and generalized to any fractional Brownian motion by [34] using ad-hoc computations).
The paper is organized as follows. Section 1 fixes the notations for the rest of the paper; recalls the definitions of probabilistic distances and of the Poisson point process; gives more information on the Gaussian and Poisson mixtures that serve as target distributions in our limit theorems; and gives a brief review of stochastic analysis for Poisson point processes, with a focus on the Malliavin operators that are at the heart of our method. We present in Section 2 the main results of this paper: Theorems 2.1, 2.2 and 2.3, which contain stable and quantitative limit theorems for Poisson functionals. A detailed comparison of these results with the aforementioned works on the Gaussian space [22,10,23], as well as with limit theorems on the Poisson space [15,14,28,29], follows in Section 2.3. All the proofs are postponed to Section 2.4. Special attention is paid to stochastic integrals in Section 3. From our main results, we deduce: Proposition 3.1, a stable version of the recently proved fourth moment theorem on the Poisson space of [6,7]; and Theorems 3.2 and 3.3, giving analytical criteria for conditionally normal or Poisson limits of order 2 Itô-Poisson stochastic integrals. Section 4 contains the application to quadratic functionals of rescaled Poisson processes on the line. In Section 2.2, we show that, when the limit is a Gaussian mixture, we can adapt our strategy to establish a quantitative bound; in Section 2.2.2, we refine our results when the F_n's are univariate, and we establish, in Theorem 2.5, a bound in the Wasserstein transport distance. We end the paper with some open questions.

Preliminaries

Notations
In all this paper, the random variables are defined on a sufficiently large probability space (Ω, O, P). We also fix a measurable space (Z, Z) equipped with a σ-finite measure ν. For q ∈ N, we write ν^q for the q-fold product measure of ν, and, for p ∈ [0, ∞], we write L^p(ν) = L^p(Z, Z, ν) for the Lebesgue space of (equivalence classes of) p-integrable functions.

Probabilistic approximations and limit theorems
Stable convergence A sequence (F_n) of random variables converges W-stably to F_∞ if, for every bounded W-measurable random variable Z, the pair (F_n, Z) converges in law to (F_∞, Z). This convergence is denoted by F_n → F_∞ stably. Of course, stable convergence implies convergence in law, but the reverse implication does not hold. In practice, we use the following characterisation of stable convergence.
Proposition 1.1. Let (F_n) be a sequence of W-measurable random variables, and F_∞ be O-measurable.
Let I ⊂ L¹(W) be a linear space, and G ⊂ L∞(W). Assume that σ(I) = σ(G) = W. The following are equivalent: (i) F_n converges W-stably to F_∞; (ii) for all φ continuous and bounded, and all Z ∈ L∞(W): E[φ(F_n)Z] → E[φ(F_∞)Z] as n → ∞; (iii) for all G ∈ G and for all λ ∈ R^d: E[e^{i⟨λ,F_n⟩} G] → E[e^{i⟨λ,F_∞⟩} G] as n → ∞; (iv) for all I ∈ I^d and for all λ ∈ R^d: E[e^{i⟨λ,F_n+I⟩}] → E[e^{i⟨λ,F_∞+I⟩}] as n → ∞.
Proof. Stable convergence is equivalent to (ii) by [13, Proposition VIII.5.33.v]. Thus, stable convergence is also equivalent to (iii) since G generates W. By linearity of I, (iv) implies that for all J ∈ I, all t ∈ R, and all λ ∈ R^d, as n → ∞: E[e^{itJ} e^{i⟨λ,F_n⟩}] → E[e^{itJ} e^{i⟨λ,F_∞⟩}]. Letting t → 0 and using (e^{itJ} − 1)t^{−1} → iJ shows that E[J e^{i⟨λ,F_n⟩}] → E[J e^{i⟨λ,F_∞⟩}] as n → ∞. Since I generates W, we conclude that (iv) implies stable convergence. The converse implication is immediate.

Probabilistic variational distances
The Wasserstein distance between two R^d-valued random variables X and Y is defined by

d_1(X, Y) = inf E|X̃ − Ỹ|,

where |•| is the Euclidean norm, and the infimum runs over all couples of random variables (X̃, Ỹ) such that X̃ has the same law as X and Ỹ has the same law as Y. Due to the Kantorovich duality (see [9, Theorem 2.1]), the Wasserstein distance between the laws of two integrable R^d-valued random variables X and Y can be rewritten:

d_1(X, Y) = sup |Eφ(X) − Eφ(Y)|,

where the supremum runs over all functions φ : R^d → R with Lipschitz constant not greater than 1. In this paper, as is common when working with Stein's method, we also consider a distance whose variational formulation for two integrable R^d-valued random variables X and Y is given by

d_3(X, Y) = sup |Eφ(X) − Eφ(Y)|,

where the supremum runs over the set F_3 of all φ : R^d → R, thrice continuously differentiable with second and third derivatives bounded by 1.
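For univariate empirical laws, the infimum in the definition of d_1 is attained by the quantile coupling, i.e. by matching sorted samples. A minimal numerical sketch of this fact (the function name and the toy samples are ours):

```python
def wasserstein_1d(xs, ys):
    # For equal-size empirical samples, the optimal coupling for d_1 sorts both
    # samples and matches them in increasing order (quantile coupling).
    assert len(xs) == len(ys)
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)

# Two tiny empirical distributions: only the third atoms differ, by 3 units,
# so d_1 between the empirical laws is 3 / 3 = 1.
d = wasserstein_1d([0.0, 1.0, 2.0], [0.0, 1.0, 5.0])  # d == 1.0
```

This is only the empirical, one-dimensional special case; the distances d_1 and d_3 used in the paper compare general laws through the variational formulations above.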

Link with the convergence in law
These two distances depend on X and Y only through their laws. If Y ∼ ν, we sometimes write d(X, ν) for d(X, Y). The Wasserstein distance induces a topology on the space of probability measures that corresponds to the convergence in law together with the convergence of the first moment [40, Theorem 6.9]. The distance d_3 induces a topology on the space of probability measures which is strictly stronger than the topology of the convergence in law.

Definition of Poisson point processes
We define M_N(Z) to be the space of all countable sums of N-valued measures on (Z, Z). The space M_N(Z) is endowed with the σ-algebra M_N(Z) generated by the cylindrical mappings µ ↦ µ(A), A ∈ Z. Since this paper is concerned only with distributional properties of Poisson point processes, we always assume that η takes values in M_N(Z). We let W be the σ-algebra generated by η. Our definition of η implies that W ⊂ O, and we often tacitly assume that (Ω, O, P) also supports random objects (such as a Brownian motion) independent of η. We always consider stable convergence with respect to W. However, for simplicity, unless otherwise specified, we assume that random variables are W-measurable. In particular, we write L²(P) for L²(Ω, W, P).

Gaussian and Poisson mixtures
As anticipated, we shall be interested in the stable convergence (with respect to W) of a sequence of Poisson functionals (F_n) to conditionally Gaussian and Poisson random variables. Informally, we refer to such objects as Gaussian mixtures and Poisson mixtures. Let N be a standard Gaussian vector independent of η and S ∈ L²(W). We denote by N(0, S²) the law of the Gaussian mixture SN. Similarly, for N a Poisson process on R₊ (with intensity the Lebesgue measure) independent of η, and M ∈ L²(W) non-negative, we write Po(M) for the law of the (compensated) Poisson mixture N(1_[0,M]) − M. We have a characterisation of these two laws in terms of their conditional Fourier transforms: F ∼ N(0, S²) if and only if E[e^{iλF} | W] = e^{−λ²S²/2} for all λ ∈ R; and F ∼ Po(M) if and only if E[e^{iλF} | W] = exp(M(e^{iλ} − 1 − iλ)) for all λ ∈ R.
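The first two moments of the mixtures can be checked by simulation: the Gaussian mixture SN is centered with variance E[S²], and the compensated Poisson mixture N(1_[0,M]) − M is centered with variance E[M]. A sketch under toy choices of S and M (all numerical values are ours):

```python
import math
import random

def sample_poisson(lam, rng):
    # Knuth's algorithm for a Poisson(lam) sample.
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

rng = random.Random(1)
n_mc = 100_000
gauss_samples = []
pois_samples = []
for _ in range(n_mc):
    S = rng.choice([1.0, 2.0])                  # random std deviation, E[S^2] = 2.5
    N = rng.gauss(0.0, 1.0)                     # standard Gaussian, independent of S
    gauss_samples.append(S * N)                 # Gaussian mixture S * N
    M = rng.choice([1.0, 3.0])                  # random mean, E[M] = 2
    pois_samples.append(sample_poisson(M, rng) - M)   # compensated Poisson mixture

mean_g = sum(gauss_samples) / n_mc
var_g = sum(x * x for x in gauss_samples) / n_mc      # should be close to 2.5
mean_p = sum(pois_samples) / n_mc
var_p = sum(x * x for x in pois_samples) / n_mc       # should be close to 2.0
```

Here S and M play the role of the W-measurable random parameters; in the paper they are functionals of η rather than independent coin flips.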

Stochastic analysis for Poisson point processes
The Mecke formula According to [18, Theorem 4.1], we have, for all measurable f : M_N(Z) × Z → [0, ∞]:

E ∫_Z f(η, z) η(dz) = E ∫_Z f(η + δ_z, z) ν(dz). (1.3)

If f is replaced by a measurable function with values in R, the previous formula still holds provided both sides of the identity are finite when f is replaced by |f|.
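The Mecke formula lends itself to a direct Monte Carlo check. Below is a sketch (our toy choice) with Z = [0, 1], ν = λ·Lebesgue, and f(µ, z) = µ([0, 1])·z; both sides then equal λ(λ + 1)/2.

```python
import math
import random

def sample_poisson(lam, rng):
    # Knuth's algorithm for a Poisson(lam) sample.
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

rng = random.Random(2)
lam = 2.0
n_mc = 200_000
acc_lhs = 0.0   # E of the eta-integral of f(eta, z)
acc_rhs = 0.0   # E of the nu-integral of f(eta + delta_z, z)
for _ in range(n_mc):
    # A Poisson process on [0, 1]: Poisson(lam) many i.i.d. uniform atoms.
    pts = [rng.random() for _ in range(sample_poisson(lam, rng))]
    n = len(pts)
    # LHS: sum over the atoms of f(eta, z) = eta([0,1]) * z = n * z.
    acc_lhs += sum(n * x for x in pts)
    # RHS: integral over [0,1] of (eta + delta_z)([0,1]) * z d(nu) = lam * (n + 1) / 2.
    acc_rhs += lam * (n + 1) * 0.5
lhs = acc_lhs / n_mc
rhs = acc_rhs / n_mc
# Exact common value: lam * (lam + 1) / 2 = 3 for lam = 2.
```

The heuristic content is visible in the code: adding the extra atom δ_z on the right exactly compensates the size bias of integrating against η on the left.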

The representative of a functional
For every random variable F measurable with respect to η, we can write F = f(η) for some measurable f : M_N(Z) → R, uniquely defined P∘η⁻¹-almost surely on (M_N(Z), M_N(Z)). We call such an f a representative of F. In this section, F denotes a random variable measurable with respect to σ(η), and f denotes one of its representatives.

The add and drop operators
Given z ∈ Z, we let

D_z^+ F = f(η + δ_z) − f(η), and D_z^- F = f(η) − f(η − δ_z) for z in the support of η.

The operator D^+ (resp. D^-) is called the add operator (resp. drop operator). Due to the Mecke formula (1.3), these operators are well defined on random variables (that is, D^+ and D^- do not depend on the choice of the representative of F).
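On finite configurations, the add and drop operators are simple finite differences. A minimal sketch (representing functionals by plain Python functions on point lists is our illustrative choice; we use the sign convention D_z^- F = F(η) − F(η − δ_z)):

```python
def D_plus(f, points, z):
    # Add operator: the effect on f of adding the atom delta_z to the configuration.
    return f(points + [z]) - f(points)

def D_minus(f, points, z):
    # Drop operator: the effect of removing one atom at z (z must be in the support).
    assert z in points
    reduced = list(points)
    reduced.remove(z)
    return f(points) - f(reduced)

count = len                           # F = eta(Z): the number of points
total = lambda pts: sum(pts)          # F = eta(id): the sum of the atom locations

config = [0.3, 0.7]
d1 = D_plus(count, config, 0.5)       # adding a point raises the count by 1
d2 = D_plus(total, config, 0.5)       # ... and the sum by 0.5
d3 = D_minus(count, config, 0.7)      # dropping a point lowers the count by 1
```

Independence of the representative is invisible here because a functional of a finite configuration has a canonical representative; on the Poisson space it is guaranteed by the Mecke formula (1.3).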
By assumption U ≠ ∅, and we want to show that V ≠ ∅. Take t ∈ U; by the Mecke formula (1.3), we have that

Malliavin derivative
For a random variable F, we write F ∈ Dom D whenever F ∈ L²(P) and

E ∫_Z (D_z^+ F)² ν(dz) < ∞.

Given F ∈ Dom D, we write DF to denote the random mapping DF : Z ∋ z ↦ D_z^+ F. We regard D as an unbounded operator L²(P) → L²(P ⊗ ν) with domain Dom D.

The divergence operator
We consider the divergence operator δ = D*. For u ∈ Dom δ, the quantity δu ∈ L²(P) is completely characterised by the duality relation

E[F δu] = E[ν(u DF)], F ∈ Dom D. (1.6)

If h ∈ L²(ν), then h ∈ Dom δ and δh = I_1(h). From [16, Theorem 5], we have the following Skorokhod isometry. For u ∈ L²(P ⊗ ν) such that E ∬ (D_z^+ u(z'))² ν(dz) ν(dz') < ∞, we have u ∈ Dom δ and, in that case:

E[(δu)²] = E ∫_Z u(z)² ν(dz) + E ∬ D_z^+ u(z') D_{z'}^+ u(z) ν(dz) ν(dz'). (1.7)

The Skorokhod isometry implies the following Heisenberg commutation relation. For all u ∈ Dom δ, and all z ∈ Z such that z' ↦ D_z^+ u(z') ∈ Dom δ: D_z δu = u(z) + δ(D_z^+ u). From [16, Theorem 6], we have the following pathwise representation of the divergence: if ν(|u|) < ∞ almost surely, then δu admits a pathwise representation as a compensated integral with respect to η − ν.
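For a deterministic integrand h, the second term of the Skorokhod isometry (1.7) vanishes (D^+u = 0), leaving E[(δh)²] = ν(h²). A Monte Carlo sketch of this first-chaos case (the choices h(z) = z, λ = 3 are ours):

```python
import math
import random

def sample_poisson(lam, rng):
    # Knuth's algorithm for a Poisson(lam) sample.
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

rng = random.Random(3)
lam = 3.0       # intensity of eta on [0, 1]; with h(z) = z, nu(h^2) = lam / 3 = 1
n_mc = 200_000
acc = 0.0
for _ in range(n_mc):
    pts = [rng.random() for _ in range(sample_poisson(lam, rng))]
    delta_h = sum(pts) - lam * 0.5      # delta(h) = eta(h) - nu(h), pathwise
    acc += delta_h ** 2
second_moment = acc / n_mc
# Isometry for deterministic h: E[(delta h)^2] = nu(h^2) = 1.
```

For random u the pathwise picture breaks down and the full formula (1.7), with the commutation term, is needed.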

The Ornstein-Uhlenbeck generator
The Ornstein-Uhlenbeck generator L is the unbounded self-adjoint operator on L²(P) verifying L = −δD, with domain the set of F ∈ Dom D such that DF ∈ Dom δ. Classically, Dom L is endowed with the Hilbert norm (EF² + E(LF)²)^{1/2}. The eigenvalues of L are the non-positive integers and, for q ∈ N, the associated eigenvectors are the so-called Wiener-Itô multiple integrals of order q. The kernel of L coincides with the set of constants, and the pseudo-inverse L⁻¹ of L is defined on the quotient L²(P)/ker L, that is, the space of centered square-integrable random variables. For F ∈ L²(P) with EF = 0, we have LL⁻¹F = F.

The energy bracket
As anticipated, a key object to consider in our study is the quantity ν(u DF), which is simply the scalar product in L²(ν) of the two random functions u and DF ∈ L²(ν ⊗ P). However, it turns out that the mapping (F, G) ↦ ν(DF DG) is not the carré du champ associated with L (see [1] for definitions). Consequently, limit theorems formulated using this scalar product are not well adapted to obtain convergence of stochastic integrals: this crucial observation allows [6] to derive a fourth moment theorem in full generality on the Poisson space. Given two elements u ∈ L²(ν ⊗ P) and v ∈ L²(ν ⊗ P) (possibly vector-valued), we define the energy bracket [u, v]_Γ of u and v: it is a random matrix. In the paper, we also consider the related object [u, v]_ν. If u and v are real-valued, then [u, v]_ν is simply the scalar product of u and v in L²(ν). By the Cauchy-Schwarz inequality, [u, v]_ν ∈ L¹(P) and, by the Mecke formula, we obtain the identity (1.9). Moreover, if F and G ∈ Dom D, we write Γ(F, G) for the associated bilinear form. In [11], we prove that Γ is indeed the carré du champ associated with the operator L on the Poisson space; this identity is our main motivation for introducing the energy bracket. We also prove in [11] the identity (1.10). The symmetrization of the matrix [u, v]_β (β ∈ {Γ, ν}) is denoted with a bar.

Test functions
We say that a measurable function ψ : Z → R₊ such that ν(ψ > 0) < ∞ is a test function. We let G ⊂ L∞(P) be the linear span of the random variables of the form e^{−η(ψ)}, where ψ is a test function. Observe that G is a sub-algebra of L∞(P) and that Dom D is stable under multiplication by elements of G. In view of [19, Lemma 2.2] and its proof, we have the following.

Proposition 1.3. The set G is dense in L²(P) (and, in fact, in every L^p(P), 1 ≤ p < ∞). Moreover, the σ-algebra generated by G coincides with W.

Extended Malliavin operators
As mentioned above, we allow O to be larger than W. However, every O-measurable random variable F can be written F = f(η, Ξ), where Ξ is some additional randomness independent of η. For every such F, we define D_z^+ F = f(η + δ_z, Ξ) − f(η, Ξ). It is an (easy) exercise to check that we can accordingly modify all the operators and functional spaces defined above, and that their properties are left unchanged. Remark that our definition implies that, if F is independent of η, then D^+ F = 0; and that, if F = ab with a independent of W and b measurable with respect to W, then D^+ F = a D^+ b.

Outline
Theorem 2.1 gives sufficient conditions for the stable convergence of a sequence of Poisson functionals to a Gaussian mixture, while Theorem 2.2 gives sufficient conditions for the stable convergence of a sequence of Poisson functionals to a Poisson mixture. In Section 2.2, we derive quantitative bounds for the convergence to a Gaussian mixture only. In the case of Gaussian mixtures, obtaining quantitative estimates requires controlling additional terms and is quite technical; this is why we first treat the simpler qualitative statements, both for Gaussian and Poisson mixtures, and present the quantitative bounds at the end. Theorem 2.3 is the quantitative counterpart of Theorem 2.1 and provides bounds on the distance d_3 between the distribution of a Poisson functional and that of a Gaussian mixture. We are not able to obtain quantitative estimates for the convergence to a Poisson mixture. Theorem 2.5 improves our bound from the d_3 distance to the d_1 distance when (F_n) is a sequence of univariate random variables. An extended comparison of these results with the existing literature is carried out in Section 2.3. All the proofs are given in Section 2.4.

Convergence to a Gaussian mixture
Recall that we study asymptotics for (possibly multivariate) random variables of the form F_n = δu_n. In this setting, let us state the multivariate equivalent of (0.2), condition (R_3); we also consider a reinforcement, condition (R_4). Remark that, provided (u_n) is bounded in L²(P ⊗ ν), the Cauchy-Schwarz inequality shows that (R_4) implies (R_3). Several works on the normal approximation of Poisson functionals (for instance, [28,15,14,35]) also consider conditions such as (R_3) and (R_4). The random variable u_n = −DL⁻¹F_n is always a solution of the equation δu_n = F_n (other choices are possible). Following [28, Theorem 3.1] or [6, Theorem 4.1], let us consider the conditions (2.1) and (2.2). Then, (R_3) together with either (2.1) or (2.2) implies that F_n converges in law to a centered Gaussian. In our setting of random variance, it is thus very natural to consider one of the conditions (S_ν) or (S_Γ), for some S ∈ L²(P). When dealing with stable convergence, either of the conditions (W_ν) or (W_Γ) would guarantee asymptotic independence. Our first statement regarding stable limit theorems on the Poisson space is the following qualitative generalization of the results of [28,6] that allows Gaussian mixtures in the limit.
Theorem 2.1. Let {F_n = (F_n^1, . . ., F_n^d); n ∈ N} ⊂ Dom D. Assume that, for all n ∈ N, there exists u_n ∈ Dom δ such that F_n = δu_n, and that (R_3) holds. Let S = (S^1, . . ., S^d) ∈ L²(P). Assume that either (W_ν) and (S_ν) hold, or (W_Γ) and (S_Γ) hold. Then F_n converges stably to the Gaussian mixture N(0, SS^T).

Remark 1. The condition (S_Γ) is a priori more involved than (S_ν): indeed, integrating with respect to η adds some randomness to the object. However, in Section 3.1 we need the result involving [•, •]_Γ in order to obtain a stable version of the fourth moment theorem of [6]. On the other hand, we use conditions of type (S_ν) and (W_ν) to derive Theorems 3.2 and 3.3.

Convergence to a Poisson mixture
Here we only consider univariate random variables. Convergence in law of Poisson functionals to a Poisson distribution is another archetypal limit theorem. In the setting of the Malliavin-Stein method, [29] gives two sufficient conditions for the convergence of a Poisson functional to a Poisson random variable. It is thus very natural to replace (R_3) by the following asymptotic condition (P_3) for F_n = δu_n (here we only consider scalar-valued random variables). We also consider the Poisson version of (R_4), condition (P_4). Again, provided (u_n) is bounded in L²(ν ⊗ P), we see that (P_4) implies (P_3). With this notation, we have the following qualitative result for convergence to a Poisson mixture.
Theorem 2.2. Assume that, for all n ∈ N, there exists u_n ∈ Dom δ such that F_n = δu_n, and that (P_3) and (W_ν) hold; moreover, assume that (M_ν) holds. Then F_n converges stably to the Poisson mixture Po(M).

Remark 2. (M_ν) is formally equivalent to (S_ν) (we can always write S² = M). However, it is important to note that our theorem cannot be true if we replace the scalar product by the energy bracket in (M_ν), that is, if we work with the corresponding condition (M_Γ). Indeed, take F = η(A) − ν(A), with A ∈ Z and ν(A) < ∞. We can write F = δ1_A and DF = 1_A, hence (P_3) is satisfied. On the other hand, computing the energy bracket [1_A, DF]_Γ, we see that the Poisson mixture it would predict does not have the law of η(A) − ν(A). Remark that (S_Γ) and (M_Γ) are also formally equivalent. At a more structural level, [4] proves that if a sequence (F_n) of Poisson stochastic integrals satisfies a deterministic reinforcement of (S_Γ), then, without further assumptions, the sequence converges in law to a Gaussian. Since the condition of [4] implies (M_Γ) for the particular choice u_n = −DL⁻¹F_n when F_n is a stochastic integral, we see that (M_Γ) cannot enforce convergence to a Poisson mixture.

Main quantitative results for Gaussian mixtures 2.2.1 General results in any dimension
In this section, we obtain quantitative Malliavin-Stein bounds between the law of a Poisson functional and that of a Gaussian mixture. As for Theorem 2.1, our conditions are expressed in terms of u and DF.
In Theorem 2.1, (S_ν) enforces that the asymptotic covariance S is measurable with respect to η. Thanks to Proposition 2.10, when S_n² = [u_n, DF_n]_ν is non-negative, we can deduce sufficient conditions for the stable convergence of a Poisson functional that involve the stable convergence of S_n to some S (not necessarily measurable with respect to η). This weaker form of convergence allows, for instance, S to be independent of η.
Remark 4. We formulated our result with [•, •]_ν; we could do the same for [•, •]_Γ. Details are left to the reader.

Bounds in the Wasserstein distance for the one-dimensional case
The results of the previous section are stated in the rather abstract distance d_3. When F is univariate, one can use a regularization lemma to turn the estimates in d_3 into estimates in the Wasserstein distance d_1. In this section, all random variables are implicitly univariate.
Theorem 2.5. Let F ∈ Dom D be such that F = δu for some u ∈ Dom δ, and let S ∈ Cov. Then d_1(F, N(0, S²)) is bounded by a quantity of the same nature as that of Theorem 2.3, involving E|S|, E|F| and E|ν(uDF) − S²|. This theorem allows us to prove a quantitative version of Theorem 2.4 in the univariate case.
Theorem 2.6. Let the assumptions and the notations of Theorem 2.4 prevail. Then d_1(F_n, N(0, S²)) is controlled by the quantities of Theorem 2.5 together with d_1(S_n, S).

Comparison with existing results
First, on the Gaussian space, the authors of [22,10,23] work with iterated Skorokhod integrals of any order q ∈ N. That is, given a Gaussian functional F and u such that F = δ^q u, they give probabilistic conditions in terms of u and F for the stable convergence of F to a Gaussian mixture. Theorems 2.1 and 2.3 are the Poisson versions of their results for the case q = 1. Due to the lack of diffusiveness on the Poisson space, it does not seem possible to reach a result involving iterated Kabanov integrals via our method of proof, that is, via integration by parts. Second, (S_Γ) enforces that the convergence of C_Γ = [u, DF]_Γ (or its symmetrized version) determines the asymptotic covariance. The comparison of C_Γ and SS^T is similar in the Gaussian case [22]: the quantity ⟨DF, u⟩ (where D is the Malliavin derivative on the Gaussian space) controls the asymptotic variance of the functional F = δu. In this respect, let us refer to [24, Theorem 5.3.1] for deterministic variance (for the choice u = −DL⁻¹F), and to [22, Theorem 3.1], [10, Theorem 3.2] and [23, Theorem 5.1] for random asymptotic variances. However, we see from (S_ν) that another relevant quantity to consider is C_ν = [DF_n, u_n]_ν. The matrix C_ν would also correspond, in the Gaussian setting, to ⟨u, DF⟩, since Γ(F) and |DF|² coincide on the Gaussian space. As already observed by [6], working with C_Γ rather than C_ν is critical in obtaining a fourth moment theorem. We also work with C_Γ to obtain our stable version of their fourth moment theorem. When working with deterministic covariances, one can choose C_ν and still obtain sufficient conditions for the convergence of Poisson functionals to a Gaussian (see, for instance, [15,14,35]).
Our condition (W_ν) is the exact counterpart of the condition ⟨u_n, h⟩ → 0 (see [22, Remark 3.2]) in the Gaussian setting, enforcing some asymptotic independence. When working with the energy bracket, we have (W_Γ), which we can also regard as an asymptotic independence condition. (RS) plays the same role, in our setting, as ⟨u, DS²⟩ → 0 in [23]. On the Gaussian space, by the chain rule, DS² = 2S DS. In our case, we cannot use this simplification, which forces us to formulate our condition in terms of S DS. This adds an extra difficulty since, in practice, the convergence of C_ν or C_Γ only provides information on SS^T but not on S. As the condition with DS² is already present in the Gaussian setting [23], we do not expect that the condition (RS) can disappear in general. The condition (R_3) is specific to the Poisson setting; the need to control such terms is also a consequence of the lack of a chain rule, and we do not expect that it can be removed.
Furthermore, the authors of [22,10,23] only consider results involving the convergence in L¹(P) of the Stein matrix C_ν, thus imposing measurability with respect to the underlying Gaussian process on the limiting covariance. In our case, when the limiting covariance is non-negative, we can replace the condition of convergence in L¹(P) by the weaker form of stable convergence to obtain Theorem 2.4. This modification relies on our quantitative bounds, which is why, in this case, we need to check (RS), while Theorem 2.1 does not enforce this condition. Being quantitative, the results of [23] could also be modified in order to obtain a result similar to Theorem 2.4, with the same proof as the one we give in the Poisson setting.
Lastly, in the multidimensional case, our bound in Theorem 2.3 holds for every symmetric random covariance matrix C = SS^T, while the results of [23] are limited to the case of a diagonal matrix. [10] also deals with generic matrices but relies on the so-called method of the characteristic function, which is not known to provide quantitative bounds.
On the other hand, convergence to Poisson mixtures has not been considered for Gaussian functionals (recall that, by [24, Theorem 2.10.1], random variables in a fixed Wiener chaos are absolutely continuous with respect to the Lebesgue measure). Several authors have applied the Malliavin-Stein approach on the Poisson space to study convergence to a Poisson random variable with deterministic mean. The work of Peccati [29] is the first result in that direction. Selecting u_n = −DL⁻¹F_n and M = EM = c in (M_ν) exactly yields the condition of [29, Proposition 3.3]: ⟨−DL⁻¹F_n, DF_n⟩_{L²(ν)} → c (remark that [29] works with non-centered random variables). For Poisson approximation, the above discussion on the difference between C_ν and C_Γ does not apply, as we only obtain a condition involving C_ν (see Remark 2). Our condition (P_3) is similar to the one in [29].
Contrary to [29], we cannot obtain quantitative bounds for the Poisson approximation. In fact, we do not know how to adapt the methods of Section 2.2 to reach estimates for the distance of a Poisson functional to a Poisson mixture. Indeed, our approach towards quantitative estimates relies on the computability of the Malliavin derivative of a Gaussian mixture: such a mixture can always be written SN with N independent of η, and in this case D(SN) = (DS)N. However, for a Poisson mixture N(M) directed by M, no such simple formula is available. The computations with the resulting quantity seem not tractable, and we need new techniques to tackle this problem; we reserve exploring this direction of research for future works.

General strategy
Since Stein equations for Gaussian or Poisson mixtures are not available, we use an interpolation method (employed by [23] in the Gaussian setting) or a characteristic function method (used by [22] in the Gaussian setting) that consists in obtaining a differential equation for the conditional Fourier transform. As is common in the Malliavin-Stein setting, for both methods we obtain the convergence of (F_n) by controlling quantities of the form E[F_n φ(F_n)], where φ varies within a class of smooth functions. Since we assume that F_n = δu_n, our strategy exploits the duality between δ and D to write E[F_n φ(F_n)] = E[u_n, Dφ(F_n)]_β for β ∈ {ν, Γ}. We then expand Dφ(F_n) using a discrete substitute for the chain rule on the Poisson space. We use this integration by parts strategy several times; since the structure of the argument is the same every time, we first state and prove generic lemmas before proceeding to the proofs of our main results.

Substitute for the chain rule
The Markov generator L is not a diffusion (see [20, Equation 1.3]). Likewise, the add operator D^+ and the drop operator D^- are not derivations (see [2, Chapter III, Section 10] for details on derivations). In particular, the classical chain rule does not apply: for a generic smooth function φ : R^d → R and a random variable F, in general D_z^+ φ(F) ≠ ⟨∇φ(F), D_z^+ F⟩. However, since D_z^+ φ(F) = φ(F + D_z^+ F) − φ(F), applying the fundamental theorem of calculus we obtain that

D_z^+ φ(F) = ⟨∇φ(F), D_z^+ F⟩ + R_z^+, (2.3)

with |R_z^+| ≤ (1/2) ‖∇²φ‖_∞ |D_z^+ F|². A similar formula holds for D_z^-, that is,

D_z^- φ(F) = ⟨∇φ(F), D_z^- F⟩ + R_z^-, (2.4)

with an analogous bound on R_z^-. Let us also observe that the definitions of R^+ and R^- still make sense when φ is R^q-valued. In this case, ∇φ is the Jacobian matrix of φ, and ⟨∇φ(F), D^± F⟩ is replaced with the product ∇φ(F) D^± F.
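The mechanism behind this substitute for the chain rule is elementary Taylor expansion, and the remainder bound can be checked numerically. A sketch in one dimension (the choice φ = cos, whose second derivative is bounded by 1, and the test increments are ours):

```python
import math

def remainder(phi, dphi, x, d):
    # R = phi(x + d) - phi(x) - dphi(x) * d: the chain-rule defect of the
    # add operator when D_z^+ F = d.
    return phi(x + d) - phi(x) - dphi(x) * d

# For phi = cos, |phi''| <= 1, so the Taylor bound reads |R| <= d^2 / 2.
checks = []
for x in [-1.0, 0.0, 2.5]:
    for d in [-0.5, 0.1, 1.0]:
        r = remainder(math.cos, lambda t: -math.sin(t), x, d)
        checks.append(abs(r) <= d * d / 2 + 1e-12)
ok = all(checks)
```

On the Poisson space the increment d plays the role of D_z^+ F, which need not be small; this is why the remainder terms R^± must be controlled explicitly, via conditions such as (R_3).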

Taylor formula for difference operators
Another possible substitute for the chain rule, useful when targeting Poisson mixtures, is to use finite differences. This yields the following quantity, where φ : R → R is smooth and F is a random variable. An application of Taylor's formula then gives the following discrete counterpart of the chain rule (2.5). Remark 5. It is possible to obtain a similar formula for D^- or on R^d, but we have no use for it.
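The second-order Taylor bound behind this discrete chain rule can be illustrated numerically; the function φ = sin and the values of F and its finite difference below are arbitrary illustrative choices:

```python
import math

# Numerical illustration of the Taylor bound behind the discrete chain rule:
# for smooth phi,
#   phi(F + DF) - phi(F) = phi'(F) * DF + R,  with |R| <= sup|phi''| * DF**2 / 2.
# Here phi = sin, so sup|phi''| = 1; F and DF are arbitrary values.
phi, dphi = math.sin, math.cos
F, DF = 0.7, 0.3

R = phi(F + DF) - phi(F) - dphi(F) * DF
assert abs(R) <= DF ** 2 / 2        # remainder controlled by the square of DF
```

The remainder is thus quadratic in the finite difference DF, which is what makes such terms negligible under moment conditions like (R_3) and (R_4).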

Integration by parts formulae
These integration by parts formulae at the level of the Poisson space are obtained via Malliavin calculus. For short, let us also write u Lemma 2.7. Let F = (F_1, ..., F_q) ∈ Dom D, u = (u_1, ..., u_d) ∈ Dom δ, and G ∈ G. Let φ : R^q → R^d be twice continuously differentiable with bounded derivatives. Assume that, for l ∈ {1, 2}, ∫ |u(z)| |D_z^+ F|^l ν(dz) < ∞. Then: (2.6) First, let us check that every term is well defined. Since the derivatives of φ are bounded, φ is Lipschitz. Since F ∈ Dom D, we find that φ(F) ∈ Dom D and Gφ(F) ∈ Dom D. Since u ∈ Dom δ, we have that δu ∈ L²(P) and, then, A ∈ L¹(P). Applying the Cauchy-Schwarz inequality and the Mecke formula, we find, in view of the assumptions and of (2.3) and (2.4): These estimates also justify the applications of the Mecke formula to not-necessarily non-negative quantities in the rest of the proof. Now, we prove the equality (2.6). Let D = B + C + R^+ − R^-. By integration by parts (1.6), we find By the Mecke formula (1.3), by (1.10), and by the fact that all the terms are integrable, we get We conclude the proof by the definition of the energy bracket and of R^+ and R^-.
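The Mecke formula invoked in this proof can be checked by a hedged Monte Carlo experiment; the test functional f(z, η) = z|η|, the intensity lam and the sample size below are illustrative choices, not taken from the paper:

```python
import math
import random

random.seed(0)

# Monte Carlo check of the Mecke formula for a Poisson process eta on [0, 1]
# with intensity measure nu = lam * Lebesgue:
#   E sum_{x in eta} f(x, eta) = int_0^1 E f(z, eta + delta_z) * lam dz.
# With f(z, eta) = z * |eta|, both sides equal lam * (lam + 1) / 2.
lam, n_samples = 3.0, 200_000

def poisson(lam):
    # Knuth's multiplication method, adequate for small lam
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

lhs = 0.0
for _ in range(n_samples):
    pts = [random.random() for _ in range(poisson(lam))]
    lhs += sum(x * len(pts) for x in pts)     # sum_{x in eta} f(x, eta)
lhs /= n_samples

assert abs(lhs - lam * (lam + 1) / 2) < 0.2   # exact value: 6.0 for lam = 3
```

On the right-hand side, adding the point z turns |η| into |η| + 1, so the integral evaluates to lam(lam + 1)/2 in closed form, matching the empirical sum side.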
When G = 1, we can directly use the definition of R^+ in (2.7); this yields the following integration by parts involving Lemma 2.8. Under the same assumptions as in Lemma 2.7, it holds A similar formula holds for P: Lemma 2.9. Under the same assumptions as in Lemma 2.7, with q = d = 1, it holds

Proofs of the qualitative results
Proof of Theorem 2.1. We first prove the theorem under (W_Γ) and (S_Γ). By (S_Γ), we have that ψ_n(λ) = EG e^{i⟨λ,F_n⟩}, and ψ_∞(λ) = EG e^{i⟨λ,F_∞⟩}. By convergence in law, we have that, as n → ∞, (ψ_n) converges uniformly to ψ_∞. Since (ξ_n) is bounded in L²(P), it is also uniformly integrable, and we find that By Lemma 2.7, we can find (R_n) such that and We thus see that (S_Γ), (R_3) and (W_Γ) imply that All in all, we have proved that Thus, we obtain the following differential equation for the conditional characteristic function: The only solution of this equation with ψ(0) = 1 is the one given in (1.1), which concludes the proof in view of (iii) in Proposition 1.1. For the proof under (W_ν) and (S_ν), we only briefly explain what to modify; the details can be found below, in the proof of Theorem 2.2, where we use this strategy to obtain convergence to a Poisson mixture. To work with (W_ν) and (S_ν), we rather introduce ψ_n(λ) = E e^{i⟨λ, F_n + I_1(h)⟩} for some h ∈ L²(ν). Instead of Lemma 2.7, we have to use Lemma 2.8, and we can use (2.3) directly without invoking the Mecke formula (1.3). One concludes with (iv) in Proposition 1.1. The rest of the proof is similar.
Proof of Theorem 2.2. Let h ∈ L²(ν). Let λ ∈ R, and consider ψ_n(λ) = E e^{iλ(F_n+I_1(h))} and ψ_∞(λ) = E e^{iλ(F_∞+I_1(h))}. Since EF_n² = E[u_n, DF_n], using (M_ν), we see that F_n + I_1(h) is tight and uniformly integrable. Up to extraction, we can find some F_∞ such that and that (2.8) On the other hand, by Lemma 2.9 and (2.5), we have that where Thus, under (M_ν), (P_3) and (W_Γ): (2.9) Equating (2.8) and (2.9), we obtain that Arguing, by linearity of I_1, as in the proof of (iv) of Proposition 1.1, we find that: That is to say, we have proved the following differential equation for the conditional characteristic function: The unique solution of this equation satisfying ψ(0) = 1 is the function given in (1.2). This concludes the proof by (iv) in Proposition 1.1.

Proofs of the quantitative results in the multivariate case
Theorem 2.3 follows from one of the two following generic bounds: either Proposition 2.10 with h = 0 and φ ∈ F_3 for the case of [•, •]_ν; or Proposition 2.11 with G = 1 and φ ∈ F_3 for the case of [•, •]_Γ. We obtain these bounds via Talagrand's so-called smart path interpolation method. For short, given φ : R^d → R smooth, we write (2.10) Proposition 2.11. Let F = (F_1, ..., F_d) ∈ Dom D, S ∈ Cov, and N be a standard d-dimensional Gaussian vector independent of η. Assume that there exists u ∈ Dom δ such that F = δu. Then, for all where for short, we write We start by proving in detail the bound involving [•, •]_Γ, which is more involved; then we explain how to adapt the proof for [•, •]_ν.
Proof of Proposition 2.11. Let (s_t)_{t∈[0,1]} be a smooth [0,1]-valued path such that s_0 = 0 and s_1 = 1, and define An explicit computation yields (2.12) Since Dom D is a linear space, in view of the assumptions, F_t ∈ Dom D. Since ∇φ is Lipschitz, ∇φ(F_t) ∈ Dom D. Using the integration by parts formula of Lemma 2.7, we find that Recall that, by integration by parts, ENψ(N) = E∇ψ(N) for all smooth ψ. Let Then, As a consequence, by the previous Gaussian integration by parts: Furthermore, by Gaussian integration by parts, we obtain that Combining (2.12), (2.13), (2.14) and (2.15), we find that (2.16) Hence, by the Cauchy-Schwarz inequality, we find that By the Mecke formula (1.3), the last two lines are equal. By expanding the square in and using that N is centered and independent of η, the cross term vanishes in the expectation. By the fact that N is a normal vector independent of η, we also find that Following these observations, the result is obtained by selecting s_t = t^{1/2} (other choices of s could possibly yield better constants). The reader can immediately verify that, with this choice of s, we have that This concludes the proof.
Proof of Proposition 2.10. The strategy of proof is the same, and we simply highlight the differences with the previous proof. We have to consider instead g(t) = Eφ(F_t + I_1(h)) for some h ∈ L²(ν). Then, using Lemma 2.8, we find that The rest of the proof is identical to the previous one.

Proofs of the quantitative result in the univariate case
In order to deduce Theorem 2.5 from Proposition 2.10, we need a regularization lemma. Results comparing the Wasserstein distance with another variational distance are well known to experts; for completeness, we state and prove such a result here. Theorem 2.5 is immediately deduced from Proposition 2.10 (with h = 0) and the following lemma.
Lemma 2.12. Let F and F̃ ∈ L¹(P) be such that there exist a, b, c ≥ 0 such that, for all φ ∈ C³_b(R): Then, (2.17) where Proof. This result is well known at various levels of generality, and we follow the proof of [23, Theorem 3.4] (to which the reader is referred for details). For t ∈ (0, 1), we define φ_t(x) = ∫ φ(tx + √(1 − t²) y) γ(dy), with γ = N(0, 1). Then, we have that On the other hand, we have that Combining all the estimates and optimizing in t yields the desired result.
Remark 6. From the proof, we see that we do not expect (2.17) to be optimal. Consequently, all estimates deduced from Lemma 2.12 are sub-optimal; in particular, Theorem 2.5 is a priori sub-optimal.
Proof of Theorem 2.6. By the triangle inequality, we write By Theorem 2.5, we have that Thus, to conclude the proof, we need to prove that Hence, by the formulation of the Wasserstein distance as an infimum over couplings, we find that: Minimizing over all couplings (A, A_n) proves the claim and completes the proof.
Remark 7. From the proof, we see that working with the Wasserstein distance is crucial. For instance, we do not know whether d_3(N(0, S²), N(0, T²)) ≤ c d_3(S, T) for some c > 0.

Convergence of stochastic integrals Outline
We apply the results of Section 2 to stochastic integrals. In particular, we deduce Proposition 3.1, a stable version of the fourth moment theorem of Döbler & Peccati [6] and Döbler, Vidotto & Zheng [7]; and Theorems 3.2 and 3.3, which give sufficient conditions for a sequence of Itô-Poisson integrals of order 2 to converge to a Gaussian or Poisson mixture. We recall that a stochastic integral of order q is simply an eigenvector of L associated with the eigenvalue −q. More precisely, it is possible to construct a bijective isometry I_q from the symmetric functions of L²(ν^q) to ker(L + q). Hence, every stochastic integral can be written I_q(h) for some symmetric h ∈ L²(ν^q). Moreover, the mapping I_q is extended to a continuous mapping from L²(ν^q) to L²(P) by setting I_q(h) = I_q(h̃), where h̃ is the symmetrization of h.
Here we only use three properties: that ker(L + q) ⊂ Dom D and that D_z I_q(h) = q I_{q−1}(h(z, •)); that EI_q(h)² = q! ν^q(h²); as well as a product formula for stochastic integrals that expresses the product I_p(h)I_q(h') as a linear combination of stochastic integrals of order no greater than p + q, whose integrands can be written explicitly in terms of the so-called star-contractions of h and h' (see [16, Proposition 5] as well as [4]). More details on these stochastic integrals can be found in [12, 39, 16, 18].
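The second property, the isometry EI_q(h)² = q!ν^q(h²), can be checked by simulation in the simplest case q = 1, where I_1(h) = Σ_{x∈η} h(x) − ν(h); the choices h(x) = x and ν = lam·Lebesgue on [0, 1] below are purely illustrative:

```python
import math
import random

random.seed(1)

# Monte Carlo check of the isometry E I_1(h)^2 = nu(h^2) for the first-order
# compensated Poisson integral I_1(h) = sum_{x in eta} h(x) - nu(h), with eta
# a Poisson process on [0, 1] of intensity nu = lam * Lebesgue and h(x) = x.
lam, n_samples = 3.0, 200_000

def poisson(lam):
    # Knuth's multiplication method, adequate for small lam
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

second_moment = 0.0
for _ in range(n_samples):
    pts = [random.random() for _ in range(poisson(lam))]
    I1 = sum(pts) - lam / 2              # nu(h) = lam * int_0^1 x dx = lam / 2
    second_moment += I1 ** 2
second_moment /= n_samples

assert abs(second_moment - lam / 3) < 0.1   # nu(h^2) = lam * int_0^1 x^2 dx
```

For higher orders q ≥ 2 the same isometry holds with the factor q!, but simulating multiple Wiener-Itô integrals requires off-diagonal sums and is omitted here.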

A stable fourth-moment theorem for normal approximation
In a recent reference, [7] proves a multidimensional fourth-moment theorem on the Poisson space, thus refining and generalizing the previous findings of [6]. It is worth noting that taking G = 0 and S deterministic in (2.11) yields the same bound as [6, Equation 4.2]. In fact, as a first application of Theorem 2.4, we deduce a stable fourth-moment theorem on the Poisson space.
Then, the following are equivalent: (i) F n converges stably to a Gaussian vector.
(ii) For all i ∈ [d], F i n converges in law to a Gaussian random variable.
, where σ is some deterministic matrix.The covariance of the limit Gaussian vector is σσ T .
Remark 9. Proposition 3.1 is very close to [3, Theorem 2.22]. However, one condition of their theorem requires that the norms of each individual star-contraction vanish. This is strictly stronger than a vanishing fourth moment since, by the product formula, the latter condition translates into the vanishing of properly chosen linear combinations of the star-contractions (see [4]).
The quantity is bounded by assumption. Hence, it is sufficient to show that, under (iv), This follows from [6, Lemma 3.2] and [7, Remark 5.2], and the proof is complete.
Proof of Theorems 3.2 and 3.3. We prove the two theorems at once, by applying Theorems 2.1 and 2.2 to our data. For simplicity, we drop the dependence on n. Let u = I_1(ĝ).
Let us compute [DF, u]_ν = ν(DF u) in this case. By the product formula [16, Proposition 5], we have that By linearity of I_1 and I_2, we thus find By [5, Lemma 2.4 (vi)] (which, according to its proof, holds for any σ-finite measure ν), we have: Hence, we see that (KS) and (KR ) imply, by the continuity of I_2 : L²(ν²) → L²(P), either (S_ν) or (M_ν) with S² or M as given in the statement. On the one hand, we have that = 3ν((g ⋆_2^1 g)²) + ν²(g⁴).
To obtain the second equality, we use that I_1(g(•, z)) has the same law as Po(ν(g(•, z))); and we obtain the last equality by easy algebraic manipulations. Thus, (KR_4) and (KR ) readily imply (R_4). On the other hand: 2 + 3ν((g ⋆_2^1 g)²).
Consequently, using the continuity of I 1 , we find that (KW) implies (W ν ).

Convergence of a quadratic functional of a Poisson process on the line
In this section, we apply our abstract results to establish a limit theorem for a particular quadratic functional. Let us recall one of the main applications of [22, 23], refining a result of [32].
Moreover, there exists c > 0 such that, for all n ∈ N: Let η be a Poisson point process on R₊ with intensity the Lebesgue measure, and let Ñ_t = η([0, t]) − t, for t ∈ R₊. The process Ñ is a martingale called a compensated Poisson process on the line. Recall from Dynkin & Mandelbaum [8] that where the convergence holds in the sense of finite-dimensional distributions, and in a stronger sense that we do not detail here. In view of this, the following thermodynamic limit appears as a natural generalization of Theorem 4.1.
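A minimal simulation sanity check on the compensated Poisson process (the sample size and the time horizons below are arbitrary choices): Ñ is centered, Var Ñ_t = t, and its increments over disjoint intervals are uncorrelated, as stated above.

```python
import math
import random

random.seed(2)

# Sanity check on the compensated Poisson process N~_t = eta([0, t]) - t:
# it is centered with Var N~_t = t, and has independent increments.
n_samples, t1, t2 = 200_000, 1.0, 2.0

def poisson(lam):
    # Knuth's multiplication method, adequate for small lam
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

mean2 = var2 = cov = 0.0
for _ in range(n_samples):
    inc1 = poisson(t1) - t1                 # N~_{t1}
    inc2 = poisson(t2 - t1) - (t2 - t1)     # N~_{t2} - N~_{t1}, independent
    mean2 += inc1 + inc2
    var2 += (inc1 + inc2) ** 2
    cov += inc1 * inc2
mean2 /= n_samples
var2 /= n_samples
cov /= n_samples

assert abs(mean2) < 0.05                    # E N~_{t2} = 0
assert abs(var2 - t2) < 0.1                 # Var N~_{t2} = t2
assert abs(cov) < 0.05                      # uncorrelated increments
```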
Theorem 4.2. Let Recalling that N is non-decreasing and that EN_t = t, we find that Consequently, in order to obtain the conclusions of the theorem for (Q_n), it suffices to obtain them for (F_n). By inverting the order of integration, we find: We do not know whether (R_3) holds, so we do not know whether we could use Theorem 2.4 directly (or even invoke Theorem 2.6 to get a quantitative estimate).

Some open questions
• As already mentioned, we are interested in understanding which techniques could yield quantitative estimates for the convergence to a Poisson mixture.
• According to [23, Remark 3.3 (b)], the results of [22] can be understood as a variant of the asymptotic Knight theorem on the convergence of Brownian martingales. In the Poisson setting, it would be interesting to know whether our results can be compared with a corresponding martingale result.
• Very commonly, quantitative limit theorems in stochastic geometry rely on Malliavin-Stein bounds on the Poisson space (see among others [35,15,14,29]).In particular, counting statistics of a nice class of rescaled geometric random graphs constructed from a Poisson point process exhibit a Gaussian or Poisson asymptotic behaviour depending on the regime of the rescaling.In view of our results, we ask whether it is possible to consider a wider class of geometric random graphs (including the previous one) whose counting statistics exhibit a convergence to a mixture.

Theorem 2.3.
We can either work with [•, •]_ν or with [•, •]_Γ, yielding different bounds. Results involving [•, •]_ν are a priori easier to handle in applications. However, we state the two bounds for completeness. For short, for φ ∈ C^k(R^d), let us write Φ_k = sup_{x∈R^d} |∇^k φ|(x), and S ∈ Cov whenever S ∈ Dom D with SS^T ∈ Dom D. Also recall that we write [•, •]_β for the symmetrization of the random matrix [•, •]_β, β ∈ {ν, Γ}, defined in Section 1.5. We are now in position to state our bound in the d_3 distance of a Poisson functional to a Gaussian mixture. Let β ∈ {ν, Γ}. Let F ∈ Dom D, and S ∈ Cov. Then,

Theorem 2.4.
Let (F_n)_{n∈N} ⊂ Dom D, and S ∈ L²(Ω) (not necessarily measurable with respect to η). Let (u_n)_{n∈N} ⊂ Dom δ be such that F_n = δu_n for all n ∈ N, and such that (W_ν) and (R_3) hold. Assume, moreover, that for n sufficiently big, [u_n, DF_n]_ν = C_n + ε_n, where C_n = S_nS_n^T is a symmetric non-negative random matrix, and: standard in the theory of limit theorems for Poisson functionals; it already appeared in the first result on the Malliavin-Stein method on the Poisson space [28, Theorem 3.1], as well as in the proof of the fourth moment theorem on the Poisson space [6, Equation 4.2]. These correspond to the choice u = −DL^{-1}F in (R_3). In our case, we have an extra term of the form ∫ |u(z)||D_z^+ S|² ν(dz). This

Remark 12. We have F_n = δu_n, where u_n(s) = n^{-1/2} Ñ_{s^-} 1_{[0,n]}(s). Now observe that, by the Skorokhod isometry: By our stable fourth moment theorem (Proposition 3.1), we immediately find that: Rather than studying δu_n via Theorem 2.4, we simplify the problem by studying the convergence of two Itô-Wiener integrals. In fact, in our example, (R_4) is not satisfied. With the notation of the proof, we have that D_s F_n = n^{-1/2} ∫_0^n ((s ∨ t)/n) dÑ_t. An easy computation yields that ∫_0^n E(D_s F_n)^4 ds −−−→

A random variable η with values in M_N(Z) is a Poisson point process (or Poisson random measure) with intensity ν if the following two properties are satisfied: 1. for all B_1, ..., B_n ∈ Z pairwise disjoint, η(B_1), ..., η(B_n) are independent; 2. for B ∈ Z with ν(B) < ∞, η(B) is a Poisson random variable with mean ν(B).
Σ_{s≤t} |Ñ_s − Ñ_{s^-}|². Since a Poisson process only has jumps of size 1, we find that