Stable convergence of generalized $L^2$ stochastic integrals and the principle of conditioning

We consider generalized adapted stochastic integrals with respect to independently scattered random measures with second moments, and use a decoupling technique, formulated as a «principle of conditioning», to study their stable convergence towards mixtures of infinitely divisible distributions. The aim of this paper is to develop the general theory. Our results apply, in particular, to Skorohod integrals on abstract Wiener spaces, and to multiple integrals with respect to independently scattered random measures with finite variance. The first application is discussed in some detail in the final section of the present work, and further extended in a companion paper (Peccati and Taqqu (2006b)). Applications to the stable convergence (in particular, central limit theorems) of multiple Wiener-Itô integrals with respect to independently scattered (and not necessarily Gaussian) random measures are developed in Peccati and Taqqu (2006a, 2007). The present work concludes with an example involving quadratic Brownian functionals.


Introduction
In this paper we establish several criteria ensuring the stable convergence of sequences of "generalized integrals" with respect to independently scattered random measures over abstract Hilbert spaces. The notion of generalized integral is understood in a very wide sense, and includes for example Skorohod integrals with respect to isonormal Gaussian processes (see e.g. [17]), multiple Wiener-Itô integrals associated to general Poisson measures (see [21], or [13]), or the class of iterated integrals with respect to orthogonalized Teugels martingales introduced in [20]. All these random objects can be represented as appropriate generalized "adapted stochastic integrals" with respect to a (possibly infinite) family of Lévy processes, constructed by means of a well-chosen increasing family of orthogonal projections. These adapted integrals are also the limit of sums of arrays of random variables with a special dependence structure. We shall show, in particular, that their asymptotic behavior can be naturally studied by means of a decoupling technique, known as the "principle of conditioning" (see e.g. [12] and [37]), which we use in the framework of stable convergence (see [11, Chapter 4]).
Our setup is roughly the following. We shall consider a centered and square integrable random field X = {X(h) : h ∈ H}, indexed by a separable Hilbert space H and verifying the isomorphic relation E[X(h) X(h′)] = (h, h′)_H, where (·, ·)_H is the inner product on H. There is no time involved. To introduce time, we endow the space H with an increasing family of orthogonal projections, say π_t, t ∈ [0, 1], such that π_0 = 0 and π_1 = id. Such projection operators induce the (canonical) filtration F^π = {F^π_t : t ∈ [0, 1]}, where each F^π_t is generated by random variables of the type X(π_t h), and one can define (e.g., as in [35] for Gaussian processes) a class of F^π-adapted and H-valued random variables. If, for every h ∈ H, the application t → X(π_t h) is also an F^π-Lévy process, then there exists a natural Itô type stochastic integral, of adapted and H-valued variables, with respect to the infinite dimensional process t → {X(π_t h) : h ∈ H}. Denote by J_X(u) the integral of an adapted random variable u with respect to X. As will be made clear in the subsequent discussion, several random objects appearing in stochastic analysis (such as Skorohod integrals, or the multiple Poisson integrals quoted above) are in fact generalized adapted integrals of the type J_X(u), for some well chosen random field X. Moreover, the definition of J_X(u) mimics in many instances the usual construction of adapted stochastic integrals with respect to real-valued martingales. In particular: (i) each stochastic integral J_X(u) is associated to an F^π-martingale, namely the process t → J_X(π_t u), and (ii) J_X(u) is the limit (in L²) of finite "adapted Riemann sums" of the kind
$$S(u) = \sum_{j=1,\dots,n} F_j\, X\big((\pi_{t_{j+1}} - \pi_{t_j})\, h_j\big),$$
where h_j ∈ H, t_{n+1} > t_n > ⋯ > t_1 and F_j ∈ F^π_{t_j}.
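As an illustration (this special case is worked out in the examples of Sections 3 and 6), when H = L²([0,1], dx) and X is an isonormal Gaussian process, the objects above reduce to familiar ones:

```latex
% Brownian specialization of the adapted Riemann sums: take
% \pi_t h = h\,\mathbf{1}_{[0,t]} and W_t \triangleq X(\pi_t \mathbf{1}).
\[
S(u) \;=\; \sum_{j=1}^{n} F_j\, X\big((\pi_{t_{j+1}} - \pi_{t_j})\,\mathbf{1}\big)
      \;=\; \sum_{j=1}^{n} F_j\,\big(W_{t_{j+1}} - W_{t_j}\big),
\qquad F_j \in \mathcal{F}^{\pi}_{t_j},
\]
```

that is, an ordinary adapted Riemann sum approximating the Itô integral of the step process with values F_j against the Brownian motion W.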
We show that, by using a decoupling result known as the "principle of conditioning" (see Theorem 1 in [37], and Section 2 below, for a very general form of such a principle), the stable and, in particular, the weak convergence of sequences of sums such as S(u) is completely determined by the asymptotic behavior of random variables of the type
$$\widetilde{S}(u) = \sum_{j=1,\dots,n} F_j\, \widetilde{X}\big((\pi_{t_{j+1}} - \pi_{t_j})\, h_j\big),$$
where $\widetilde{X}$ is an independent copy of X. Note that the vector
$$\widetilde{V} = \big(F_1 \widetilde{X}((\pi_{t_2} - \pi_{t_1}) h_1), \dots, F_n \widetilde{X}((\pi_{t_{n+1}} - \pi_{t_n}) h_n)\big)$$
enjoys the specific property of being decoupled (i.e., conditionally on the F_j's, its components are independent) and tangent to the "original" vector
$$V = \big(F_1 X((\pi_{t_2} - \pi_{t_1}) h_1), \dots, F_n X((\pi_{t_{n+1}} - \pi_{t_n}) h_n)\big),$$
in the sense that for every j, and conditionally on the r.v.'s F_k, k ≤ j, $F_j X((\pi_{t_{j+1}} - \pi_{t_j}) h_j)$ and $F_j \widetilde{X}((\pi_{t_{j+1}} - \pi_{t_j}) h_j)$ have the same law (the reader is referred to [10] or [14] for a discussion of the general theory of tangent processes). The convergence of sequences such as J_X(u_n), n ≥ 1, where each u_n is adapted, can therefore be studied by means of the simpler random variables $J_{\widetilde{X}}(u_n)$, obtained from a decoupled and tangent version of the martingale t → J_X(π_t u_n). In particular (see Theorem 7 below, as well as its consequences) we shall prove that, since such decoupled processes can be shown to have conditionally independent increments, the problem of the stable convergence of J_X(u_n) can be reduced to the study of the convergence in probability of sequences of random Lévy-Khintchine exponents. This represents an extension of the techniques initiated in [19] and [24], where, in a purely Gaussian context, CLTs for multiple Wiener-Itô integrals are characterized by means of the convergence in probability of the quadratic variations of Brownian martingales.
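To make the tangency claim concrete, write $\Delta_j = \pi_{t_{j+1}} - \pi_{t_j}$ and let $\psi_H(g;\cdot)$ denote the Lévy-Khintchine exponent of X(g) introduced in Section 3; the following one-line computation (a sketch, using only the independence of the increment $X(\Delta_j h_j)$ from the past and the fact that $\widetilde X$ is an independent copy of X) verifies the tangency property for the j-th coordinate:

```latex
\[
\mathbb{E}\Big[e^{i\lambda F_j X(\Delta_j h_j)} \,\Big|\, \mathcal{F}^{\pi}_{t_j}\Big]
\;=\; \exp\big[\psi_H\big(\Delta_j h_j;\, \lambda F_j\big)\big]
\;=\; \mathbb{E}\Big[e^{i\lambda F_j \widetilde{X}(\Delta_j h_j)} \,\Big|\, \mathcal{F}^{\pi}_{t_j}\Big],
\]
```

since $F_j$ is $\mathcal{F}^{\pi}_{t_j}$-measurable while $X(\Delta_j h_j)$ and $\widetilde X(\Delta_j h_j)$ are both independent of $\mathcal{F}^{\pi}_{t_j}$ and share the same law.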
We remark that the extensions of [19] and [24] achieved in this paper go in two directions: (a) we consider general (not necessarily Gaussian) square integrable and independently scattered random measures, (b) we study stable convergence, instead of weak convergence, so that, for instance, our results can be used in the Gaussian case to obtain non-central limit theorems (see e.g. Section 6 below, as well as [23]).
When studying the stable convergence of random variables that are terminal values of continuous-time martingales, one could alternatively use the general criteria for the stable convergence of semimartingales, as developed e.g. in [16], [5] or [11,Chapter 4], instead of the above decoupling techniques. However, the principle of conditioning (which is in some sense the discrete-time skeleton of the general semimartingale results), as formulated in the present paper, often requires less stringent assumptions. For instance, conditions (7) and (37) below are weak versions of the nesting condition introduced by Feigin in the classic reference [5].
The paper is organized as follows. In Section 2, we discuss a general version of the principle of conditioning. In Section 3 we present a general setup to which such decoupling techniques can be applied, and in Section 4 the above mentioned convergence results are established. In Sections 5.1 and 5.2, we apply our techniques to sequences of multiple stochastic integrals with respect to independently scattered random measures with second moments, whereas in Section 5.3 we give a specific application to Central Limit Theorems for double Poisson integrals. Finally, in Section 6 our results are applied to study the stable convergence of Skorohod integrals with respect to a general isonormal Gaussian process.

The principle of conditioning
We shall present a general version of the principle of conditioning (POC in the sequel) for arrays of real valued random variables. Our discussion is mainly inspired by a remarkable paper by X.-H. Xue [37], generalizing the classic results by Jakubowski [12] to the framework of stable convergence. Note that the results discussed below refer to a discrete time setting. However, thanks to some density arguments, we will be able to apply most of the POC techniques to general stochastic measures on abstract Hilbert spaces.
Instead of adopting the formalism of [37] we choose, for the sake of clarity, to rely in part on the slightly different language of [6, Ch. 6 and 7]. To this end, we shall recall some notions concerning stable convergence, conditional independence and decoupled sequences of random variables. From now on, all random objects are supposed to be defined on an adequate probability space (Ω, F, P), and all σ-fields introduced below will be tacitly assumed to be complete; $\stackrel{P}{\to}$ means convergence in probability; R stands for the set of real numbers; and the symbol ≜ denotes a new definition.
We start by defining the class $\widehat{M}$ of random probability measures, and the class M (resp. M₀) of random (resp. random and non-vanishing) characteristic functions.
If X_n converges F*-stably, then the conditional distributions L(X_n | A) converge for any A ∈ F* such that P(A) > 0 (see e.g. [11, Section 5, §5c] for further characterizations of stable convergence). Note that, by setting Z = 1, we obtain that if X_n →_{(s,F*)} Eμ(·), then the law of the X_n's converges weakly to Eμ(·). Moreover, by a monotone class argument, X_n →_{(s,F*)} Eμ(·) if, and only if, (4) holds for random variables of the form Z = exp(iγY), where γ ∈ R and Y is F*-measurable. Finally, we note that, if a sequence of random variables {U_n : n ≥ 0} is such that (U_n − X_n) → 0 in L¹(P) and X_n →_{(s,F*)} Eμ(·), then U_n →_{(s,F*)} Eμ(·). The following definition shows how to replace an array X^{(1)} of real-valued random variables with a simpler, decoupled array X^{(2)}.
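A minimal example (ours, introduced only to fix ideas) may help: let η be an F*-measurable random variable, and let {G_n : n ≥ 1} be i.i.d. N(0,1) random variables independent of F*. Setting X_n = η G_n, one has, for every bounded F*-measurable Z,

```latex
\[
\mathbb{E}\big[Z\, e^{i\lambda X_n}\big]
= \mathbb{E}\big[Z\,\mathbb{E}\big(e^{i\lambda \eta G_n} \mid \mathcal{F}^{*}\big)\big]
= \mathbb{E}\big[Z\, e^{-\lambda^{2}\eta^{2}/2}\big]
= \mathbb{E}\big[Z\, \widehat{\mu}(\lambda)\big],
\qquad \mu(\cdot,\omega) = N\big(0, \eta(\omega)^{2}\big),
\]
```

so that X_n →_{(s,F*)} Eμ(·): the stable limit is the mixed Gaussian law with random variance η², rather than a Gaussian law. This is precisely the kind of non-central behavior captured by stable (as opposed to merely weak) convergence in Section 6.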
Definition C (see [6, Chapter 7]) - Let {N_n : n ≥ 1} be a sequence of positive natural numbers, and let
$$X^{(i)} = \big\{X^{(i)}_{n,j} : 0 \le j \le N_n,\ n \ge 1\big\}, \qquad X^{(i)}_{n,0} = 0, \qquad i = 1, 2,$$
be two arrays of real-valued r.v.'s, such that, for i = 1, 2 and for each n, the sequence $X^{(i)}_n = \{X^{(i)}_{n,j} : 0 \le j \le N_n\}$ is adapted to a discrete filtration {F_{n,j} : 0 ≤ j ≤ N_n}. For a given n ≥ 1, we say that $X^{(2)}_n$ is a decoupled tangent sequence to $X^{(1)}_n$ if the following two conditions are verified:
⋆ (Tangency) for each j = 1, ..., N_n,
$$E\big[\exp\big(i\lambda X^{(1)}_{n,j}\big) \mid F_{n,j-1}\big] = E\big[\exp\big(i\lambda X^{(2)}_{n,j}\big) \mid F_{n,j-1}\big] \qquad (5)$$
for each λ ∈ R, a.s.-P;
⋆ (Conditional independence) there exists a σ-field G_n ⊆ F such that, for each j = 1, ..., N_n,
$$E\big[\exp\big(i\lambda X^{(2)}_{n,j}\big) \mid F_{n,j-1}\big] = E\big[\exp\big(i\lambda X^{(2)}_{n,j}\big) \mid G_n\big] \qquad (6)$$
for each λ ∈ R, a.s.-P, and the random variables $X^{(2)}_{n,1}, ..., X^{(2)}_{n,N_n}$ are conditionally independent given G_n.
Observe that, in (5), F_{n,j−1} depends on j, but G_n does not. The array X^{(2)} is said to be a decoupled tangent array to X^{(1)} if X^{(2)}_n is a decoupled tangent sequence to X^{(1)}_n for each n ≥ 1.
Remark - In general, given X^{(1)} as above, there exists a canonical way to construct an array X^{(2)} which is decoupled and tangent to X^{(1)}. The reader is referred to [14, Sections 2 and 3] for a detailed discussion of this point, as well as of other relevant properties of decoupled tangent sequences.
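The following numerical sketch (a toy construction of ours, with hypothetical names, not taken from [14]) illustrates the canonical construction on a simple martingale-difference array $X^{(1)}_{n,j} = F_{n,j-1}\,\xi_j$: the decoupled array keeps the adapted coefficients F_{n,j-1} but replaces the innovations ξ_j by an independent copy, exactly as in Definition C.

```python
import numpy as np

rng = np.random.default_rng(0)
n_steps, n_paths, lam = 100, 20_000, 1.0

# Innovations xi_{n,j} and an independent copy (the "tilde" noise).
xi = rng.standard_normal((n_paths, n_steps)) / np.sqrt(n_steps)
xi_dec = rng.standard_normal((n_paths, n_steps)) / np.sqrt(n_steps)

# Adapted coefficients F_{n,j-1}: here the sign of the running sum of the
# original innovations (with F_{n,0} = 1), so that X^(1)_{n,j} = F_{n,j-1} xi_j
# is a martingale-difference array adapted to the filtration of the xi's.
partial = np.cumsum(xi, axis=1)
F = np.ones((n_paths, n_steps))
F[:, 1:] = np.sign(partial[:, :-1])

X1 = F * xi       # original array X^(1)
X2 = F * xi_dec   # decoupled tangent array X^(2): same coefficients, fresh noise

S1, S2 = X1.sum(axis=1), X2.sum(axis=1)

# Tangency: conditionally on F_{n,j-1}, both entries are N(0, F^2 / n_steps),
# and here both row sums are in fact N(0,1), whose characteristic function at
# lam = 1 equals exp(-1/2); we compare the empirical values.
phi1 = np.cos(lam * S1).mean()
phi2 = np.cos(lam * S2).mean()
target = np.exp(-0.5)
```

In this toy case the two empirical characteristic functions agree (both are close to e^{-1/2}); in general the POC only guarantees that the asymptotic behavior of the original sums is governed by that of the decoupled ones.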
The following result is essentially a translation of Theorem 2.1 in [37] into the language of this section. It is a "stable convergence generalization" of the results obtained by Jakubowski in [12].
Remarks - (a) Condition (7) says that V_n, n ≥ 1, must be an increasing sequence of σ-fields whose nth term is contained in F_{n,r_n}, for every n ≥ 1. Condition (8) ensures that, for i = 1, 2, the sum of the first r_n terms of the vector X^{(i)}_n is asymptotically negligible. (b) There are some differences between the statement of Theorem 1 above and the original result presented in [37]. On the one hand, in [37] the sequence {N_n : n ≥ 1} is such that each N_n is an F_{n,·}-stopping time (but we do not need such generality). On the other hand, in [37] one considers only the case of the family of σ-fields V*_n = ∩_{j≥n} F_{j,r_n}, n ≥ 1, where r_n is non-decreasing (note that, due to the monotonicity of r_n, the V*_n's automatically satisfy (7)). However, by inspection of the proof of [37, Theorem 2.1 and Lemma 2.1], one sees immediately that all that is needed to prove Theorem 1 is that the V_n's verify condition (7). For instance, if r_n is a general sequence of natural numbers such that F_{n,r_n} ⊆ F_{n+1,r_{n+1}} for each n ≥ 1, then the sequence V_n = F_{n,r_n}, n ≥ 1, trivially satisfies (7), even if it does not fit Xue's original assumptions.
The next proposition will be used in Section 5.

Proposition 2 Let the notation of Theorem 1 prevail, suppose that the sequence S_{n,N_n} verifies (10) for some φ ∈ M₀, and assume moreover that there exists a finite random variable C(ω) > 0 such that condition (12) holds for some η > 0. Then, there exists a subsequence {n(k) : k ≥ 1} such that, a.s.-P, relation (13) holds for every real λ.
Proof. Combining (10) and (12), we deduce the existence of a set Ω* of probability one, as well as of a subsequence n(k), such that, for every ω ∈ Ω*, relation (12) is satisfied and (13) holds for every rational λ. We now fix ω ∈ Ω*, and show that (13) holds for all real λ. Relations (10) and (12) imply that the corresponding sequence of conditional probability measures P^ω_k[·] is tight, and hence relatively compact: every subsequence of n(k) has a further subsequence {n(k_r) : r ≥ 1} such that P^ω_{k_r}[·] is weakly convergent, so that the corresponding characteristic function converges. In view of (13), such a characteristic function must coincide with φ(λ)(ω) for every rational λ, hence for every real λ, because φ(λ)(ω) is continuous in λ.

General framework for applications of the POC
We now present a general framework in which the POC techniques discussed in the previous paragraph can be applied. The main result of this section turns out to be the key tool to obtain stable convergence results for multiple stochastic integrals with respect to independently scattered random measures.
Our first goal is to define an Itô type stochastic integral with respect to a real valued and square integrable stochastic process X (not necessarily Gaussian) verifying the following three conditions: (i) X is indexed by the elements f of a real separable Hilbert space H, (ii) X satisfies the isomorphic relation and (iii) X has independent increments (the notion of "increment" is defined, in this context, through orthogonal projections; see below). We shall then show that the asymptotic behavior of such integrals can be studied by means of arrays of random variables, to which the POC applies quite naturally. Note that the elements of H need not be functions; they may be, e.g., distributions on R^d, d ≥ 1. Our construction is inspired by the theory initiated by L. Wu (see [36]) and by A.S. Üstünel and M. Zakai (see [35]), concerning Skorohod integrals and filtrations on abstract Wiener spaces. These authors have introduced the notion of time in the context of abstract Wiener spaces by using resolutions of the identity.
Definition D (see e.g. [2], [38] and [35]) - Let H be a separable real Hilbert space, endowed with an inner product (·, ·)_H (‖·‖_H is the corresponding norm). A (continuous) resolution of the identity is a family π = {π_t : t ∈ [0, 1]} of orthogonal projections satisfying: (D-i) π_0 = 0 and π_1 = id; (D-ii) π_s H ⊆ π_t H for every 0 ≤ s < t ≤ 1; (D-iii) for every f ∈ H, the mapping t ↦ π_t f is continuous from [0, 1] into H. A subset F (not necessarily closed, nor linear) of H is said to be π-reproducing, and is denoted F_π, if the linear span of the set {π_t f : f ∈ F_π, t ∈ [0, 1]} is dense in H (in which case we say that such a set is total in H). The rank of π is the smallest of the dimensions of all the closed subspaces generated by the π-reproducing subsets of H. A π-reproducing subset F_π of H is called fully orthogonal if (π_t f, g)_H = 0 for every t ∈ [0, 1] and every f, g ∈ F_π such that f ≠ g. The class of all resolutions of the identity satisfying conditions (D-i)-(D-iii) is denoted R(H).
Remarks -(a) Since H is separable, for every resolution of the identity π there always exists a countable π-reproducing subset of H.
(b) Let π be a resolution of the identity, and denote by v.s.(A) the closure of the vector space generated by some A ⊆ H. By a standard Gram-Schmidt orthogonalization procedure, it is easily proved that for every π-reproducing subset F_π of H such that dim(v.s.(F_π)) = rank(π), there exists a π-reproducing and fully orthogonal subset F′_π of H such that dim(v.s.(F′_π)) = dim(v.s.(F_π)) (see e.g. [2, Lemma 23.2], or [35, p. 27]).
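In a finite-dimensional discretization one can sanity-check the algebra of Definition D; the sketch below (ours, with hypothetical names) verifies (D-i), the orthogonal-projection property, and the monotonicity (D-ii) for projections onto nested coordinate subspaces of R^d. Of course, the continuity condition (D-iii) is lost once [0, 1] is replaced by a finite grid.

```python
import numpy as np

d = 8  # dimension of a toy "H" = R^d with the Euclidean inner product

def pi(t: float) -> np.ndarray:
    """Orthogonal projection onto the span of the first ceil(t*d) basis vectors."""
    k = int(np.ceil(t * d))
    P = np.zeros((d, d))
    P[:k, :k] = np.eye(k)
    return P

# (D-i): pi_0 = 0 and pi_1 = identity.
ok_ends = np.allclose(pi(0.0), 0.0) and np.allclose(pi(1.0), np.eye(d))

# Each pi_t is an orthogonal projection: idempotent and symmetric.
ok_proj = all(
    np.allclose(pi(t) @ pi(t), pi(t)) and np.allclose(pi(t), pi(t).T)
    for t in (0.25, 0.5, 0.75)
)

# Monotonicity (D-ii): for s <= t, pi_s pi_t = pi_t pi_s = pi_s, i.e. the
# range of pi_s is contained in the range of pi_t.
ok_mono = all(
    np.allclose(pi(s) @ pi(t), pi(s)) and np.allclose(pi(t) @ pi(s), pi(s))
    for s, t in ((0.25, 0.5), (0.5, 0.75), (0.25, 1.0))
)
```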

Examples -
The following examples are related to the content of Section 5 and Section 6.
It is easily seen that this family π = {π_t : t ∈ [0, 1]} is a resolution of the identity verifying conditions (D-i)-(D-iii) in Definition D. Also, rank(π) = 1, since the linear span of the projections of the function f(x) ≡ 1 generates H.
The family π = {π_t : t ∈ [0, 1]} appearing in (16) is a resolution of the identity as in Definition D. However, in this case rank(π) = +∞. Other choices of π_t are also possible: for instance, one can consider a family of projections which expands from the center of the square [0, 1]².
Now fix a real separable Hilbert space H, as well as a probability space (Ω, F, P). In what follows, we will write
$$X = X(H) = \{X(f) : f \in H\} \qquad (17)$$
to denote a collection of centered random variables defined on (Ω, F, P), indexed by the elements of H and satisfying the isomorphic relation (14) (we use the notation X(H) when the role of the space H is relevant to the discussion). Note that relation (14) implies that, for every f, g ∈ H, X(f + g) = X(f) + X(g), a.s.-P.
Let X(H) be defined as in (17). Then, for every resolution π = {π_t : t ∈ [0, 1]} ∈ R(H), the following property is verified: ∀m ≥ 2, ∀h_1, ..., h_m ∈ H and ∀ 0 ≤ t_0 < t_1 < ... < t_m ≤ 1, the vector
$$\big(X((\pi_{t_1} - \pi_{t_0})\, h_1), \dots, X((\pi_{t_m} - \pi_{t_{m-1}})\, h_m)\big) \qquad (18)$$
is composed of uncorrelated random variables, because the π_t's are orthogonal projections. We stress that the class R(H) depends only on the Hilbert space H, and not on X. Now define R_X(H) to be the subset of R(H) containing those π such that the vector (18) is composed of jointly independent random variables, for any choice of m ≥ 2, h_1, ..., h_m ∈ H and 0 ≤ t_0 < t_1 < ... < t_m ≤ 1. The set R_X(H) depends in general on X. Note that, if X(H) is a Gaussian family, then R_X(H) = R(H) (see Section 6 below). To every π ∈ R_X(H) we associate the filtration
$$F^{\pi}_t(X) \triangleq \sigma\{X(\pi_t f) : f \in H\}, \quad t \in [0, 1], \qquad (19)$$
so that, for instance, F^π_1(X) = σ(X).
Remark - Note that, for every h ∈ H and every π ∈ R_X(H), the stochastic process t → X(π_t h) is a centered, square integrable F^π_t(X)-martingale with independent increments. Moreover, since π is continuous and (14) holds, X(π_s h) $\stackrel{P}{\to}$ X(π_t h) whenever s → t. In the terminology of [31, p. 3], this implies that {X(π_t h) : t ∈ [0, 1]} is an additive process in law. In particular, if R_X(H) is not empty, for every h ∈ H the law of X(h) is infinitely divisible (see e.g. [31, Theorem 9.1]). As a consequence (see [31, Theorem 8.1 and formula (8.8), p. 39]), for every h ∈ H there exists a unique pair (c²(h), ν_h) such that c²(h) ∈ [0, +∞) and ν_h is a measure on R satisfying
$$\nu_h(\{0\}) = 0 \quad \text{and} \quad \int_{R} x^2\, \nu_h(dx) < +\infty \qquad (20)$$
(the last relation follows from the fact that X(h) is square integrable; see [31, Section 5.25]), and moreover, for every λ ∈ R,
$$E\big[\exp\big(i\lambda X(h)\big)\big] = \exp\Big[-\frac{c^2(h)\,\lambda^2}{2} + \int_{R} \big(e^{i\lambda x} - 1 - i\lambda x\big)\, \nu_h(dx)\Big]. \qquad (21)$$
Observe that, since the Lévy-Khintchine representation of an infinitely divisible distribution is unique, the pair (c²(h), ν_h) does not depend on the choice of π ∈ R_X(H). In what follows, when R_X(H) ≠ ∅, we will use the notation: for every λ ∈ R and every h ∈ H,
$$\psi_H(h; \lambda) \triangleq -\frac{c^2(h)\,\lambda^2}{2} + \int_{R} \big(e^{i\lambda x} - 1 - i\lambda x\big)\, \nu_h(dx), \qquad (22)$$
where the pair (c²(h), ν_h), characterizing the law of the random variable X(h), is given by (21). Note that, if h_n → h in H, then X(h_n) → X(h) in L²(P), and therefore ψ_H(h_n; λ) → ψ_H(h; λ) for every λ ∈ R (uniformly on compacts). We shall always endow H with the σ-field B(H), generated by the open sets with respect to the distance induced by the norm ‖·‖_H. Since, for every real λ, the complex-valued application h → ψ_H(h; λ) is continuous, it is also B(H)-measurable.
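For a concrete check of (21) in the simplest non-Gaussian case, one may take c²(h) = 0 and ν_h = μδ₁ (mass μ at x = 1), in which case X(h) has the law of N − μ with N Poisson of parameter μ. The script below (our illustration; the function names are ours) compares the two sides of (21) numerically.

```python
import math
import cmath

def cf_compensated_poisson(lam: float, mu: float, nmax: int = 200) -> complex:
    """Characteristic function of N - mu, N ~ Poisson(mu), by direct summation."""
    total, p = 0j, math.exp(-mu)          # p = P(N = 0)
    for k in range(nmax):
        total += p * cmath.exp(1j * lam * (k - mu))
        p *= mu / (k + 1)                 # P(N = k+1) from P(N = k)
    return total

def cf_levy_khintchine(lam: float, mu: float) -> complex:
    """Right-hand side of (21) with c^2 = 0 and nu = mu * (unit mass at x = 1)."""
    return cmath.exp(mu * (cmath.exp(1j * lam) - 1 - 1j * lam))

mu = 2.0
err = max(
    abs(cf_compensated_poisson(l, mu) - cf_levy_khintchine(l, mu))
    for l in (0.3, 1.0, 2.5)
)
```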
Examples - (a) Take H = L²([0, 1], dx), suppose that X(H) = {X(h) : h ∈ H} is a centered Gaussian family verifying (14), and define the resolution of the identity π = {π_t : t ∈ [0, 1]} according to (15). Then, if 1 indicates the function constantly equal to one, the process
$$W_t \triangleq X(\pi_t 1), \quad t \in [0, 1], \qquad (23)$$
is a standard Brownian motion started from zero, and $X(\pi_t f) = \int_0^t f(s)\, dW_s$ for every f ∈ H, where the stochastic integration is in the usual Wiener-Itô sense. Of course, X(π_t f) is a Gaussian F^π_t(X)-martingale with independent increments, and also, by using the notation (22), ψ_H(f; λ) = −λ²‖f‖²_H/2 for every f ∈ H.

(b) Take H = L²([0, 1]², dxdy) and define the resolution π = {π_t : t ∈ [0, 1]} as in (16). We consider the family X(H) obtained by integrating the elements of H with respect to a compensated Poisson measure N − Leb on [0, 1]², where: (1) N(C) is a Poisson random variable with parameter Leb(C) (i.e., the Lebesgue measure of C), and (2) N(C_1) and N(C_2) are stochastically independent whenever C_1 ∩ C_2 = ∅. Then, the family X(H) satisfies the isomorphic relation (14). Moreover, for every h ∈ H, the process t → X(π_t h) is an F^π_t(X)-martingale with independent increments, and hence π ∈ R_X(H). Moreover, for every h ∈ L²([0, 1]², dxdy) and λ ∈ R the exponent ψ_H(h; λ) in (22) verifies the relation
$$\psi_H(h; \lambda) = \int_{[0,1]^2} \big(e^{i\lambda h(x,y)} - 1 - i\lambda h(x,y)\big)\, dx\, dy$$
(see e.g. [31]). We now want to consider random variables with values in H, and define an Itô type stochastic integral with respect to X. To do so, we let L²(P, H, X) = L²(H, X) be the space of σ(X)-measurable and H-valued random variables u such that E‖u‖²_H < +∞. Following for instance [35] (which concerns uniquely the Gaussian case), we associate to every π ∈ R_X(H) the subspace L²_π(H, X) of the π-adapted elements of L²(H, X), that is: u ∈ L²_π(H, X) if, and only if, u ∈ L²(H, X) and, for every t ∈ [0, 1], the random element π_t u is F^π_t(X)-measurable. For any resolution π ∈ R_X(H), L²_π(H, X) is a closed subspace of L²(H, X). We will occasionally write (u, z)_{L²_π(H)} instead of (u, z)_{L²(H)}, when both u and z are in L²_π(H, X). Now define, for π ∈ R_X(H), E_π(H, X) to be the space of (π-adapted) elementary elements of L²_π(H, X), that is, E_π(H, X) is the collection of those elements of L²_π(H, X) that are linear combinations of H-valued random variables of the type
$$h = \Phi(t_1)\, (\pi_{t_2} - \pi_{t_1})\, f, \qquad (25)$$
where t_2 > t_1, f ∈ H and Φ(t_1) is a random variable which is square-integrable and F^π_{t_1}(X)-measurable.
Lemma 3 For every π ∈ R X (H), the set E π (H, X), of adapted elementary elements, is total (i.e., its span is dense) in L 2 π (H, X).
We now want to introduce, for every π ∈ R_X(H), an Itô type stochastic integral with respect to X. To this end, we fix π ∈ R_X(H) and first consider simple integrands of the form
$$h = \sum_{i=1}^{n} \Phi(t_i)\, (\pi_{t_{i+1}} - \pi_{t_i})\, f_i, \qquad (28)$$
with t_{n+1} > t_n > ⋯ > t_1, f_i ∈ H and each Φ(t_i) square-integrable and F^π_{t_i}(X)-measurable, and we set
$$J^{\pi}_X(h) \triangleq \sum_{i=1}^{n} \Phi(t_i)\, X\big((\pi_{t_{i+1}} - \pi_{t_i})\, f_i\big).$$
The same construction applied to the independent copy $\widetilde X$ (while keeping integrands adapted to X) yields the decoupled integral $J^{\pi}_{\widetilde X}(h)$.

Proposition 4 For every h, h′ ∈ L²_π(H, X),
$$E\big[J^{\pi}_X(h)\, J^{\pi}_X(h')\big] = (h, h')_{L^2_{\pi}(H)}, \qquad (29)$$
so that J^π_X extends to a continuous linear operator from L²_π(H, X) into L²(P); the analogous isometry (30) holds for $J^{\pi}_{\widetilde X}$.

Proof. It is sufficient to prove (29) when h and h′ are simple adapted elements of the kind (25), and in this case the result follows from elementary computations. Since, according to Lemma 3, E_π(H, X) is dense in L²_π(H, X), the result is obtained from a standard density argument.
The following property, which is a consequence of the above discussion, follows immediately.
Corollary 5 For every f ∈ L²_π(H, X), the process
$$t \mapsto J^{\pi}_X(\pi_t f), \quad t \in [0, 1],$$
is a real-valued F^π_t(X)-martingale initialized at zero.
Observe that the process t → J^π_X(π_t f), t ∈ [0, 1], need not have independent (nor conditionally independent) increments. On the other hand, due to the independence between X and $\widetilde X$, and to (18), conditionally on the σ-field σ(X), the increments of the process $t \to J^{\pi}_{\widetilde X}(\pi_t f)$ are independent (to see this, just consider the process $J^{\pi}_{\widetilde X}(\pi_t f)$ for an elementary f as in (28), and observe that, in this case, conditioning on σ(X) is equivalent to conditioning on the Φ_i's; the general case is obtained once again by a density argument). It follows that the random process $J^{\pi}_{\widetilde X}(\pi_{\cdot} f)$ can be regarded as being decoupled and tangent to $J^{\pi}_X(\pi_{\cdot} f)$, in a spirit similar to [14, Definition 4.1], [8] or [7]. We stress, however, that $J^{\pi}_{\widetilde X}(\pi_{\cdot} f)$ need not meet the definition of a tangent process given in such references, which is based on a notion of convergence in the Skorohod topology, rather than on the L²-convergence adopted in the present paper. The reader is referred to [8] for an exhaustive characterization of processes with conditionally independent increments. Now, for h ∈ H and λ ∈ R, define the exponent ψ_H(h; λ) according to (22), and observe that every f ∈ L²_π(H, X) is a random element with values in H. It follows that the quantity ψ_H(f(ω); λ) is well defined for every ω ∈ Ω and every λ ∈ R, and moreover, since ψ_H(·; λ) is B(H)-measurable, for every f ∈ L²_π(H, X) and every λ ∈ R, the complex-valued application ω → ψ_H(f(ω); λ) is F-measurable.

Proposition 6
For every λ ∈ R and every f ∈ L²_π(H, X),
$$E\big[\exp\big(i\lambda J^{\pi}_{\widetilde X}(f)\big) \mid \sigma(X)\big] = \exp\big[\psi_H(f; \lambda)\big], \quad \text{a.s.-}P. \qquad (31)$$
Proof. For f ∈ E_π(H, X), formula (31) follows immediately from the independence of X and $\widetilde X$. Now fix f ∈ L²_π(H, X), and select a sequence (f_n) ⊂ E_π(H, X) such that
$$E\|f_n - f\|_H^2 \to 0 \qquad (32)$$
(such a sequence f_n always exists, due to Lemma 3). Since (32) implies that, along a subsequence, f_n(ω) → f(ω) in H for P-a.e. ω, the continuity of ψ_H(·; λ) yields exp[ψ_H(f_n; λ)] → exp[ψ_H(f; λ)], a.s.-P. On the other hand, $E[\exp(i\lambda J^{\pi}_{\widetilde X}(f_n)) \mid \sigma(X)] \to E[\exp(i\lambda J^{\pi}_{\widetilde X}(f)) \mid \sigma(X)]$ in L¹(P), where the convergence follows from (30), since $J^{\pi}_{\widetilde X}(f_n) \to J^{\pi}_{\widetilde X}(f)$ in L²(P); thus (31), which holds for each f_n, passes to the limit, and the desired conclusion is therefore obtained.
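In the Gaussian case of Example (a) below one has ψ_H(f; λ) = −λ²‖f‖²_H/2, and Proposition 6 specializes to:

```latex
\[
\mathbb{E}\Big[\exp\big(i\lambda J^{\pi}_{\widetilde X}(f)\big)\,\Big|\,\sigma(X)\Big]
\;=\;\exp\Big(-\frac{\lambda^{2}}{2}\,\|f\|_{H}^{2}\Big),
\]
```

that is, conditionally on X, the decoupled integral $J^{\pi}_{\widetilde X}(f)$ is centered Gaussian with the random variance ‖f‖²_H.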
Examples - (a) Take H = L²([0, 1], dx) and suppose that X(H) = {X(h) : h ∈ H} is a centered Gaussian family verifying (14). Define also π = {π_t : t ∈ [0, 1]} ∈ R(H) according to (15), and write W to denote the Brownian motion introduced in (23). The subsequent discussion will make clear that L²_π(H, X) is, in this case, the space of square integrable processes that are adapted to the Brownian filtration of W, and that, for every u ∈ L²_π(H, X),
$$J^{\pi}_X(u) = \int_0^1 u_s\, dW_s \quad \text{and} \quad J^{\pi}_{\widetilde X}(u) = \int_0^1 u_s\, d\widetilde W_s,$$
where the stochastic integration is in the Itô sense, and $\widetilde W_t \triangleq \widetilde X(1_{[0,t]})$ is a standard Brownian motion independent of X.
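The following numerical sketch (ours, with hypothetical names) illustrates the Gaussian specialization of Proposition 6 for the adapted integrand u_s = W_s: conditionally on W, the decoupled integral $\int u\, d\widetilde W$ is centered Gaussian with variance ‖u‖²_H, so its unconditional characteristic function should match $E\exp(-\lambda^2 \|u\|_H^2/2)$.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, lam = 100, 20_000, 1.0
dt = 1.0 / n

dW = rng.standard_normal((m, n)) * np.sqrt(dt)       # increments of W
dW_dec = rng.standard_normal((m, n)) * np.sqrt(dt)   # increments of the copy W~

W = np.cumsum(dW, axis=1)
u = np.hstack([np.zeros((m, 1)), W[:, :-1]])         # adapted integrand u_s = W_s (left endpoints)

# Decoupled integral: integrand built from W, integrator is the copy W~.
J_dec = (u * dW_dec).sum(axis=1)

# Conditionally on W, J_dec ~ N(0, sum u^2 dt); compare both sides of the
# resulting identity E[cos(lam J_dec)] = E[exp(-lam^2/2 * sum u^2 dt)].
lhs = np.cos(lam * J_dec).mean()
rhs = np.exp(-0.5 * lam**2 * (u**2).sum(axis=1) * dt).mean()
```

Note that $\int_0^1 W\, dW$ itself is not mixed Gaussian; the conditional Gaussianity above is specific to the decoupled integral, which is exactly why it is the tractable object in the POC approach.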
(b) (Orthogonalized Teugels martingales, see [20]) Let Z = {Z_t : t ∈ [0, 1]} be a real-valued and centered Lévy process, initialized at zero and endowed with a Lévy measure ν satisfying the condition: for some ε, λ > 0,
$$\int_{(-\varepsilon, \varepsilon)^c} e^{\lambda |x|}\, \nu(dx) < +\infty.$$
Then, for every i ≥ 2, $\int_R |x|^i \nu(dx) < +\infty$, and Z_t has moments of all orders. Starting from Z, for every i ≥ 1 one can therefore define the compensated power jump process (or Teugels martingale) of order i, namely
$$Y^{(1)}_t = Z_t, \qquad Y^{(i)}_t = \sum_{0 < s \le t} (\Delta Z_s)^i - t \int_R x^i\, \nu(dx), \quad i \ge 2.$$
Plainly, each Y^{(i)} is a centered Lévy process. Moreover, according to [20, pp. 111-112], for every i ≥ 1 it is possible to find (unique) real coefficients a_{i,1}, ..., a_{i,i}, such that a_{i,i} = 1 and the stochastic processes
$$H^{(i)}_t = \sum_{j=1}^{i} a_{i,j}\, Y^{(j)}_t$$
are strongly orthogonal centered martingales (in the sense of [26, p. 148]). Observe that each H^{(i)} is again a Lévy process, and that, for every deterministic g, f ∈ L²([0, 1], ds), the integrals $\int_0^1 g(s)\, dH^{(i)}_s$ and $\int_0^1 f(s)\, dH^{(j)}_s$ are well defined and such that
$$E\Big[\int_0^1 g(s)\, dH^{(i)}_s \int_0^1 f(s)\, dH^{(j)}_s\Big] = \delta_{ij} \int_0^1 g(s) f(s)\, ds, \qquad (33)$$
where δ_{ij} is the Kronecker symbol. Now take H = L²(N × [0, 1], κ(dm) × ds), where κ is the counting measure, and define, for t ∈ [0, 1] and h ∈ H, π_t h(m, s) = h(m, s) 1_{[0,t]}(s). It is clear that π = {π_t : t ∈ [0, 1]} ∈ R(H). Moreover, for every h(·, ·) ∈ H, we define
$$X(h) = \sum_{m \ge 1} \int_0^1 h(m, s)\, dH^{(m)}_s,$$
where the series is convergent in L²(P), since $E X(h)^2 = \sum_{m \ge 1} \int_0^1 h(m, s)^2\, ds < +\infty$, due to (33) and the fact that h ∈ H. Since the H^{(m)} are strongly orthogonal and (33) holds, one sees immediately that, for every h, h′ ∈ H, E[X(h) X(h′)] = (h, h′)_H; moreover, since for every m and every h the process $t \to \int_0^t h(m, s)\, dH^{(m)}_s$ has independent increments, π ∈ R_X(H). We can also consider random h and, by using [20], give the following characterization of the random variables h ∈ L²_π(H, X) and of the corresponding integrals J^π_X(h) and $J^{\pi}_{\widetilde X}(h)$: (i) h ∈ L²(H, X) belongs to L²_π(H, X) if, and only if, for every m ≥ 1, the process s → h(m, s) is adapted with respect to the natural filtration of Z; (ii) for every h ∈ L²_π(H, X),
$$J^{\pi}_X(h) = \sum_{m \ge 1} \int_0^1 h(m, s)\, dH^{(m)}_s,$$
where the series is convergent in L²(P); (iii) for every h ∈ L²_π(H, X),
$$J^{\pi}_{\widetilde X}(h) = \sum_{m \ge 1} \int_0^1 h(m, s)\, d\widetilde H^{(m)}_s,$$
where the series is convergent in L²(P), and the sequence $\{\widetilde H^{(m)} : m \ge 1\}$ is an independent copy of {H^{(m)} : m ≥ 1}. Note that, by using [20, Theorem 1], one would obtain an analogous characterization in terms of iterated stochastic integrals of deterministic kernels.

Stable convergence
We shall now apply Theorem 1 to the setup outlined in the previous section. Let H_n, n ≥ 1, be a sequence of real separable Hilbert spaces and, for each n ≥ 1, let
$$X_n = X_n(H_n) = \{X_n(f) : f \in H_n\} \qquad (36)$$
be a centered, real-valued stochastic process, indexed by the elements of H_n and such that E[X_n(f) X_n(g)] = (f, g)_{H_n}. The processes X_n are not necessarily Gaussian. As before, $\widetilde X_n$ indicates an independent copy of X_n, for every n ≥ 1.
Theorem 7 Let the previous notation prevail, and suppose that the processes X_n, n ≥ 1, appearing in (36) (along with the independent copies $\widetilde X_n$) are all defined on the same probability space (Ω, F, P). For every n ≥ 1, let $\pi^{(n)} \in R_{X_n}(H_n)$ and $u_n \in L^2_{\pi^{(n)}}(H_n, X_n)$. Suppose also that there exists a sequence {t_n : n ≥ 1} ⊂ [0, 1] and a collection of σ-fields {U_n : n ≥ 1} such that
$$U_n \subseteq U_{n+1} \cap F^{\pi^{(n)}}_{t_n}(X_n), \quad n \ge 1, \qquad (37)$$
and $E\|\pi^{(n)}_{t_n} u_n\|^2_{H_n} \to 0$. If
$$\exp\big[\psi_{H_n}(u_n; \lambda)\big] \stackrel{P}{\to} \phi(\lambda), \quad \forall \lambda \in R, \qquad (38)$$
where ψ_{H_n}(u_n; λ) is defined according to (22) and φ ∈ M₀, then, as n → +∞,
$$E\big[\exp\big(i\lambda J^{\pi^{(n)}}_{\widetilde X_n}(u_n)\big) \,\big|\, \sigma(X_n)\big] \stackrel{P}{\to} \phi(\lambda), \quad \forall \lambda \in R, \qquad (39)$$
and
$$J^{\pi^{(n)}}_{X_n}(u_n) \to_{(s, U^*)} E\mu(\cdot), \qquad (40)$$
where $U^* \triangleq \sigma(\cup_n U_n)$ and $\mu \in \widehat{M}$ verifies (2).
Remarks - (1) The proof of Theorem 7 uses Theorem 1, which assumes φ ∈ M₀, that is, φ is non-vanishing. If φ ∈ M (instead of M₀) and if, for example, there exists a subsequence {n_k} along which the convergence in (38) takes place a.s.-P, then, given the nature of ψ_{H_{n_k}}, φ(λ, ω) is necessarily, for P-a.e. ω, the Fourier transform of an infinitely divisible distribution (see e.g. [31, Lemma 7.5]), and therefore φ ∈ M₀. A similar remark applies to Theorem 12 below.
(2) For n ≥ 1, the process $t \mapsto J^{\pi^{(n)}}_{X_n}(\pi^{(n)}_t u_n)$ is a martingale, and hence admits a càdlàg modification. An alternative approach to stable convergence results is then to use the well-known criteria for the stable convergence of continuous-time càdlàg semimartingales, as stated e.g. in [5, Proposition 1 and Theorems 1 and 2] or [11, Chapter 4]. However, the formulation in terms of the "principle of conditioning" yields, in our setting, more precise results under less stringent assumptions. For instance, (37) can be regarded as a weak version of the "nesting condition" used in [5, p. 126], whereas (39) is a refinement of the conclusions that can be obtained by means of [5, Proposition 1].
(3) Suppose that, under the assumptions of Theorem 7, there exists a càdlàg process Y = {Y t : t ∈ [0, 1]} such that, conditionally on U * , Y has independent increments and φ (λ) = E [exp (iλY 1 ) | U * ]. In this case, formula (40) is equivalent to saying that J π (n) Xn (u n ) converges U * -stably to Y 1 . See [8,Section 4] for several results concerning the stable convergence (for instance, in the sense of finite dimensional distributions) of semimartingales towards processes with conditionally independent increments.
Before proving Theorem 7, we consider the important case of a nested sequence of resolutions. More precisely, assume that H_n = H and X_n = X for every n ≥ 1, and that the sequence π^{(n)} ∈ R_X(H), n ≥ 1, is nested in the following sense: for every t ∈ [0, 1] and every n ≥ 1,
$$\pi^{(n)}_t H \subseteq \pi^{(n+1)}_t H \qquad (41)$$
(note that if π^{(n)} = π for every n, then (41) is trivially satisfied); in this case, if t_n is non-decreasing, the sequence $U_n = F^{\pi^{(n)}}_{t_n}(X)$, n ≥ 1, automatically satisfies (37). We therefore have the following consequence of Theorem 7.
Corollary 8 Under the above notation and assumptions, suppose that the sequence π^{(n)} ∈ R_X(H), n ≥ 1, is nested in the sense of (41), and let $u_n \in L^2_{\pi^{(n)}}(H, X)$, n ≥ 1. Suppose also that there exists a non-decreasing sequence {t_n : n ≥ 1} ⊂ [0, 1] such that
$$E\big\|\pi^{(n)}_{t_n} u_n\big\|_H^2 \to 0. \qquad (42)$$
If (38) is verified for some φ ∈ M₀, then the conclusions (39) and (40) of Theorem 7 hold, with $U_n = F^{\pi^{(n)}}_{t_n}(X)$ and $U^* = F^* \triangleq \sigma\big(\cup_n F^{\pi^{(n)}}_{t_n}(X)\big)$, where $\mu \in \widehat{M}$ verifies (2).
In the next result {u_n} may still be random, but φ(λ) is non-random. It follows from Corollary 8 by taking t_n = 0 for every n, so that (42) is immaterial, and F* becomes the trivial σ-field.

Corollary 9 Keep the notation of Corollary 8, and consider a (not necessarily nested) sequence π^{(n)} ∈ R_X(H), n ≥ 1. If, for every λ ∈ R,
$$\exp\big[\psi_H(u_n; \lambda)\big] \stackrel{P}{\to} \phi(\lambda),$$
where φ is the Fourier transform of some non-random measure μ such that φ(λ) ≠ 0 for every λ ∈ R, then, as n → +∞, $J^{\pi^{(n)}}_X(u_n)$ converges in distribution; that is, the law of $J^{\pi^{(n)}}_X(u_n)$ converges weakly to μ.
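In the Gaussian framework of Example (a), Corollary 9 takes a particularly transparent form, consistent with the quadratic-variation criteria of [19] and [24] recalled in the Introduction: since exp[ψ_H(u_n; λ)] = exp(−λ²‖u_n‖²_H/2),

```latex
\[
\exp\Big(-\frac{\lambda^{2}}{2}\,\|u_n\|_{H}^{2}\Big)
\ \stackrel{P}{\longrightarrow}\
\exp\Big(-\frac{\lambda^{2}}{2}\,\sigma^{2}\Big)
\quad\Longleftrightarrow\quad
\|u_n\|_{H}^{2}\ \stackrel{P}{\longrightarrow}\ \sigma^{2},
\]
```

in which case $J^{\pi^{(n)}}_X(u_n)$ converges in law to N(0, σ²): the CLT for the adapted integrals is reduced to the convergence in probability of the (quadratic-variation-type) quantities ‖u_n‖²_H.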
Proof of Theorem 7 - Since $u_n \in L^2_{\pi^{(n)}}(H_n, X_n)$, there exists, thanks to Lemma 3, a sequence $u^e_n \in E_{\pi^{(n)}}(H_n, X_n)$, n ≥ 1, such that
$$E\|u_n - u^e_n\|^2_{H_n} \to 0, \qquad (43)$$
so that (by using the isometry properties of $J^{\pi^{(n)}}_{X_n}$ and $J^{\pi^{(n)}}_{\widetilde X_n}$, as stated in (29) and (30)) $J^{\pi^{(n)}}_{X_n}(u^e_n) - J^{\pi^{(n)}}_{X_n}(u_n) \to 0$ and $J^{\pi^{(n)}}_{\widetilde X_n}(u^e_n) - J^{\pi^{(n)}}_{\widetilde X_n}(u_n) \to 0$ in L²(P); moreover, since $E\|\pi^{(n)}_{t_n} u_n\|^2_{H_n} \to 0$, relation (43) also yields
$$E\big\|\pi^{(n)}_{t_n} u^e_n\big\|^2_{H_n} \to 0. \qquad (44)$$
Without loss of generality, we can always suppose that u^e_n has the form
$$u^e_n = \sum_{i=1}^{N_n} \Phi_i\, \big(\pi^{(n)}_{t^{(n)}_{i+1}} - \pi^{(n)}_{t^{(n)}_i}\big) f_i, \qquad (45)$$
where $0 \le t^{(n)}_1 < \dots < t^{(n)}_{N_n+1} \le 1$, f_i ∈ H_n, each Φ_i is square-integrable and $F^{\pi^{(n)}}_{t^{(n)}_i}(X_n)$-measurable, and one of the instants t^{(n)}_i, i = 1, ..., N_n, equals t_n. Now define, for n ≥ 1 and i = 1, ..., N_n,
$$X^{(1)}_{n,i} \triangleq \Phi_i\, X_n\big(\big(\pi^{(n)}_{t^{(n)}_{i+1}} - \pi^{(n)}_{t^{(n)}_i}\big) f_i\big), \qquad X^{(2)}_{n,i} \triangleq \Phi_i\, \widetilde X_n\big(\big(\pi^{(n)}_{t^{(n)}_{i+1}} - \pi^{(n)}_{t^{(n)}_i}\big) f_i\big),$$
as well as $X^{(\ell)}_{n,0} = 0$, ℓ = 1, 2; introduce moreover the discrete filtration
$$F_{n,j} \triangleq F^{\pi^{(n)}}_{t^{(n)}_{j+1}}(X_n) \vee F^{\pi^{(n)}}_{t^{(n)}_{j+1}}(\widetilde X_n), \quad j = 0, ..., N_n, \qquad (46)$$
and let G_n = σ(X_n), n ≥ 1. We shall verify that the array $X^{(2)} = \{X^{(2)}_{n,i} : 0 \le i \le N_n,\ n \ge 1\}$ is decoupled and tangent to $X^{(1)} = \{X^{(1)}_{n,i} : 0 \le i \le N_n,\ n \ge 1\}$, in the sense of Definition C of Section 2. Indeed, for ℓ = 1, 2, the sequence $X^{(\ell)}_{n,i}$, 0 ≤ i ≤ N_n, is adapted to the discrete filtration {F_{n,i}}; also (5) is satisfied, since, for every λ ∈ R and every i = 1, ..., N_n, the increments $X_n((\pi^{(n)}_{t^{(n)}_{i+1}} - \pi^{(n)}_{t^{(n)}_i}) f_i)$ and $\widetilde X_n((\pi^{(n)}_{t^{(n)}_{i+1}} - \pi^{(n)}_{t^{(n)}_i}) f_i)$ are independent of F_{n,i-1} and have the same law, while Φ_i is F_{n,i-1}-measurable. Since G_n = σ(X_n), we obtain immediately (6), because $\widetilde X_n$ is an independent copy of X_n. We now want to apply Theorem 1 with
$$S^{(\ell)}_{n,j} = \sum_{i=1}^{j} X^{(\ell)}_{n,i}, \quad \ell = 1, 2,$$
where r_n is the element of {1, ..., N_n} such that $t^{(n)}_{r_n} = t_n$. To do so, we need to verify the remaining conditions of that theorem. To prove (7), use (45), (46) and (37), to obtain
$$U_n \subseteq F^{\pi^{(n)}}_{t_n}(X_n) \subseteq F_{n,r_n},$$
and hence (7) holds with V_n = U_n. To prove (8), observe that the asymptotic relation in (44) can be rewritten as
$$E\big[(S^{(2)}_{n,r_n})^2 \mid G_n\big] = \big\|\pi^{(n)}_{t_n} u^e_n\big\|^2_{H_n} \to 0 \ \text{in } L^1(P), \qquad (48)$$
which immediately yields, as n → +∞, $S^{(1)}_{n,r_n} \stackrel{P}{\to} 0$ and $E[\exp(i\lambda S^{(2)}_{n,r_n}) \mid G_n] \stackrel{P}{\to} 1$ for every λ ∈ R. To justify the last relation, just observe that (48) implies that $E[(S^{(2)}_{n,r_n})^2 \mid G_n] \to 0$ in L¹(P), and hence, for every diverging sequence n_k, there exists a subsequence n′_k such that, a.s.-P, $E[(S^{(2)}_{n'_k,r_{n'_k}})^2 \mid G_{n'_k}] \to 0$, which in turn yields that, a.s.-P, $E[\exp(i\lambda S^{(2)}_{n'_k,r_{n'_k}}) \mid G_{n'_k}] \to 1$. To prove (9), observe that $\psi_{H_n}(u^e_n; \lambda) - \psi_{H_n}(u_n; \lambda) \stackrel{P}{\to} 0$ by (43). Hence, since (38) holds for u_n, it also holds when u_n is replaced by the elementary sequence u^e_n. Since $S^{(2)}_{n,N_n} = J^{\pi^{(n)}}_{\widetilde X_n}(u^e_n)$ and G_n = σ(X_n), relation (9) holds. It follows that the assumptions of Theorem 1 are satisfied, and we deduce that necessarily, as n → +∞,
$$E\big[\exp\big(i\lambda J^{\pi^{(n)}}_{\widetilde X_n}(u^e_n)\big) \mid \sigma(X_n)\big] = \exp\big[\psi_{H_n}(u^e_n; \lambda)\big] \stackrel{P}{\to} \phi(\lambda)$$
(the equality follows from Proposition 6 and the fact that X_n and $\widetilde X_n$ are independent).
Theorem 1 also yields To go back from u e n to u n , we use which follows again from (43), and we deduce that Finally, by combining (49) and (50), we obtain By using the same approximation procedure as in the preceding proof, we may use Proposition 2 to prove the following refinement of Theorem 7.
Proposition 10 With the notation of Theorem 7, suppose that the sequence J π (n) Xn (u n ) verifies (39), and that there exists a finite random variable C (ω) > 0 such that, for some η > 0, Then, there is a subsequence {n (k) : k ≥ 1} such that, a.s.-P, Theorem 7 can also be extended to a slightly more general framework. To this end, we introduce some further notation. Fix a closed subspace H * ⊆ H. For every t ∈ [0, 1], we denote by π ≤t H * the closed linear subspace of H generated by the set {π s f : f ∈ H * , s ≤ t}. Of course, π ≤t H * ⊆ π t H = π ≤t H. For a fixed π ∈ R X (H), we set E π (H, H * , X) to be the subset of E π (H, X) composed of H-valued random variables of the kind where t 2 > t 1 , g ∈ H * and Ψ * (t 1 ) is a square-integrable random variable verifying the measurability condition whereas L 2 π (H, H * , X) is defined as the closure of E π (H, H * , X) in L 2 π (H, X). Note that, plainly, E π (H, X) = E π (H, H, X) and L 2 π (H, X) = L 2 π (H, H, X). Moreover, for every Y ∈ L 2 π (H, H * , X) and every t ∈ [0, 1], the following two properties are verified: (i) the random element π t Y takes values in π ≤t H * , a.s.-P, and (ii) the random variable J π X (π t h) is measurable with respect to the σ-field σ {X (f ) : f ∈ π ≤t H * } (such claims are easily verified for h as in (51), and the general results follow once again by standard density arguments).
Remark -Note that, in general, even when rank (π) = 1 as in (15), and H * is non-trivial, for 0 < t ≤ 1 the set π ≤t H * may be strictly contained in π t H. It follows that the σ-field σ {X (f ) : f ∈ π ≤t H * } can be strictly contained in F π t (X), as defined in (19). To see this, just consider the case H = L 2 ([0, 1] , dx), The following result can be proved along the lines of Lemma 3.

Lemma 11
For every closed subspace H * of H, a random element Y is in L 2 π (H, H * , X) if, and only if, Y ∈ L 2 (H, X) and, for every t ∈ [0, 1], The next theorem can be proved by using arguments analogous to the ones in the proof of Theorem 7. Here, H n = H and X n (H n ) = X (H) for every n.

Stable limit theorems for multiple integrals with respect to independently scattered measures
This section concerns multiple integrals with respect to independently scattered random measures (not necessarily Gaussian) and corresponding limit theorems. In particular, we will use Theorem 7 to obtain new central and non-central limit theorems for these multiple integrals, extending part of the results proved in [19] and [24] in the framework of multiple Wiener-Itô integrals with respect to Gaussian processes. A specific application is described in Section 3.3, where we deal with sequences of double integrals with respect to Poisson random measures. For further applications of the theory developed in Section 2 to the asymptotic analysis of Gaussian fields, the reader is referred to Section 6, as well as to the companion paper [23]. For a general discussion concerning multiple integrals with respect to random measures, see [4] and [29]. For limit theorems involving multiple stochastic integrals (and other related classes of random variables), see the two surveys by Surgailis [33] and [34], and the references therein.

Independently scattered random measures and multiple integrals
From now on (Z, Z, µ) stands for a standard Borel space, with µ a positive, non-atomic and σ-finite measure on (Z, Z). We denote by Z µ the subset of Z composed of sets of finite µ-measure. Observe that the σ-finiteness of µ implies that Z = σ (Z µ ).
Let H µ = L 2 (Z, Z, µ) be the Hilbert space of real-valued and square-integrable functions on (Z, Z) (with respect to µ). Since relation (52) holds, it is easily seen that there exists a unique collection of centered and square-integrable random variables such that the following two properties are verified: (a) for every elementary function h ∈ H µ with the form h (z) = i=1,...,n c i 1 Bi (z), where n = 1, 2, ..., c i ∈ R and B i ∈ Z µ are disjoint, Property (a) implies in particular that, ∀B ∈ Z µ , M (B) = X M (1 B ). Note that X M is a collection of random variables of the kind defined in formula (17) of Section 3. Moreover, for every h ∈ H µ , the random variable X M (h) has an infinitely divisible law. It follows that, for every h ∈ H µ , there exists a unique pair c 2 (h) , ν h such that c 2 (h) ∈ [0, +∞) and ν h is a (Lévy) measure on R satisfying the three properties in (20), so that, for every λ ∈ R, where the Lévy-Khinchine exponent ψ Hµ (h; λ) is defined by (22).
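To make the objects above concrete, here is a minimal numerical sketch, not part of the paper; the choices Z = [0, 1], µ = 10 × Lebesgue measure and a compensated Poisson M are illustrative assumptions. It simulates the values M (B i) on disjoint cells and checks the features used above: centering, Var M (B i) = µ (B i), and independence over disjoint sets.

```python
import numpy as np

rng = np.random.default_rng(0)
cells = np.linspace(0.0, 1.0, 5)               # 4 disjoint cells B_i partitioning Z = [0, 1]
mu = np.diff(cells) * 10.0                     # control measure mu = 10 * Lebesgue: mu(B_i) = 2.5

# Independently scattered compensated Poisson measure:
# M(B_i) = N(B_i) - mu(B_i), with N(B_i) ~ Poisson(mu(B_i)) independent across disjoint cells
samples = rng.poisson(mu, size=(200_000, 4)) - mu

C = np.corrcoef(samples, rowvar=False)
print(samples.mean(axis=0))                    # ~ 0 (centered)
print(samples.var(axis=0))                     # ~ mu(B_i) = 2.5 (second-moment structure)
print(C)                                       # ~ identity (independent scattering)
```

Any independently scattered measure with control µ exhibits the same first two moments on disjoint sets; only the higher-order (Lévy) structure differs.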

For every
where σ 2 µ (z) = (dc 2 /dµ) (z), then, for every h ∈ H µ = L 2 (Z, Z, µ), Z |K µ (λh (z) , z)| µ (dz) < +∞ and the exponent ψ Hµ in (55) is given by Proof whenever A ∈ Z µ , and observe (see [27, Definition 2.2]) that γ (·) can be canonically extended to a σ-finite and positive measure on (Z, Z). Moreover, since µ (B) = 0 implies M (B) = 0 a.s.-P, the uniqueness of the Lévy-Khinchine characteristics implies as before γ (A) = 0, and therefore γ (dz) ≪ µ (dz). Observe also that, by standard arguments, one can select a version of the density (dγ/dµ) (z) such that (dγ/dµ) (z) < +∞ for every z ∈ Z. According to [27, Lemma 2.3], there exists a function ρ : Z × B (R) → [0, +∞], such that: (a) ρ (z, ·) is a Lévy measure on B (R) for every z ∈ Z, (b) ρ (·, C) is a Borel measurable function for every C ∈ B (R), (c) for every positive function g (z, x) ∈ Z ⊗ B (R), In particular, by using (60) in the case g (z, since M (A) ∈ L 2 (P), and we deduce that ρ can be chosen in such a way that, for every z ∈ Z, R x 2 ρ (z, dx) < +∞. Now define, for every z ∈ Z and C ∈ B (R), and observe that, due to the previous discussion, the mapping ρ µ : Z × B (R) → [0, +∞] trivially satisfies properties (i)-(iii) in the statement of Point 3, which is therefore proved. To prove Point 4, first define a function h ∈ H µ to be simple if h (z) = n i=1 a i 1 Ai (z), where a i ∈ R, and (A 1 , ..., A n ) is a finite collection of disjoint elements of Z µ . Of course, the class of simple functions (which is a linear space) is dense in H µ , and therefore for every h ∈ H µ there exists a sequence h n , n ≥ 1, of simple functions such that Z (h n (z) − h (z)) 2 µ (dz) → 0. As a consequence, since µ is σ-finite there exists a subsequence n k such that h n k (z) → h (z) for µ-a.e. z ∈ Z (and therefore for γ-a.e. z ∈ Z) and moreover, for every A ∈ Z, the random sequence X M (1 A h n ) (where we use the notation (53)) is a Cauchy sequence in L 2 (P), and hence it converges in probability.
In the terminology of [27, p. 460], this implies that every h ∈ H µ is M -integrable, and that, for every A ∈ Z, the random variable X M (h1 A ), defined according to (53), coincides with A h (z) M (dz), i.e. the integral of h with respect to the restriction of M (·) to A, as defined in [27, p. 460]. As a consequence, by using a slight modification of [27, Proposition 2.6] 2 , the function K 0 on R × Z given by where σ 2 0 (z) = dc 2 /dγ (z), is such that Z |K 0 (λh (z) , z)| γ (dz) < +∞ for every h ∈ H µ , and also Relation (55) and the fact that, by definition,
Examples -(a) If M is a centered Gaussian measure with control µ, then ν = 0 and, for h ∈ H µ , (b) If M is a centered Poisson measure with control µ, then c 2 (·) = 0 and ρ µ (z, dx) = δ 1 (dx) for all z ∈ Z, where δ 1 is the Dirac mass at x = 1, and therefore, for h ∈ H µ , For instance, one can take Z = [0, +∞) × R × R, and µ (dx, du, dw) = dxduν (dw), where ν (dw) = 1 |w|<1 |w| −(1+α) dw and α ∈ (0, 2). In this case, the centered Poisson measure M generates the (standard) Poissonized Telecom process {Y P,α (t) : t ≥ 0}, defined in [3, Section 4.1] as The difference lies in the choice of the truncation.
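The Poisson case of the exponent can be verified numerically. The sketch below is an illustration under assumed parameters: h is a step function on Z = [0, 1] and M a compensated Poisson measure, so that c 2 = 0 and ρ µ (z, dx) = δ 1 (dx), and the empirical characteristic function of X M (h) is compared with exp ψ Hµ (h; λ), where ψ Hµ (h; λ) = ∫ Z (e^{iλh(z)} − 1 − iλh(z)) µ (dz).

```python
import numpy as np

rng = np.random.default_rng(1)
mu_cells = np.array([0.75, 0.75, 0.75, 0.75])   # mu(B_i) for disjoint cells B_i
c = np.array([1.0, -0.5, 2.0, 0.25])            # h = sum_i c_i 1_{B_i} (assumed step function)
lam = 0.7

# X_M(h) = sum_i c_i (N_i - mu(B_i)), with N_i ~ Poisson(mu(B_i)) independent
N = rng.poisson(mu_cells, size=(400_000, 4))
X = (N - mu_cells) @ c

# Levy-Khinchine exponent in the compensated Poisson case (c^2 = 0, rho_mu = delta_1):
# psi(h; lam) = int_Z (exp(i lam h(z)) - 1 - i lam h(z)) mu(dz)
psi = np.sum(mu_cells * (np.exp(1j * lam * c) - 1.0 - 1j * lam * c))

gap = abs(np.exp(1j * lam * X).mean() - np.exp(psi))
print(gap)                                      # close to zero
```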
We now want to define multiple integrals, of functions vanishing on diagonals, with respect to the random measure M . To this end, fix d ≥ 2 and set µ d to be the canonical product measure on Z d , Z d induced by µ. We introduce the following standard notation: to be the multiple integral, of order d, of f with respect to M . It is well known (see for instance [29, Theorem 5]) that there exists a unique linear extension of I M d , from S s,0 µ d to L 2 s,0 µ d , satisfying the following: (a) for every f ∈ L 2 s,0 µ d , I M d (f ) is a centered and square-integrable random variable, and (b) for every f, g ∈ L 2 s,0 µ d where so that (since µ is non-atomic, and therefore the product measures do not charge diagonals), for every
In what follows, we shall show that, for some well-chosen resolutions π ∈ R XM (H µ ), every multiple integral of the type I M d (f ), f ∈ L 2 s,0 µ d , can be represented in the form of a generalized adapted integral of the kind introduced in Section 3. As a consequence, the asymptotic behavior of I M d (f ) can be studied by means of Theorem 7.
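For d = 2 and f = g, the isometry in property (b) reads E[I M 2 (f ) 2 ] = 2 ∥f ∥ 2 in L 2 (µ 2 ) for symmetric f vanishing on the diagonal. A toy discretization, in which all cells, masses and kernel values are assumptions for illustration, realizes I M 2 (f ) as the off-diagonal quadratic form Σ_{i≠j} f (z i , z j ) M (B i ) M (B j ) and checks the identity by simulation:

```python
import numpy as np

rng = np.random.default_rng(2)
k = 6
mu = np.full(k, 0.5)                           # cell masses mu(B_i) (assumed)
F = rng.normal(size=(k, k)); F = (F + F.T) / 2 # symmetric kernel values f(z_i, z_j)
np.fill_diagonal(F, 0.0)                       # f vanishes on the diagonal

X = rng.poisson(mu, size=(300_000, k)) - mu    # M(B_i): centered, independent (compensated Poisson)
I2 = np.einsum('ni,ij,nj->n', X, F, X)         # discrete I_2(f) = sum_{i != j} f_ij M(B_i) M(B_j)

norm2 = np.einsum('ij,i,j->', F**2, mu, mu)    # ||f||^2 in L^2(mu x mu), off-diagonal cells
print(I2.mean(), (I2**2).mean(), 2 * norm2)    # mean ~ 0, second moment ~ 2 ||f||^2
```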

Representation of multiple integrals and limit theorems
Under the notation and assumptions of this section, consider a "continuous" increasing family {Z t : t ∈ [0, 1]} of elements of Z, such that Z 0 = ∅, Z 1 = Z, Z s ⊆ Z t for s < t, and, for every g ∈ L 1 (µ) and every t ∈ [0, 1], To each t ∈ [0, 1], we associate the following projection operator π t : so that, since M is independently scattered, the continuous resolution of the identity π = {π t : t ∈ [0, 1]} is such that π ∈ R XM (H µ ). Note also that, thanks to (22) and by uniform continuity, for every f ∈ H µ , every t ∈ (0, 1] and every sequence of partitions of [0, t], and in particular, for every B ∈ Z µ , max i=0,...,rn−1 The following result contains the key to the subsequent discussion.
Proposition 14 Every random variable of the type , for some f ∈ L 2 s,0 µ d and t ∈ (0, 1], can be approximated in L 2 (P) by linear combinations of random variables of the type where the t 1 , ..., t d are rational, 0 ≤ t 1 < t 2 < · · · < t d ≤ t and B 1 , ..., B d ∈ Z µ are disjoint. In particular, where the filtration F π t , t ∈ [0, 1], is defined as in (19).
Remark -Observe that, if f ∈ S s,0 µ d is such that then, by (62), Proof. Observe first that, for every f ∈ L 2 s,0 µ d , every t ∈ (0, 1] and every sequence of rational numbers t n → t, in L 2 (P). By density, it is therefore sufficient to prove the statement for multiple integrals of the type where t ∈ Q∩ (0, 1] and f ∈ S s,0 µ d is as in (61). Start with d = 2. In this case, with B 1 , B 2 disjoint, and also, for every partition {0 = t 0 < t 1 < ... < t r = t} (with r ≥ 1) of [0, t], The summands in the first sum Σ 1 have the desired form (70). It is therefore sufficient to prove that for every sequence of partitions t (n) , n ≥ 1, as in (67) and such that mesh t (n) → 0 and the t Since B 1 and B 2 are disjoint, and thanks to the isometric properties of M , thanks to (69). Now fix d ≥ 3, and consider a random variable of the type where B 1 , ..., B d ∈ Z µ are disjoint. The above discussion yields that F can be approximated by linear combinations of random variables of the type where r < s < u < v ≤ t are rational. We will proceed by induction, focusing first on the terms in the brackets in (75). Express Z t as the union of five disjoint sets. Observe that the last three summands involve disjoint subsets of Z and hence are of the form (70). Since each of the first two summands involves two identical subsets of Z (e.g. (Z s \Z r )) and a disjoint subset, they can be dealt with in the same way as (73) above. Thus, linear combinations of the five summands on the RHS of (76) can be approximated by linear combinations of random variables of the type where C 1 , C 2 , C 3 ∈ Z µ are disjoint, and t 1 < t 2 < t 3 ≤ t are rational. The general result is obtained by induction.
Proposition 14 will be used to prove that, whenever there exists π ∈ R XM (H µ ) defined as in formula (66), multiple integrals can be represented as generalized adapted integrals of the kind described in Section 3. To do this, we introduce a partial ordering on Z as follows: for every z, z ′ ∈ Z, if, and only if, there exists t ∈ Q∩ (0, 1) such that z ∈ Z t and z ′ ∈ Z c t , where Z c t stands for the complement of Z t . For a fixed d ≥ 2, we define the π-purely non-diagonal subset of Z d as Note that Z d π,0 ∈ Z d , and also that not every pair of distinct points of Z can be ordered, that is, 1]; indeed ((1/8, 1/4) , (1/4, 1/4)) ∈ Z 2 0 , but (1/4, 1/4) and (1/8, 1/4) cannot be ordered). However, because of the continuity condition (65) and for every d ≥ 2, the class of the elements of Z d 0 whose components cannot be ordered has measure µ d equal to zero, as indicated by the following corollary.
Corollary 15 For every d ≥ 2 and every f ∈ L 2 s,0 µ d , Proof. First observe that the class of r.v.'s of the type
For π ∈ R XM (H µ ) as in formula (66), the vector spaces L 2 π (H µ , X M ) and E π (H µ , X M ), composed respectively of adapted and elementary adapted elements of L 2 (H µ , X M ), are defined as in Section 3 (in particular, via formulae (24) and (25)). Recall that, according to Lemma 3, the closure of E π (H µ , X M ) coincides with L 2 π (H µ , X M ). For every h ∈ L 2 π (H µ , X M ), the random variable J π XM (h) is defined by means of Proposition 4 and formula (27). The following result states that every multiple integral with respect to M is indeed a generalized adapted integral of the form J π XM (h), for some h ∈ L 2 π (H µ , X M ). In what follows, for every d ≥ 1, every f ∈ L 2 s,0 µ d and every fixed z ∈ Z, the symbol f (z, ·) 1 (· ≺ π z) stands for the element of L 2 s,0 µ d−1 , given by
Proposition 16 Fix d ≥ 2, and let f ∈ L 2 s,0 µ d . Then,

the random function
is an element of L 2 π (H µ , X M ); where h π (f ) is defined as in (79).
Moreover, if a random variable F ∈ L 2 (P) has the form F = where f (d) ∈ L 2 s,0 µ d for d ≥ 1 and the series is convergent in L 2 (P), then where and the series in (81) is convergent in L 2 π (H µ , X M ).
Proof. It is sufficient to observe that, thanks to Proposition 16, I Mn dn f (n) dn = J π (n) XM n h π (n) f (n) dn , n ≥ 1. As a consequence, by using (82), Moreover, according to Proposition 13, The conclusion is now a direct consequence of Theorem 7.
Remark -Starting from Theorem 17, one can prove an analogue of Corollary 8 (for nested resolutions) and of Corollary 9 (for non-random φ (λ)). Moreover, Theorem 17 can be immediately extended to sequences of random variables of the type , n ≥ 1, by using the last part of with h π (n) (F n )).
Condition (86) can be difficult to verify, since it involves the sequence of random integrands h π (n) f (n) dn , which may be complex functionals of the kernels f (n) dn . In the next section, we will show that, in the specific framework of double Poisson integrals, one can establish neat sufficient conditions for (86), with a deterministic φ (λ), by using a version of the multiplication formula for multiple stochastic integrals. The techniques developed below can be extended to integrals of higher orders, to a random φ (λ), and even to non-Poissonian random measures, as long as a version of the multiplication formula is available (one might use, for instance, the general theory of "diagonal measures" developed in [4] and [29]). These extensions will be discussed in a separate paper.

Application: CLTs for double Poisson integrals
In this section (Z, Z, µ) is a Borel measure space, with µ non-atomic, σ-finite and positive. Also, N stands for a compensated Poisson random measure on (Z, Z) with control µ. This means that N = N (B) : B ∈ Z µ is an independently scattered random measure as in Definition E, such that, for every where N (B) is a Poisson random variable with parameter µ (B). Note that, for every h ∈ L 2 (Z, Z, µ) = H µ , where X N is defined by (53). Moreover, for every h ∈ H µ and λ ∈ R, the Lévy-Khinchine exponent ψ Hµ (h, λ) appearing in (55) is such that (see again [31, Proposition 19.5]) (recall that this corresponds to the case ρ µ (z, dx) = δ 1 (dx) in Proposition 13).
As an application of the previous theory, we shall study the asymptotic behavior of a sequence of random variables of the type where f n ∈ L 2 s,0 µ 2 . In particular, we want to use Theorem 15 to establish sufficient conditions ensuring that F n converges in law to a standard Gaussian distribution. We will suppose the following: Assumption N -(N 1 ) The sequence f n , n ≥ 1, in (90) verifies: (N 1 -ii) (Normalization condition) As n → +∞, where the tilde (∼) stands for symmetrization (note that f ⋆ l r g need not vanish on diagonals, and that we use convention (63)).
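The contraction f ⋆ 1 1 f (a, b) = ∫ Z f (z, a) f (z, b) µ (dz) appearing in Assumption N becomes, for a kernel discretized on cells, a weighted matrix product, so the norms entering the normalization condition can be computed directly. The following sketch uses an assumed grid and a synthetic kernel, both purely illustrative:

```python
import numpy as np

def contraction_11(F, mu):
    """Discrete f *_1^1 f: (a, b) -> int_Z f(z, a) f(z, b) mu(dz)."""
    return F.T @ (mu[:, None] * F)

def l2_norm(G, mu):
    """L^2(mu x mu) norm of a kernel given by its values on the cells."""
    return float(np.sqrt(np.einsum('ab,a,b->', G**2, mu, mu)))

k = 50
mu = np.full(k, 1.0 / k)                       # mu = Lebesgue on [0, 1], k cells (assumed)
rng = np.random.default_rng(3)
F = rng.normal(size=(k, k)); F = F + F.T; np.fill_diagonal(F, 0.0)
F = F / l2_norm(F, mu)                         # normalize so that ||f_n|| = 1

G = contraction_11(F, mu)
print(l2_norm(F, mu), l2_norm(G, mu))          # 1.0, and the contraction norm ||f *_1^1 f||
```

Conditions of the type controlled by (N 1 -ii) compare such contraction norms with the (normalized) kernel norms along the sequence.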
The next result is the announced central limit theorem.
Remarks -(a) Note that the statement of Theorem 18 does not involve any resolution of the identity. However, part (N 2 ) of Assumption N will play a crucial role in the proof.
(b) Observe that
(c) Let G be a Gaussian measure on (Z, Z), with control µ, and, for n ≥ 1, let H n = I G 2 (h n ) be the double Wiener-Itô integral of a function h n ∈ L 2 s,0 (µ 2 ). In [19, Theorem 1] it is proved that, if 2 ∥h n ∥ 2 → 1 and regardless of Assumption N, the following three conditions are equivalent: Note also that Theorem 1 in [19] applies to multiple integrals of arbitrary order.
(d) A sufficient condition for the uniform integrability of F 4 n is clearly that sup n E F 4+ε n < +∞ for some ε > 0. Note that in the Gaussian framework of [19, Theorem 1] the uniform integrability condition is always satisfied. Indeed, denoting by H n = I G 2 (h n ) (n ≥ 1) the sequence of double integrals introduced in the previous remark, for every p > 2 there exists a finite constant c p such that sup n E (|H n | p ) ≤ c p sup n (E H 2 n ) p/2 < +∞, where the last relation follows from the normalization condition E H 2 n = 2 ∥h n ∥ 2 → 1.
We now want to show that f n ⋆ 1 1 f n (a, b) = Z f n (z, a) f n (z, b) µ (dz) → 0 implies that h n → 0. To do this, we start by observing that, a.e.-µ 2 (da, db) and thanks to Corollary 15, As a consequence, by noting (for fixed z) we obtain that, for any fixed z, and therefore There are no cross terms because the multiple integrals have different orders, and hence are orthogonal.
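The orthogonality invoked in the last sentence can be seen in a toy discretization (all parameters are illustrative assumptions): with centered, independent cell variables, a single integral I 1 (g) and an off-diagonal double integral I 2 (f ) have product mean zero, since every mixed moment E[X i 2 X k ] (i ≠ k) factorizes and vanishes.

```python
import numpy as np

rng = np.random.default_rng(4)
k = 5
mu = np.full(k, 1.0)                            # cell masses (assumed)
g = rng.normal(size=k)                          # kernel of a single integral
F = rng.normal(size=(k, k)); F = (F + F.T) / 2
np.fill_diagonal(F, 0.0)                        # kernel of a double integral, off-diagonal

X = rng.poisson(mu, size=(500_000, k)) - mu     # centered, independent cell variables
I1 = X @ g                                      # discrete single integral
I2 = np.einsum('ni,ij,nj->n', X, F, X)          # discrete double integral

cross = (I1 * I2).mean()
print(cross)                                    # ~ 0: integrals of different orders are orthogonal
```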

Stable convergence of functionals of Gaussian processes
We shall now use Theorem 7 to prove general sufficient conditions ensuring the stable convergence of functionals of Gaussian processes towards mixtures of normal distributions. This extends part of the results contained in [19] and [24], and leads to quite general criteria for the stable convergence of Skorohod integrals and multiple Wiener-Itô integrals. However, to keep the length of this paper within bounds, we have deferred the discussion of multiple Wiener-Itô integrals, as well as some relations with Brownian martingales, to a separate paper; see [23].

Preliminaries
Consider a real separable Hilbert space H, as well as a continuous resolution of the identity π = {π t : t ∈ [0, 1]} ∈ R (H) (see Definition D). Throughout this paragraph, X = X (H) = {X (f ) : f ∈ H} stands for a centered Gaussian family, defined on some probability space (Ω, F , P), indexed by the elements of H and satisfying the isomorphic condition (14). Note that, due to the Gaussian nature of X, every vector as in (18) is composed of independent random variables, and therefore, in this case, R (H) = R X (H). When (14) is satisfied and X (H) is a Gaussian family, one usually says that X (H) is an isonormal Gaussian process, or a Gaussian measure, over H (see e.g. [17, Section 1] or [18]). As before, we write L 2 (H, X) to indicate the (Hilbert) space of H-valued and σ (X)-measurable random variables. The filtration F π (X) = {F π t (X) : t ∈ [0, 1]} (which is complete by definition) is given by formula (19).
In what follows, we shall apply to the Gaussian measure X some standard notions and results from Malliavin calculus (the reader is again referred to [17] and [18] for any unexplained notation or definition). For instance, D = D X and δ = δ X stand, respectively, for the usual Malliavin derivative and Skorohod integral with respect to the Gaussian measure X (the dependence on X will be dropped when there is no risk of confusion); D 1,2 X is the space of differentiable functionals of X, endowed with the norm · 1,2 (see [17, Chapter 1] for a definition of this norm); dom (δ X ) is the domain of the operator δ X . Note that D X is an operator from D 1,2 X to L 2 (H, X), and also that dom (δ X ) ⊂ L 2 (H, X). For every d ≥ 1, we define H ⊗d and H ⊙d to be, respectively, the dth tensor product and the dth symmetric tensor product of H. For d ≥ 1 we will denote by I X d the isometry between H ⊙d equipped with the norm √ d! · H ⊗d and the dth Wiener chaos of X.
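In the Gaussian case the chaos isometry can be made concrete in a minimal simulation, not from the paper: for h ∈ H of unit norm, I X 2 (h ⊗ h) = X (h) 2 − 1 is the second Hermite polynomial in X (h), with E[I X 2 (h ⊗ h) 2 ] = 2! ∥h ⊗ h∥ 2 = 2, and it is orthogonal to the first chaos.

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.normal(size=1_000_000)                  # samples of X(h), for a unit-norm h in H
h2 = x**2 - 1.0                                 # I_2(h (x) h) = H_2(X(h)), second Hermite polynomial

print(h2.mean(), h2.var())                      # ~ 0 and ~ 2 = 2! * ||h (x) h||^2
print((x * h2).mean())                          # ~ 0: first and second Wiener chaos are orthogonal
```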
The vector spaces L 2 π (H, X) and E π (H, X), composed respectively of adapted and elementary adapted elements of L 2 (H, X), are once again defined as in Paragraph 3. We now want to link the above defined operators δ X and D X to the theory developed in the previous sections. In particular, we shall use the facts that (i) for any π ∈ R X (H), L 2 π (H, X) ⊆ dom (δ X ), and (ii) for any u ∈ L 2 π (H, X) the random variable J π X (u) can be regarded as a Skorohod integral. They are based on the following (simple) result, proved for instance in [36,Lemme 1].

Proposition 19
Let the assumptions of this section prevail. Then, L 2 π (H, X) ⊆ dom (δ X ), and for every Moreover, if h ∈ E π (H, X) has the form h = n i=1 h i , where n ≥ 1, and h i ∈ E π (H, X) is such that with t (X)-measurable, then Relation (121) implies, in the terminology of [36], that L 2 π (H, X) is a closed subspace of the isometric subset of dom (δ X ), defined as the class of those h ∈ dom (δ X ) s.t. E δ X (h) 2 = h 2 L 2 (H,X) (note that, in general, such an isometric subset is not even a vector space; see [36, p. 170]). Relation (122) applies to simple integrands h, but by combining (121), (122) and Proposition 4, we deduce immediately that, for every h ∈ L 2 π (H, X), δ X (h) = J π X (h) , a.s.-P.
where the random variable J π X (h) is defined according to Proposition 4 and formula (27). Observe that the definition of J π X involves the resolution of the identity π, whereas the definition of δ does not involve any notion of resolution.
The next crucial result, which is partly a consequence of the continuity of π, is an abstract version of the Clark-Ocone formula (see [17]): it is a direct corollary of [36, Théorème 1, formula (2.4) and Théorème 3], to which the reader is referred for a detailed proof.
Proposition 20 (Abstract Clark-Ocone formula; Wu, 1990) Under the above notation and assumptions (in particular, π is a continuous resolution of the identity as in Definition D), for every F ∈ D 1,2 X , where D X F is the Malliavin derivative of F , and proj · | L 2 π (H, X) is the orthogonal projection operator on L 2 π (H, X).
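A worked instance of the Clark-Ocone formula (124), where the concrete choices are illustrative assumptions: take X to be the isonormal process of a standard Brownian motion W on H = L 2 [0, 1], and F = W 1 2 . Then D t F = 2W 1 , the adapted projection is E[2W 1 | F t ] = 2W t , and (124) gives F = E (F ) + δ X (2W ·) = 1 + ∫ 0 1 2W t dW t . A forward Euler discretization of the (here Itô) integral recovers F path by path:

```python
import numpy as np

rng = np.random.default_rng(6)
n, paths = 2000, 200
dt = 1.0 / n
dW = rng.normal(scale=np.sqrt(dt), size=(paths, n))    # Brownian increments
W = np.cumsum(dW, axis=1)
W_left = np.hstack([np.zeros((paths, 1)), W[:, :-1]])  # W at the left endpoint of each step

F = W[:, -1] ** 2                                      # F = W_1^2, so E F = 1
ito = np.sum(2.0 * W_left * dW, axis=1)                # Ito sum for int_0^1 2 W_t dW_t

err = float(np.max(np.abs(F - (1.0 + ito))))
print(err)                                             # small: the residual is |sum dW^2 - 1|
```

The residual equals |Σ (ΔW) 2 − 1| exactly, and so vanishes as the mesh shrinks.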
Remarks -(a) Note that the right-hand side of (124) is well defined since D X F ∈ L 2 (H, X) by definition, and therefore where the last inclusion is stated in Proposition 19.
(b) Formula (124) has been proved in [36] in the context of abstract Wiener spaces, but in the proof of (124) the role of the underlying probability space is immaterial. The extension to the framework of isonormal Gaussian processes is therefore standard (see e.g. [18, Section 1.1]).
(c) Since D 1,2 X is dense in L 2 (P) and δ X L 2 π (H, X) is an isometry (due to relation (121)), the Clark-Ocone formula (124) implies that every F ∈ L 2 (P, σ (X)) admits a unique "predictable" representation of the form F = E (F ) + δ X (u) , u ∈ L 2 π (H, X) ; Now consider, as before, an independent copy of X, denoted X = X (f ) : f ∈ H , and, for h ∈ L 2 π (H, X), define the random variable J π X (h) according to Proposition 4 and (28). The following result is an immediate consequence of Proposition 13, and characterizes J π X (h), h ∈ L 2 π (H, X), as a conditionally Gaussian random variable.

Proposition 21
For every h ∈ L 2 π (H, X) and for every λ ∈ R,

Stable convergence to a mixture of Gaussian distributions
The following result, based on Theorem 7, gives general sufficient conditions for the stable convergence of Skorohod integrals to a mixture of Gaussian distributions. In what follows, H n , n ≥ 1, is a sequence of real separable Hilbert spaces, and, for each n ≥ 1, X n = X n (H n ) = {X n (g) : g ∈ H n }, is an isonormal Gaussian process over H n ; for n ≥ 1, X n is an independent copy of X n (note that X n appears in the proof of the next result, but not in the statement). Recall that R (H n ) is a class of resolutions of the identity π (see Definition D), and that the Hilbert space L 2 π (H n , X n ) is defined after Relation (24).
If u n
Corollary 23 Let H n , X n (H n ), π (n) , t n and U n , n ≥ 1, satisfy the assumptions of Theorem 22 (in particular, (37) holds), and consider a sequence of random variables {F n : n ≥ 1}, such that E (F n ) = 0 and F n ∈ D 1,2 Xn for every n. Then, a sufficient condition to have that F n → (s,U * ) Eµ (·) and E exp (iλF n ) | F π (n) tn (X n ) where U * = ∨ n U n , Y ≥ 0 is s.t. Y ∈ U * and µ (λ) = exp − λ 2 2 Y , ∀λ ∈ R, is π (n) tn proj D Xn F n | L 2 π (n) (H n , X n ) 2 Hn P → 0 and proj D Xn F n | L 2 π (n) (H n , X n ) Proof. Since, for every n, F n is a centered random variable in D 1,2 Xn , the abstract Clark-Ocone formula ensures that F n = δ Xn proj D Xn F n | L 2 π (n) (H n , X n ) ; the result follows from Theorem 22, by putting u n = proj D Xn F n | L 2 π (n) (H n , X n ) .

Example: a "switching" sequence of quadratic Brownian functionals
We are interested in the asymptotic behavior, for n → +∞, of the "switching" sequence where W (n) = W for n odd, and W (n) = W * for n even.
In particular, we would like to determine the speed at which A n converges to zero as n → +∞, by establishing a stable convergence result. We start by observing that the asymptotic study of A n can be reduced to that of a sequence of double stochastic integrals. As a matter of fact, from the relation  = o P (1) + 1 + 8n 4n + 3 by (131), where o P (1) stands for a sequence converging to zero in probability (as n → +∞). We thus have shown that relations (127) and (129) of Theorem 22 are satisfied. It remains to verify relation (37), namely to show that there exists a sequence of σ-fields {U n : n ≥ 1} verifying U n ⊆ U n+1 ∩ F π (n) tn (X W ) and ∨ n U n = σ (W ). The sequence U n = σ {W u − W s : 1 − t n ≤ s < u ≤ t n } , which is increasing and such that U n ⊆ F π (n) tn (X W ) (see (135)), verifies these properties. Therefore, Theorem 22 applies, and we obtain the stable convergence result (132).
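The √n speed in this example is that of the classical CLT for the quadratic variation of Brownian motion. The following simplified sketch is not the paper's exact sequence A n ; it only illustrates the renormalization rate: √(n/2) (Σ i (ΔW i ) 2 − 1) is centered with unit variance and converges in law to N (0, 1).

```python
import numpy as np

rng = np.random.default_rng(7)
n, reps = 300, 20_000
dW = rng.normal(scale=np.sqrt(1.0 / n), size=(reps, n))   # Brownian increments on [0, 1]
S = np.sqrt(n / 2.0) * ((dW**2).sum(axis=1) - 1.0)        # renormalized quadratic variation

print(S.mean(), S.var())                                  # ~ 0 and ~ 1: Gaussian limit
```

In the paper's setting the convergence is stable rather than merely in law, thanks to the increasing conditioning σ-fields U n with ∨ n U n = σ (W ).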
Remarks -(a) The sequence A ′ n = √ n (2n + 1) A n , although stably convergent and such that (136) is verified, does not admit a limit in probability. Indeed, simple computations show that A ′ n is not a Cauchy sequence in L 2 (P) and therefore, since the L 2 and L 0 topologies coincide on any finite sum of Wiener chaoses (see e.g. [32]), A ′ n cannot converge in probability. (b) Observe that, by using the notation introduced above (see e.g. (15)), rank (π o ) = rank (π e ) = 1.