Poisson stochastic integration in Banach spaces

We prove new upper and lower bounds for Banach space-valued stochastic integrals with respect to a compensated Poisson random measure. Our estimates apply to Banach spaces with non-trivial martingale (co)type and extend various results in the literature. We also develop a Malliavin framework to interpret Poisson stochastic integrals as vector-valued Skorohod integrals, and prove a Clark-Ocone representation formula.


Introduction
This paper investigates upper and lower bounds for the L^p-norms of stochastic integrals of the form (1.1), where Ñ is a compensated Poisson random measure with jump space J and φ is a simple, adapted process. Such bounds translate into sufficient and necessary conditions, respectively, for Itô L^p-stochastic integrability. It is well known that if φ takes values in a Hilbert space H, then a straightforward generalization of the Wiener-Itô isometry identifies the second moment of the integral with a square function norm of φ. It is, however, surprisingly more difficult to find bounds for the p-th moments of stochastic integrals for p ≠ 2 and for processes φ taking values in more general Banach spaces. In a recent paper [13], the first-named author obtained sharp upper and lower bounds for the L^p-norms, or in other words an Itô isomorphism, for stochastic integrals of the form (1.1) in the case that φ takes values in L^q(S) with 1 < q < ∞. These estimates take six fundamentally different forms depending on the relative position of the parameters p and q with respect to 2; see Theorem 2.15 for a precise statement. This is in sharp contrast to the situation for stochastic integrals with respect to Wiener noise where, essentially as a consequence of Kahane's inequalities, the space of stochastically integrable processes can be described in terms of a single family of square function norms. In fact, an Itô isomorphism for Gaussian stochastic integrals in the much wider class of UMD Banach spaces was obtained in [31]. The result in [13] indicates, however, that the situation is more involved for Poisson stochastic integrals, and it remains an open problem to find sharp bounds for such integrals with a general Banach space-valued integrand.
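For φ taking values in a Hilbert space H, the isometry referred to here takes the following standard form (stated in the notation of Section 2; a routine extension of the scalar Wiener-Itô isometry):

```latex
\mathbb{E}\Big\| \int_{(0,t]\times J} \phi \,\mathrm{d}\tilde N \Big\|_H^2
  \;=\; \mathbb{E}\int_0^t\!\!\int_J \|\phi(s,j)\|_H^2 \,\nu(\mathrm{d}j)\,\mathrm{d}s .
```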
The aim of the first part of this article is to obtain one-sided extensions of the estimates in [13] to more general Banach spaces. We present upper bounds for the L^p-norm of the stochastic integral for Banach spaces with non-trivial martingale type, and lower bounds for Banach spaces with finite martingale cotype. The main upper bounds in Theorem 2.11 state that if X has martingale type s ∈ (1, 2], then the L^p-norm of the stochastic integral is dominated by an intersection of mixed L^p(L^s)- and L^p(L^p)-norms of φ when s ≤ p < ∞, and by the corresponding sum norm when 1 < p ≤ s. If X has martingale cotype s ∈ [2, ∞), then the 'dual' inequalities hold; see Theorem 2.13. Moreover, in case X is a Hilbert space, the martingale type and cotype inequalities combine into a two-sided inequality that completely characterizes the class of L^p-stochastically integrable processes. These statements extend and complement various partial results in the literature [4,16,27,28,42]; see also the discussion after Theorem 2.11.
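In the notation ‖φ‖_{D^p_{q,X}} = ‖φ‖_{L^p(Ω;L^q(R_+×J;X))} used in Section 2, the two regimes just described can be summarized as follows (a hedged sketch of the shape of Theorem 2.11, not its precise statement):

```latex
\Big\| \sup_{t>0}\|I_t(\phi)\| \Big\|_{L^p(\Omega)}
  \;\lesssim_{p,s,X}\;
  \begin{cases}
    \|\phi\|_{D^p_{s,X}\,\cap\, D^p_{p,X}}, & s \le p < \infty,\\[3pt]
    \|\phi\|_{D^p_{s,X}\,+\, D^p_{p,X}}, & 1 < p \le s .
  \end{cases}
```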
As an application, we present in Theorem 3.3 some estimates for stochastic convolutions using a standard dilation argument. In the setting of Hilbert spaces, these lead to sharp maximal inequalities for stochastic convolutions with a semigroup of contractions.
In general, the estimates in Theorems 2.11 and 2.13 do not lead to an Itô isomorphism if X is not a Hilbert space. Indeed, the aforementioned result in [13] shows that these estimates are already suboptimal for L^q-spaces with q ≠ 2. For UMD Banach spaces, however, we can still formulate an 'abstract' Itô-type isomorphism. Using a standard decoupling argument we obtain, for any 1 < p < ∞, a two-sided estimate of the form (1.2), where ν_p(E; X) is the completion of the space of simple functions f : E → X with respect to the Poisson p-norm introduced in Section 4; the implied constants depend only on p and X. Although the Poisson p-norm is in general still a stochastic object, we can calculate it in terms of deterministic quantities in case X is a Hilbert space or an L^q-space. The isomorphism (1.2) serves as a basis for the development of a vector-valued Poisson Skorohod integral. In Section 5 we define a Malliavin derivative associated with a Poisson random measure in the Banach space-valued setting, following the Gaussian approach of [26]. By deriving an integration by parts formula we show that the Malliavin derivative is a closable operator D with respect to the Poisson p-norms. Assuming that the Banach space X is UMD, the adjoint operator D* is shown to extend the Itô stochastic integral with respect to the compensated Poisson random measure (Theorem 5.10). We conclude by proving a Clark-Ocone representation formula in Theorem 6.6. Our results extend similar results obtained by many authors in a scalar-valued setting; references to this extensive literature are given in Sections 5 and 6. To the best of our knowledge, the Banach space-valued case has not been considered before.

The Poisson stochastic integral
We start by recalling the definition of a Poisson random measure and its associated compensated Poisson random measure. Let (Ω, F, P) be a probability space and let (E, E) be a measurable space. We write N̄ = N ∪ {∞}.
An integer-valued random measure is a mapping N : Ω × E → N̄ which is, almost surely, countably additive in its second argument. Definition 2.1. An integer-valued random measure N : Ω × E → N̄ with σ-finite intensity measure µ is called a Poisson random measure if the following conditions are satisfied: (i) for all B ∈ E the random variable N(B) is Poisson distributed with parameter µ(B); (ii) for all pairwise disjoint sets B_1, …, B_n in E the random variables N(B_1), …, N(B_n) are independent.
If µ(B) = ∞ it is understood that N(B) = ∞ almost surely. For the basic properties of Poisson random measures we refer to [9, Chapter 6]. For B ∈ E with µ(B) < ∞ we set Ñ(B) := N(B) − µ(B). It is customary to call Ñ the compensated Poisson random measure associated with N (even though it is not a random measure in the above sense, as it is defined on the sets of finite µ-measure only). Let (J, J, ν) be a σ-finite measure space and let N be a Poisson random measure on (R_+ × J, B(R_+) × J) with intensity measure dt × ν. Throughout this section we let F = (F_t)_{t>0} be the filtration generated by the random variables N((s, u] × A) with 0 ≤ s < u ≤ t and A ∈ J. Definition 2.2. Let X be a Banach space. A process φ : Ω × R_+ × J → X is a simple, adapted X-valued process if there are a finite partition π = {0 = t_1 < t_2 < … < t_{l+1} < ∞}, random variables F_{ijk} ∈ L^∞(Ω, F_{t_i}), disjoint sets A_1, …, A_m in J satisfying ν(A_j) < ∞, and vectors x_{ijk} ∈ X for i = 1, …, l, j = 1, …, m, and k = 1, …, n such that φ = Σ_{i,j,k} 1_{(t_i, t_{i+1}] × A_j} F_{ijk} x_{ijk}. (2.1) Let t > 0 and B ∈ J. We define the (compensated) Poisson stochastic integral of φ on (0, t] × B with respect to Ñ by I_{t,B}(φ) := Σ_{i,j,k} F_{ijk} x_{ijk} Ñ((t_i ∧ t, t_{i+1} ∧ t] × (A_j ∩ B)), where s ∧ t = min{s, t}. For brevity, we write I(φ) to denote I_{∞,J}(φ).
Definition 2.3. Let 1 ≤ p < ∞. A process φ is called L^p-stochastically integrable if there is a sequence of simple, adapted processes (φ_n) such that φ_n → φ almost everywhere and (I_{t,B}(φ_n))_n converges in L^p(Ω; X) for all t > 0 and B ∈ J. In this case we define I_{t,B}(φ) := lim_{n→∞} I_{t,B}(φ_n), where the limit is taken in L^p(Ω; X).
A weaker notion of stochastic integrability, where L p -convergence in the second condition in Definition 2.3 is replaced by convergence in probability, has been studied extensively by Rosiński [36].
2.1. Conditions for L^p-stochastic integrability. To give substance to the class of L^p-stochastically integrable processes, we study two different inequalities below. First, we search for so-called Bichteler-Jacod inequalities, which bound the moments of the stochastic integral from above. In the next lemma we use the following inequality due to E. M. Stein (see [39], Chapter IV, the proof of Theorem 8). Let 1 < p < ∞ and 1 ≤ s < ∞, and let (E_i)_{i≥1} be the conditional expectations associated with a filtration. If (f_i)_{i≥1} is a sequence of scalar-valued random variables, then the L^p-norm of the ℓ^s-square function of (E_i f_i)_{i≥1} is dominated by that of (f_i)_{i≥1}. (2.9)
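Written out, Stein's inequality as used here reads:

```latex
\Big\| \Big( \sum_{i\ge 1} |\mathbb{E}_i f_i|^{s} \Big)^{1/s} \Big\|_{L^p(\Omega)}
  \;\le\; C_{p,s}\, \Big\| \Big( \sum_{i\ge 1} |f_i|^{s} \Big)^{1/s} \Big\|_{L^p(\Omega)} ,
```

where (E_i)_{i≥1} are the conditional expectations associated with an increasing filtration.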
Proof. We may assume t = t_{l+1}. If we define F_{i,j} = F_{t_i} for all j, then (F_{i,j}) is a filtration with respect to the lexicographic ordering. By the conditional Jensen inequality and Stein's inequality (2.9), the left-hand side is dominated by ‖φ‖_{L^p(Ω;L^s((0,t]×J;X))}.
We will use the following elementary observation (see [13,Lemma 3.4]).
Lemma 2.8. Let N be a Poisson distributed random variable with parameter 0 ≤ λ ≤ 1. Then for every 1 ≤ p < ∞ there exist constants b_p, c_p > 0 such that b_p λ^{1/p} ≤ ‖N − λ‖_{L^p(Ω)} ≤ c_p λ^{1/p}. Remark 2.9. By refining the partition π in Definition 2.2 if necessary, we can and will always assume that (t_{i+1} − t_i)ν(A_j) ≤ 1 for all i = 1, …, l, j = 1, …, m. This will allow us to apply Lemma 2.8 to the compensated Poisson random variables Ñ((t_i, t_{i+1}] × A_j). Finally, we shall use the following reverse of the dual Doob inequality in (2.6). The authors learned this result from [18, Theorem 7.1], where it is even obtained for non-commutative random variables. We give a simple proof for the commutative case following [18], which yields an improved constant. Lemma 2.10 (Reverse dual Doob inequality). Fix 0 < p ≤ 1. Let F be a filtration and let (E_i)_{i≥1} be the associated sequence of conditional expectations. If (f_i)_{i≥1} is a sequence of non-negative random variables in L^1(Ω), then E(Σ_{i≥1} f_i)^p ≤ C_p E(Σ_{i≥1} E_i f_i)^p. Proof. One first reduces, by Hölder's inequality, to an estimate involving an auxiliary parameter ε > 0. The concavity of the map x ↦ x^p, which gives b^p − a^p ≤ p a^{p−1}(b − a) for any 0 < a ≤ b, then yields the desired estimate term by term, and the result follows by letting ε ↓ 0.
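Lemma 2.8 asserts that for 0 ≤ λ ≤ 1 the central p-th moment of a Poisson(λ) variable satisfies b_p λ^{1/p} ≤ ‖N − λ‖_{L^p} ≤ c_p λ^{1/p}. As a quick numerical illustration (not part of the proof; the helper function name is ours), one can evaluate the truncated Poisson series and observe that the ratio ‖N − λ‖_p / λ^{1/p} stays bounded above and below:

```python
from math import exp, factorial

def poisson_central_moment(lam: float, p: float, kmax: int = 60) -> float:
    """E|N - lam|^p for N ~ Poisson(lam), computed by truncating the series."""
    return sum(exp(-lam) * lam**k / factorial(k) * abs(k - lam)**p
               for k in range(kmax + 1))

for p in (1.0, 2.0, 4.0):
    for lam in (0.01, 0.1, 1.0):
        # The ratio below is bounded away from 0 and infinity, uniformly in lam <= 1.
        ratio = poisson_central_moment(lam, p) ** (1 / p) / lam ** (1 / p)
        print(f"p={p}, lam={lam}: ||N-lam||_p / lam^(1/p) = {ratio:.3f}")
```

For p = 2 the ratio is exactly 1, since E(N − λ)² = λ for every Poisson(λ) variable.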
To formulate our Bichteler-Jacod inequalities we consider, for 1 ≤ p, q < ∞, the norms ‖φ‖_{D^p_{q,X}} := ‖φ‖_{L^p(Ω;L^q(R_+×J;X))}. Theorem 2.11 (Upper bounds and non-trivial martingale type). Let X be a martingale type s Banach space. If 1 < s ≤ p < ∞, then for any simple, adapted X-valued process φ and any B ∈ J, the L^p-norm of sup_{t>0} ‖I_{t,B}(φ)‖ is dominated, up to a constant depending only on p, s and X, by ‖φ‖_{D^p_{s,X} ∩ D^p_{p,X}}. (2.10) On the other hand, if 1 < p ≤ s then it is dominated by ‖φ‖_{D^p_{s,X} + D^p_{p,X}}. (2.11) Proof. Clearly, the process t ↦ ‖I_{t,B}(φ)‖ is a submartingale and therefore, by Doob's maximal inequality, it suffices to estimate ‖I_B(φ)‖_{L^p(Ω;X)}. (2.12) Let φ be as in (2.1), taking Remark 2.9 into account. Without loss of generality, we may assume that B = J. We write Ñ_{ij} := Ñ((t_i, t_{i+1}] × A_j) for brevity. The sub-σ-algebras F_{(i,j)} := σ(F_{t_i}; Ñ_{i1}, …, Ñ_{ij}), for i = 1, …, l and j = 1, …, m, (2.13) form a filtration if we equip the pairs (i, j) with the lexicographic ordering. We shall use (i, j) − 1 to denote the element preceding (i, j) in this ordering. If we define y_{ij} = Σ_k F_{ijk} x_{ijk}, then (d_{ij}) := (Ñ_{ij} y_{ij}) is a martingale difference sequence with respect to the filtration (F_{(i,j)})_{i,j} and I(φ) = Σ_{i,j} d_{ij}. Suppose first that s ≤ p < ∞. By Theorem 2.5 and Lemma 2.6 we estimate the martingale in terms of conditional moment quantities; similar estimates hold for the remaining terms. Putting our estimates together, the result in the case s ≤ p < ∞ now follows from (2.12).
Suppose next that 1 < p ≤ s, and let φ = φ_1 + φ_2 be a decomposition into simple, adapted processes as in (2.14). For α = 1, 2 write y_{ij,α} = Σ_k F_{ijk,α} x_{ijk} and let (d_{ij,α})_{i,j} denote the martingale difference sequence (Ñ_{ij} y_{ij,α})_{i,j}. We apply Theorem 2.5, Jensen's inequality and Lemma 2.8 to the first term on the right-hand side of (2.14), using also that vector-valued conditional expectations are contractive. To the second term on the right-hand side of (2.14) we apply Theorem 2.5, Lemma 2.10 and Lemma 2.8. By Lemma 2.7 the resulting quantities are dominated by the norms of φ_1 and φ_2 in D^p_{s,X} and D^p_{p,X}, respectively, up to ε. Since ε > 0 was arbitrary, we conclude in view of (2.12) that (2.11) holds.
Theorem 2.11 extends several known vector-valued Bichteler-Jacod inequalities in the literature. An estimate for X equal to a Hilbert space H and 2 ≤ p < ∞ was obtained in [27, Lemma 3.1] (see also [20] for an earlier version of this result for H = R^n). The estimate in (2.10) is slightly stronger in this case. In fact, we will see in Corollary 2.14 below that it cannot be improved. In [28, Lemma 4] a slightly weaker inequality than (2.10) was obtained in the special case X = L^s, p = s ≥ 2. Note, however, that the estimate in (2.10) is still suboptimal in this case; see Theorem 2.15 below. In [16] Hausenblas proved (2.10) in the special case p = s^n for some integer n ≥ 1. Finally, Theorem 2.11 has been obtained independently by Zhu using a different approach (see the work in progress [42]).
We now consider lower bounds for stochastic integrals of the form (2.3), based on the martingale cotype of the space. Lemma 2.12. Let X be a reflexive Banach space, let (Ω, F, P) be a probability space and let (S, Σ, µ) be a σ-finite measure space. Suppose 1 < p, p′, s, s′ < ∞ satisfy 1/p + 1/p′ = 1 and 1/s + 1/s′ = 1. Then (L^p(Ω; L^p(S; X)) + L^p(Ω; L^s(S; X)))* = L^{p′}(Ω; L^{p′}(S; X*)) ∩ L^{p′}(Ω; L^{s′}(S; X*)) isometrically, with the natural duality bracket. Proof. The reflexivity of X implies that for all 1 < q, r < ∞ we have (L^q(Ω; L^r(S; X)))* = L^{q′}(Ω; L^{r′}(S; X*)). The lemma now follows from the general fact from interpolation theory (see [2, Theorem 8]) that the dual of the sum of an interpolation couple is, isometrically, the intersection of the duals. In order to apply this lemma we recall (see [34]) that any Banach space with non-trivial martingale type or finite martingale cotype is uniformly convex and therefore reflexive. In fact, we will need only that L^{p′}(Ω; L^{p′}(S; X*)) ∩ L^{p′}(Ω; L^{s′}(S; X*)) is norming for L^p(Ω; L^p(S; X)) + L^p(Ω; L^s(S; X)); this is true for arbitrary Banach spaces X and can be proved with elementary means. Theorem 2.13 (Lower bounds and finite martingale cotype). Let X be a Banach space with martingale cotype 2 ≤ s < ∞. If s ≤ p < ∞ we have, for any simple, adapted X-valued process φ and any B ∈ J, ‖φ‖_{D^p_{s,X} ∩ D^p_{p,X}} ≲_{p,s,X} ‖I_B(φ)‖_{L^p(Ω;X)}. On the other hand, if 1 < p < s then ‖φ‖_{D^p_{s,X} + D^p_{p,X}} ≲_{p,s,X} ‖I_B(φ)‖_{L^p(Ω;X)}. (2.15) Proof. Let φ be the simple adapted process given in (2.1), taking Remark 2.9 into account. We may assume that B = J. Suppose first that s ≤ p < ∞. If we define y_{ij} = Σ_k F_{ijk} x_{ijk}, then (d_{ij}) := (Ñ_{ij} y_{ij}) is a martingale difference sequence with respect to the filtration defined in (2.13) and I(φ) = Σ_{i,j} d_{ij}. By Theorem 2.5, Lemma 2.6 and Lemma 2.8 we find ‖φ‖_{D^p_{s,X} ∩ D^p_{p,X}} ≲ ‖I(φ)‖_{L^p(Ω;X)}. We deduce the inequality in the case 1 < p < s by duality from Theorem 2.11. By Lemma 2.12, D^{p′}_{p′,X*} ∩ D^{p′}_{s′,X*} is norming for D^p_{s,X} + D^p_{p,X}. We let ⟨·, ·⟩ denote the associated duality bracket. Let ψ be an element of the algebraic tensor product with coefficients G_{lmn} ∈ L^∞(Ω), and let ψ̃ be the associated simple adapted process obtained by replacing each coefficient by its conditional expectation with respect to the appropriate σ-algebra. Since X has martingale cotype s and 1 < p < s, X* has martingale type s′ ∈ (1, 2] and s′ ≤ p′.
Therefore, we can subsequently apply Theorem 2.11 and Lemma 2.7 to obtain the dual estimate. By taking the supremum over all ψ as above, we conclude that (2.15) holds.
We obtain two-sided estimates for the L p -norm of the stochastic integral in the special case where X has both martingale type and cotype equal to 2. By Kwapień's theorem (see e.g. [1, Theorem 7.4.1]), such a space is isomorphic to a Hilbert space.
Corollary 2.14. Let H be a Hilbert space. If 2 ≤ p < ∞, then for any simple, adapted H-valued process φ and any B ∈ J, the L^p-norm of sup_{t>0} ‖I_{t,B}(φ)‖ is comparable to ‖φ‖_{D^p_{2,H} ∩ D^p_{p,H}}, with constants depending only on p. The estimates in Corollary 2.14 characterize the class of L^p-stochastically integrable Hilbert space-valued processes. Outside the setting of Hilbert spaces, the Bichteler-Jacod inequalities in Theorem 2.11 do not lead to optimal (i.e., two-sided) bounds for the stochastic integral. Indeed, this is already the case if X is an L^q-space with q ≠ 2. In this case the following optimal bounds were recently established in [13]. Let (S, Σ, µ) be any measure space and consider the square function norm obtained by taking the L^p(Ω; L^q(S))-norm of the pointwise square function (∫_{R_+×J} |φ|² dt dν)^{1/2}. Theorem 2.15 (Two-sided bounds for X = L^q(S) [13]). Let 1 < p, q < ∞. For any B ∈ J and any simple, adapted L^q(S)-valued process φ, the L^p-norm of sup_{t>0} ‖I_{t,B}(φ)‖ is comparable to ‖φ‖_{I_{p,L^q}}, where I_{p,L^q} is given by a suitable intersection and sum of the mixed norms D^p_{q}, D^p_{p} and the square function norm, depending on the relative position of p and q with respect to 2; see [13] for the precise expression. The result in Theorem 2.15 can be further extended to integrands taking values in a non-commutative L^q-space; see Section 7 of [13] for further details.
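Written out explicitly, the two-sided Hilbert space bound of Corollary 2.14 takes the following form for 2 ≤ p < ∞ (a hedged reconstruction in which the norms D^p_{2,H} and D^p_{p,H} are expanded):

```latex
\mathbb{E}\sup_{t>0}\Big\|\int_{(0,t]\times J}\phi\,\mathrm{d}\tilde N\Big\|_H^p
  \;\eqsim_{p}\;
  \mathbb{E}\Big(\int_0^\infty\!\!\int_J \|\phi\|_H^2\,\mathrm{d}\nu\,\mathrm{d}t\Big)^{p/2}
  \;+\; \mathbb{E}\int_0^\infty\!\!\int_J \|\phi\|_H^p\,\mathrm{d}\nu\,\mathrm{d}t .
```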

Maximal inequalities for Poisson stochastic convolutions
It is a well-known strategy to derive maximal L^p-inequalities for stochastic convolutions from Bichteler-Jacod inequalities by using a dilation of the semigroup. This approach was first utilized in [17]. A result similar to Theorem 3.3 has been obtained independently in [42] by different methods, and further ramifications are worked out there. Let us say that a strongly continuous semigroup S on a Banach space X has an isometric dilation on a Banach space Y if there exist an isomorphic embedding Q : X → Y, a bounded projection P : Y → QX and a strongly continuous group of isometries (U(t))_{t∈R} on Y such that QS(t) = PU(t)Q for all t > 0. Example 3.1. (i) By the Sz.-Nagy dilation theorem, any C_0-semigroup of contractions on a Hilbert space has an isometric dilation on a Hilbert space; (ii) any C_0-semigroup of positive contractions on an L^q-space, 1 < q < ∞, has an isometric dilation on an L^q-space by Fendler's dilation theorem; (iii) if X is a UMD space and A is an R-sectorial operator of type < π/2 on X, then the bounded analytic semigroup generated by −A allows an isometric dilation on the space γ(L^2(R), X) of all γ-radonifying operators from L^2(R) to X if and only if A has a bounded H^∞-calculus [15].
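The way a dilation enters can be made explicit. If QS(t) = PU(t)Q with (U(t)) a group of isometries, then formally, for a simple adapted process φ,

```latex
\int_0^t S(t-u)\,\phi(u)\,\mathrm{d}\tilde N
  \;=\; Q^{-1} P\, U(t) \int_0^t U(-u)\, Q\,\phi(u)\,\mathrm{d}\tilde N ,
```

so the supremum over t is controlled by a single stochastic integral of the process u ↦ U(−u)Qφ(u), to which the Bichteler-Jacod bounds of Section 2 apply, since the operators U(−u) are isometries on Y.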
We shall use the following simple approximation lemma.
Lemma 3.2. Fix 1 ≤ p, s < ∞ and let X be a Banach space. Suppose that φ : Ω × R_+ × J → X is a measurable process such that for any t ∈ R_+ and j ∈ J the map ω ↦ φ(ω, t, j) is F_t-measurable. If φ ∈ D^p_{s,X} ∩ D^p_{p,X}, then there is a sequence of simple, adapted processes φ_n converging to φ in D^p_{s,X} ∩ D^p_{p,X}. Proof. Since ν is a σ-finite measure, it suffices to show that φ1_{(0,T]×B} can be approximated in L^p(Ω; L^s((0,T]×B; X)) ∩ L^p(Ω; L^p((0,T]×B; X)) by simple, adapted processes, for any fixed T > 0 and set B ∈ J of finite measure. The approximation can then be carried out by a standard argument. Theorem 3.3. Fix 1 < s ≤ 2 and let X be a martingale type s Banach space. Let S be a bounded C_0-semigroup on X and suppose that S has an isometric dilation on a martingale type s space Y. If s ≤ p < ∞, then any simple, adapted X-valued process φ satisfies a maximal inequality for the stochastic convolution in terms of ‖φ‖_{D^p_{s,X} ∩ D^p_{p,X}}; if 1 < p < s, the corresponding estimate holds with the sum norm. Proof. In view of the identity I_{t,B}(φ) = I_{t,J}(φ1_B), there is no loss of generality in assuming that B = J. Let φ be a simple, adapted process. Since S is a bounded C_0-semigroup, the map (ω, u, j) ↦ 1_{(0,t]}(u)S(t − u)(φ(ω, u, j)) is strongly measurable and in D^p_{r,X} for r = p, s. Moreover, for every fixed u and j, ω ↦ 1_{(0,t]}(u)S(t − u)(φ(ω, u, j)) is F_u-measurable, and by Lemma 3.2 and Theorem 2.11 it is L^p-stochastically integrable. Since S has an isometric dilation on a martingale type s space Y, we can apply Theorem 2.11 to the dilated integrand. The integrand in the last line is L^p-stochastically integrable, so by Theorem 2.11 we obtain the asserted bound in terms of ‖φ‖_{D^p_{s,X} ∩ D^p_{p,X}}. In the case 1 < p < s it suffices to consider a decomposition φ = φ_1 + φ_2, where φ_1, φ_2 are simple adapted processes (see the reduction argument in the proof of Theorem 2.11), and to show the corresponding estimates for φ_1 and φ_2 separately. These inequalities follow using a dilation argument as above.
By applying Theorem 3.3 to the semigroups in (i) and (ii) of Example 3.1 we find improvements of the maximal inequalities obtained in [28] and [27]. Indeed, if X is a Hilbert space (so s = 2), then Theorem 3.3 improves upon [28, Lemma 4]. Since in this case our maximal estimates are already optimal for the trivial semigroup S(t) ≡ Id_H (cf. Corollary 2.14), they are the best possible. If X = L^q(S) (so that s = q) and q ≤ p, then we find a sharpened version of [27, Proposition 3.3]. To apply Theorem 3.3 to the semigroup generated by an R-sectorial operator satisfying the conditions in (iii) of Example 3.1, one should note that the space γ(L^2(R), X) of all γ-radonifying operators from L^2(R) to X is isometrically isomorphic to a closed subspace of L^2(Ω′; X) for a suitable probability space Ω′. Therefore, if X has (martingale) type s, then γ(L^2(R), X) has (martingale) type s as well.

The Poisson stochastic integral in UMD Banach spaces
As mentioned before, the estimates in Theorems 2.11 and 2.13 do not lead to an Itô isomorphism if X is not a Hilbert space. For UMD Banach spaces, however, we can formulate an 'abstract' Itô-type isomorphism which can serve as a basis for the development of a vector-valued Poisson Skorohod integral.
In what follows we let N be a Poisson random measure on a measurable space (E, E) with σ-finite intensity measure µ, and we write E_µ for the collection of all B ∈ E with µ(B) < ∞. A function f : E → X is called simple if it is of the form f = Σ_j 1_{B_j} x_j, where x_j ∈ X and the sets B_j ∈ E_µ are pairwise disjoint. For 1 ≤ p < ∞ we define the (compensated) Poisson p-norm of f by ‖f‖_{ν_p(E;X)} := ‖Σ_j Ñ(B_j) x_j‖_{L^p(Ω;X)}. It is a simple matter to check that this definition does not depend on the particular representation of f as a simple function and that ‖·‖_{ν_p(E;X)} defines a norm on the linear space of simple X-valued functions.
A Banach space X is called a UMD space if for some p ∈ (1, ∞) (equivalently, for all p ∈ (1, ∞)) there is a constant β ≥ 0 such that for all X-valued L^p-martingale difference sequences (d_n)_{n≥1} and all signs (ǫ_n)_{n≥1} one has ‖Σ_{n≥1} ǫ_n d_n‖_{L^p(Ω;X)} ≤ β ‖Σ_{n≥1} d_n‖_{L^p(Ω;X)}. The least admissible constant in this definition is called the UMD_p-constant of X and is denoted by β_{p,X}. It is well known that once the UMD_p property holds for one p ∈ (1, ∞), then it holds for all p ∈ (1, ∞); for proofs see [6,29]. For more information on UMD spaces we refer to the survey papers by Burkholder [7] and Rubio de Francia [37]. The Lebesgue spaces L^p(µ), 1 < p < ∞, are UMD spaces (with β_{p,L^p(µ)} = max{p, p′}). More generally, if X is a UMD space, then L^p(µ; X), 1 < p < ∞, is a UMD space (with β_{p,L^p(µ;X)} = β_{p,X}). Let us denote by L^p_F(Ω; ν_p(R_+ × J; X)) the closure in L^p(Ω; ν_p(R_+ × J; X)) of the space of all simple adapted processes in X.
We will see later in Proposition 6.4 that if X is UMD, then L^p_F(Ω; ν_p(R_+ × J; X)) is a complemented subspace of L^p(Ω; ν_p(R_+ × J; X)). By the Itô isomorphism, the stochastic integral extends uniquely to a bounded linear operator from L^p_F(Ω; ν_p(R_+ × J; X)) onto a closed linear subspace of L^p(Ω; X).
The reason for calling the presented Itô isomorphism 'abstract' is that the ν_p-norm is still a stochastic object which may be difficult to calculate in practice. However, the results in Section 2 identify the spaces ν_p(E; X) in several important cases. As before, E denotes a σ-finite measure space.
For a general Banach space X, Theorems 2.11 and 2.13 imply two continuous inclusions for ν_p(E; X). If X has martingale type 1 < s ≤ 2, then L^s(E; X) ∩ L^p(E; X) ↪ ν_p(E; X) if s ≤ p, and L^s(E; X) + L^p(E; X) ↪ ν_p(E; X) if p ≤ s. (4.1) Dually, if X has martingale cotype 2 ≤ s < ∞, then ν_p(E; X) ↪ L^s(E; X) ∩ L^p(E; X) if s ≤ p, and ν_p(E; X) ↪ L^s(E; X) + L^p(E; X) if p ≤ s. (4.2) In particular, since any UMD space has finite martingale cotype, we see that if X is a UMD space, then every element in ν_p(E; X) can be identified with an X-valued function on E.
In the next section, we shall use Theorem 4.5 to identify the UMD Poisson stochastic integral as a special instance of the Poisson Skorohod integral. Then, in the final Section 6 we will prove a Clark-Ocone type representation theorem for the UMD Poisson stochastic integral.

The Malliavin derivative
In the scalar-valued case there are various ways to extend the classical Malliavin calculus to the Poisson case. Very complete results can be found in the recent paper of Last and Penrose [22], to which we refer the reader for further references to this extensive subject.
Here we wish to extend the ideas developed in the previous sections to a vector-valued Poisson Malliavin calculus. For this purpose we shall adopt an approach which stays close to the standard approach in the Gaussian case as presented, for example, in Nualart's book [32], in that we define a Poisson Malliavin derivative directly in terms of a class of cylindrical functions associated with a Poisson random measure. In doing so we can essentially follow the lines of vector-valued Malliavin calculus in the Gaussian case as developed in [25,26].
We consider a probability space (Ω, F , P), and a Poisson random measure N defined on a measurable space (E, E ) with σ-finite intensity measure µ. We shall use the notation [·, ·] to denote the inner product in L 2 (E, µ).
It will be useful to employ standard multi-index notation. A random variable F : Ω → R is called cylindrical if it is of the form F = f(Ñ(B_1), …, Ñ(B_M)) (5.1) for some M ≥ 1, sets B_1, …, B_M ∈ E_µ, and a suitable function f. The real vector space of all cylindrical functions is denoted by C(Ω). We denote by C(Ω; X) = C(Ω) ⊗ X the collection of all vector-valued random variables F : Ω → X of the form F = Σ_{i=1}^n F_i ⊗ x_i, where n ≥ 1, F_i ∈ C(Ω), and x_i ∈ X for i = 1, …, n. The elements of C(Ω; X) will be called X-valued cylindrical functions.
Remark 5.2. In the sequel, when taking a function F ∈ C (Ω) of the form (5.1), we will always assume (possibly without explicit mentioning) that the sets B 1 , . . . , B M are pairwise disjoint. Clearly, this does not yield any loss of generality.
It is easy to see that this definition of the Malliavin derivative D does not depend on the particular representation of F. It should be compared to the one in [22], where an analogous construction was given on Poisson space. In the scalar-valued setting, an alternative (and equivalent) definition of the Malliavin derivative can be given in terms of a Fock space construction; for more details we refer to [24,33].
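For orientation: in the scalar-valued theory of [22] the Poisson Malliavin derivative acts as an 'add-one-cost' difference operator. For a cylindrical F = f(Ñ(B_1), …, Ñ(B_M)) the analogous formula, which we record here only as a heuristic, reads:

```latex
(DF)(\eta) \;=\; f\big(\tilde N(B_1) + \mathbf{1}_{B_1}(\eta), \dots, \tilde N(B_M) + \mathbf{1}_{B_M}(\eta)\big)
  \;-\; f\big(\tilde N(B_1), \dots, \tilde N(B_M)\big), \qquad \eta \in E ,
```

describing the effect of adding one point of the random measure at the location η.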
The following identity from Poisson stochastic analysis is well-known in the scalar-valued case (cf. [11, Eq. (I.23)]). For the convenience of the reader we supply a short proof.
Replacing j_m by j_m − 1 in the first summation, one obtains the desired identity. Theorem 5.6 (Closability). For all 1 < p < ∞, the operator D : C(Ω; X) ⊆ L^p(Ω; X) → L^p(Ω; ν_p(E; X)) is closable. Proof. The proof is based upon the fact that if Y is a Banach space, Z ⊆ Y* is a weak*-dense linear subspace, and f ∈ L^p(Ω; Y) is a function such that E⟨f, ζ⟩ = 0 for all ζ ∈ Z, then f = 0 in L^p(Ω; Y). We apply this to Y = ν_p(E; X) and Z the linear span of the X*-valued indicator functions 1_B ⊗ x* with B ∈ E_µ and x* ∈ X*. Fix 1 < p < ∞ and let (F_n) be a sequence in C(Ω; X) such that F_n → 0 in L^p(Ω; X) and DF_n → G in L^p(Ω; ν_p(E; X)) as n → ∞. We must prove that G = 0.
For each B ∈ E_µ and x* ∈ X*, using Proposition 5.5 and the fact that Ñ(B) ∈ L^q(Ω), where 1/p + 1/q = 1, we obtain E⟨G, 1_B ⊗ x*⟩ = 0. This being true for all B ∈ E_µ and x* ∈ X*, we conclude that G = 0.
By routine arguments one establishes the following density result. We denote by G the σ-algebra in Ω generated by Ñ.
Thanks to this lemma, the closure of D is densely defined as an operator from L^p(Ω, G; X) into L^p(Ω; ν_p(E; X)). With slight abuse of notation we will denote this closure by D again, or, if we want to be more precise, by D^X_p. The dense domain of this closure in L^p(Ω, G; X) is denoted by D^{1,p}(Ω; X). This is a Banach space endowed with the norm ‖F‖_{D^{1,p}(Ω;X)} := (‖F‖^p_{L^p(Ω;X)} + ‖DF‖^p_{L^p(Ω;ν_p(E;X))})^{1/p}.

The Skorohod integral.
As in the Gaussian case, the adjoint of the Poisson Malliavin derivative extends the Itô stochastic integral in a natural way. This anticipative extension of the stochastic integral, the so-called Skorohod integral, has been studied, in the Poisson setting, by many authors [8,11,19,22,33]. Here we will show that if X is a UMD space, the adjoint of the operator D introduced above extends the Itô integral of Section 4. From now on we assume that X is a UMD space. We begin by defining the divergence operator as the adjoint of the Malliavin derivative. It thus acts on random variables taking values in the dual space of ν p (E; X).
A word of explanation is needed at this point. The mapping Σ_n 1_{B_n} x_n ↦ Σ_n Ñ(B_n) x_n, where the sets B_n ∈ E_µ are disjoint, identifies ν_p(E; X) isometrically with a closed subspace of L^p(Ω; X). With this identification, D = D^X_p defines a densely defined and closed linear operator from L^p(Ω, G; X) into L^p(Ω; L^p(Ω; X)). Since X is UMD and therefore reflexive, the duals of L^p(Ω, G; X) and L^p(Ω; L^p(Ω; X)) may be identified with L^q(Ω, G; X*) and L^q(Ω; L^q(Ω; X*)). The adjoint operator D* is then a densely defined closed linear operator from L^q(Ω; L^q(Ω; X*)) to L^q(Ω, G; X*). We now define δ = δ^{X*}_q as the restriction of D* to L^q(Ω; ν_q(E; X*)). That this restricted operator is again densely defined will follow from the next lemma.
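By the definition of the adjoint, δ is characterized by an integration by parts identity, which we record schematically (with the duality pairings induced by the identifications above):

```latex
\mathbb{E}\,\big\langle G, \,\delta(\Phi)\big\rangle
  \;=\; \mathbb{E}\,\big\langle D G, \,\Phi\big\rangle ,
\qquad G \in \mathrm{D}^{1,p}(\Omega;X),\ \ \Phi \in \mathrm{Dom}(\delta^{X^*}_q),
```

where 1/p + 1/q = 1.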
Lemma 5.9. Let Ψ = Σ_i F_i ⊗ x*_i ∈ C(Ω; X*) be as in (5.2), and let B ∈ E_µ be disjoint from the sets used in the representation of the F_i. Then for all 1 < p < ∞ the element 1_B ⊗ Ψ belongs to the domain of δ. In particular, δ is densely defined.
Proof. Let G ∈ C(Ω) ⊗ X. The claim follows from Propositions 5.4 and 5.5 by a direct computation of the relevant duality pairing. Since X is UMD and therefore reflexive, we can regard the divergence operator as an operator δ = δ^X_p : L^p(Ω; ν_p(E; X)) → L^p(Ω, G; X), by considering the adjoint of D^{X*}_q when 1/p + 1/q = 1 and using the identification X = X**.
We return to the situation considered in Section 2 and take E = R_+ × J, where (J, J, ν) is a σ-finite measure space. Let N be a Poisson random measure on (R_+ × J, B(R_+) × J, dt × ν). As before we let F = (F_t)_{t>0} be the filtration generated by Ñ. We will show that in this setting the divergence operator δ is an extension of the Poisson stochastic integral I = I^X_p : L^p_F(Ω; ν_p(R_+ × J; X)) → L^p(Ω, G; X). We recall from Section 4 that L^p_F(Ω; ν_p(R_+ × J; X)) denotes the closure of all simple, adapted processes in L^p(Ω; ν_p(R_+ × J; X)). The following result shows that the divergence operator δ coincides with the Itô integral for adapted integrands, and hence can be viewed as a Skorohod integral for non-adapted processes.

Theorem 5.10. Let X be a UMD space and let 1 < p < ∞. Then L^p_F(Ω; ν_p(R_+ × J; X)) ⊆ D_p(δ), and for all φ ∈ L^p_F(Ω; ν_p(R_+ × J; X)), δ(φ) = I(φ). (5.3)
Proof. Suppose first that φ is a simple, adapted process of the form (2.1), with F_{ijk} ∈ C(Ω, F_{t_i}) for all i, j, k. By Lemma 5.9, φ ∈ D_p(δ), and so by linearity (5.3) holds. Since C(Ω, F_{t_i}) is dense in L^p(Ω, F_{t_i}) by Lemma 5.7 and δ is closed, we conclude that any simple, adapted process φ is in D_p(δ) and δ(φ) = I(φ). Finally, by density of the simple, adapted processes in L^p_F(Ω; ν_p(R_+ × J; X)) we find that L^p_F(Ω; ν_p(R_+ × J; X)) ⊆ D_p(δ) and (5.3) holds. Remark 5.11. It is important to emphasize that F denotes the natural filtration generated by Ñ. Indeed, in the proof of Theorem 5.10 we use that C(Ω, F_s) is dense in L^p(Ω, F_s) for any s ≥ 0.

A Clark-Ocone formula
In scalar-valued Poisson stochastic calculus, Clark-Ocone type representation theorems, which represent a G-measurable random variable as the stochastic integral of an adapted process defined in terms of its Malliavin derivative, have been obtained in various degrees of generality by many authors. We mention [12,21,35,41] and refer the reader to these works for further bibliographic references. All these papers are concerned with the real-valued case. To the best of our knowledge, the Banach space-valued case has not yet been considered in the literature. Here we shall present an extension of the Clark-Ocone theorem to the UMD space-valued Poisson stochastic integral of Section 4.
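For comparison, in the scalar-valued case the representation takes the familiar form

```latex
F \;=\; \mathbb{E}F \;+\; \int_{\mathbb{R}_+\times J} \mathbb{E}\big(D_{t,j}F \,\big|\, \mathcal{F}_t\big)\,\tilde N(\mathrm{d}t, \mathrm{d}j),
```

valid for suitable G-measurable F, with D the Poisson Malliavin derivative of Section 5. The vector-valued extension replaces the conditional expectation inside the integral by the projection P_F constructed below.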
Following the approach of [26], our first aim is to construct a projection P_F in the space L^p(Ω; ν_p(R_+ × J; X)) onto the subspace L^p_F(Ω; ν_p(R_+ × J; X)) introduced in Section 4. Formally, this projection is given by (P_F φ)(t, j) = E(φ(t, j) | F_t). The main issue is to give a rigorous interpretation of this formula in the present context. For this purpose we shall need a Poisson analogue of the notion of R-boundedness. Definition 6.1 (ν_p-boundedness). Let 1 ≤ p < ∞ and let X, Y be Banach spaces. A collection of bounded linear operators T ⊆ L(X, Y) is said to be ν_p-bounded if there exists a constant C > 0 such that the estimate ‖Σ_j Ñ(B_j) T_j x_j‖_{L^p(Ω;Y)} ≤ C ‖Σ_j Ñ(B_j) x_j‖_{L^p(Ω;X)} holds for every finite collection of pairwise disjoint sets B_j ∈ E_µ, every finite sequence x_j ∈ X, and every finite sequence T_j ∈ T.
If we replace the random variables Ñ(B_j) by independent Rademacher variables we obtain the related notion of R-boundedness; in this case the smallest admissible constant C is called the R-bound. Proposition 6.2. Let 1 ≤ p < ∞. If a collection of bounded linear operators T ⊆ L(X, Y) is R-bounded with R-bound R(T), then it is ν_p-bounded as well.
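Explicitly, with (r_j) an independent Rademacher sequence, R-boundedness of T requires

```latex
\Big\| \sum_{j} r_j\, T_j x_j \Big\|_{L^p(\Omega;Y)}
  \;\le\; C\, \Big\| \sum_{j} r_j\, x_j \Big\|_{L^p(\Omega;X)}
```

for all finite sequences T_j ∈ T and x_j ∈ X; by Kahane's inequalities, the notion does not depend on p.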
Given a partition π = {0 = t_0 < t_1 < t_2 < …} of R_+, let P^π_F denote the mapping defined on simple processes by (P^π_F φ)(t, j) := Σ_i 1_{(t_i, t_{i+1}]}(t) E(φ(t, j) | F_{t_i}). Proposition 6.4. Let X be a UMD space. For each partition π the mapping P^π_F has a unique extension to a bounded projection in the space L^p(Ω; ν_p(R_+ × J; X)). These projections are uniformly bounded, and P_F F = lim_π P^π_F F defines a bounded projection in L^p(Ω; ν_p(R_+ × J; X)) onto L^p_F(Ω; ν_p(R_+ × J; X)). The above limit is taken along the net of all partitions π of R_+, which are partially ordered by refinement.
Proof. By Example 6.3 the collection of conditional expectation operators E(·|F_t), t ≥ 0, is ν_p-bounded on L^p(Ω; X). Hence, for a simple process F, ‖P^π_F F‖_{ν_p(R_+×J;L^p(Ω;X))} ≲ ‖F‖_{ν_p(R_+×J;L^p(Ω;X))}.
As a consequence, the operator P^π_F has a unique extension to a bounded operator on L^p(Ω; ν_p(R_+ × J; X)), with norm bounded from above by a constant depending only on p and X. Obviously, this operator is a projection. Moreover, if π′ ⊆ π, then P^π_F ∘ P^{π′}_F = P^{π′}_F. This implies that the net (P^π_F)_π is upward directed. Since it is also uniformly bounded, the strong operator limit P_F := lim_π P^π_F exists in L^p(Ω; ν_p(R_+ × J; X)) and defines a projection onto the closure of ⋃_π P^π_F(L^p(Ω; ν_p(R_+ × J; X))), which is L^p_F(Ω; ν_p(R_+ × J; X)).
The following lemma will be useful in the proof of Theorem 6.6.
Lemma 6.5. Let X be a UMD space and let 1 < p, q < ∞ satisfy 1/p + 1/q = 1. For all random variables Φ ∈ L^p_F(Ω; ν_p(R_+ × J; X)) and Ψ ∈ L^q_F(Ω; ν_q(R_+ × J; X*)) we have E⟨I(Φ), I(Ψ)⟩ = E ∫_{R_+×J} ⟨Φ(t, j), Ψ(t, j)⟩ ν(dj) dt. Proof. When Φ and Ψ are simple adapted processes the result follows by direct computation. The general case follows by approximation.