A Clark-Ocone formula in UMD Banach spaces

Let H be a separable real Hilbert space and let F = (F_t)_{t\in [0,T]} be the augmented filtration generated by an H-cylindrical Brownian motion W_H on [0,T]. We prove that if E is a UMD Banach space, 1\leq p<\infty, and f\in D^{1,p}(\Omega;E) is F_T-measurable, then f = \E f + \int_0^T P_F(Df)\,dW_H, where D is the Malliavin derivative and P_F is the projection onto the F-adapted elements in a suitable Banach space of L^p-stochastically integrable L(H,E)-valued processes.


Introduction
A classical result of Clark [5] and Ocone [17] asserts that if F = (F_t)_{t∈[0,T]} is the augmented filtration generated by a Brownian motion (W(t))_{t∈[0,T]} on a probability space (Ω, F, P), then every F_T-measurable random variable F ∈ D^{1,p}(Ω), 1 < p < ∞, admits the representation

F = E(F) + ∫_0^T E(D_t F | F_t) dW(t),

where D_t F is the Malliavin derivative of F at time t. An extension to F_T-measurable random variables F ∈ D^{1,1}(Ω) was subsequently given by Karatzas, Ocone, and Li [10]. The Clark-Ocone formula is used in mathematical finance to obtain explicit expressions for hedging strategies.
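As a concrete illustration (our own sketch, not taken from the paper), the classical formula can be checked numerically for the functional F = W(T)^2, for which D_t F = 2W(T), E(D_t F | F_t) = 2W(t), and E(F) = T; the stochastic integral is approximated by a left-point Riemann sum on a single simulated path.

```python
import numpy as np

# Numerical check of the classical Clark-Ocone formula for F = W(T)^2
# (our own illustration). Here D_t F = 2 W(T), E(D_t F | F_t) = 2 W(t),
# and E F = T, so the formula reads F = T + int_0^T 2 W(t) dW(t).

rng = np.random.default_rng(0)
T, n = 1.0, 200_000
dt = T / n
dW = rng.normal(0.0, np.sqrt(dt), n)             # Brownian increments
W = np.concatenate(([0.0], np.cumsum(dW)))       # W(t_0), ..., W(t_n)

F = W[-1] ** 2                                   # the functional F = W(T)^2
ito = np.sum(2.0 * W[:-1] * dW)                  # left-point (Ito) Riemann sum
reconstruction = T + ito                         # E F + integral of the projection

print(abs(F - reconstruction))                   # small discretization error
```

The left-point sum is essential: it is exactly the discrete analogue of integrating the adapted projection E(D_t F | F_t).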
The aim of this note is to extend the above results to the infinite-dimensional setting using the theory of stochastic integration of L (H , E)-valued processes with respect to H -cylindrical Brownian motions, developed recently by Veraar, Weis, and the second named author [15]. Here, H is a separable Hilbert space and E is a UMD Banach space (the definition is recalled below).
For this purpose we study the properties of the Malliavin derivative D of smooth E-valued random variables with respect to an isonormal process W on a separable Hilbert space H. As it turns out, D can be naturally defined as a closed operator acting from L p (Ω; E) to L p (Ω; γ(H, E)), where γ(H, E) is the operator ideal of γ-radonifying operators from a Hilbert space H to E. Via trace duality, the dual object is the divergence operator, which is a closed operator acting from L p (Ω; γ(H, E)) to L p (Ω; E). In the special case where H = L 2 (0, T ; H ) for another Hilbert space H , the divergence operator turns out to be an extension of the UMD-valued stochastic integral of [15].
The first two main results, Theorems 6.6 and 6.7, generalize the Clark-Ocone formula for Hilbert spaces E and exponent p = 2 as presented in Carmona and Tehranchi [4,Theorem 5.3] to UMD Banach spaces and exponents 1 < p < ∞. The extension to p = 1 is contained in our Theorem 7.1.
Extensions of the Clark-Ocone formula to infinite-dimensional settings different from the one considered here have been obtained by various authors, among them Mayer-Wolf and Zakai [13,14], Osswald [18] in the setting of abstract Wiener spaces and de Faria, Oliveira, Streit [7] and Aase, Øksendal, Privault, Ubøe [1] in the setting of white noise analysis. Let us also mention the related papers [11,12].
Acknowledgment. Part of this work was done while the authors visited the University of New South Wales (JM) and the Australian National University (JvN). They thank Ben Goldys at UNSW and Alan McIntosh at ANU for their kind hospitality.

Preliminaries
We begin by recalling some well-known facts concerning γ-radonifying operators and UMD Banach spaces.
Let H be a separable real Hilbert space and let (γ_n)_{n≥1} be a sequence of independent standard Gaussian random variables on a probability space (Ω, F, P). A bounded linear operator R : H → E is called γ-radonifying if for some (equivalently, for every) orthonormal basis (h_n)_{n≥1} of H the Gaussian sum Σ_{n≥1} γ_n Rh_n converges in L^2(Ω; E). Endowed with the norm

‖R‖_{γ(H,E)} := (E ‖Σ_{n≥1} γ_n Rh_n‖^2)^{1/2},

the space γ(H, E) of all γ-radonifying operators from H to E is a Banach space. For finite rank operators R ∈ γ(H, E) and S ∈ γ(H, E*), trace duality gives the pairing ⟨R, S⟩ := Σ_{n≥1} ⟨Rh_n, Sh_n⟩. Since the finite rank operators are dense in γ(H, E) and γ(H, E*), we obtain a natural contractive injection

(1) γ(H, E*) ↪ (γ(H, E))*.

A Banach space E is called a UMD(p) space, 1 < p < ∞, if there exists a constant β ≥ 0 such that for all finite E-valued L^p-martingale difference sequences (d_j)_{j=1}^n and all signs ε_1, ..., ε_n ∈ {−1, 1} we have

E ‖Σ_{j=1}^n ε_j d_j‖^p ≤ β^p E ‖Σ_{j=1}^n d_j‖^p.

Using, for instance, Burkholder's good λ-inequalities, it can be shown that if E is a UMD(p) space for some 1 < p < ∞, then it is a UMD(p) space for all 1 < p < ∞, and henceforth a space with this property will simply be called a UMD space.
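For finite-dimensional E the γ-radonifying norm is explicitly computable: assuming the norm ‖R‖_{γ(H,E)}^2 = E‖Σ_n γ_n Rh_n‖^2, independence of the γ_n gives ‖R‖_{γ(H,R^d)} = ‖R‖_{HS}, the Hilbert-Schmidt norm. A small numerical sketch (our own illustration, not part of the text):

```python
import numpy as np

# For E = R^d the gamma-radonifying norm of R : H -> R^d equals its
# Hilbert-Schmidt (Frobenius) norm: by independence of the gamma_n,
# E || sum_n gamma_n R h_n ||^2 = sum_n || R h_n ||^2  (our own check).

rng = np.random.default_rng(1)
R = rng.normal(size=(3, 5))                       # operator H = R^5 -> E = R^3

# sum of || R h_n ||^2 over the standard orthonormal basis of R^5
basis_sum = sum(np.linalg.norm(R[:, n]) ** 2 for n in range(5))
hs_norm_sq = np.linalg.norm(R, 'fro') ** 2        # Hilbert-Schmidt norm squared

print(np.isclose(basis_sum, hs_norm_sq))          # True
```

For infinite-dimensional E the norm genuinely depends on the geometry of E, which is what makes γ(H, E) the right substitute for the Hilbert-Schmidt class.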
Examples of UMD spaces are all Hilbert spaces and the spaces L p (S) for 1 < p < ∞ and σ-finite measure spaces (S, Σ, µ). If E is a UMD space, then L p (S; E) is a UMD space for 1 < p < ∞. For an overview of the theory of UMD spaces and its applications in vector-valued stochastic analysis and harmonic analysis we recommend Burkholder's review article [3].
Below we shall need the fact that if E is a UMD space, then trace duality establishes an isomorphism of Banach spaces γ(H, E * ) ≃ (γ(H, E)) * .
As we shall briefly explain, this is a consequence of the fact that every UMD space is K-convex.
Let (γ_n)_{n≥1} be a sequence of independent standard Gaussian random variables on a probability space (Ω, F, P). For a random variable X ∈ L^2(Ω; E) we define

π^E_N X := Σ_{n=1}^N γ_n E(γ_n X), N ≥ 1.

The space E is called K-convex if K(E) := sup_{N≥1} ‖π^E_N‖ is finite. In this situation, π^E := lim_{N→∞} π^E_N defines a projection on L^2(Ω; E) of norm ‖π^E‖ = K(E). It is easy to see that E is K-convex if and only if its dual E* is K-convex, in which case one has K(E) = K(E*). For more information we refer to the book of Diestel, Jarchow, Tonge [8].
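A hedged finite-dimensional sketch (our own illustration, assuming the standard definition π^E_N X := Σ_{n≤N} γ_n E(γ_n X)): for X = γ^2 + 2γ depending on a single Gaussian variable, E(γX) = E(γ^3) + 2E(γ^2) = 2, so the first-chaos projection is π_1 X = 2γ. The coefficient is computed by Gauss-Hermite quadrature.

```python
import numpy as np

# First-chaos projection of X = gamma^2 + 2*gamma, one Gaussian variable
# (our own sketch, assuming pi_N X = sum_{n<=N} gamma_n E(gamma_n X)):
# E(gamma X) = E(gamma^3) + 2 E(gamma^2) = 2, so pi_1 X = 2*gamma.

x, w = np.polynomial.hermite.hermgauss(40)        # nodes/weights for weight e^{-x^2}

def gauss_mean(phi):
    # E[phi(Z)] for Z ~ N(0,1), via the substitution z = sqrt(2) x
    return np.sum(w * phi(np.sqrt(2.0) * x)) / np.sqrt(np.pi)

coeff = gauss_mean(lambda g: g * (g ** 2 + 2.0 * g))   # E(gamma * X)
print(round(coeff, 10))                                # 2.0
```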
The next result from [19] (see also [9]) shows that if E is K-convex, the inclusion (1) is actually an isomorphism. The main step is to realize that K-convexity implies that the ranges of π^E and π^{E*} are canonically isomorphic as Banach spaces. This isomorphism is then used to represent elements of (γ(H, E))* by elements of γ(H, E*).
Remark 2.2. Let us comment on the role of the UMD property in this paper. The UMD property is crucial for two reasons. First, it implies the L p -boundedness of the vector-valued stochastic integral. This fact is used at various places (Lemma 5.2, Theorem 5.4). Second, the UMD property is used to obtain the boundedness of the adapted projection (Lemma 6.5). The results in Sections 3 and 4 are valid for arbitrary Banach spaces.

The Malliavin derivative
Throughout this note, (Ω, F, P) is a complete probability space, H is a separable real Hilbert space, and W : H → L^2(Ω) is an isonormal Gaussian process, i.e., W is a bounded linear operator from H to L^2(Ω) such that the random variables W(h) are centred Gaussian and satisfy

E(W(g)W(h)) = [g, h]_H, g, h ∈ H.

A smooth random variable is a function F : Ω → R of the form

F = f(W(h_1), ..., W(h_n))

with f ∈ C^∞_b(R^n) and h_1, ..., h_n ∈ H. Here, C^∞_b(R^n) denotes the vector space of all bounded real-valued C^∞-functions on R^n having bounded derivatives of all orders. We say that F is compactly supported if f is compactly supported. The collections of all smooth random variables and compactly supported smooth random variables are denoted by S(Ω) and S_c(Ω), respectively.
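The simplest isonormal process is obtained for H = R^m: with a standard Gaussian vector Z, the map W(h) := ⟨h, Z⟩ is linear, each W(h) is centred Gaussian, and E(W(g)W(h)) = [g, h]_H. The Monte Carlo sketch below (our own illustration, fixed seed and loose tolerance) confirms the covariance identity.

```python
import numpy as np

# The simplest isonormal process: H = R^2, Z a standard Gaussian vector,
# W(h) := <h, Z>. We confirm E(W(g) W(h)) = <g, h> by Monte Carlo
# (our own illustration).

rng = np.random.default_rng(2)
g, h = np.array([1.0, 2.0]), np.array([3.0, -1.0])

Z = rng.normal(size=(1_000_000, 2))               # samples of the Gaussian vector
Wg, Wh = Z @ g, Z @ h                             # samples of W(g) and W(h)

estimate = float(np.mean(Wg * Wh))
print(np.dot(g, h), estimate)                     # 1.0 and approximately 1.0
```

The case H = L^2(0, T; H) with W(1_{(0,t)} ⊗ h) = W_H(t)h, used from Section 5 on, is the infinite-dimensional analogue of this construction.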
Let E be an arbitrary real Banach space and let 1 ≤ p < ∞. Noting that S_c(Ω) is dense in L^p(Ω) and that L^p(Ω) ⊗ E is dense in L^p(Ω; E), we see that S_c(Ω) ⊗ E is dense in L^p(Ω; E). The Malliavin derivative of an E-valued smooth random variable of the form F = f(W(h_1), ..., W(h_n)) ⊗ x, with f ∈ C^∞_b(R^n), h_1, ..., h_n ∈ H and x ∈ E, is defined by

DF := Σ_{j=1}^n ∂_j f(W(h_1), ..., W(h_n)) ⊗ (h_j ⊗ x),

where ∂_j denotes the j-th partial derivative and h_j ⊗ x ∈ γ(H, E) is the rank one operator h ↦ [h, h_j]_H x. The definition extends to S(Ω) ⊗ E by linearity. The following result is the simplest case of the integration by parts formula. We omit the proof, which is the same as in the scalar-valued case [16, Lemma 1.2.1].
A straightforward calculation shows that the following product rule holds for F ∈ S(Ω) ⊗ E and G ∈ S(Ω) ⊗ E*:

(2) D⟨F, G⟩ = ⟨DF, G⟩ + ⟨F, DG⟩.

On the left-hand side, ⟨·, ·⟩ denotes the duality between E and E*, which is evaluated pointwise on Ω. In the first term on the right-hand side, the H-valued pairing ⟨·, ·⟩ between γ(H, E) and E* is defined by ⟨R, x*⟩ := R*x*. Similarly, the second term contains the H-valued pairing between E and γ(H, E*), which is defined by ⟨x, S⟩ := S*x, thereby considering x as an element of E**.
For scalar-valued functions F ∈ S(Ω) we may identify DF ∈ L^2(Ω; γ(H, R)) with the classical Malliavin derivative DF ∈ L^2(Ω; H). Using this identification we obtain the following product rule for F ∈ S(Ω) and G ∈ S(Ω) ⊗ E:

(3) D(FG) = F DG + DF ⊗ G.

An application of Lemma 3.2 to the product ⟨F, G⟩ yields the following integration by parts formula for F ∈ S(Ω) ⊗ E and G ∈ S(Ω) ⊗ E*: for all h ∈ H,

(4) E⟨DF(h), G⟩ = E(W(h)⟨F, G⟩) − E⟨F, DG(h)⟩.

From the identity (4) we obtain the following proposition.

Proposition 3.3. For all 1 ≤ p < ∞, the Malliavin derivative D is closable as an operator from L^p(Ω; E) into L^p(Ω; γ(H, E)).
Proof. Let (F_n) be a sequence in S(Ω) ⊗ E such that F_n → 0 in L^p(Ω; E) and DF_n → X in L^p(Ω; γ(H, E)) as n → ∞. We must prove that X = 0.
Fix h ∈ H and let V_h denote a subspace of S(Ω) ⊗ E* which is weak*-dense in (L^p(Ω; E))*. Fix G ∈ V_h. Using (4) and the fact that the mapping Y ↦ E⟨Y(h), G⟩ defines a bounded linear functional on L^p(Ω; γ(H, E)), we obtain

E⟨X(h), G⟩ = lim_{n→∞} E⟨DF_n(h), G⟩ = lim_{n→∞} E(W(h)⟨F_n, G⟩) − E⟨F_n, DG(h)⟩.

Since W(h)G and DG(h) are bounded, it follows that this limit equals zero. Since V_h is weak*-dense in (L^p(Ω; E))*, we obtain that X(h) vanishes almost surely. Now choose an orthonormal basis (h_j)_{j≥1} of H. It follows that almost surely we have X(h_j) = 0 for all j ≥ 1. Hence X = 0 almost surely.
With a slight abuse of notation we will denote the closure of D again by D. The domain of this closure in L^p(Ω; E) is denoted by D^{1,p}(Ω; E). This is a Banach space endowed with the norm

‖F‖_{D^{1,p}(Ω;E)} := (‖F‖^p_{L^p(Ω;E)} + ‖DF‖^p_{L^p(Ω;γ(H,E))})^{1/p}.

We write D^{1,p}(Ω) := D^{1,p}(Ω; R).
As an immediate consequence of the closability of the Malliavin derivative we note that the identities (2), (3), (4) extend to larger classes of functions. This fact will not be used in the sequel.
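In the scalar case E = R with H = R, the integration by parts formula (4) reduces to classical Gaussian integration by parts, E[f'(Z)g(Z)] = E[Z f(Z)g(Z)] − E[f(Z)g'(Z)] for a standard Gaussian Z. The sketch below (our own check, not from the text) verifies this one-dimensional identity by Gauss-Hermite quadrature.

```python
import numpy as np

# One-dimensional check of the integration by parts formula (4):
#   E[f'(Z) g(Z)] = E[Z f(Z) g(Z)] - E[f(Z) g'(Z)],  Z ~ N(0,1)
# (our own verification by Gauss-Hermite quadrature).

x, w = np.polynomial.hermite.hermgauss(60)        # nodes/weights for weight e^{-x^2}

def gauss_mean(phi):
    # E[phi(Z)] for Z ~ N(0,1), via the substitution z = sqrt(2) x
    return np.sum(w * phi(np.sqrt(2.0) * x)) / np.sqrt(np.pi)

f, df = np.sin, np.cos
g, dg = np.cos, lambda z: -np.sin(z)

lhs = gauss_mean(lambda z: df(z) * g(z))
rhs = gauss_mean(lambda z: z * f(z) * g(z)) - gauss_mean(lambda z: f(z) * dg(z))
print(np.isclose(lhs, rhs))                       # True
```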
For the moment let D denote the Malliavin derivative on L^q(Ω; E*), 1/p + 1/q = 1, which is a densely defined closed operator with domain D^{1,q}(Ω; E*) and taking values in L^q(Ω; γ(H, E*)). The divergence operator δ is the part of the adjoint operator D* in L^p(Ω; γ(H, E)) mapping into L^p(Ω; E). Explicitly, the domain dom_p(δ) consists of those X ∈ L^p(Ω; γ(H, E)) for which there exists an F_X ∈ L^p(Ω; E) such that

E⟨X, DG⟩ = E⟨F_X, G⟩ for all G ∈ D^{1,q}(Ω; E*).

The function F_X, if it exists, is uniquely determined, and we define δ(X) := F_X. The divergence operator δ is easily seen to be closed, and the next lemma shows that it is also densely defined: for all f ∈ S(Ω) and R ∈ γ(H, E) we have f ⊗ R ∈ dom_p(δ) and

δ(f ⊗ R) = Σ_{j≥1} W(h_j) f ⊗ Rh_j − R(Df).

Here (h_j)_{j≥1} denotes an arbitrary orthonormal basis of H.
Proof. For f ∈ S(Ω), R ∈ γ(H, E), and G ∈ S(Ω) ⊗ E* we obtain, using the integration by parts formula (4) (or Proposition 3.4(iii)),

E⟨f ⊗ R, DG⟩ = E⟨Σ_{j≥1} W(h_j) f ⊗ Rh_j − R(Df), G⟩.

The sum Σ_{j≥1} W(h_j) f ⊗ Rh_j converges in L^p(Ω; E). This follows from the Kahane-Khintchine inequalities and the fact that (W(h_j))_{j≥1} is a sequence of independent standard Gaussian variables; note that the function f is bounded.
Using an extension of Meyer's inequalities, for UMD spaces E and 1 < p < ∞ it can be shown that δ extends to a bounded operator from D 1,p (Ω; γ(H, E)) to L p (Ω; E). For details we refer to [11].
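In the classical scalar theory, the divergence of u = f(Z) with a single standard Gaussian Z is δ(u) = Z f(Z) − f'(Z), and the duality defining δ reads E[δ(u)g(Z)] = E[f(Z)g'(Z)]. The sketch below (our own illustration; the polynomial f lies outside C^∞_b, but the identities remain exact) checks this by quadrature.

```python
import numpy as np

# One-dimensional divergence: for u = f(Z) with Z ~ N(0,1) one has
# delta(u) = Z f(Z) - f'(Z), and the defining duality reads
#   E[delta(u) g(Z)] = E[f(Z) g'(Z)]
# (our own check; with f(z) = z^3, g(z) = z^2 both sides equal 6).

x, w = np.polynomial.hermite.hermgauss(60)

def gauss_mean(phi):
    # E[phi(Z)] for Z ~ N(0,1); exact for polynomials of degree < 120
    return np.sum(w * phi(np.sqrt(2.0) * x)) / np.sqrt(np.pi)

f, df = lambda z: z ** 3, lambda z: 3.0 * z ** 2
g, dg = lambda z: z ** 2, lambda z: 2.0 * z

lhs = gauss_mean(lambda z: (z * f(z) - df(z)) * g(z))   # E[delta(u) g(Z)]
rhs = gauss_mean(lambda z: f(z) * dg(z))                # E[f(Z) g'(Z)]
print(round(lhs, 8), round(rhs, 8))                     # 6.0 6.0
```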

The Skorokhod integral
We shall now assume that H = L 2 (0, T ; H ), where T is a fixed positive real number and H is a separable real Hilbert space. We will show that if the Banach space E is a UMD space, the divergence operator δ is an extension of the stochastic integral for adapted L (H , E)-valued processes constructed recently in [15]. Let us start with a summary of its construction.
Let W_H denote the H-cylindrical Brownian motion defined by W_H(t)h := W(1_{(0,t)} ⊗ h), and let F = (F_t)_{t∈[0,T]} be the augmented filtration it generates. Following [15] we say that a process X : (0, T) × Ω → γ(H, E) is an elementary adapted process with respect to the filtration F if it is of the form

(5) X(t, ω) = Σ_{i=1}^n Σ_{j=1}^m Σ_{k=1}^l 1_{(t_{i−1}, t_i] × A_{ij}}(t, ω) h_k ⊗ x_{ijk},

where 0 ≤ t_0 < · · · < t_n ≤ T, the sets A_{ij} ∈ F_{t_{i−1}} are disjoint for each j, and h_1, ..., h_l ∈ H are orthonormal. The stochastic integral with respect to W_H of such a process is defined by

I(X) := Σ_{i=1}^n Σ_{j=1}^m Σ_{k=1}^l 1_{A_{ij}} (W_H(t_i)h_k − W_H(t_{i−1})h_k) x_{ijk}.

Elementary adapted processes define elements of L^p(Ω; γ(L^2(0, T; H), E)) in a natural way. The closure of these elements in L^p(Ω; γ(L^2(0, T; H), E)) is denoted by L^p_F(Ω; γ(L^2(0, T; H), E)). Moreover, for all X ∈ L^p_F(Ω; γ(L^2(0, T; H), E)) we have the two-sided estimate

‖I(X)‖_{L^p(Ω;E)} ≂ ‖X‖_{L^p(Ω;γ(L^2(0,T;H),E))},

with constants only depending on p and E.
A consequence of this result is the following lemma, which will be useful in the proof of Theorem 6.6.
Proof. When X and Y are elementary adapted the result follows by direct computation. The general case follows from Proposition 5.1 applied to E and E * , noting that E * is a UMD space as well.
In the next approximation result we identify L 2 (0, t; H ) with a closed subspace of L 2 (0, T ; H ). The simple proof is left to the reader.
The next result shows that the divergence operator δ is an extension of the stochastic integral I. This means that δ is a vector-valued Skorokhod integral.
By linearity, it follows that the elementary adapted processes of the form (5) with t 0 > 0 are contained in dom p (δ) and that I and δ coincide for such processes.
To show that this equality extends to all X ∈ L p F (Ω; γ(L 2 (0, T ; H ), E)) we take a sequence X n of elementary adapted processes of the above form converging to X. Since I is a bounded operator from L p F (Ω; γ(L 2 (0, T ; H ), E)) into L p (Ω; E), it follows that δ(X n ) = I(X n ) → I(X) as n → ∞. The fact that δ is closed implies that X ∈ dom p (δ) and δ(X) = I(X).
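For E = R and H = R the two-sided estimate for I reduces to the Itô isometry E|I(X)|^2 = ∫_0^T |X(t)|^2 dt. The Monte Carlo sketch below (our own illustration) checks this for a deterministic step integrand, the simplest kind of elementary adapted process.

```python
import numpy as np

# Ito isometry, the scalar case of the two-sided estimate: for the step
# integrand X = sum_i c_i 1_{(t_{i-1}, t_i]} the integral I(X) = sum_i c_i dW_i
# satisfies E |I(X)|^2 = int_0^T |X(t)|^2 dt  (our own Monte Carlo check).

rng = np.random.default_rng(3)
t = np.array([0.0, 0.3, 0.7, 1.0])                # partition of [0, T]
c = np.array([1.0, -2.0, 0.5])                    # step values of X

dW = rng.normal(0.0, np.sqrt(np.diff(t)), size=(500_000, 3))
I = dW @ c                                        # samples of I(X)

lhs = float(np.mean(I ** 2))                      # E |I(X)|^2
rhs = float(np.sum(c ** 2 * np.diff(t)))          # squared L^2(0,T) norm of X
print(rhs, lhs)                                   # 1.975 and approximately 1.975
```

In a general UMD space the exact isometry fails, which is why the theorem only delivers a two-sided estimate with constants depending on p and E.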

A Clark-Ocone formula
Our next aim is to prove that the space L p F (Ω; γ(L 2 (0, T ; H ), E)), which has been introduced in the previous section, is complemented in L p (Ω; γ(L 2 (0, T ; H ), E)). For this we need a number of auxiliary results. Before we can state these we need to introduce some terminology.
Let (γ_j)_{j≥1} be a sequence of independent standard Gaussian random variables. Recall that a collection T ⊆ L(E, F) of bounded linear operators between Banach spaces E and F is said to be γ-bounded if there exists a constant C ≥ 0 such that

E ‖Σ_{j=1}^n γ_j T_j x_j‖^2 ≤ C^2 E ‖Σ_{j=1}^n γ_j x_j‖^2

for all n ≥ 1 and all choices of T_1, ..., T_n ∈ T and x_1, ..., x_n ∈ E. The least admissible constant C is called the γ-bound of T, notation γ(T).

Proposition 6.1. Let T be a γ-bounded subset of L(E, F) and let H be a separable real Hilbert space. For each T ∈ T let T̃ ∈ L(γ(H, E), γ(H, F)) be defined by T̃R := T ∘ R. Then the collection T̃ := {T̃ : T ∈ T} is γ-bounded and γ(T̃) = γ(T).

Proof. Let (γ_j)_{j≥1} and (γ̃_j)_{j≥1} be two sequences of independent standard Gaussian random variables, on probability spaces (Ω, F, P) and (Ω̃, F̃, P̃) respectively. By the Fubini theorem, the defining inequality for the collection T̃ reduces to the one for T. This proves the inequality γ(T̃) ≤ γ(T). The reverse inequality holds trivially.
The next proposition is a result of Bourgain [2], known as the vector-valued Stein inequality; we refer to [6, Proposition 3.8] for a detailed proof. We continue with a multiplier result due to Kalton and Weis [9]. In its formulation we use the observation that every step function f : (0, T) → γ(H, E) defines an element R_f ∈ γ(L^2(0, T; H), E) by the formula

R_f g := ∫_0^T f(t) g(t) dt, g ∈ L^2(0, T; H).

Since R_f determines f uniquely almost everywhere, in what follows we shall always identify R_f and f. Here we identified M(t) ∈ L(E, F) with M̃(t) ∈ L(γ(H, E), γ(H, F)) as in Proposition 6.1.
The next result is taken from [15]. After these preparations we are ready to state the result announced above. We fix a filtration F = (F_t)_{t∈[0,T]} and define, for step functions f : (0, T) → γ(H, L^p(Ω; E)),

(6) (P_F f)(t) := E(f(t) | F_t), t ∈ (0, T),

where E(· | F_t) is considered as a bounded operator acting on γ(H, L^p(Ω; E)) as in Proposition 6.1.
The boundedness of P_F then follows from Proposition 6.3. For step functions f : (0, T) → γ(H, L^p(Ω; E)) it is clear from (6) that P_F(P_F f) = P_F f, which means that P_F is a projection.
(ii): By the identification of Proposition 6.4, P F acts as a bounded projection in the space L p (Ω; γ(L 2 (0, T ; H ), E)). For elementary adapted processes X ∈ L p (Ω; γ(L 2 (0, T ; H ), E)) we have P F X = X, which implies that the range of P F contains L p F (Ω; γ(L 2 (0, T ; H ), E)). To prove the converse inclusion we fix a step function X : (0, T ) → γ(H , L p (Ω; E)) and observe that P F X is adapted in the sense that (P F X)(t) is strongly F t -measurable for every t ∈ [0, T ]. As is shown in [15,Proposition 2.12], this implies that P F X ∈ L p F (Ω; γ(L 2 (0, T ; H ), E)). By density it follows that the range of P F is contained in L p F (Ω; γ(L 2 (0, T ; H ), E)).
(iii): Keeping in mind the identification of Proposition 6.4, for step functions with values in the finite rank operators from H to E this follows from (6) by elementary computation. The result then follows from a density argument.
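The action of the adapted projection can be made concrete (our own sketch, not from the text): for the non-adapted constant process X(t) ≡ W(T), formula (6) gives (P_F X)(t) = E(W(T) | F_t) = W(t). Below we test the t = 1/2 slice by regressing samples of W(T) on W(t); the best linear predictor has slope 1 and intercept 0.

```python
import numpy as np

# Adapted projection of the constant (non-adapted) process X(t) = W(T):
# formula (6) gives (P_F X)(t) = E(W(T) | F_t) = W(t). At t = 1/2 the
# conditional expectation of W(T) given W(t) is W(t) itself, so the least
# squares regression of W(T) on W(t) has slope 1 (our own illustration).

rng = np.random.default_rng(4)
t, T, n = 0.5, 1.0, 1_000_000
Wt = rng.normal(0.0, np.sqrt(t), n)               # samples of W(t)
WT = Wt + rng.normal(0.0, np.sqrt(T - t), n)      # add an independent increment

slope = float(np.dot(Wt, WT) / np.dot(Wt, Wt))    # best linear predictor of W(T)
print(slope)                                      # close to 1.0
```

This projected process W(t) is exactly the Clark-Ocone integrand of F = W(T)^2 / 2 up to a constant factor, illustrating how P_F turns a Malliavin derivative into an integrable adapted process.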
The following two results give an explicit expression for X. They extend the classical Clark-Ocone formula and its Hilbert space extension to UMD spaces.
Theorem 6.6 (Clark-Ocone representation, first L^p-version). Let E be a UMD space and let 1 < p < ∞. Then every F_T-measurable random variable F ∈ D^{1,p}(Ω; E) admits the representation

F = E(F) + I(P_F(DF)).
Proof. We may assume that E(F) = 0. Let X ∈ L^p_F(Ω; γ(L^2(0, T; H), E)) be such that F = I(X) = δ(X). Let 1/p + 1/q = 1 and let Y ∈ L^q(Ω; γ(L^2(0, T; H), E*)) be arbitrary. By Lemma 6.5, Theorem 5.4, and Lemma 5.2 we obtain E⟨X, Y⟩ = E⟨P_F(DF), Y⟩. Since this holds for all Y ∈ L^q(Ω; γ(L^2(0, T; H), E*)), it follows that X = P_F(DF). The uniqueness of P_F(DF) follows from the injectivity of I as a bounded linear operator from L^p_F(Ω; γ(L^2(0, T; H), E)) to L^p(Ω, F_T; E).

With a little extra effort we can prove a bit more:

Theorem 6.7 (Clark-Ocone representation, second L^p-version). Let E be a UMD space and let 1 < p < ∞. The operator P_F ∘ D has a unique extension to a bounded operator from L^p(Ω, F_T; E) to L^p_F(Ω; γ(L^2(0, T; H), E)), and for all F ∈ L^p(Ω, F_T; E) we have the representation

F = E(F) + I((P_F ∘ D)F).

Proof. It follows from Theorem 6.6 that F ↦ I((P_F ∘ D)F) extends uniquely to a bounded operator on L^p(Ω, F_T; E), since it equals F ↦ F − E(F) on the dense subspace D^{1,p}(Ω, F_T; E). The proof is finished by recalling that I is an isomorphism from L^p_F(Ω; γ(L^2(0, T; H), E)) onto its range in L^p(Ω, F_T; E).

Remark 6.8. An extension of the Clark-Ocone formula to a class of adapted processes taking values in an arbitrary Banach space B has been obtained by Mayer-Wolf and Zakai [13, Theorem 3.4]. The setting of [13] is slightly different from ours in that the starting point is an arbitrary abstract Wiener space (W, H, µ), where µ is a centred Gaussian Radon measure on the Banach space W and H is its reproducing kernel Hilbert space. The filtration is defined in terms of an increasing resolution of the identity on H, and a somewhat weaker notion of adaptedness is used. However, the construction of the predictable projection in [13, Section 3] as well as the proofs of [14, Corollary 3.5 and Proposition 3.14] contain gaps. As a consequence, the Clark-Ocone formula of [13] only holds in a suitable 'scalar' sense.
We refer to the errata [13,14] for more details.

Extension to L 1
We continue with an extension of Theorem 6.7 to random variables in the space L 1 (Ω, F T ; E). As before, F = (F t ) t∈[0,T ] is the augmented filtration generated by the H -cylindrical Brownian motion W H .
We denote by L^0(Ω; F) the vector space of all strongly measurable random variables with values in the Banach space F, identifying random variables that are equal almost surely. Endowed with the metric

d(X, Y) := E(‖X − Y‖ ∧ 1),

L^0(Ω; F) is a complete metric space, and we have lim_{n→∞} X_n = X in L^0(Ω; F) if and only if lim_{n→∞} X_n = X in measure in F.
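A hedged illustration (assuming the standard metric d(X, Y) = E(‖X − Y‖ ∧ 1), one of the usual metrics inducing convergence in measure): on Ω = [0, 1] with Lebesgue measure, X_n = n 1_{[0,1/n]} converges to 0 in L^0 although E|X_n| = 1 for every n, so L^0-convergence is strictly weaker than L^1-convergence.

```python
import numpy as np

# On Omega = [0, 1] with Lebesgue measure, d(X, Y) := E(min(|X - Y|, 1))
# metrizes convergence in measure. For X_n = n * 1_{[0, 1/n]} we get
# d(X_n, 0) = 1/n -> 0 while E|X_n| = 1 for every n  (our own illustration).

omega = (np.arange(1_000_000) + 0.5) / 1_000_000  # midpoint grid on [0, 1]

def d(X, Y):
    return float(np.mean(np.minimum(np.abs(X - Y), 1.0)))

for n in (10, 100, 1000):
    Xn = np.where(omega <= 1.0 / n, float(n), 0.0)
    # prints n, d(X_n, 0) = 1/n, and E|X_n| = 1
    print(n, round(d(Xn, np.zeros_like(omega)), 4), round(float(np.mean(np.abs(Xn))), 4))
```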
Some properties of this process have been studied in [15, Section 4]. Let (F_n)_{n≥1} be a sequence of F_T-measurable random variables in S(Ω) ⊗ E which is Cauchy in L^1(Ω, F_T; E). By [15, Lemma 5.4] there exists a constant C ≥ 0, depending only on E, such that the corresponding tail estimate holds for all δ > 0, ε > 0, and m, n ≥ 1. In this computation, (*) follows from Theorem 6.6, which gives E(F | F_t) − E(F) = E(I(P_F DF) | F_t) = E(I(ξ P_F DF(T)) | F_t) = I(ξ P_F DF(t)).
The estimate ( * * ) follows from Doob's maximal inequality. Since the right-hand side in the above computation can be made arbitrarily small, this proves that (P F (DF n )) n≥1 is Cauchy in measure in γ(L 2 (0, T ; H ), E). For F ∈ L 1 (Ω, F T ; E) this permits us to define (P F • D)F := lim n→∞ P F (DF n ), where (F n ) n≥1 is any sequence of F T -measurable random variables in S (Ω) ⊗ E satisfying lim n→∞ F n = F in L 1 (Ω, F T ; E). It is easily checked that this definition is independent of the approximation sequence. The resulting linear operator P F • D has the stated properties. This time we use the fact that I is a homeomorphism from L 0 F (Ω; γ(L 2 (0, T ; H ), E)) onto its image in L 0 (Ω, F T ; E); this also gives the uniqueness of (P F • D)F .