Malliavin Calculus in Lévy Spaces and Applications to Finance

The main goal of this paper is to generalize the results of Fournié et al. [8] to markets generated by Lévy processes. To this end we extend the theory of Malliavin calculus to provide the tools necessary for the calculation of the sensitivities, such as differentiability results for the solution of a stochastic differential equation.


Introduction
In recent years there has been an increasing interest in Malliavin calculus and its applications to finance. Such applications were first presented in the seminal paper of Fournié et al. [8]. In that paper the authors calculate the Greeks using well-known results of Malliavin calculus on Wiener spaces, such as the chain rule and the integration by parts formula. Their method produces better convergence results than other established methods, especially for discontinuous payoff functions. There have been a number of papers trying to produce similar results for markets generated by pure jump and jump-diffusion processes. For instance, El-Khatib and Privault [6] have considered a market generated by Poisson processes. In Forster et al. [7] the authors work in a space generated by independent Wiener and Poisson processes; by conditioning on the jump part, they are able to calculate the Greeks using classical Malliavin calculus. Davis and Johansson [4] produce the Greeks for simple jump-diffusion processes which satisfy a separability condition. Each of the previous approaches has its advantages in specific cases. However, they can only treat subclasses of Lévy processes. This paper produces a global treatment for markets generated by Lévy processes and achieves a similar formulation of the sensitivities as in Fournié et al. [8]. We rely on Malliavin calculus for discontinuous processes and expand the theory to fulfill our needs. Malliavin calculus for discontinuous processes has been widely studied as an individual subject; see for instance Bichteler et al. [3] for an overview of early works, Di Nunno et al. [5], Løkka [12] and Nualart and Vives [14] for pure jump Lévy processes, Solé et al. [16] for general Lévy processes and Yablonski [17] for processes with independent increments. It has also been studied in the sphere of finance, see for instance Benth et al. [2] and Léon et al. [11]. In our case we focus on square integrable Lévy processes.
The starting point of our approach is the fact that Lévy processes can be decomposed into a Wiener process and a Poisson random measure part. Hence we are able to use the results of Itô [9] on the chaos expansion property. In this way every square integrable random variable in our space can be represented as an infinite sum of integrals with respect to the Wiener process and the Poisson random measure. Having the chaos expansion, we are able to introduce operators for the Wiener process and the Poisson random measure. With an application to finance in mind, the Wiener operator should preserve the chain rule property. Such a Wiener operator was introduced in Yablonski [17] for the more general class of processes with independent increments, using the classical Malliavin definition. In our case we adopt the definition of directional derivative first introduced in Nualart and Vives [14] for pure jump processes and then used in Léon et al. [11] and Solé et al. [16]. The chain rule formulation that is achieved for simple Lévy processes in Léon et al. [11], and for more general processes in Solé et al. [16], is only applicable to separable random variables. As Davis and Johansson [4] have shown, this form of chain rule restricts the scope of applications; for instance it excludes stochastic volatility models that allow jumps in the volatility. We are able to bypass the separability condition by generalizing the chain rule in this setting. Following this, we define the directional Skorohod integrals, conduct a study of their properties and give a proof of the integration by parts formula. We conclude our theoretical part with the main result of the paper, the study of differentiability for the solution of a Lévy stochastic differential equation. With the help of these tools we produce formulas for the sensitivities that have the same simplicity and easy implementation as the ones in Fournié et al. [8].
The paper is organized as follows. In Section 2 we summarize results of Malliavin calculus, define the two directional derivatives, in the Wiener and Poisson random measure directions, prove their equivalence to the classical Malliavin derivative and the difference operator in Løkka [12] respectively, and prove the general chain rule. In Section 3 we define the adjoints of the directional derivatives, the Skorohod integrals, and prove an integration by parts formula. In Section 4 we prove the differentiability of the solution of a Lévy stochastic differential equation and obtain an explicit form for the Wiener directional derivative. Section 5 deals with the calculation of the sensitivities using these results. The paper concludes in Section 6, with the implementation of the results and some numerical experiments.

Malliavin Calculus for Square Integrable Lévy Processes
Let Z = {Z_t}_{t∈[0,T]} be a square integrable Lévy process, where {F_t}_{t∈[0,T]} is the augmented filtration generated by Z. Then the process can be represented as

Z_t = σW_t + ∫₀ᵗ ∫_{R_0} z μ̃(ds, dz),

where {W_t}_{t∈[0,T]} is the standard Wiener process and μ(·, ·) is a Poisson random measure independent of the Wiener process, defined by

μ([0, t] × A) = Σ_{0≤s≤t} 1_A(ΔZ_s),

where A ∈ B(R_0). The compensator of μ is denoted by π(dz, dt) = λ(dt)ν(dz), where ν(·) is the Lévy (jump) measure of the process; for more details see [1]. Since Z is square integrable, the Lévy measure satisfies ∫_{R_0} z² ν(dz) < ∞. Finally, σ is a positive constant, λ the Lebesgue measure and R_0 = R \ {0}. In the following, μ̃(ds, dz) = μ(ds, dz) − π(ds, dz) will represent the compensated random measure. In order to simplify the presentation, we introduce a unifying notation for the Wiener process and the Poisson random measure: we write u^(0) = t ∈ U_0 := [0,T] and u^(1) = (t, z) ∈ U_1 := [0,T] × R_0, with dQ^(0)(u^(0)) = dW_t and dQ^(1)(u^(1)) = μ̃(dt, dz). We also define an expanded simplex of the form

G_n^{(j_1,…,j_n)} = {(u^{(j_1)}_1, …, u^{(j_n)}_n) : 0 < t_1 < … < t_n < T},

for j_1, …, j_n = 0, 1, where t_i denotes the time component of u^{(j_i)}_i.
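As a concrete illustration of this decomposition, the following sketch simulates a square integrable Lévy path as a scaled Wiener part plus a compensated jump part. All parameters (the volatility, the jump intensity, and the Gaussian jump-size law standing in for ν) are hypothetical choices for illustration, not taken from the paper.

```python
import numpy as np

def simulate_levy_path(T=1.0, n_steps=1_000, sigma=0.2,
                       jump_rate=5.0, jump_std=0.1, seed=None):
    """Simulate Z_t = sigma*W_t + int_0^t int z mu~(ds, dz) on a grid.

    Finite-activity illustration: jumps arrive at Poisson rate `jump_rate`
    with N(0, jump_std**2) sizes, so int z^2 nu(dz) = jump_rate*jump_std**2
    is finite (square integrability).  The compensator pi(ds, dz) removes
    the mean jump contribution per step; it is zero here because the
    jump-size law is centred, but the term is kept for clarity.
    """
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    dW = rng.normal(0.0, np.sqrt(dt), n_steps)
    counts = rng.poisson(jump_rate * dt, n_steps)
    jump_sums = np.array([rng.normal(0.0, jump_std, k).sum() for k in counts])
    mean_jump = 0.0  # E[z] under the centred jump-size law
    dZ = sigma * dW + jump_sums - jump_rate * mean_jump * dt
    return np.concatenate(([0.0], np.cumsum(dZ)))
```

With these choices E[Z_T²] = σ²T + (jump_rate · jump_std²)T, which gives a quick consistency check on the simulation.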

Chaos Expansion
The theorem that follows is the chaos expansion for the Lévy space L²(Ω). It states that every random variable F in this space can be uniquely represented as an infinite sum of iterated integrals of the form (1). This can be considered a reformulation of the results in [9], or an extension of the results in [12].
We can show that this definition reduces to the Malliavin derivative if we take j_i = 0 for all i = 1, …, n, and to the definition of [12] if j_i = 1 for all i = 1, …, n.
From the above we can reach the following definition for the space of random variables differentiable in the l-th direction, which we denote by D^(l), and its respective derivative D^(l):

1. Let D^(l) be the space of random variables in L²(Ω) that are differentiable in the l-th direction, i.e. whose chaos expansion coefficients satisfy the appropriate summability condition.

2. Let F ∈ D^(l). Then the derivative in the l-th direction, D^(l) F, is defined termwise through the chaos expansion of F.

From the definition of the domain of the l-directional derivative, all elements of L²(Ω) with finite chaos expansion are included in D^(l). Hence we can conclude that D^(l) is dense in L²(Ω).
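For orientation, in the purely continuous case (all j_i = 0) this construction collapses to the classical Wiener-chaos form; the sketch below records that special case only, since the mixed multi-index version requires the full notation of the theorem:

```latex
F = \sum_{n=0}^{\infty} I_n(f_n), \qquad
F \in \mathbb{D}^{(0)} \iff \sum_{n=1}^{\infty} n\, n!\, \|f_n\|_{L^2([0,T]^n)}^2 < \infty,
\qquad
D^{(0)}_t F = \sum_{n=1}^{\infty} n\, I_{n-1}\bigl(f_n(\cdot, t)\bigr),
```

where I_n denotes the n-fold iterated Wiener integral of the symmetric kernel f_n.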

Relation between the Classical and the Directional Derivatives
In order to study the relation between the classical Malliavin derivative (see [13]), the difference operator in [12] and the directional derivatives, we need to work on the canonical space. The canonical Brownian motion is defined on the probability space (Ω_W, F_W, P_W), where Ω_W = C₀([0,1]) is the space of continuous functions on [0,1] that are null at time zero, F_W is the Borel σ-algebra, and P_W is the probability measure on F_W under which B_t(ω) := ω(t) is a Brownian motion.
Respectively, the triplet (Ω_N, F_N, P_N) denotes the space on which the canonical Poisson random measure is defined. We denote by Ω_N the space of integer valued measures ω′ on [0,1] × R_0 such that ω′({(t, u)}) ≤ 1 for any point (t, u) ∈ [0,1] × R_0, and ω′(A × B) < ∞ whenever π(A × B) = λ(A)ν(B) < ∞, where ν is the σ-finite Lévy measure on R_0. The canonical random measure on Ω_N is defined as μ(A × B)(ω′) := ω′(A × B). By P_N we denote the probability measure on F_N under which μ is a Poisson random measure with intensity π. Hence μ(A × B) is a Poisson variable with mean π(A × B), and the variables μ(A_1 × B_1), …, μ(A_n × B_n) are independent whenever the sets A_i × B_i are pairwise disjoint. In our case we have a combination of the two above spaces. By (Ω, F, {F_t}_{t∈[0,1]}, P) we will denote the joint probability space, where Ω := Ω_W ⊗ Ω_N is equipped with the probability measure P := P_W ⊗ P_N and F_t := F^W_t ⊗ F^N_t. Then there exists an isometry between L²(Ω) and L²(Ω_W; L²(Ω_N)). Therefore we can consider every F ∈ L²(Ω_W; L²(Ω_N)) as a functional F : ω ↦ F(ω, ω′). This implies that L²(Ω_W; L²(Ω_N)) is a Wiener space on which we can define the classical Malliavin derivative D. The derivative D is a closed operator from L²(Ω_W; L²(Ω_N)) into L²([0,1] × Ω). In the same way, the difference operator D̃ defined in [12], with domain D̃^{1,2}, is closed from L²(Ω) into L²([0,1] × R_0 × Ω). As a consequence we have the following proposition.
Given the directional derivatives D^(0) and D^(1) we reach the subsequent proposition.

Proposition 2. Let F = f(Z′, Z″), where Z′ depends only on the Wiener part and Z′ ∈ D^(0), Z″ depends only on the Poisson random measure, and f(x, y) is a continuously differentiable function with bounded partial derivatives in x. Then f(Z′, Z″) ∈ D^(0) and

D^(0)_t f(Z′, Z″) = ∂_x f(Z′, Z″) D^(0)_t Z′.

Chain Rule
The last proposition is an extension of the results in [11], where the authors consider only simple Lévy processes, and is similar to Corollary 3.6 in [16]. However, this chain rule is applicable only to random variables that can be separated into a continuous and a discontinuous part; such random variables are called separable, see [4] for more details. In what follows we provide a proof of the chain rule with no separability requirements. The first step is to find a dense linear span of Doléans-Dade exponentials for our space. To achieve this, as in [12], we use a continuous function γ which is totally bounded and has an inverse. Moreover γ ∈ L²(ν), e^{λγ} − 1 ∈ L²(ν) for all λ ∈ R, and for h ∈ C([0,T]) we have e^{hγ} − 1 ∈ L²(π), hγ ∈ L²(π), e^{hγ} ∈ L¹(π).
Proof. The proof follows the same steps as in [12].
The proof of the chain rule requires the next technical lemma.
Proof. We follow the same steps as in Lemma 6 in [12]. Since F_k converges to F, the corresponding chaos expansions converge. Since F_k, F ∈ D^(0), from the definition of the directional derivative we obtain the convergence of the derivatives, and from (4) we can choose a suitable subsequence. Using the fact that D^(0) is a densely defined and closed operator, and that the elements of the linear span S are separable processes, we prove in the following theorem the chain rule for all random variables in D^(0).

Theorem 2. (Chain Rule)
Let F ∈ D^(0) and f be a continuously differentiable function with bounded derivative. Then f(F) ∈ D^(0) and the following chain rule holds:

D^(0)_t f(F) = f′(F) D^(0)_t F.

Proof. Let F ∈ D^(0). F can be approximated in L²(Ω) by a sequence {F_n}_{n=0}^∞, where F_n ∈ S for all n ∈ N. Every term of F_n, as a linear combination of Lévy exponentials, is in D^(0). Then from Lemma 2 there exists a subsequence {F_{n_k}}_{k=0}^∞ such that lim_{k→∞} D^(0) F_{n_k} = D^(0) F. However, the elements of the sequence {F_{n_k}}_{k=0}^∞ are separable processes. We can therefore apply the chain rule of Proposition 2 to the process f(F_{n_k}) and we have

D^(0)_t f(F_{n_k}) = f′(F_{n_k}) D^(0)_t F_{n_k}.

Since f is continuously differentiable with bounded derivative, lim_{k→∞} f(F_{n_k}) = f(F) in L²(Ω), and from the dominated convergence theorem we can conclude that lim_{k→∞} f′(F_{n_k}) = f′(F) in L²(Ω). Hence lim_{k→∞} f′(F_{n_k}) D^(0) F_{n_k} = f′(F) D^(0) F, which, together with the closedness of D^(0), concludes the proof.

Remark. The theory developed in this section also holds when our space is generated by a d-dimensional Wiener process and k Poisson random measures. However, we have to introduce new notation for the directional derivatives in order to simplify things. For the multidimensional case, D^(0)_t F will denote a row vector whose i-th entry is the directional derivative with respect to the Wiener process W^i, for i = 1, …, d. Similarly we define the row vector D^(1)_{(t,z)} F. Furthermore, D^i F will denote a scalar: the derivative in the direction of the i-th Wiener process W^i for i = 1, …, d, and the derivative in the direction of the (i − d)-th Poisson random measure μ̃_{i−d} for i = d + 1, …, d + k.

Skorohod Integral
The next step after the definition of the directional derivatives is to define their adjoints, which are the Skorohod integrals in the Wiener and Poisson random measure directions. The first two results of the section are the calculation of the Skorohod integral and the study of its relation to the Itô and Stieltjes-Lebesgue integrals. These are extensions of the results in [4] and [10] from simple Poisson processes to square integrable Lévy processes. The proofs proceed along the same lines as in [4] (or, in more detail, in [10]) and are therefore omitted. The main result, however, is an integration by parts formula. Although this result is yet again an extension of [4], having attained a chain rule for D^(0) that requires no separability condition, we are able to provide a simpler and more elegant proof. Finally, the section closes with a technical result.

Definition 3. The Skorohod integral
Let δ^(l) be the adjoint operator of the directional derivative D^(l), l = 0, 1. The operator δ^(l) maps L²(Ω × U_l) to L²(Ω). The set of processes h ∈ L²(Ω × U_l) such that

|E[∫_{U_l} D^(l)_u F h_u du]| ≤ C ‖F‖_{L²(Ω)}

for all F ∈ D^(l), where C is a constant depending on h, is the domain of δ^(l), denoted by Dom δ^(l). For every h ∈ Dom δ^(l) we can define the Skorohod integral in the l-th direction, δ^(l)(h), through the duality

E[F δ^(l)(h)] = E[∫_{U_l} D^(l)_u F h_u du]

for any F ∈ D^(l).
The following proposition provides the explicit form of the Skorohod integral: for h with a given chaos expansion, the l-th directional Skorohod integral δ^(l)(h) is obtained by adding one integration in the l-th direction to each term of the expansion, provided the resulting infinite sum converges in L²(Ω).
Having the exact form of the Skorohod integral, we can study its properties. For instance, the Skorohod integral reduces to an Itô or Stieltjes-Lebesgue integral in the case of predictable processes.

Proposition 4. Let h_t be a predictable process such that

E[∫_{U_l} h_u² du] < ∞.

Then h ∈ Dom δ^(l) for l = 0, 1 and

δ^(0)(h) = ∫₀ᵀ h_t dW_t,  δ^(1)(h) = ∫₀ᵀ ∫_{R_0} h_{(t,z)} μ̃(dt, dz).

We are now able to prove one of the main results, the integration by parts formula.

Proposition 5. (Integration by parts formula)
Let F ∈ D^(0) and h ∈ Dom δ^(0). Then

δ^(0)(F h) = F δ^(0)(h) − ∫₀ᵀ D^(0)_t F h_t dt,

in the sense that F h ∈ Dom δ^(0) if and only if the second part of the equation is in L²(Ω).

Proof. From Theorem 2 we have (8). Hence, from the definition of the Skorohod integral we have (9). Combining (8), (9) and Proposition 4, the proof is concluded.
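The duality underlying these results can be checked numerically in the Wiener direction. The following sketch takes F = sin(W_T) and h ≡ 1; h is predictable, so by Proposition 4 the Skorohod integral δ^(0)(h) coincides with the Itô integral W_T, and by the chain rule D^(0)_t F = cos(W_T) for t ≤ T. All numerical parameters are arbitrary choices for the illustration.

```python
import numpy as np

# Monte Carlo check of the duality E[F d(h)] = E[int_0^T D_t F h_t dt]
# for F = sin(W_T) and h = 1: the left side is E[sin(W_T) W_T], the
# right side is E[T cos(W_T)], and Stein's lemma gives both sides
# exactly as T*exp(-T/2).
T, n_paths = 1.0, 200_000
rng = np.random.default_rng(42)
W_T = rng.normal(0.0, np.sqrt(T), n_paths)

lhs = np.mean(np.sin(W_T) * W_T)   # E[F delta(h)]
rhs = np.mean(T * np.cos(W_T))     # E[int_0^T D_t F h_t dt]
exact = T * np.exp(-T / 2)         # closed form via Stein's lemma
```

Both Monte Carlo estimates should agree with each other and with the closed form up to sampling error.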
Note that when F is an m-dimensional vector process and h an m × m matrix process, the integration by parts formula can be written componentwise in the same form. The last proposition of this section provides a relationship between the Itô and Stieltjes-Lebesgue integrals and the directional derivatives.

Proposition 6.
Let h_t be a predictable square integrable process. Then the directional derivatives act on the Itô and Stieltjes-Lebesgue integrals of h as stated. Proof. This result can be easily deduced from the definition of the directional derivative.

Differentiability of Stochastic Differential Equations
The aim of this section is to prove that under specific conditions the solution of a stochastic differential equation belongs to the domains of the directional derivatives. Having in mind the applications in finance, we will also provide a specific expression for the Wiener directional derivative of the solution.
Let {X_t}_{t∈[0,T]} be an m-dimensional process in our probability space, satisfying the following stochastic differential equation:

X_t = x + ∫₀ᵗ b(s, X_{s−}) ds + ∫₀ᵗ σ(s, X_{s−}) dW_s + ∫₀ᵗ ∫_{R_0} γ(s, z, X_{s−}) μ̃(ds, dz),   (10)

where the coefficients b(t, x), σ(t, x) and γ(t, z, x) are continuously differentiable with bounded derivatives. The coefficients also satisfy the following linear growth condition:

|b(t, x)|² + |σ(t, x)|² + ∫_{R_0} |γ(t, z, x)|² ν(dz) ≤ C(1 + |x|²)   (11)

for each t ∈ [0,T], x ∈ R^m, where C is a positive constant. Furthermore, there exist ρ : R → R with ∫_{R_0} ρ(z)² ν(dz) < ∞ and a positive constant D such that

|b(t, x) − b(t, y)| + |σ(t, x) − σ(t, y)| ≤ D|x − y|,  |γ(t, z, x) − γ(t, z, y)| ≤ ρ(z)|x − y|

for all x, y ∈ R^m and z ∈ R_0. Under these conditions there exists a solution of (10) which is also unique, see [1]. In what follows we denote by σ_i the i-th column vector of σ and adopt the Einstein convention of leaving the summations implicit.
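Numerically, solutions of equations of this type can be approximated with an Euler scheme. The sketch below treats the scalar, finite-activity case with hypothetical coefficients; the jump marks are centred and γ is taken linear in z, so the compensator contribution vanishes and μ̃ can be simulated as the raw jump sum (an assumption of this sketch, not part of the paper's setting).

```python
import numpy as np

def euler_jump_diffusion(x0, b, sigma, gamma, T=1.0, n_steps=500,
                         jump_rate=3.0, jump_std=0.1, seed=None):
    """Euler scheme for dX = b dt + sigma dW + int gamma(t,z,X) mu~(dt,dz),
    scalar case.  Finite activity: jump marks z ~ N(0, jump_std**2) arrive
    at Poisson rate `jump_rate`; with gamma linear in z and centred marks
    the compensator term is zero, so the jump integral is the raw sum.
    """
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    X = np.empty(n_steps + 1)
    X[0] = x0
    t = 0.0
    for i in range(n_steps):
        x = X[i]
        dW = rng.normal(0.0, np.sqrt(dt))
        jump = 0.0
        for _ in range(rng.poisson(jump_rate * dt)):
            jump += gamma(t, rng.normal(0.0, jump_std), x)
        X[i + 1] = x + b(t, x) * dt + sigma(t, x) * dW + jump
        t += dt
    return X
```

For instance, b = lambda t, x: 0.05 * x, sigma = lambda t, x: 0.2 * x and gamma = lambda t, z, x: x * z give exponential-Lévy-type dynamics.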
In the next theorem we prove that the solution {X_t}_{t∈[0,T]} is differentiable in both directions of the Malliavin derivative. Moreover, we derive the stochastic differential equations satisfied by the derivatives.
for s ≤ t a.e., and D^i_s X_t = 0 a.e. otherwise.

Proof.
1. Using Picard's approximation scheme we introduce the processes X⁰_t = x and

X^{n+1}_t = x + ∫₀ᵗ b(s, X^n_{s−}) ds + ∫₀ᵗ σ(s, X^n_{s−}) dW_s + ∫₀ᵗ ∫_{R_0} γ(s, z, X^n_{s−}) μ̃(ds, dz)

for n ≥ 0. We prove by induction that the following hypothesis (H) holds true for all n ≥ 0.
(H): The derivative D^(0)_r X^n_s exists for all s ≥ r, D^(0)_r X^n_s is a predictable process, and its second moment is bounded in terms of constants c₁, c₂. It is straightforward that (H) is satisfied for n = 0. Let us assume that (H) is satisfied for n ≥ 0. Then from Theorem 2, b(s, X^n_{s−}), σ(s, X^n_{s−}) and γ(s, z, X^n_{s−}) are in D^(0). Since the coefficients have continuous, bounded first derivatives in the x direction and satisfy condition (11), there exists a constant K bounding the corresponding terms. From the above we can conclude that X^{n+1}_t ∈ D^(0). From the Cauchy-Schwarz and Burkholder-Davis-Gundy inequalities, (19) takes the form (20), and from (15), (16) and (17) we obtain the induction bound, where β = sup_{n,i} E[sup_{r≤s≤t} |σ_i(s, X^n_{s−})|²]. Thus hypothesis (H) holds for n + 1. From Applebaum [1], Theorem 6.2.3, we have that E[sup_{s≤T} |X^n_s − X_s|²] → 0 as n goes to infinity. Applying induction to inequality (20) (see Appendix A for more details), we can conclude that the derivatives of X^n_s are bounded in L²(Ω × [0,T]) uniformly in n. Hence X_t ∈ D^(0). Applying the chain rule to (12) we conclude the proof.
2. Following the same steps we can prove the second claim of the theorem.
With the previous theorem we have proven that the solution of (10) is in D^(0), and derived the stochastic differential equation that D^(0)_s X_t satisfies. However, the Wiener directional derivative can take a more explicit form. As in the classical Malliavin calculus, we are able to associate the solution of (12) with the process Y_t = ∇X_t, the first variation of X_t. Y satisfies the following stochastic differential equation:

Y_t = I + ∫₀ᵗ b′(s, X_{s−}) Y_{s−} ds + ∫₀ᵗ σ′_i(s, X_{s−}) Y_{s−} dW^i_s + ∫₀ᵗ ∫_{R_0} γ′(s, z, X_{s−}) Y_{s−} μ̃(ds, dz),

where prime denotes the derivative with respect to x and I the identity matrix. Hence we reach the following proposition, which provides us with a simpler expression for D^(0)_s X_t.

Proposition 7. For s ≤ t a.e.,

D^(0)_s X_t = Y_t Y_s^{-1} σ(s, X_{s−}),   (22)

where Y_t = ∇X_t is the first variation of X_t.
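The relationship between X and its first variation can be illustrated numerically. In the sketch below (scalar, continuous part only, hypothetical linear coefficients) X and Y are advanced with the same Euler increments; for linear coefficients the flow is linear in the initial condition, so Y_t should track X_t / x₀ path by path.

```python
import numpy as np

# Euler scheme for X and its first variation Y = dX/dx0, driven by the
# same Brownian increments:
#   dX = b(X) dt + s(X) dW,   dY = b'(X) Y dt + s'(X) Y dW,   Y_0 = 1.
# Hypothetical linear coefficients b(x) = mu*x, s(x) = sig*x, for which
# b'(x) = mu and s'(x) = sig, so X and Y follow the same linear recursion.
mu, sig, x0, T, n = 0.05, 0.2, 2.0, 1.0, 2_000
rng = np.random.default_rng(7)
dt = T / n
X, Y = x0, 1.0
for _ in range(n):
    dW = rng.normal(0.0, np.sqrt(dt))
    X, Y = (X + mu * X * dt + sig * X * dW,
            Y + mu * Y * dt + sig * Y * dW)
```

In this linear model the formula of the proposition reduces to D^(0)_s X_t = σ X_t, since Y_t / Y_s = X_t / X_s.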
Proof. The elements of the matrix Y satisfy the corresponding componentwise equation, where δ is the Kronecker delta.
Let {Z_t}_{t∈[0,T]} be a d × d matrix valued process that satisfies the corresponding equation. By applying the integration by parts formula we can prove that Z_t = Y_t^{-1}. Furthermore, it is easy to show, applying again Itô's formula, that Y^{il}_t Z^{lk}_{r−} σ^k_j(r, X_{r−}) verifies (12) for all r < t. Hence the proof is concluded.

Sensitivities
Using the Malliavin calculus developed in the previous sections we are able to calculate the sensitivities, i.e. the Greeks. The Greeks are calculated for an m-dimensional process {X_t}_{t∈[0,T]} that satisfies equation (10). We denote the price of the contingent claim by

u(x) = E[φ(X_{t₁}, …, X_{t_n})],   (23)

where φ(X_{t₁}, …, X_{t_n}) is the payoff function, which is square integrable, evaluated at times t₁, …, t_n and discounted from maturity T.
In what follows we assume the following ellipticity condition for the diffusion matrix σ.
Assumption 1. The diffusion matrix σ is uniformly elliptic. That implies that there exists k > 0 such that

v^T σ(t, x) σ^T(t, x) v ≥ k|v|²  for all v, x ∈ R^m, t ∈ [0,T].

Variation in the Drift Coefficient
Let us consider the following perturbed process:

X^ε_t = x + ∫₀ᵗ [b(s, X^ε_{s−}) + εξ(s, X^ε_{s−})] ds + ∫₀ᵗ σ(s, X^ε_{s−}) dW_s + ∫₀ᵗ ∫_{R_0} γ(s, z, X^ε_{s−}) μ̃(ds, dz),

where ε is a scalar and ξ is a bounded function. Then we reach the following proposition.
Proposition 8. Let σ be a uniformly elliptic matrix. We denote by u^ε(x) the perturbed price

u^ε(x) = E[φ(X^ε_{t₁}, …, X^ε_{t_n})].

Then

∂/∂ε u^ε(x)|_{ε=0} = E[φ(X_{t₁}, …, X_{t_n}) ∫₀ᵀ (σ^{-1}ξ)(t, X_{t−})^* dW_t].

Proof. The proof is based on an application of Girsanov's theorem.
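A Monte Carlo sketch of this formula in a continuous toy model: take hypothetical Black-Scholes-type dynamics with perturbation ξ(t, x) = x (so the drift becomes (μ + ε)x), for which (σ^{-1}ξ)(t, x) = 1/σ and the weight collapses to W_T/σ. With the linear payoff φ(y) = y the exact sensitivity is ∂/∂ε [x e^{(μ+ε)T}]|₀ = T x e^{μT}, which the estimator should reproduce. All parameters are illustrative.

```python
import numpy as np

# Drift-sensitivity estimator of Proposition 8 in a hypothetical
# Black-Scholes-type model X_T = x*exp((mu - sig**2/2)*T + sig*W_T),
# perturbation xi(t, x) = x, hence (sigma^{-1} xi)(t, x) = 1/sig and
# the Girsanov weight is W_T / sig.
x, mu, sig, T, n_paths = 1.0, 0.05, 0.2, 1.0, 400_000
rng = np.random.default_rng(11)
W_T = rng.normal(0.0, np.sqrt(T), n_paths)
X_T = x * np.exp((mu - 0.5 * sig**2) * T + sig * W_T)

sens_est = np.mean(X_T * W_T / sig)  # estimator of d/de E[phi(X^e_T)] at e=0
exact = T * x * np.exp(mu * T)       # closed form for phi(y) = y
```

Note that the payoff is never differentiated: only the weight carries the sensitivity.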

Variation in the Initial Condition
In order to calculate the variation in the initial condition we define the set Γ as follows:

Γ = {ζ ∈ L²([0,T]) : ∫₀^{t_i} ζ(t) dt = 1, ∀ i = 1, …, n},

where the t_i are as in (23).
Proposition 9. Assume that the diffusion matrix σ is uniformly elliptic. Then for all ζ ∈ Γ

∇u(x) = E[φ(X_{t₁}, …, X_{t_n}) ∫₀ᵀ ζ(t)(σ^{-1}(t, X_{t−}) Y_{t−})^* dW_t].

Proof. Let φ be a continuously differentiable function with bounded gradient. Then we can differentiate inside the expectation and we have

∇u(x) = E[Σᵢ ∇_i φ(X_{t₁}, …, X_{t_n}) ∂/∂x X_{t_i}],   (24)

where ∇_i φ(X_{t₁}, …, X_{t_n}) is the gradient of φ with respect to X_{t_i}, and ∂/∂x X_{t_i} = Y_{t_i} is the matrix of the first variation of the process at t_i. From (22) we have D^(0)_t X_{t_i} = Y_{t_i} Y_t^{-1} σ(t, X_{t−}) for t ≤ t_i. Hence, for any ζ ∈ Γ,

Y_{t_i} = ∫₀^{t_i} ζ(t) D^(0)_t X_{t_i} σ^{-1}(t, X_{t−}) Y_{t−} dt,

and inserting the above into (24), it follows from Theorem 2 that φ(X_{t₁}, …, X_{t_n}) ∈ D^(0), thus

∇u(x) = E[∫₀ᵀ D^(0)_t φ(X_{t₁}, …, X_{t_n}) ζ(t) σ^{-1}(t, X_{t−}) Y_{t−} dt].

From the definition of the Skorohod integral we reach

∇u(x) = E[φ(X_{t₁}, …, X_{t_n}) δ^(0)(ζ(·)(σ^{-1}(·, X_{·−}) Y_{·−})^*)].

However, ζ(t)(σ^{-1}(t, X_{t−}) Y_{t−})^* is a predictable process, thus the Skorohod integral coincides with the Itô integral. Since the family of continuously differentiable functions with bounded gradient is dense in L², the result holds for any φ ∈ L²; see [8] and [4] for more details.
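A Monte Carlo sketch of this result in the simplest setting: hypothetical Black-Scholes-type dynamics with a single payoff date t₁ = T, for which Y_t = X_t/x, σ^{-1}(t, X_t)Y_t = 1/(σx), and the choice ζ ≡ 1/T collapses the weight to W_T/(xσT). The payoff below is a digital option, 1_{X_T > K}, which is discontinuous, so a pathwise (finite-difference) estimator degrades while the Malliavin weight does not; the closed-form digital delta gives a check. All parameters are illustrative.

```python
import math
import numpy as np

# Delta of a digital option via the Malliavin weight of Proposition 9,
# specialised to a hypothetical Black-Scholes-type model: with a single
# maturity T and zeta = 1/T, the weight is W_T / (x * sig * T).  No
# derivative of the (discontinuous) payoff is needed.
x, K, mu, sig, T, n_paths = 1.0, 1.1, 0.05, 0.2, 1.0, 400_000
rng = np.random.default_rng(1)
W_T = rng.normal(0.0, math.sqrt(T), n_paths)
X_T = x * np.exp((mu - 0.5 * sig**2) * T + sig * W_T)

weight = W_T / (x * sig * T)
delta_est = np.mean((X_T > K) * weight)

# closed-form digital delta for comparison: N'(d) / (x * sig * sqrt(T))
d = (math.log(x / K) + (mu - 0.5 * sig**2) * T) / (sig * math.sqrt(T))
delta_exact = (math.exp(-0.5 * d * d) / math.sqrt(2.0 * math.pi)
               / (x * sig * math.sqrt(T)))
```

The estimator requires only simulated terminal values and the Brownian endpoint, which is what makes the implementation as simple as in the continuous case of [8].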

Variation in the Diffusion Coefficient
For this section we consider the following perturbed process:

X^ε_t = x + ∫₀ᵗ b(s, X^ε_{s−}) ds + ∫₀ᵗ [σ + εξ](s, X^ε_{s−}) dW_s + ∫₀ᵗ ∫_{R_0} γ(s, z, X^ε_{s−}) μ̃(ds, dz),

where ε is a scalar and ξ is a continuously differentiable function with bounded gradient. We also introduce the variation process with respect to ε, Z_t = ∂/∂ε X^ε_t, which satisfies the stochastic differential equation obtained by formally differentiating the equation above with respect to ε. In this case we introduce the set Γ_n, where

Γ_n = {ζ ∈ L²([0,T]) : ∫_{t_{i−1}}^{t_i} ζ(t) dt = 1, ∀ i = 1, …, n}.
Then for all ζ ∈ Γ_n the sensitivity with respect to ε admits a representation as the expectation of the payoff multiplied by a Malliavin weight. Proof. Let φ be a continuously differentiable function with bounded gradient. As in Proposition 9, we can differentiate inside the expectation. Hence, inserting the resulting expression into (25), the result follows. If β ∈ D^(0), the Skorohod integral can be calculated using the integration by parts formula of Proposition 5.

Variation in the Jump Amplitude
For this section we consider the following perturbed process:

X^ε_t = x + ∫₀ᵗ b(s, X^ε_{s−}) ds + ∫₀ᵗ σ(s, X^ε_{s−}) dW_s + ∫₀ᵗ ∫_{R_0} [γ + εξ](s, z, X^ε_{s−}) μ̃(ds, dz),

where ε is a scalar and ξ is a continuously differentiable function with bounded gradient. As in the previous section, we introduce the variation process with respect to ε, Z_t = ∂/∂ε X^ε_t, which satisfies the corresponding stochastic differential equation, and we use the set Γ_n as defined in the previous section.