Weak solutions to gamma-driven stochastic differential equations

We study a stochastic differential equation driven by a gamma process and give conditions on the volatility function under which weak solutions exist and are unique in law. To that end we derive the density process between the laws of solutions with different volatility functions.


Introduction
The goal of the present paper is to give conditions under which the Lévy-driven stochastic differential equation

dX_t = σ(X_{t−}) dL_t,  X_0 = 0,  (1)

has a weak solution that is unique in law. Here L is a gamma process with L_0 = 0; in particular L is a subordinator, i.e. a stochastic process with non-decreasing sample paths. Furthermore, L has a Lévy measure ν admitting the Lévy density

v(x) = α x^{−1} e^{−βx},  x > 0,  (2)

where α and β are two positive constants. The process L has independent increments, and L_t − L_s has a Gamma(α(t − s), β) distribution for t > s. Recall that the Gamma(a, b) distribution has a density given by x ↦ (b^a/Γ(a)) x^{a−1} e^{−bx} for x > 0.
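For illustration, a gamma process is straightforward to simulate on a time grid, since its increments are independent Gamma-distributed random variables. The following sketch (parameter values are illustrative and `gamma_process_path` is not from the paper) uses only the Python standard library:

```python
import random
import statistics

def gamma_process_path(alpha, beta, T, n_steps, rng=random):
    """Simulate a gamma process with Levy density v(x) = alpha * x**-1 * exp(-beta*x)
    on [0, T]: the increment over a step of length dt is Gamma(alpha*dt, rate beta)."""
    dt = T / n_steps
    path = [0.0]
    for _ in range(n_steps):
        # random.gammavariate takes (shape, scale); scale = 1/rate
        path.append(path[-1] + rng.gammavariate(alpha * dt, 1.0 / beta))
    return path

# Sanity check: L_T has a Gamma(alpha*T, beta) distribution, so
# E[L_T] = alpha*T/beta and Var[L_T] = alpha*T/beta**2.
random.seed(1)
alpha, beta, T = 2.0, 3.0, 1.0
samples = [gamma_process_path(alpha, beta, T, 50)[-1] for _ in range(10000)]
print(statistics.fmean(samples))   # should be close to alpha*T/beta = 2/3
```

Since the increments are nonnegative, every simulated path is non-decreasing, matching the subordinator property of L.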
Under the assumption that the function σ (in view of financial applications we refer to it as the volatility function) is measurable, positive and satisfies a linear growth condition, we will see in Theorem 4 that Equation (1) admits a weak solution that is unique in law. This is the main result of the present paper. Note that under the stronger condition that σ is Lipschitz continuous, Equation (1) even has a unique strong solution, see (Protter 2004, Theorem V.6).
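Pathwise approximations of (1) can be obtained by freezing σ over small time steps. The following sketch is a naive Euler-type scheme, not taken from the paper; the step size and the example volatility function are illustrative:

```python
import random

def euler_gamma_sde(sigma, alpha, beta, T, n_steps, rng=random):
    """Naive Euler-type scheme for dX_t = sigma(X_{t-}) dL_t, X_0 = 0, with L a
    gamma process: freeze sigma at the left endpoint of each small interval and
    multiply by the gamma increment of L over that interval."""
    dt = T / n_steps
    x = 0.0
    for _ in range(n_steps):
        dL = rng.gammavariate(alpha * dt, 1.0 / beta)  # Gamma(alpha*dt, rate beta)
        x += sigma(x) * dL
    return x

random.seed(2)
# Example: sigma(x) = 1 + 0.5*x is measurable, positive and of linear growth.
x_T = euler_gamma_sde(lambda x: 1.0 + 0.5 * x, alpha=2.0, beta=3.0, T=1.0, n_steps=200)
print(x_T)  # a single realisation; nonnegative and non-decreasing in T since sigma > 0
```

For σ ≡ 1 the scheme is exact in distribution, since the sum of the frozen increments is again Gamma(αT, β) distributed.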
We briefly outline the relevance of gamma processes. They form a special class of Lévy processes (see, e.g., Kyprianou (2014)) and are a fundamental modelling tool in several fields, e.g. reliability (see van Noortwijk (2009)) and risk theory (see Dufresne et al. (1991)). Since the driving gamma process L in (1) has non-decreasing sample paths and the volatility function σ is non-negative, the process X has non-decreasing sample paths as well. Such processes find applications across various fields. A reliability model as in (1) has been thoroughly investigated from a probabilistic point of view in Wenocur (1989), and constitutes a far-reaching generalisation of a basic gamma model. Furthermore, non-decreasing processes are ideally suited to model revenues from an innovation: in Chance et al. (2008), the authors study the pricing of options on movie box office revenues that are modelled through a gamma-like stochastic process. Another potential application is the modelling of the evolution of forest fire sizes over time, as in Reed & McKelvey (2002).
Any practical application of the model (1) requires knowledge of the volatility function σ, which has to be inferred from observations of the process X. This is a statistical problem for which we present a nonparametric Bayesian approach in Belomestny et al. (202?). The results obtained in that paper assume either a piecewise constant volatility function σ or a Hölder continuous one. In both cases one needs existence of weak solutions to (1) and a likelihood ratio. The present paper covers those two cases and thus provides the foundations for the statistical analysis. For a survey of other contributions to statistical inference for Lévy-driven SDEs we also refer to Belomestny et al. (202?).

Absolute continuity and likelihood
In the proof of our main result, Theorem 4, we need the likelihood ratio between the laws of solutions to (1) with different volatility functions. In this section we give the relevant results.
Let (Ω, F, F, P) be a filtered probability space and let (L_t)_{t≥0} be a gamma process adapted to F, whose Lévy measure admits the density v given by (2).
Assume that X is a (weak) solution to (1) and that X is observed on an interval [0, T]. We denote by P^σ_T its law, a probability measure on F^X_T = σ(X_t, 0 ≤ t ≤ T). In agreement with this notation we let P^1_T be the law of X when σ ≡ 1, in which case X_t = L_t for t ∈ [0, T]. The measure P^1_T will serve as a reference measure. The choice σ = 1 for the reference measure is natural, but also arbitrary; many other choices of σ are conceivable, in particular other constant functions. The question we investigate first is under which conditions the laws P^σ_T and P^1_T are equivalent. Suppose that the process σ(X_{t−}), t ∈ [0, T], is strictly positive and define

v_σ(t, x) = v(x/σ(X_{t−}))/σ(X_{t−}),  x > 0.  (3)

First we show that for X given by (1), its compensated jump measure under P^σ_T is determined by (3).
Lemma 1. Assume that (1) admits a weak solution for a given measurable function σ with σ(X_{s−}) > 0 a.s. for all s ≥ 0. Under the measure P^σ_T, the third characteristic of the semimartingale X, its compensated jump measure ν^σ(dx, dt), is given by ν^σ(dx, dt) = v_σ(t, x) dx dt.
Proof. Let f be a bounded measurable function and let X be given by (1). Then, for arbitrary t ∈ [0, T] (the summation is only over those s with ΔX_s > 0),

Σ_{s≤t} f(ΔX_s) = ∫_{(0,t]} ∫_{(0,∞)} f(σ(X_{s−})x) μ^L(dx, ds),

with μ^L being the jump measure of L. This is the sum of a local martingale M under P, adapted to F, and the predictable process

∫_{(0,t]} ∫_{(0,∞)} f(σ(X_{s−})x) v(x) dx ds.

As the latter expression only depends on the process X, the local martingale M is also adapted to F^X = {F^X_t, t ≥ 0}. By a simple change of variable, the double integral equals

∫_{(0,t]} ∫_{(0,∞)} f(x) v_σ(s, x) dx ds.

Since P^σ_T is the law of X for X given by (1), this expression, again depending on X only, is also the compensator of the jump measure of X under P^σ_T, which proves the claim.

Absolute continuity of P^σ_T w.r.t. P^1_T is guaranteed, see (Jacod & Shiryaev 2003, Theorem III.5.34), under the condition

H_T := ∫_0^T h_t dt < ∞  P^σ_T-a.s.,  (4)

where H_T has been derived from Equation (5.7) in (Jacod & Shiryaev 2003, Chapter III) and

h_t := ∫_0^∞ (√(v_σ(t, x)) − √(v(x)))^2 dx.  (5)

As the driving process L is a gamma process with Lévy density v given by (2), one has in fact

h_t = α ∫_0^∞ x^{−1} (e^{−βx/(2σ(X_{t−}))} − e^{−βx/2})^2 dx.

Clearly, conditions on σ have to be imposed to obtain absolute continuity, or even equivalence; these are given now. Of course, we still have to assume that a weak solution to (1) exists; as already announced, sufficient conditions for this will be presented in Theorem 4. Below, the jump measure of X is denoted μ^X.
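The change of variable behind Lemma 1 can be sanity-checked numerically: for a frozen volatility value σ, the jump transformation x ↦ σx carries the Lévy density v onto v_σ(x) = v(x/σ)/σ. The following sketch (illustrative parameter values and test function, not from the paper) compares ∫ f(σy) v(y) dy with ∫ f(x) v_σ(x) dx by a simple midpoint rule:

```python
import math

alpha, beta, sigma = 2.0, 3.0, 1.7   # illustrative values

def v(x):
    # Levy density of the gamma process
    return alpha * math.exp(-beta * x) / x

def v_sigma(x):
    # transformed Levy density v_sigma(x) = v(x/sigma)/sigma
    return v(x / sigma) / sigma

def f(x):
    # bounded test function with f(0) = 0, so x -> f(x) v(x) is integrable at 0
    return 1.0 - math.exp(-x)

def integrate(g, a, b, n=100000):
    # simple midpoint rule on [a, b]
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

lhs = integrate(lambda y: f(sigma * y) * v(y), 0.0, 60.0)   # jumps of L, scaled by sigma
rhs = integrate(lambda x: f(x) * v_sigma(x), 0.0, 60.0)     # jumps of X, density v_sigma
print(lhs, rhs)  # the two values agree up to quadrature and truncation error
```

For this particular f, both integrals also equal α log((β + σ)/β) by Frullani's integral, which gives an independent check on the quadrature.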
Proposition 2. Assume that σ is a positive, locally bounded, measurable function on [0, ∞) such that (1) admits a weak solution unique in law. Assume furthermore that σ is bounded from below by a constant σ_0 > 0. Then the laws P^σ_T and P^1_T are equivalent, and with Y(t, x) := v_σ(t, x)/v(x) one has

dP^σ_T/dP^1_T = E_T( ∫_0^· ∫_0^∞ (Y(t, x) − 1) (μ^X(dx, dt) − v(x) dx dt) ),  (6)

where E_T is the Doléans exponential at time T of the process within the parentheses.
In other words, Z_T := dP^σ_T/dP^1_T is the value at time T of the solution Z to the SDE

dZ_t = Z_{t−} ∫_0^∞ (Y(t, x) − 1) (μ^X(dx, dt) − v(x) dx dt),  Z_0 = 1.  (7)

Proof. Split the integrand h_t in (5) into two integrals, for x ∈ [0, 1] and x ∈ (1, ∞), and call them h^<_t and h^>_t respectively. For h^<_t we use the elementary inequality (e^{−a} − e^{−b})^2 ≤ (a − b)^2 for a, b ≥ 0, together with x ≤ 1, which shows that h^<_t is bounded by the finite constant (β^2/2)(1/σ_0^2 + 1). To treat h^>_t we use the elementary inequality (a − b)^2 ≤ 2(a^2 + b^2), which leads us to study ∫_1^∞ v_σ(t, x) dx and ∫_1^∞ v(x) dx. Here the first term is bounded by σ(X_{t−})/β. As X is increasing, X_{t−} lies between zero and X_T, which is finite P^σ_T-a.s. By the local boundedness of σ, also sup_{t≤T} σ(X_{t−}) ≤ sup_{x≤X_T} σ(x) is finite P^σ_T-a.s. From the obtained bounds on h^<_t and h^>_t it follows that H_T is a.s. finite under P^σ_T, so condition (4) is satisfied. The expression for the likelihood ratio as a Doléans exponential in (6) follows from Theorem III.5.19 in Jacod & Shiryaev (2003).
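Condition (4) can be probed numerically once σ(X_{t−}) is frozen at a value s: substituting the gamma Lévy density into the Hellinger-type integral gives h = α ∫_0^∞ x^{−1}(e^{−βx/(2s)} − e^{−βx/2})^2 dx, which by a Frullani-type computation equals α log((1 + s)^2/(4s)); in particular it vanishes at s = 1 and stays bounded when s is bounded away from 0 and ∞. The following sketch (illustrative parameter values, not from the paper) checks this:

```python
import math

alpha, beta = 2.0, 3.0   # illustrative values

def hellinger(s, upper=100.0, n=200000):
    """Midpoint-rule value of the Hellinger-type integrand
    h = alpha * int_0^inf x**-1 * (exp(-beta*x/(2*s)) - exp(-beta*x/2))**2 dx,
    obtained by substituting the gamma Levy density into (5) for frozen s."""
    step = upper / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * step
        d = math.exp(-beta * x / (2.0 * s)) - math.exp(-beta * x / 2.0)
        total += d * d / x
    return alpha * total * step

# h vanishes at s = 1 (the reference measure) and is finite for s away from 0,
# which is the behaviour condition (4) relies on.
for s in (0.5, 1.0, 3.0):
    print(s, hellinger(s))
```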
The next proposition gives an explicit expression for the Radon–Nikodym derivative in Proposition 2. This is useful when computations with this derivative have to be carried out, for instance for likelihood-based inference in a statistical analysis.
Proposition 3. Let the conditions of Proposition 2 hold. Then the solution Z to (7) has, at any time T > 0, the explicit representation

Z_T = exp( ∫_{(0,T]} ∫_{(0,∞)} log Y(t, x) μ^X(dx, dt) − ∫_{(0,T]} ∫_{(0,∞)} (Y(t, x) − 1) v(x) dx dt ),  (8)

where both double integrals are a.s. finite.
Proof. It follows from Lemma 18.8 in Liptser & Shiryayev (1978) that the explicit expression (8) holds, under the condition that the process F defined by

F_t = ∫_{(0,t]} ∫_{(0,∞)} (Y(s, x) − 1) (μ^X(dx, ds) − v(x) dx ds)

is a process of finite variation. We proceed by showing that F has a.s. finite variation over any interval [0, T]. Note first that the variation of F over [0, T] is bounded by F̄_T := ∫_{(0,T]} ∫_{(0,∞)} |Y(t, x) − 1| (μ^X(dx, dt) + v(x) dx dt). In view of Proposition II.1.28 in Jacod & Shiryaev (2003), it is sufficient to check that ∫_{(0,T]} ∫_{(0,∞)} |Y(t, x) − 1| v(x) dx dt is finite. We consider the inner integral, split into two parts, one for x ≥ 1 and one for 0 < x < 1. We find that ∫_{(0,T]} ∫_{x≥1} |Y(t, x) − 1| v(x) dx dt is finite a.s., as σ is assumed to be a locally bounded function. For the other inner integral we use the elementary inequality |e^{−p} − e^{−q}| ≤ |p − q| for p, q > 0. Then we obtain, using that σ is bounded from below by σ_0, that also ∫_{(0,T]} ∫_{0<x<1} |Y(t, x) − 1| v(x) dx dt is finite.
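When evaluating the representation (8) for a (locally) constant volatility value s, the second double integral simplifies: with the gamma Lévy density (2), the kernel is Y = exp(−βx(1/s − 1)), and ∫_0^∞ (Y − 1) v(x) dx reduces, by Frullani's integral, to α log s. The following sketch (illustrative parameter values, not from the paper) verifies this numerically:

```python
import math

alpha, beta = 2.0, 3.0   # illustrative values

def v(x):
    # Levy density of the gamma process
    return alpha * math.exp(-beta * x) / x

def Y(x, s):
    # likelihood kernel Y = v_sigma / v for a frozen volatility value s:
    # v_sigma(x) = v(x/s)/s = alpha * x**-1 * exp(-beta*x/s), so the ratio is
    return math.exp(-beta * x * (1.0 / s - 1.0))

def compensator_term(s, upper=100.0, n=100000):
    # midpoint rule for the integral of (Y - 1) v(x) dx over (0, infinity)
    h = upper / n
    return sum((Y(x, s) - 1.0) * v(x)
               for x in ((i + 0.5) * h for i in range(n))) * h

for s in (0.5, 1.0, 2.5):
    # by a Frullani-type computation the integral equals alpha * log(s)
    print(s, compensator_term(s), alpha * math.log(s))
```

For piecewise constant σ this makes the second integral in (8) a finite sum of terms α log(s_k) times interval lengths, which is convenient for likelihood-based inference.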

Weak solutions
We will use a variation of Proposition 2 to establish the existence of a weak solution to (1) under a growth condition on σ. The precise result is the following.

Theorem 4. Let σ be a measurable function on [0, ∞) that is bounded from below by a constant σ_0 > 0 and satisfies the linear growth condition σ(x) ≤ K(1 + x) for some constant K > 0. Then Equation (1) admits a weak solution that is unique in law.
Proof. This proof is inspired by Section 5.3.B of Karatzas & Shreve (1991), which treats a similar problem in a Brownian setting. Fix T > 0 and consider a probability space (Ω, F, Q) on which X is defined as a gamma process whose Lévy measure admits the density v given by (2). We choose Ω to be the Skorohod space, F = F^X = σ(X_t, t ≥ 0), and X the coordinate process; furthermore we use the filtration F^X = {F^X_t, t ≥ 0}. Define L by dL_t = (1/σ(X_{t−})) dX_t and L_0 = 0. Since σ is bounded from below and measurable, the process L is well defined. We again take Y(t, x) = v_σ(t, x)/v(x) and make a measure change on F^X_T, parallel to Proposition 2,

dP_T = Z_T dQ,  Z_T = E_T( ∫_0^· ∫_0^∞ (Y(t, x) − 1) (μ^X(dx, dt) − v(x) dx dt) ),

provided that P_T is a probability measure on F^X_T, which happens if E_Q Z_T = 1. By the arguments in the proof of Lemma 1, one obtains that under P_T the process L has third characteristic ν^{L,P}(dz, dt) = v^{L,P}(t, z) dz dt, with v^{L,P}(t, z) = v^{X,P}(t, zσ(X_{t−})) σ(X_{t−}), which is nothing else but v(z). This implies that under P_T, L is a gamma process on [0, T], and it also holds that dX_t = σ(X_{t−}) dL_t. We conclude that under P_T, X is a solution of the SDE, where L is a gamma process with Lévy density v.

What remains to be shown for the existence of a weak solution is that P_T is a probability measure on F^X_T. We use Theorem IV.3 of Lépingle & Mémin (1978); this will be done via a detour, as a direct application does not give the desired result. First we compute ∫_0^∞ (y(σ(x), z) log y(σ(x), z) − y(σ(x), z) + 1) v(z) dz, where y(σ, z) := v(z/σ)/(σ v(z)) = e^{−βz(1/σ−1)}; the integral equals α(σ(x) − 1 − log σ(x)). Consider, for some δ > 0 to be chosen later, T_n = nδT for n = 0, . . . , N, with T_N = T, so δ = 1/N; note that T_n − T_{n−1} = δT. Let Z^n, for n = 1, . . . , N, be the solution on (T_{n−1}, T_n] to

dZ^n_t = Z^n_{t−} ∫_0^∞ (Y(t, x) − 1) (μ^X(dx, dt) − v(x) dx dt),  Z^n_{T_{n−1}} = 1.  (9)

If the Z^n are martingales, not just local martingales, under Q w.r.t. the filtration F^X, then, writing Z_T = ∏_{n=1}^N Z^n_{T_n}, one obtains E_Q Z_T = 1, as can be seen by an induction argument (successively conditioning on F^X_{T_{N−1}}, . . . , F^X_{T_0}). To see that the Z^n are martingales, we use Theorem IV.3 of Lépingle & Mémin (1978), i.e.
the aim is to show

E_Q exp( ∫_{T_{n−1}}^{T_n} ∫_0^∞ (Y(t, x) log Y(t, x) − Y(t, x) + 1) v(x) dx dt ) < ∞.

In view of the computations above this amounts to showing that

E_Q exp( α ∫_{T_{n−1}}^{T_n} (σ(X_{t−}) − 1 − log σ(X_{t−})) dt ) < ∞.

Using that σ is bounded from below and satisfies the growth condition, and that X is an increasing process under Q, we see that the integrand above is bounded from above by αK X_{t−} + C′, where C′ is another positive constant, so it is sufficient to prove that E_Q exp(δαKT X_T) < ∞. But, under Q, X_T has a Gamma(αT, β) distribution, so the latter expectation, calculated as an integral w.r.t. the gamma distribution, is seen to be finite if δ < β/(αKT), equivalently N > αKT/β. For such a choice of N we obtain that the Z^n are martingales and hence E_Q Z_T = 1. As a consequence, P_T is a probability measure on (Ω, F^X_T) for every T > 0. In fact the P_T form a consistent family of probability measures, and hence there exists a probability measure P on F such that P|_{F^X_T} = P_T, see Lemma 18.18 in Kallenberg (2002). This shows that a weak solution exists on the entire time interval [0, ∞).

Finally, we turn to uniqueness in law of a weak solution. Consider two possible weak solutions X^i, or rather (X^i, L^i) for i = 1, 2, on an interval [0, T], defined on their own filtered probability spaces (Ω^i, F^i, F^i, P^i). Consider the changes of measure dP̃^i = Z^i_T dP^i, where Z^i_T is the Doléans exponential as in (6), but now with the kernel ỹ(σ(X^i_{t−}), x) := v(x)/v_σ(t, x) in place of Y(t, x). Assume for the moment that the Z^i_T have expectation one under P^i, so that the P̃^i are probability measures on F^i_T, equivalent to the P^i. By the arguments used earlier in this proof, under P̃^i the processes X^i are gamma processes with parameters α, β. Hence the distributions of the X^i, i = 1, 2, are identical under the probability measures P̃^i. Consider then samples X^i(n) = (X^i_{t_1}, . . . , X^i_{t_n}), 0 ≤ t_1 ≤ · · · ≤ t_n ≤ T, and Borel sets B of R^n. Then

P^i(X^i(n) ∈ B) = Ẽ^i [ (Z^i_T)^{−1} 1_{{X^i(n) ∈ B}} ].  (10)

On the right-hand side of Equation (10), all random quantities are defined in terms of X^i, hence the expectations in (10) are the same for i = 1, 2.
This shows that the finite-dimensional distributions of X^1 and X^2 are identical, and hence the laws of X^1 and X^2 coincide as well. It is left to show that the Z^i_T have expectation one under P^i. We follow the same path as above and first compute F(x) := ∫_0^∞ (ỹ(σ(x), z) log ỹ(σ(x), z) − ỹ(σ(x), z) + 1) (v(z/σ(x))/σ(x)) dz. Thereto we consider f(σ) = ∫_0^∞ (ỹ(σ, z) log ỹ(σ, z) − ỹ(σ, z) + 1) (v(z/σ)/σ) dz. It turns out that f(σ) = α(1/σ − 1 + log σ), which is the integral computed earlier in this proof evaluated at 1/σ, and hence F(x) = α(1/σ(x) − 1 + log σ(x)). From here we continue by solving the SDE for Z^i on the intervals (T_{n−1}, T_n], similarly to (9), with T_n = nδT, resulting in processes Z^{i,n} that are martingales with Z^{i,n}_{T_{n−1}} = 1. To show the martingale property of the Z^{i,n}, we again use Theorem IV.3 of Lépingle & Mémin (1978), i.e. we show

E_{P^i} exp( ∫_{T_{n−1}}^{T_n} F(X^i_{t−}) dt ) < ∞.

Here the integrand is bounded by α(1/σ_0 − 1 + log K + log(1 + X^i_T)). Hence, for a constant C depending on T, the exponential is less than or equal to C(1 + X^i_T)^{δαT}, and we have to show, for a well-chosen δ > 0, that E_{P^i}(1 + X^i_T)^{δαT} < ∞, equivalently E_{P^i}(X^i_T)^{δαT} < ∞. The conditions of Proposition 4.1 in Klebaner & Liptser (2014) (on their operator L_s(x_{s−})) are satisfied by the linear growth condition on σ, and as a result one has E_{P^i}(X^i_T)^2 < ∞. Therefore we take δ < 2/(αT).
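Two computations in the proof above lend themselves to quick numerical verification: the value f(s) = α(1/s − 1 + log s) of the Lépingle–Mémin integral, and the finiteness of E exp(s X_T) for X_T ∼ Gamma(αT, β) precisely when s < β, which drives the choice of δ. The following sketch (illustrative parameter values, not from the paper) checks both with the standard library:

```python
import math
import random
import statistics

alpha, beta, T = 2.0, 3.0, 1.0   # illustrative values

def f_numeric(s, upper=100.0, n=100000):
    # midpoint rule for int_0^inf (ytil*log(ytil) - ytil + 1) * v(z/s)/s dz,
    # with ytil(s, z) = v(z)/(v(z/s)/s) = exp(beta*z*(1/s - 1))
    c = beta * (1.0 / s - 1.0)
    h = upper / n
    total = 0.0
    for i in range(n):
        z = (i + 0.5) * h
        ytil = math.exp(c * z)
        vs = alpha * math.exp(-beta * z / s) / z   # this is v(z/s)/s
        total += (ytil * c * z - ytil + 1.0) * vs
    return total * h

for s in (0.5, 2.0):
    print(s, f_numeric(s), alpha * (1.0 / s - 1.0 + math.log(s)))

# Moment generating function of X_T ~ Gamma(alpha*T, rate beta):
# E exp(s*X_T) = (beta/(beta - s))**(alpha*T), finite only for s < beta.
random.seed(6)
s_exp = 1.0
mc = statistics.fmean(math.exp(s_exp * random.gammavariate(alpha * T, 1.0 / beta))
                      for _ in range(100000))
print(mc, (beta / (beta - s_exp)) ** (alpha * T))
```

The Monte Carlo estimate matches the closed-form moment generating function for s_exp < β; for s_exp ≥ β the expectation is infinite, which is exactly why δ has to be taken small in the proof.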