The McKean stochastic game driven by a spectrally negative Lévy process

We consider the stochastic game analogue of McKean's optimal stopping problem when the underlying source of randomness is a spectrally negative Lévy process. Compared to the solution for linear Brownian motion given in Kyprianou (2004) one finds two new phenomena: firstly, the breakdown of smooth fit, and secondly, the 'thickening' of the stopping domain of one of the players from a singleton to an interval, at least in the case that there is no Gaussian component.


1 Introduction.
Let X = {X_t : t ≥ 0} be a Lévy process defined on a filtered probability space (Ω, F, F, P), where F := {F_t : t ≥ 0} is the filtration generated by X which is naturally enlarged (cf. Definition 1.3.38 of Bichteler (2002)). Write T_{0,∞} for the family of stopping times with respect to F. For x ∈ R denote by P_x the law of X when it is started at x and write simply P_0 = P. Accordingly we shall write E_x and E for the associated expectation operators. In this paper we shall assume throughout that X is spectrally negative, meaning here that it has no positive jumps and that it is not the negative of a subordinator. It is well known that the latter allows us to talk about the Laplace exponent ψ(θ) := log E[e^{θX_1}] for θ ≥ 0. In general one may write

ψ(θ) = aθ + (σ^2/2)θ^2 + ∫_{(−∞,0)} (e^{θy} − 1 − θy 1_{{y > −1}}) Π(dy),

where a ∈ R, σ^2 ≥ 0 and where the jump measure Π of X has zero mass on [0, ∞) and satisfies ∫_{(−∞,0)} (1 ∧ y^2) Π(dy) < ∞.

This paper is concerned with stochastic games in the sense of, for example, Dynkin (1969), Cvitanić and Karatzas (1996) and Kifer (2000). We are principally interested in showing, under certain assumptions, the existence of a pair of stopping times τ^* and σ^* in T_{0,∞} such that

M_x(τ, σ^*) ≤ M_x(τ^*, σ^*) ≤ M_x(τ^*, σ)    (3)

for all x ∈ R and all stopping times τ, σ ∈ T_{0,∞}, where

M_x(τ, σ) = E_x[e^{−rτ}(K − e^{X_τ})^+ 1_{{τ ≤ σ}} + e^{−rσ}((K − e^{X_σ})^+ + δ) 1_{{σ < τ}}]

and K, δ > 0. When this happens we shall refer to the pair (τ^*, σ^*) as a stochastic saddle point (also known as a Nash equilibrium, cf. Ekström and Peskir (2006)) and we shall refer to V(x) = M_x(τ^*, σ^*) as the value of the game (3). Moreover we shall refer to the triple (τ^*, σ^*, V) as a solution to the stochastic game (3). Another objective is to be able to say something constructive about the nature of the stopping times τ^* and σ^* as well as the function V.

The assumptions we shall make are that the parameter r satisfies

0 ≤ ψ(1) ≤ r and r > 0.    (4)

Note that the assumption that r > 0 conveniently means that the gain in the expectations in (3) is well defined and equal to zero on the event {σ = τ = ∞}.
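To make the standing assumption (4) concrete, the following sketch (all parameters are illustrative and not taken from the paper) computes ψ for a simple spectrally negative jump-diffusion and locates Φ(q), the largest root of ψ(θ) = q, by bisection:

```python
import math

# A concrete spectrally negative example (illustrative only): X_t equals
# a*t + sigma*B_t minus a compound Poisson process with rate lam and
# Exp(mu)-distributed downward jumps.  Then
#   psi(theta) = a*theta + (sigma^2/2)*theta^2 + lam*(mu/(mu + theta) - 1).
A, SIGMA, LAM, MU = 1.0, 1.0, 2.0, 3.0

def psi(theta):
    """Laplace exponent psi(theta) = log E[exp(theta*X_1)] for theta >= 0."""
    return A * theta + 0.5 * SIGMA ** 2 * theta ** 2 + LAM * (MU / (MU + theta) - 1.0)

def Phi(q):
    """Largest root Phi(q) of psi(theta) = q, located by bisection."""
    lo, hi = 0.0, 1.0
    while psi(hi) < q:          # psi is convex and tends to infinity
        hi *= 2.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if psi(mid) < q:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

With these parameters ψ(1) = 1, so assumption (4) amounts to choosing r ≥ 1; the boundary case r = ψ(1) corresponds to the risk-neutral regime discussed below.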
In Section 10 at the end of this paper we shall make some remarks on the case that r = 0 and ψ(1) > 0.
When ψ(1) = r > 0 the stochastic game (3) can be understood to characterise the risk-neutral price of a so-called game option in a simple market consisting of a risky asset whose value is given by {e^{X_t} : t ≥ 0} and a riskless asset which grows at rate r (cf. Kifer (2000)). The latter game option is an American-type contract with infinite horizon which offers the holder the right, but not the obligation, to claim (K − e^{X_σ})^+ at any stopping time σ ∈ T_{0,∞}; in addition, the contract gives the writer the right, but not the obligation, to force a payment of (K − e^{X_τ})^+ + δ at any stopping time τ ∈ T_{0,∞}. This paper does not, however, discuss the financial consequences of the mathematical object (3) per se.
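The payoff structure just described is easy to encode. The helper below (a hypothetical illustration, not part of the paper) evaluates the realised discounted transfer from writer to holder for one path and one pair of decision times, mirroring the definition of M_x(τ, σ):

```python
import math

def game_payoff(path_at, tau, sigma, K, delta, r):
    """Discounted payoff to the holder for one realised path, following the
    definition of M_x(tau, sigma): if the holder exercises first
    (tau <= sigma) she receives (K - e^{X_tau})^+; if the writer cancels
    first (sigma < tau) he pays (K - e^{X_sigma})^+ plus the penalty delta."""
    if tau <= sigma:
        return math.exp(-r * tau) * max(K - math.exp(path_at(tau)), 0.0)
    return math.exp(-r * sigma) * (max(K - math.exp(path_at(sigma)), 0.0) + delta)
```

Averaging this quantity over simulated paths of X would give a Monte Carlo estimate of M_x(τ, σ) for any fixed pair of threshold strategies.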
The stochastic game (3) is closely related to the McKean optimal stopping problem

U(x) = sup_{τ ∈ T_{0,∞}} E_x[e^{−rτ}(K − e^{X_τ})^+],    (5)

which, when r = ψ(1), characterises the value of a perpetual American put option (cf. McKean (1965)). Indeed, should it be the case that the stochastic saddle point in (3) is achieved when σ = ∞, then U = V. Thanks to a plethora of research papers on the latter topic it is known that an optimal stopping strategy for (5) is then

τ^* = inf{t > 0 : X_t < k^*} with e^{k^*} = K E[e^{X̲_{e_r}}],

where X̲_t = inf_{s≤t} X_s and e_r is an exponentially distributed random variable with parameter r which is independent of X. We refer to Chan (2004) and Mordecki (2002), who handled specifically the case that X is spectrally negative and the case that X is a general Lévy process, respectively. The stochastic game (3) may therefore be thought of as a natural extension of the McKean optimal stopping problem and we henceforth refer to it as the McKean stochastic game.
Despite the fact that a solution to the stochastic game (3) has been explicitly characterised for the case that X is a linear Brownian motion in Kyprianou (2004), it turns out that working with spectrally negative Lévy processes, as we do here, is a much more difficult problem. Naturally, this is the consequence of the introduction of jumps, which necessitates the use of more complicated potential and stochastic analysis, as well as being the cause of a more complicated optimal strategy for particular types of spectrally negative Lévy processes thanks to the possibility of passing barriers by jumping over them. Indeed the analysis performed in this paper leaves open a number of finer issues concerning the exact characterisation of the solution, in particular when a Gaussian component is present. In that case, it appears that a considerably more subtle analysis is necessary to take account of how the strategies of the sup-player and inf-player (who are looking for a maximising τ^* and minimising σ^* in (3), respectively) will depend on the 'size' of the jumps compared to the Gaussian coefficient. This is left for further study and in this respect, the current work may be seen as a first treatment of the topic. The case of two-sided jumps is also an open issue and we refer to Remark 8 later in the text for some discussion on the additional difficulties that arise. Finally we refer the reader to Gapeev and Kühn (2005) and Baurdoux and Kyprianou (2008) for other examples of stochastic games driven by Lévy processes.
2 Solutions to the McKean stochastic game.
The conclusions of Ekström and Peskir (2006) guarantee that a solution to the McKean stochastic game exists, but tell us nothing of the nature of the value function. Below, in Theorems 2, 3 and 4, we give a qualitative and quantitative exposition of the solution to (3) under the assumption (4). Before doing so we need to give a brief reminder of a class of special functions which appear commonly in connection with the study of spectrally negative Lévy processes, and indeed in connection with the description below of the McKean stochastic game.

For each q ≥ 0 we introduce the functions W^{(q)} : R → [0, ∞), which are known to satisfy, for all x ∈ R and a ≥ 0,

E_x[e^{−qτ^+_a} 1_{{τ^+_a < τ^−_0}}] = W^{(q)}(x)/W^{(q)}(a),

where τ^+_a := inf{t > 0 : X_t > a} and τ^−_0 := inf{t > 0 : X_t < 0} (cf. Chapter 8 of Kyprianou (2006)). In particular it is evident that W^{(q)}(x) = 0 for all x < 0 and further, it is known that on (0, ∞) the function W^{(q)} is almost everywhere differentiable, is right-continuous at zero and satisfies

∫_0^∞ e^{−βx} W^{(q)}(x) dx = 1/(ψ(β) − q) for β > Φ(q),

where Φ(q) is the largest root of the equation ψ(θ) = q (of which there are at most two). For convenience we shall write W in place of W^{(0)}. Associated to the functions W^{(q)} are the functions Z^{(q)} : R → [1, ∞) defined by

Z^{(q)}(x) = 1 + q ∫_0^x W^{(q)}(y) dy

for q ≥ 0. Together the functions W^{(q)} and Z^{(q)} are collectively known as scale functions and appear in almost all fluctuation identities for spectrally negative Lévy processes. For example it is also known that, for all x ∈ R and a, q ≥ 0,

E_x[e^{−qτ^−_0} 1_{{τ^−_0 < τ^+_a}}] = Z^{(q)}(x) − Z^{(q)}(a) W^{(q)}(x)/W^{(q)}(a)

and

E_x[e^{−qτ^−_0} 1_{{τ^−_0 < ∞}}] = Z^{(q)}(x) − (q/Φ(q)) W^{(q)}(x),

where q/Φ(q) is to be understood in the limiting sense ψ′(0) ∨ 0 when q = 0.
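As a concrete illustration of the definitions above, for standard Brownian motion the scale functions are available in closed form, and the relation Z^{(q)}(x) = 1 + q∫_0^x W^{(q)}(y) dy can be checked numerically (this is a standard textbook example, not a computation taken from this paper):

```python
import math

# For Brownian motion with psi(theta) = sigma^2*theta^2/2 one has, with
# omega = sqrt(2q)/sigma and x >= 0,
#   W^{(q)}(x) = (2/(sigma^2*omega)) * sinh(omega*x),
#   Z^{(q)}(x) = cosh(omega*x).
SIGMA, Q = 1.3, 0.7
OMEGA = math.sqrt(2.0 * Q) / SIGMA

def W(x):
    return 0.0 if x < 0 else (2.0 / (SIGMA ** 2 * OMEGA)) * math.sinh(OMEGA * x)

def Z(x):
    return 1.0 if x < 0 else math.cosh(OMEGA * x)

def Z_from_W(x, n=20000):
    """Recover Z^{(q)}(x) = 1 + q * int_0^x W^{(q)}(y) dy by the trapezoidal rule."""
    h = x / n
    s = 0.5 * (W(0.0) + W(x)) + sum(W(i * h) for i in range(1, n))
    return 1.0 + Q * h * s
```

The same quadrature check applies verbatim to any process for which W^{(q)} is computable, e.g. by numerical Laplace inversion.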
If we assume that the jump measure of X has no atoms when X has bounded variation, then it is known from the existing literature (cf. Kyprianou et al. (2008) and Doney (2005)) that W^{(q)} ∈ C^1(0, ∞) and hence Z^{(q)} ∈ C^2(0, ∞); further, if X has a Gaussian component they both belong to C^2(0, ∞). For computational convenience we shall proceed with the above assumption on X. It is also known that if X has bounded variation with drift d, then W^{(q)}(0) = 1/d and otherwise W^{(q)}(0) = 0. (Here and in the sequel we take the canonical representation of a bounded variation spectrally negative Lévy process X_t = dt − S_t for t ≥ 0, where {S_t : t ≥ 0} is a driftless subordinator and d is a strictly positive constant which is referred to as the drift.)

Consider the exponential change of measure

dP^1/dP |_{F_t} = e^{X_t − ψ(1)t}.

Under P^1, the process X is still a spectrally negative Lévy process and we mark its Laplace exponent and scale functions with the subscript 1. It holds that

ψ_1(λ) = ψ(λ + 1) − ψ(1)

for λ ≥ 0 and, by taking Laplace transforms, we find

W_1^{(q)}(x) = e^{−x} W^{(q+ψ(1))}(x)

for q ≥ 0. The reader is otherwise referred to Chapter VII of Bertoin (1996) or Chapter 8 of Kyprianou (2006) for a general overview of scale functions of spectrally negative Lévy processes.
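The Laplace-transform relation between W_1^{(q)} and W^{(q+ψ(1))} can be sanity-checked in the Brownian case, where both sides are available in closed form (an illustrative computation: with ψ(θ) = θ^2/2 the tilted process is a Brownian motion with drift 1):

```python
import math

SIGMA = 1.0
PSI1 = 0.5 * SIGMA ** 2   # psi(1) for driftless BM, psi(theta) = sigma^2*theta^2/2

def W_bm_drift(x, a, q, sigma=SIGMA):
    """Closed-form W^{(q)} for psi(b) = sigma^2*b^2/2 + a*b, obtained by
    Laplace inversion of 1/(psi(b) - q) via the two real roots b+ and b-."""
    disc = math.sqrt(a * a + 2.0 * sigma ** 2 * q)
    bp, bm = (-a + disc) / sigma ** 2, (-a - disc) / sigma ** 2
    return (2.0 / sigma ** 2) * (math.exp(bp * x) - math.exp(bm * x)) / (bp - bm)

# Under P^1 the process is BM with drift sigma^2, since
# psi_1(b) = psi(b + 1) - psi(1) = sigma^2*b^2/2 + sigma^2*b.
def W1(x, q):
    return W_bm_drift(x, SIGMA ** 2, q)
```

A short algebraic computation confirms that e^{−x} W^{(q+ψ(1))}(x) and W_1^{(q)}(x) agree exactly here, which is what the identity above predicts.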
For comparison with the main results in Theorems 2, 3 and 4 below we give the solution to the McKean optimal stopping problem as it appears in Chan (2004) (see also Mordecki (2002)).
We return now to the solution of the McKean stochastic game and present our main results in terms of scale functions.

Theorem 2. (i) If δ ≥ U(log K), then a stochastic saddle point is given by τ^* in Theorem 1 and σ^* = ∞, in which case V = U.
(ii) If δ < U(log K), a stochastic saddle point is given by the pair

τ^* = inf{t > 0 : X_t < x^*} and σ^* = inf{t > 0 : X_t ∈ [log K, y^*]},

where x^* uniquely solves (13) and satisfies x^* > k^* (the optimal level of the corresponding McKean optimal stopping problem in Theorem 1), and y^* ∈ [log K, z^*], where z^* is the unique solution to (14).

The next theorem gives partial information on the value of y^*. Unfortunately, we are unable to give a complete characterisation of y^*.
The question whether y * = log K is more difficult to answer when the Gaussian component of X is strictly positive and we refer to Section 8 for a discussion on this case.
For practical purposes, one would also like to be able to characterise y^* as the unique solution to some functional equation. Recall that the value function V is said to have smooth pasting at a boundary point of the stopping region whenever it is differentiable there. Similarly, continuous pasting at a boundary point of the stopping region is said to occur whenever V is continuous there. Experience in the theory of optimal stopping shows that the position of an optimal threshold often follows as a consequence of a continuous or smooth pasting condition. See for example Boyarchenko and Levendorskii (2002), Chan (2004), Shiryaev (2000, 2002), Gapeev (2002), Kyprianou (2005) and Surya (2007). In this case, despite the fact that we are able to make decisive statements about pasting of the value function onto the upper and lower gain functions (see Theorem 4 below), the desired characterisation of y^* has not been achieved (note however the discussion following Theorem 4).
Our last main result gives information concerning the analytical shape of the value function V. In particular we address the issue of smooth and continuous pasting at x^* and y^*. Define the function j : R → R by (15), where α is given by (16); the latter is to be understood in the limiting sense, i.e. α = e^{x^*}ψ′(1) − Kψ(1) when r = ψ(1).
Theorem 4. For the McKean stochastic game under the assumption (4), when δ < U (log K), V is continuous everywhere. In particular

Moreover
(i) If X has unbounded variation, then there is smooth pasting at x * . Further, there is smooth pasting at y * if and only if y * > log K.
(ii) If X has bounded variation, then there is no smooth pasting at x * and no smooth pasting at y * .
Note that it is in fact possible to show that V is everywhere differentiable except possibly at x * , y * and log K. This is clear from the expression for V (x) on x ∈ (−∞, y * ). However, when y * > log K, for the region x ∈ (y * , ∞) things are less clear without an expression for V . None the less, it is possible with the help of potential densities, which themselves can be written in terms of the scale functions, to write down a formula for V on the aforementioned region. This formula is rather convoluted involving several terms and simply for the sake of brevity we refrain from including it here. It may be possible to use this formula and the pasting conditions to find y * , though it seems difficult to show that a solution to the resulting functional equation is unique.
There are a number of remarks which are worth making about the above three theorems.
Theorem 2 (i) follows as a consequence of the same reasoning that one sees for the case that X is a linear Brownian motion in Kyprianou (2004). That is to say, when δ ≥ U (log K) it follows that U (x) ≤ (K − e x ) + + δ showing that the inf-player would not be behaving optimally by stopping in a finite time. The proof of this fact is virtually identical to the proof given in Kyprianou (2004) with the help of the Verification Lemma given in the next section and so we leave this part of the proof of Theorem 2 (i) as an exercise.
We shall henceforth assume that U (log K) > δ.
For the McKean stochastic game when X is a linear Brownian motion and r = ψ(1) > 0 it was shown in Kyprianou (2004) that, under the above assumption that δ is small enough, a saddle point is given by the first passage time of X below x^* for the sup-player and the first hitting time of {log K} for the inf-player, where x^* is some value strictly less than log K. Also it was shown there that the solution is convex and that there is smooth pasting at x^*. For spectrally negative Lévy processes in general, Theorems 2-4 show that considerably different behaviour occurs.
Firstly, as was already found in numerous papers concerning optimal stopping problems driven by spectrally one-sided Lévy processes (cf. Alili and Kyprianou (2005), Chan (2004) and Avram et al. (2004)), smooth pasting breaks down when the Lévy process is of bounded variation. Secondly and more interestingly, the different form of the stopping region for the inf-player can be understood intuitively by the following reasoning. In the linear Brownian motion case there is no possibility for the process started at x > log K to enter (−∞, log K] without hitting {log K}. The positive discount rate r and the constant pay-off on [log K, ∞) imply that in this case it does not make sense for the inf-player to stop anywhere on (log K, ∞). However, when X has negative jumps there is a positive probability of jumping below points. When X starts at a value which is slightly greater than log K, there is the danger (for the inf-player) that X jumps to a large negative value, which could in principle lead to a relatively large pay-off to the sup-player. The trade-off between this fact and the positive discount rate r when there is no Gaussian component results in the interval-hitting strategy for the inf-player indicated by Theorem 3. Note also in that case that the fact that Π(−∞, log K − y^*) > 0 implies that when X_0 > y^* the Lévy process may still jump over the stopping interval of the inf-player and possibly stop the game (either immediately or with further movement of X) by entering (−∞, x^*). This is also a new feature of the optimal strategies compared to the linear Brownian motion case as, in the latter context, when X_0 > y^* the sup-player will never exercise before the inf-player.
The paper continues with the following structure. In the next section we present a set of sufficient conditions to check for a solution to the McKean stochastic game. Following that, in Sections 4 and 5 we present a description of the candidate solution in the regions x ≤ log K and x > log K. To some extent, the solution may be de-coupled into these two regions thanks to the spectral negativity of the underlying process. In Section 6 we show that the previously described candidate solution fulfils the sufficient conditions outlined in Section 3 thus proving Theorem 2. Finally in Sections 7 and 9 we give the proofs of Theorems 3 and 4 respectively.

3 Verification technique.
To keep calculations brief and to avoid repetition of ideas, it is worth stating up front the fundamental technique which leads to establishing the existence and hence characterisation of a solution. This comes in the form of the following Verification Lemma.
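Before stating the lemma, it may help to record the shape of the inequality it is designed to produce. Writing G_1(x) = (K − e^x)^+ and G_2(x) = (K − e^x)^+ + δ for the lower and upper gains read off from the definition of M_x(τ, σ) in the introduction, the two halves of the proof combine to give (a schematic summary, not a display reproduced from the paper):

```latex
% Schematic conclusion of the Verification Lemma: unilateral deviations
% from (tau^*, sigma^*) are never profitable.
\begin{align*}
M_x(\tau,\sigma^*)
  &= \mathbb{E}_x\!\left[ e^{-r\tau}G_1(X_\tau)\mathbf{1}_{\{\tau\le\sigma^*\}}
       + e^{-r\sigma^*}G_2(X_{\sigma^*})\mathbf{1}_{\{\sigma^*<\tau\}} \right]
   \;\le\; V(x), \\
V(x) &\le \mathbb{E}_x\!\left[ e^{-r\tau^*}G_1(X_{\tau^*})\mathbf{1}_{\{\tau^*\le\sigma\}}
       + e^{-r\sigma}G_2(X_{\sigma})\mathbf{1}_{\{\sigma<\tau^*\}} \right]
   = M_x(\tau^*,\sigma),
\end{align*}
```

that is, M_x(τ, σ^*) ≤ V(x) = M_x(τ^*, σ^*) ≤ M_x(τ^*, σ) for all τ, σ ∈ T_{0,∞}, which is precisely the saddle-point property.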
Lemma 5 (Verification Lemma). Consider the stochastic game (3) with r > 0. Suppose that τ^* and σ^* are both in T_{0,∞} and that conditions (i)-(vi) hold. Then the triple (τ^*, σ^*, V) is a solution to (3).

Proof. Note that the assumption r > 0 implies that Θ^r_{∞,∞} = 0. From the supermartingale property (vi), Doob's Optional Stopping Theorem, (iv) and (i) we know that, for any τ ∈ T_{0,∞} and t ≥ 0,

It follows from Fatou's Lemma, by taking t ↑ ∞, that

Now using (v), Doob's Optional Stopping Theorem, (iii) and (ii), we have, for any σ ∈ T_{0,∞} and t ≥ 0,

Taking limits as t ↑ ∞ and applying the Dominated Convergence Theorem, taking note of the non-negativity of G, we have

and hence (τ^*, σ^*) is a saddle point for (3).

4 Candidature on x ≤ log K.
Here we describe analytically a proposed solution when X 0 ∈ (−∞, log K].
where x^* > k^* uniquely solves (13). Then w has the following properties on (−∞, log K]; in particular, (iv) the right derivative at x^* is computed accordingly, where in the bounded variation case d is the drift term.

Proof. First note that the left hand side of (13), which we denote by h(x), is a decreasing continuous function of x. Further, h(log K) = 0 and so we need to show that h(−∞) > δ/K in order to deduce that x^* is uniquely defined. From Theorem 1 we have that U(log K) = Kh(k^*), where k^* < log K is defined in Theorem 1. Hence, by monotonicity and the assumption on the size of δ, h(−∞) ≥ h(k^*) = U(log K)/K > δ/K. It also follows immediately from this observation that x^* > k^*.
Next, denote by w(x) the right hand side of (19). The remainder of the proof consists of verifying that w fulfils conditions (i) to (ix) of Lemma 6 for x ∈ (−∞, log K]. We label the proof in parts accordingly. (i) Using (6) and (7) and the exponential change of measure (10), we find that for where the last equality follows from the definition of x * in (13).
(ii) By definition,

For any x ≤ log K, the integrand on the right hand side above is positive and hence w(x) ≥ K − e^x for x ≤ log K.
(iii) We also see that

(iv) The derivative of w at x ∈ (−∞, log K]\{x^*} is given by

Taking limits as x ↓ x^* gives the stated result. In taking the latter limit, one needs to take account of the fact that, for all q ≥ 0, W^{(q)}(0) = 0 if X has unbounded variation and otherwise it is equal to 1/d, where d is the underlying drift.
(v) Taking the expression for the value function U of the McKean optimal stopping problem (5), recall that x^* > k^*, where k^* is the optimal level for (5). It is also known that U is convex and decreasing in x. Hence for any x > k^*

(vi) and (vii) These two conditions follow by inspection, using (13) in the case of (vi) and the fact that Z^{(q)}(x) = 1 for all x ≤ 0 in the case of (vii).
(viii) From (i), (vi) and (vii) we deduce from the strong Markov property that for X 0 = x ≤ log K we have that and now by the tower property of conditional expectation we observe the required martingale property.
(ix) Noting that w is C^2 on (x^*, log K), a standard computation involving Itô's formula shows that (Γ − r)w = 0 on (x^*, log K), thanks to the just established martingale property. For x < x^* we have that

where Γ is the infinitesimal generator of X. Despite the conclusion of part (iv) for the case of bounded variation, the function w is smooth enough to allow one to use the change of variable formula in the case of bounded variation, and the classical Itô formula in the case of unbounded variation (cf. Kyprianou and Surya (2007) and Protter (2004)), to show that, in light of the above inequality, {e^{−r(t∧τ^+_{log K})} w(X_{t∧τ^+_{log K}}) : t ≥ 0} is a P_x-supermartingale for x ≤ log K.
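For orientation, Γ above denotes the infinitesimal generator of X. For sufficiently smooth f, and with the same triplet (a, σ^2, Π) that parametrises ψ in the introduction, it takes the textbook form (recorded here for the reader's convenience; this display is not reproduced from the paper):

```latex
(\Gamma f)(x) \;=\; a f'(x) \;+\; \frac{\sigma^{2}}{2}\, f''(x)
  \;+\; \int_{(-\infty,0)} \Big( f(x+y) - f(x) - y\, f'(x)\,
        \mathbf{1}_{\{y>-1\}} \Big)\, \Pi(\mathrm{d}y),
```

so that Γe_θ = ψ(θ)e_θ for e_θ(x) = e^{θx}, consistent with the Lévy-Khintchine form of ψ.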

5 Candidature on x > log K.
In this section we give an analytical and probabilistic description of a proposed solution when X 0 > log K.
where w_δ(x) = w(x), as given in (19), for x ≤ log K and w_δ(x) = δ for x > log K. Then v has the following properties:

(vi) if y^* = log K then necessarily X has a Gaussian component and, for x > log K, v(x) = j(x), where the function j was defined in (15);

(vii) y^* ≤ z^*, where z^* was defined as the unique solution of (14).

Proof. (i) Note that when x < log K we have P_x(τ^−_{log K} = 0) = 1, so that v(x) = w(x).
(ii) and (iii) These are trivial to verify in light of (i).
(iv) Denote X^*_t = X_{t∧τ^−_{log K}} for all t ≥ 0. Since w_δ is a continuous function and since X^* is quasi-left-continuous, we can deduce that v is upper semicontinuous. Furthermore, w_δ is bounded and continuous, so we can apply a variant of Corollary 2.9 on p. 46 of Peskir and Shiryaev (2006) (see Theorem 3 on p. 127 of Shiryaev (1978)) to conclude that there exists an optimal stopping time, say σ^*, which without loss of generality we may assume to be no greater than τ^−_{log K}. By considering the stopping time σ = ∞ we see from its definition that v(x) < K E_x[e^{−rτ^−_{log K}}] and hence lim_{x↑∞} v(x) = 0. From the latter we deduce that the set C defined by

is non-empty. The upper semicontinuity of v implies that this set is open. Corollary 2.9 of Peskir and Shiryaev (2006) also implies that it is optimal to take σ^* as the time of first entry into the set R\C.
In what follows, if ζ is a stopping time for X we shall write ζ(x) to show the dependence of the stopping time on the value of X 0 = x. For x > y > log K we have that τ − log K (x) ≥ τ − log K (y) and thus, also appealing to the definition of v as an infimum, where in the second inequality we have used that σ * (y) ≤ τ − log K (y) ≤ τ − log K (x) and from Lemma 6 (v), w δ is a decreasing function.
(v) The fact that v is non-increasing and that C, defined above, is open implies that there exists a y * ≥ log K such that C = (y * , ∞). In that case σ * = τ − y * .
(vi) By the dynamic programming principle, taking into account the fact that w_δ = w for x ≤ log K,

It is shown in the Appendix that the right hand side above is equal to the right hand side of (20). Now assume that X has no Gaussian component and suppose, for contradiction, that y^* = log K. If X has bounded variation with drift d, it is known that W^{(r)}(0) = 1/d and hence this would imply that v′(log K+) > 0,

where α was given in (16). Note that we have used the fact that, since k^* < x^* < log K, where k^* is the optimal crossing boundary in the McKean optimal stopping problem (cf. Theorem 1), we have that α > 0. Taking account of part (iii) of this Lemma we thus have a contradiction. When X has unbounded variation and no Gaussian component, we deduce from (9) that v′(log K+) = ∞, which again leads to a violation of the upper bound in (iii).
(vii) First we need to prove that z^* in (14) is well defined and that y^* ≤ z^*. Denote by k(z) the left hand side of (14). We start by showing that k(log K+) > δ/K. As we have remarked in the proof of (iv),

where the equality follows from (8). We use (vi) to show that v(log K+) = δ. When X has no Gaussian component this follows from the fact that y^* > log K, and when X has a Gaussian component it follows from continuity of the function j. It thus holds that k(log K+) > δ/K. Note that k is a continuous function on (log K, ∞). From (8) it follows that k decreases on (log K, ∞) and that lim_{z→∞} k(z) = 0. Hence there exists a unique z^* ∈ (log K, ∞) such that k(z^*) = δ/K, which implies y^* ≤ z^*.

6 Proof of Theorem 2.

Recall from earlier remarks that the first part of the theorem can be proved in the same way as was dealt with for the case of Brownian motion in Kyprianou (2004). We therefore concentrate on the second part of the theorem.
We piece together the conclusions of Lemmas 6 and 7 in order to check the conditions of the Verification Lemma.
In particular we consider the candidate triple (V^*, τ^*, σ^*) generated by the choices τ^* = inf{t > 0 : X_t < x^*} and σ^* = inf{t > 0 : X_t ∈ [log K, y^*]}, where the constants x^* and y^* are given in Lemmas 6 and 7 respectively. Note also that, thanks to the fact that X is spectrally negative, we have that

Note now that conditions (i)-(iv) of Lemma 5 are automatically satisfied and it remains to establish the supermartingale and submartingale conditions in (v) and (vi). For the former we note that if the initial value x ∈ [x^*, log K), then spectral negativity and Lemma 6 (ix) give the required supermartingale property. If on the other hand x > y^*, then since, by Lemma 7 (ix), e^{−rt}v(X_t) is a martingale up to the stopping time τ^−_{y^*} and since, by Lemma 6 (ix), the subsequent discounted process is a supermartingale, the required supermartingale property follows. For the submartingale property, it is more convenient to break the proof into the cases y^* = log K and y^* > log K.
For the case y^* > log K, pick two arbitrary points log K < a < b < y^*. Now note from the proof of Lemma 6 (ix) that (Γ − r)v(x) = 0 on x ∈ (x^*, log K). Further, it is easy to verify the analogous statement on (log K, a). The submartingale property follows by piecewise consideration of the path of X and the following two facts. Firstly, thanks to the above remarks on the value of (Γ − r)v(x) and an application of the Itô-Meyer-Tanaka formula (cf. Protter (2004)),

To deal with the case y^* = log K, recall from Lemma 7 (vi) that necessarily X has a Gaussian component. As mentioned in Section 2, this is a sufficient condition to guarantee that both scale functions are twice continuously differentiable on (0, ∞). An application of Itô's formula, together with the martingale properties mentioned in Lemmas 6 (viii) and 7 (ix), shows that (Γ − r)v = 0 on (x^*, log K) ∪ (log K, ∞). Using this fact together with the Itô-Meyer-Tanaka formula (cf. Protter (2004)), the submartingale property of {e^{−r(t∧τ^−_{x^*})} v(X_{t∧τ^−_{x^*}}) : t ≥ 0} follows from its semimartingale decomposition, which now takes the form

where L^{log K} is the semimartingale local time of X at log K and M is a martingale. Specifically, the integral term is non-negative, as one may check from (9) and the expression given for v. Note that we have used the fact that α, defined in (16), is strictly positive. The latter fact was established in the proof of Lemma 7 (vi).
Remark 8. It is clear from the above proof that we have made heavy use of the fact that X has jumps in only one direction. In particular, this has enabled us to split the problem into two auxiliary problems: we have solved the problem independently on (−∞, log K] and then used this solution to construct the solution on (log K, ∞). In the case that X has jumps in both directions, the analysis breaks down at a number of points. Fundamentally however, since X may pass a fixed level from below by jumping over it, one is no longer able to solve the stochastic game on (−∞, log K] without knowing the solution on (log K, ∞). None the less, Ekström and Peskir (2006) still provide us with the existence of a stochastic saddle point.
7 y * > log K when X has no Gaussian component: proof of Theorem 3.
(i) It follows immediately from Lemma 7 that when y * = log K we necessarily have that X has a Gaussian component.
Next we show that Π(−∞, log K − y * ) > 0. Suppose that X 0 ∈ (log K, y * ), then we know that {e −rt V (X t ) : t ≤ τ − log K } is a submartingale and that V (x) = δ on [log K, y * ]. We deduce from Itô's formula (see for example Theorem 36 of Protter (2004)) that in the semi-martingale decomposition of the aforementioned submartingale, the drift term must be non-negative and hence for any x ∈ (log K, y * ) Therefore, since V is decreasing on (−∞, log K), we find that Π(−∞, log K −y * ) > 0 as required.
8 Remarks on y^* for the case that X has a Gaussian component.
In the previous section we showed that y^* > log K whenever X has no Gaussian component. In this section we show that when X has a Gaussian component the distinction between y^* = log K and y^* > log K is a more subtle issue. This distinction is important since, in the next section, we will show that when X is of unbounded variation the value function is differentiable at y^* if and only if y^* > log K.

Lemma 7 (vi) implies that y^* = log K exactly when the value function is equal to j(x). Reviewing the calculations in the previous sections, one sees that it is the upper bound condition (ii) of Lemma 5 which may fail for j; all other conditions are verifiable in the same way as before. A sufficient condition for Lemma 5 (ii) to hold is that j is a decreasing function, in which case of course y^* = log K. Whenever X has no Gaussian component, the function j violates this upper bound condition, as was shown in the proof of Lemma 7 (vi). This is caused by the behaviour of the scale function W at zero: when the Gaussian component of X is zero, either W is discontinuous at zero or it has infinite right derivative there.

Assume now that X has a Gaussian component. Then the behaviour of the scale function at zero implies that j(log K+) = δ and that j has a finite derivative on (log K, ∞). From these properties alone we are not able to deduce anything about the value of y^*. In fact, as we show next, whether the upper bound condition is satisfied by j depends on the sign of j′(log K+). Whenever j′(log K+) > 0, it must hold that y^* > log K, since otherwise Lemma 7 (iii) and (vi) lead to a contradiction. We now show that a sufficient condition for j to be decreasing, and hence for y^* to be equal to log K, is that j′(log K+) < 0. Recall that j(x) = w(x) on (−∞, log K]. From Lemma 6 (v) and j′(log K+) < 0 we deduce the existence of some γ > 0 such that j is decreasing on (−∞, log K + γ]. Next let log K + γ ≤ x < y ≤ x + γ.

By the strong Markov property we deduce that j(y) − j(x) < 0, which implies that j is a decreasing function on R.
Remark 9. Note that when X is a Brownian motion and r = ψ(1) = σ^2/2, the discussion above agrees with Theorem 2 in Kyprianou (2004). Indeed, in this case the scale functions are given by W^{(r)}(x) = (2/σ^2) sinh(x) and Z^{(r)}(x) = cosh(x) for x ≥ 0, and thus j′(log K+) = −δ < 0.
We conclude that a stochastic saddle point is indeed given by

Also, for the other cases r ≠ σ^2/2, similar calculations lead to the results found in Kyprianou (2004).
Unfortunately, there are rather few spectrally negative Lévy processes for which the scale functions are known in terms of elementary or special functions. Hence, in general, numerical analysis is needed to check whether the condition j′(log K+) < 0 holds.
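As a small example of such a numerical check, in the Brownian setting of Remark 9 one can verify a claimed closed form of W^{(r)} directly against the defining Laplace transform ∫_0^∞ e^{−βx}W^{(r)}(x)dx = 1/(ψ(β) − r) for β > Φ(r) = 1 (illustrative code; the closed form below is the standard one for Brownian motion):

```python
import math

SIGMA = 0.8
R = 0.5 * SIGMA ** 2          # the regime r = psi(1) = sigma^2/2 of Remark 9

def psi(beta):
    return 0.5 * SIGMA ** 2 * beta ** 2

def W_r(x):
    """Candidate closed form W^{(r)}(x) = (2/sigma^2)*sinh(x) for BM, r = sigma^2/2."""
    return (2.0 / SIGMA ** 2) * math.sinh(x)

def laplace_W(beta, upper=40.0, n=200000):
    """Evaluate int_0^upper e^{-beta*x} W_r(x) dx by the trapezoidal rule;
    for beta > Phi(r) = 1 this should match 1/(psi(beta) - r)."""
    h = upper / n
    f = lambda x: math.exp(-beta * x) * W_r(x)
    return h * (0.5 * (f(0.0) + f(upper)) + sum(f(i * h) for i in range(1, n)))
```

The same transform test applies to any numerically computed scale function before it is fed into a check on the sign of j′(log K+).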
9 Pasting properties at y * : proof of Theorem 4.
Using notation as in the proofs of Lemmas 5 and 7, it follows from monotonicity of V and the definition of (τ^*, σ^*) as a saddle point that, for

and continuity of V follows from continuity of G and dominated convergence.
It has already been shown in Section 4 whilst proving Theorem 2 that there is smooth pasting at x * if and only if X has unbounded variation. It remains then to establish the smoothness of V at y * .
(i) Suppose first that X is of unbounded variation. When X has a Gaussian component, recall from (22) that showing that there can be no smooth fit at y * when y * = log K.
Next suppose that y * > log K. Our aim is to show that V ′ (y * +) = 0. In order to do this we shall need two auxiliary results.
Lemma 10. Suppose X is of unbounded variation and let c < 0. Then

Proof. Let c < 0. Define, for ε > 0,

and let X̄_t = sup_{s≤t} X_s. Let

Note that A_ε happens if and only if there exists a left endpoint g of an excursion such that (i) L_g < ε (at time g the process must not have exceeded ε), (ii) ǫ_{L_h} < X̄_h + ε for all h < g in the support of dL (during excursions before time g the process must stay above −ε), and (iii) ǫ_{L_g}(ρ_{X̄_g+ε}) > X̄_g − c (the first exit time below −ε must be the first exit time below c).
Hence we can use the compensation formula (with g and h denoting left end points of excursion intervals) to deduce that Using the fact that X L −1 t = t we find for ε small enough It is known however (cf. Millar (1977)) that since X has unbounded variation lim t↓0 ǫ(ρ t ) = 0 which in turn implies that lim ε↓0 P(A ε ) ε = 0 as required.
Lemma 11. For any spectrally negative Lévy process Proof. First suppose that X does not drift to −∞, i.e. Φ(0) = 0. In that case, it is known that W is proportional to the renewal function of the descending ladder height process. The result is then immediate from the known sub-additivity of renewal functions (cf. Chapter III of Bertoin (1996)). In the case that Φ(0) > 0 (i.e. X drifts to −∞), it is known that where W * plays the role of the scale function for X conditioned to drift to +∞ (which is again a spectrally negative Lévy process) and the result follows.
We are now ready to conclude the proof of part (i) of Theorem 4. To this end suppose y^* > log K and X is of unbounded variation. Since V = δ on [log K, y^*] it suffices to show that the right derivative of V exists at y^* and that V′(y^*+) = 0. Since V(y^*) = δ and since V(x) ≤ δ for any x > log K we have for any x > y^*

To show that V′(y^*+) = 0 we must thus show that lim inf

In order to achieve this, define for 0 < ε < y^* − log K

τ^*_ε = inf{t ≥ 0 : X_t ∉ [y^* − ε, y^* + ε]}.

Furthermore, let τ^+ := inf{t ≥ 0 : X_t > y^* + ε} and τ^− := inf{t ≥ 0 : X_t < y^* − ε}.
We have that for small enough ε, {e −r(t∧τε) V (X t∧τε )} t≥0 is a P y * -submartingale, hence by the optional sampling theorem Furthermore we use Lemma 10 and the fact that V is bounded by K to deduce The two expectations on the right hand side of (26) can be evaluated in terms of scale functions with the help of (6) and (7). Also, because X is of unbounded variation, it is known that W (q) (0) = 0. Combining these facts, (25), (26) and using Lemma 11 we find This concludes the proof of part (i) of Theorem 4.
(ii) Suppose now that X has bounded variation. We know that necessarily X has no Gaussian component and hence, by Theorem 3, that y^* > log K. We see from (21) and continuity of V that for ε > 0

(V(y^* + ε) − δ)/ε ≤ E[e^{−rτ^−_{y^*}(y^*)} (w_δ(X_{τ^−_{y^*}(y^*)} + y^* + ε) − w_δ(X_{τ^−_{y^*}(y^*)} + y^*))/ε],

where, as before, we are working under the measure P and have indicated the dependence of stopping times on the initial position of X. Now recalling that w_δ is a non-increasing function and is equal to V on (−∞, log K), we have further, with the help of Theorem 3, dominated convergence and the fact that V is decreasing on (−∞, log K), that

lim sup_{ε↓0} (V(y^* + ε) − δ)/ε ≤ E[e^{−rτ^−_{y^*}(y^*)} V′(X_{τ^−_{y^*}(y^*)} + y^*) 1_{{X_{τ^−_{y^*}(y^*)} + y^* < log K}}] < 0.
Hence there is continuous fit but no smooth fit at y * in this case.
Dealing with the case r = 0 and ψ(1) > 0 first requires the problem to be formulated in a slightly different way, as a careful inspection of the proof of the Verification Lemma reveals a problem with the inequality in (18). Suppose again that U(x) is the solution to (5), but now under the regime r = 0 and ψ(1) > 0. It is not difficult to see that U(x) = K when X does not drift to ∞ and otherwise is given by the expression in Theorem 1 with r = 0. When δ is smaller than U(log K), we claim the saddle point is given by τ^* = τ^−_{x^*} and σ^* = inf{t : X_t ≥ log K}, where x^* is the unique solution to

Kψ(1) ∫_0^{log K − x} e^{−y} W(y) dy = δ.
(Note that here we use the assumption ψ(1) > 0.) For x ≤ log K the value function is given by

K − e^x + Kψ(1) ∫_0^{x − x^*} e^{−y} W(y) dy.
Indeed it is possible to mildly adapt the statement and proof of the Verification Lemma to show that these choices of τ * and σ * constitute a saddle point. The reader is referred to Section 10 of Chapter 5 of Baurdoux (2007) for a more detailed study of the r = 0 case.
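As an illustration of the r = 0 characterisation of x^*, the following sketch solves Kψ(1)∫_0^{log K − x} e^{−y}W(y)dy = δ by bisection for a Brownian motion with drift, for which W is known in closed form (all parameters are illustrative and not taken from the paper):

```python
import math

# X is Brownian motion with drift A > 0, so psi(1) = A + sigma^2/2 > 0 and
# the q = 0 scale function is W(y) = (1 - e^{-2Ay/sigma^2})/A for y >= 0.
A, SIGMA, K, DELTA = 0.5, 1.0, 2.0, 0.3
PSI1 = A + 0.5 * SIGMA ** 2

def W(y):
    return (1.0 - math.exp(-2.0 * A * y / SIGMA ** 2)) / A

def lhs(x, n=20000):
    """Left hand side K*psi(1)*int_0^{log K - x} e^{-y} W(y) dy (trapezoid rule)."""
    b = math.log(K) - x
    h = b / n
    f = lambda y: math.exp(-y) * W(y)
    s = 0.5 * (f(0.0) + f(b)) + sum(f(i * h) for i in range(1, n))
    return K * PSI1 * h * s

def solve_x_star():
    """Bisect for x*: lhs decreases continuously from about K to 0 at log K."""
    lo, hi = math.log(K) - 50.0, math.log(K)
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if lhs(mid) > DELTA:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Here the left hand side decreases from K (as x → −∞, since ∫_0^∞ e^{−y}W(y)dy = 1/ψ(1)) to 0 at x = log K, so a unique root exists whenever δ < K.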

Appendix.
Our objective here is to show that

We first need a preliminary lemma. Recall that T_K = inf{t > 0 : X_t = log K}.

Lemma 12.
For all x ∈ R the following two identities hold and