Optimal prediction for positive self-similar Markov processes

This paper addresses the question of predicting when a positive self-similar Markov process X attains its pathwise global supremum or infimum before hitting zero for the first time (if it does at all). This problem has been studied in Glover et al. (2013) under the assumption that X is a positive transient diffusion. We extend their result to the class of positive self-similar Markov processes by establishing a link to Baurdoux and van Schaik (2013), where the same question is studied for a Lévy process drifting to minus infinity. The connection to Baurdoux and van Schaik (2013) relies on the so-called Lamperti transformation, which links the class of positive self-similar Markov processes with that of Lévy processes. Our approach reveals that the results in Glover et al. (2013) for Bessel processes can also be seen as a consequence of self-similarity.


Introduction
In keeping with the development of a family of prediction problems for Brownian motion and, more generally, Lévy processes, cf. [10,8,9,3] to name but a few, we address the question of predicting the time when a positive self-similar Markov process (pssMp) attains its pathwise global supremum or infimum. Aside from the embedding of this problem within the general theory of optimal stopping, the interest and novelty in the current setting is to show that, in contrast to the approach in [9] for self-similar diffusions, the problem can be reduced via time-change to a more homogeneous setting.
We shall spend some time setting up notation in order to formulate the problem rigorously. A positive self-similar Markov process X = {X t : t ≥ 0} with self-similarity index α > 0 is a [0, ∞)-valued standard Markov process defined on a filtered probability space, where X = {X t : t ≥ 0} denotes the running minimum process X t := inf 0≤u≤t X u , t ≥ 0. Again, by definition of Ĉ, the set {0 ≤ t < ζ : X t = X ζ− } is a singleton; see Subsection 2.3 for details. If X has positive jumps, the word "attains" is used in a loose sense analogously to above. Stopping as close as possible to Θ̂ then leads to solving the optimal stopping problem where x > 0 and the infimum is taken over a certain set of G-stopping times τ which is specified later. Our interest in (1) and (2) was raised by [9], where the authors solve (2) under the assumption that X is a diffusion in (0, ∞) such that lim t→∞ X t = ∞. Their result states that the optimal stopping time is given by (3), where f * is the minimal solution to a certain differential equation. In particular, when X is a d-dimensional Bessel process with d > 2, it is shown that f * (z) = λ * 1 z, z ≥ 0, for some constant λ * 1 > 1 which is a root of some polynomial. Since the class of Bessel processes with d > 2 belongs to the class of pssMps with α = 2, it is possible to express the optimal stopping time (3) (up to a time-change) in terms of the underlying Lamperti representation ξ (of X) reflected at its infimum. This raises the suspicion that the simple form of (3) in the Bessel case could be a consequence of the self-similarity of X and suggests that (2) (or an analogue of it) can also be solved for the class of pssMps.
In this paper we show that the speculations in the previous paragraph are indeed true. Specifically, we prove that the optimal stopping times in (1) and (2) are of the simple form τ * = inf{t ≥ 0 : X t ≤ K * sup 0≤u≤t X u } and τ̂ * = inf{t ≥ 0 : X t ≥ K̂ * inf 0≤u≤t X u } for some constants 0 < K * < 1 and K̂ * > 1, respectively. As alluded to above, the key step is to reduce (1) and (2) to a one-dimensional problem with the help of the so-called Lamperti transformation [14], which links pssMps to Lévy processes.
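To make the form of these stopping rules concrete, the following small sketch (not taken from the paper) applies a rule of the type τ* = inf{t ≥ 0 : X_t ≤ K · sup_{u≤t} X_u} to a simulated 3-dimensional Bessel path, a pssMp with α = 2; the threshold K = 0.8 is an arbitrary illustrative choice, not the optimal K*.

```python
import math
import random

def first_drawdown_time(path, K):
    """First index t with path[t] <= K * max(path[0..t]), or None."""
    running_max = path[0]
    for t, x in enumerate(path):
        running_max = max(running_max, x)
        if x <= K * running_max:
            return t
    return None

# Toy path: a 3-dimensional Bessel process started at 1, simulated as the norm
# of a 3-dimensional Brownian motion.
random.seed(42)
dt = 1e-3
pos = [1.0, 0.0, 0.0]
path = [1.0]
for _ in range(20000):
    pos = [c + random.gauss(0.0, math.sqrt(dt)) for c in pos]
    path.append(math.sqrt(sum(c * c for c in pos)))

K = 0.8  # illustrative threshold only; the optimal K* solves an explicit equation
tau = first_drawdown_time(path, K)
```

The rule stops the first time the path has fallen to a fixed fraction of its running maximum, which is exactly the structure claimed for τ*.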

Killed Lévy processes
A process ξ with values in R ∪ {−∞} is called a Lévy process killed at rate q ≥ 0 if ξ starts at 0, has stationary and independent increments and k := inf{t > 0 : ξ t = −∞} has an exponential distribution with parameter q ≥ 0. In the case q = 0 it is understood that P[k = ∞] = 1, that is, there is no killing. It is well known that a Lévy process ξ killed at rate q is characterised by its Lévy triplet (γ, σ, Π) and the killing rate q, where σ ≥ 0, γ ∈ R and Π is a measure on R satisfying the condition ∫_R (1 ∧ x 2 ) Π(dx) < ∞. The Laplace exponent of ξ under P is defined by ψ(θ) := log(E[e θξ 1 ]) for any θ ∈ R such that ψ(θ) < ∞. It is known that (cf. Theorem 3.6 in [13]), for θ ∈ R, and in this case we have In particular, if ξ is of bounded variation, (5) may be written as Finally, for any killed Lévy process (starting at zero) and any v ∈ R such that ψ(v) < ∞, the process e^{vξ t − ψ(v)t}, t ≥ 0, is a P-martingale. Hence, we may further define the family of measures P v via dP v /dP| F t = e^{vξ t − ψ(v)t}, where {F t : t ≥ 0} is the natural filtration associated to ξ. In particular, under P v the process ξ is a Lévy process with Laplace exponent ψ v (θ) = ψ(θ + v) − ψ(v) and infinite lifetime, that is, P v [k = ∞] = 1; cf. Theorem 3.9 in [13].
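For orientation, the Laplace exponent and the tilted exponent ψ_v(θ) = ψ(θ + v) − ψ(v) can be checked directly in the simplest example, Brownian motion with drift; this is an illustrative sketch with arbitrary parameters, not part of the paper's argument.

```python
import math
import random

# xi Brownian motion with drift: Levy triplet (gamma, sigma, 0), no killing (q = 0),
# so psi(theta) = gamma*theta + sigma^2*theta^2/2.
gamma, sigma = -0.3, 1.0

def psi(theta):
    return gamma * theta + 0.5 * sigma ** 2 * theta ** 2

def psi_tilted(theta, v):
    # Laplace exponent under the tilted measure P_v: psi_v(theta) = psi(theta + v) - psi(v)
    return psi(theta + v) - psi(v)

# Under P_v the process is again Brownian motion, now with drift gamma + sigma^2 * v.
v = 0.7
tilt_ok = all(
    abs(psi_tilted(th, v)
        - ((gamma + sigma ** 2 * v) * th + 0.5 * sigma ** 2 * th ** 2)) < 1e-12
    for th in (0.1, 0.5, 1.3)
)

# Monte Carlo check that E[exp(theta * xi_1)] = exp(psi(theta)).
random.seed(1)
theta, n = 0.5, 200_000
mc = sum(math.exp(theta * random.gauss(gamma, sigma)) for _ in range(n)) / n
```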

Scale functions
We suppose throughout this subsection that ξ is an unkilled spectrally negative Lévy process (q = 0). Spectrally negative means that Π is concentrated on (−∞, 0), so that ξ only exhibits downward jumps. Observe that in this case the Laplace exponent ψ(θ) exists at least for θ ≥ 0 by (4). Its right-inverse is defined by Φ(θ) := sup{s ≥ 0 : ψ(s) = θ}. A special family of functions associated with unkilled spectrally negative Lévy processes is that of scale functions (cf. [12,13]), which are defined as follows. For η ≥ 0, the η-scale function W (η) : R → [0, ∞) is the unique function whose restriction to (0, ∞) is continuous and has Laplace transform ∫_0^∞ e^{−θx} W (η) (x) dx = 1/(ψ(θ) − η) for θ > Φ(η), and which is identically zero for x ≤ 0. Further, we shall use the notation W (η) v (x) to mean the η-scale function associated to ξ under P v . For fixed x ≥ 0, it is also possible to analytically extend η → W (η) (x) to η ∈ C. A useful relation linking the different scale functions is (cf. Lemma 3.7 in [12]) W (η) (x) = e^{vx} W (η−ψ(v)) v (x) for v ∈ R such that ψ(v) < ∞ and η ∈ C. Moreover, the following regularity properties of scale functions are known; cf. Sections 2.3 and 3.1 of [12].
Smoothness: For all η ≥ 0, W (η) is Lebesgue-almost everywhere differentiable. Moreover, W (η) ∈ C 1 ((0, ∞)) if ξ is of bounded variation and Π has no atoms, and W (η) ∈ C 1 ((0, ∞)) if ξ is of unbounded variation and σ = 0. Continuity at the origin: For all η ≥ 0, W (η) (0) = 1/d if ξ is of bounded variation with drift d, and W (η) (0) = 0 if ξ is of unbounded variation. Right-derivative at the origin: For all q ≥ 0, W (q)′ (0+) = (q + Π(−∞, 0))/d 2 if ξ is of bounded variation, and W (q)′ (0+) = 2/σ 2 otherwise, where we understand the second case to be +∞ when σ = 0. The second scale function is Z (η) v and is defined as follows. For v ∈ R such that ψ(v) < ∞ and η ≥ 0 we define Z (η) v (x) := 1 + η ∫_0^x W (η) v (z) dz, x ∈ R.
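The Laplace-transform characterisation can be made concrete for standard Brownian motion, whose q-scale function is known in closed form: W (q) (x) = √(2/q) sinh(√(2q) x). The following sketch (illustrative parameters, not from the paper) verifies the transform identity numerically.

```python
import math

# Standard Brownian motion (sigma = 1, no drift, no jumps): psi(theta) = theta^2/2.
q = 1.0
a = math.sqrt(2.0 * q)

def psi(theta):
    return 0.5 * theta * theta

def W_q(x):
    # closed-form q-scale function of standard Brownian motion
    return 0.0 if x <= 0 else math.sqrt(2.0 / q) * math.sinh(a * x)

def laplace_transform(f, beta, upper=30.0, n=200_000):
    # plain trapezoidal approximation of \int_0^upper e^{-beta x} f(x) dx
    h = upper / n
    total = 0.5 * (f(0.0) + math.exp(-beta * upper) * f(upper))
    for i in range(1, n):
        x = i * h
        total += math.exp(-beta * x) * f(x)
    return total * h

beta = 3.0  # must exceed Phi(q) = sqrt(2q) for the transform to converge
lhs = laplace_transform(W_q, beta)
rhs = 1.0 / (psi(beta) - q)
```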

The Lamperti transformation
Lamperti's main result in [14] asserts that any pssMp X may, up to its first hitting time of zero, be expressed as the exponential of a time-changed Lévy process. We now explain this in more detail. Instead of writing (X, P x ) to denote the positive self-similar Markov process starting at x > 0, we shall sometimes write X (x) = {X (x) t : t ≥ 0} in order to emphasise the dependency of the path on its initial value. Similarly, we write ζ (x) for the corresponding first hitting time of zero, and define ϕ(t) := ∫_0^t (X (1) u ) −α du for t < ζ (1) . It will be important to understand the behaviour of ϕ(x −α ζ (x) −) := lim t↑ζ (x) ϕ(x −α t). In particular, note that the distribution of ϕ(x −α ζ (x) −) does not depend on x > 0. Moreover, the following result is known; see Lemma 13.3 in [13].
As the distribution of ϕ(x −α ζ (x) −) is independent of x, we rename it e. When e = ∞ almost surely, we interpret it as an exponential random variable with parameter zero. Now define the right-inverse of ϕ by I u := inf{t ≥ 0 : ϕ(t) > u}, u ≥ 0. Moreover, define the process ξ := {ξ t : t ≥ 0} by setting, for x > 0, ξ t := log(X (x) x α I t /x) for t < e, and ξ t := −∞ for t ≥ e (in the case that e < ∞). The main result in [14] states that a pssMp is nothing other than a space- and time-changed killed Lévy process.
Proposition 2.2 (Lamperti transformation). If X (x) , x > 0, is a positive self-similar Markov process with index of self-similarity α > 0, then it can be represented as X (x) t = x exp(ξ ϕ(x −α t) ), 0 ≤ t < ζ (x) , and either (i) ζ (x) = ∞ almost surely for all x > 0, in which case ξ is an unkilled Lévy process satisfying lim sup t↑∞ ξ t = ∞; or (ii) ζ (x) < ∞ and X (x) ζ (x) − = 0 almost surely for all x > 0, in which case ξ is an unkilled Lévy process satisfying lim t↑∞ ξ t = −∞; or (iii) ζ (x) < ∞ and X (x) ζ (x) − > 0 almost surely for all x > 0, in which case ξ is a killed Lévy process.
Also note that we may identify ζ (x) = x α ∫_0^e e^{αξ u} du. The version of the Lamperti transformation we have just given is Theorem 13.1 in [13], where one can also find a proof of it.
We conclude this subsection by explaining why the sets {t ≥ 0 : X t = X ζ } and {0 ≤ t < ζ : X t = X ζ− } mentioned in the introduction are singletons. By definition of C and Ĉ it is clear that both sets are non-empty, but they could potentially contain more than one element. In view of the Lamperti transformation, we see that the aforementioned sets contain only a single element provided the same is true for the sets {t ≥ 0 : ξ t = sup 0≤u<∞ ξ u } and {0 ≤ t < e : ξ t = inf 0≤u<t ξ u }, where ξ is the underlying Lamperti representation of X in C and Ĉ respectively. However, it is known that local extrema (and hence global extrema) of Lévy processes are distinct except for compound Poisson processes; see Proposition 4 in [4]. But for X in C or Ĉ the Lamperti representation can never be a compound Poisson process, and thus the assertion follows.


Reformulation of problems and main results

Predicting the time at which the maximum is attained
Suppose throughout this subsection that X ∈ C with parameter of self-similarity α > 0 and let ξ be its Lamperti representation which is a spectrally negative Lévy process killed at some rate q ≥ 0 satisfying lim t↑∞ ξ t = −∞ whenever q = 0. For θ ≥ 0, let ψ(θ) be the Laplace exponent of ξ and φ(θ) = q + ψ(θ) the Laplace exponent of ξ unkilled. Denote by Φ the right-inverse of φ and note that Φ(q) > 0.
We begin our analysis with two steps that are almost identical to Lemmas 1 and 2 of [9]. For this reason, we omit the proof of the first of the two lemmas below and streamline the proof of the second to the particular case at hand.
Proof. For any G-stopping time τ with finite mean, following verbatim the proof of Lemma 2 in [9], we have by Fubini's theorem, Using the strong Markov property of X we obtain on {t < ζ}, Hence, using the Lamperti transformation we obtain for 0 < x ≤ s, Plugging this into (13) gives the result.
We are interested in minimising the expectation on the left-hand side of (12) over the set M of all integrable G-stopping times τ . The requirement that τ is integrable ensures that (12) is well defined. Taking into account the specific form of the right-hand side of (12), it turns out that, in providing a solution to (1), we need to restrict ourselves to the case where the underlying Lévy process in the Lamperti transform (and hence the pssMp) satisfies an additional condition. We therefore define the modified class C 1 := {X ∈ C : ψ(α) < 0}. The criterion ψ(α) < 0 is a technical one, which turns out to be equivalent to Θ having finite mean (see Theorem 3.5). Later on, in Section 7, we will provide examples where this condition can be checked.
Summing up, for X ∈ C 1 we are led to the optimal stopping problem (14), where the infimum is taken over all integrable G-stopping times τ . We are now in a position to state our first main result.
Theorem 3.3. Let X ∈ C 1 with index of self-similarity α > 0, in which case its Lamperti representation ξ is a spectrally negative Lévy process killed at rate q ≥ 0. Recall that φ is the Laplace exponent of ξ unkilled and Φ its right-inverse. Let W (·) (z) be the scale function associated with φ. Then the solution of (14) is given by (15).

Remark 3.4. The right-hand side of (15) is equal to zero unless ξ is of bounded variation; see (9).
The following theorem, also proved in Section 6, shows that the additional condition on ξ in the restricted class C 1 implies that Θ always has a finite mean.
Noting from the Lamperti transformation that ζ = ∫_0^e e^{αξ t} dt, one also sees that ψ(α) < 0 is necessary and sufficient for E x (ζ) < ∞ for all x > 0. Theorem 3.3 is a consequence of the analysis in Sections 4 and 5, and its proof is given in Section 6. An explicit example is provided in Section 7.
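The finiteness criterion can be traced through explicitly in the Brownian special case: by Fubini, E_1(ζ) = ∫_0^∞ E[e^{αξ t}] dt = ∫_0^∞ e^{ψ(α)t} dt, finite precisely when ψ(α) < 0 and then equal to −1/ψ(α). The sketch below (illustrative parameters, not from the paper) checks this for ξ a Brownian motion with drift.

```python
import math

# xi Brownian motion with drift gamma, no killing: psi(theta) = gamma*theta + theta^2/2.
gamma, alpha = -1.0, 0.8

def psi(theta):
    return gamma * theta + 0.5 * theta * theta

def truncated_integral(T, steps=100_000):
    # trapezoidal approximation of \int_0^T e^{psi(alpha)*t} dt
    h = T / steps
    s = 0.5 * (1.0 + math.exp(psi(alpha) * T))
    for i in range(1, steps):
        s += math.exp(psi(alpha) * i * h)
    return s * h

exact = -1.0 / psi(alpha)  # here psi(0.8) = -0.48, so exact = 1/0.48
```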
Lemma 3.6. For x > 0 and any G-stopping time τ with finite mean we have (16). The specific form of the right-hand side of (16) again shows how the problem reduces; here M is the set of all integrable G-stopping times τ . Similarly to the problem of predicting the maximum, in order to solve the problem of predicting the minimum we need to work in a more restrictive class of pssMp than Ĉ. To this end, let us define the modified class Ĉ 1 . For X ∈ Ĉ 1 , we are led to the optimal stopping problem (17), where the infimum is taken, depending on the case, over all G-stopping times τ or over all integrable G-stopping times τ . We can now state the analogue of Theorem 3.3.
Theorem 3.7. Assume that X ∈ Ĉ 1 with index of self-similarity α > 0, in which case the dual ξ̂ of the Lamperti representation of X is a spectrally negative Lévy process killed at rate q ≥ 0. Moreover, recall that φ̂ is the Laplace exponent of the dual ξ̂ unkilled and Φ̂ its right-inverse. Let Ŵ (·) (z) be the scale function associated with φ̂. Then the solution of (17) is given by v̂(x, i). This result is again a consequence of the analysis of Sections 4 and 5, and the analogue of Remark 3.4 applies here as well. An example, including the case when X is a d-dimensional Bessel process for d > 2, is provided in Section 7. As in Subsection 3.1, it is natural to ask when Θ̂ has finite mean. In this respect we have the following result.
Note also that, if q = 0 and X ∈ Ĉ, then the issue of whether E x (ζ) < ∞ is irrelevant since ζ = ∞ almost surely. On the other hand, when q > 0 and X ∈ Ĉ, again noting that ζ = x α ∫_0^e e^{αξ t} dt with e exponentially distributed with parameter q, the lifetime ζ is finite almost surely.


Reduction to a one-dimensional problem

The aim in what follows is to reduce (14) to a one-dimensional optimal stopping problem.

Reduction of problem (14)
We begin by reducing (14) to an optimal stopping problem in which X starts at x = 1. More precisely, the self-similarity of X implies that the process in (19) is equal in law to the process in (20). Note that the process in (19) is adapted to G, whereas the process in (20) is adapted to G̃ (x) . Consequently we obtain (21), where the first infimum is taken over G-stopping times τ and the second over G̃ (x) -stopping times τ ′ . Before we can continue with the reduction of (14), we need to introduce a new filtration H := {H t : t ≥ 0} derived from G. Recall that the process ϕ is right-continuous and adapted to G. Then I := {I u : u ≥ 0} is a right-continuous process which is strictly increasing on [0, ϕ(ζ (1) −)). In particular, I u is a G-stopping time for each u ≥ 0. We now use I u , u ≥ 0, to time-change the filtration G according to H u := G I u , u ≥ 0. By Lemma 7.3 in [11] it follows that H is right-continuous. Also observe that the Lamperti representation ξ is adapted to H. Finally, denote by M (x) 1 the set of all G̃ (x) -stopping times and by M 2 the set of all H-stopping times. As a final piece of notation before we formulate the main result of this subsection, define the measure P α by (22), where Φ and q are as at the beginning of this section, y = s/x, Y log(y) u := log(y) ∨ ξ̄ u − ξ u and ξ̄ u := sup 0≤t≤u ξ t for u ≥ 0. In particular, under P α the spectrally negative Lévy process ξ is not killed.
Proof. Using the fact that ϕ is strictly increasing on [0, ζ) together with the Lamperti transformation, we see that for τ ′ ∈ M (x) 1 , Next, note that ϕ ′ (t) = (X t ) −α = e −αξ ϕ(t) for t < ζ (1) . Hence, changing variables with u = ϕ(t) shows that the right-hand side of (25) is equal to As τ ′ ∈ M (x) 1 , it follows that ϕ((x −α τ ′ ) ∧ ζ) is an H-stopping time that is less than or equal to e, and hence we conclude that In other words, we have found a lower bound for v(x, s) in terms of an optimal stopping problem for the Lamperti representation ξ reflected at its maximum. Using Fubini's theorem and a change of measure according to (22) yields for ν ∈ M 2 , Finally, note that the Laplace exponent of ξ under P α is given by ψ α (θ) = ψ(θ + α) − ψ(α), θ ≥ 0. In particular, ψ α (0) = 0 and hence ξ is not killed under P α .
Despite the inequality in (24), this lemma puts us in a good enough position to deduce the solution of (14). To see why, suppose that the optimal stopping time for (24) is given by ν * = inf{u ≥ 0 : Y log(y) u ≥ k * } for some k * > 0. Additionally, setting K * := e −k * , define τ * := inf{t ≥ 0 : X t ≤ K * sup 0≤u≤t X u }. It then holds that the two value functions coincide, and thus τ * is optimal for (14). Hence it remains to show that the optimal stopping time for (24) is indeed of the assumed form. This is done in Section 5.
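The passage from ν* to τ* rests on a pathwise identity: with K* = e^{−k*}, the event {log(max/X) ≥ k*} is the same as {X ≤ K*·max}, so the two rules trigger at the same instant. A quick check on a simulated positive path (illustrative parameters; the time change does not affect which threshold is hit first):

```python
import math
import random

random.seed(3)
# A positive path: exponential of a Gaussian random walk.
X = [1.0]
for _ in range(5000):
    X.append(X[-1] * math.exp(random.gauss(0.0, 0.05)))

k_star = 0.35               # stand-in for the optimal level of the reflected process
K_star = math.exp(-k_star)  # K* = e^{-k*} as in the text

def first_hit_reflected(path, k):
    # first time log(running_max / X_t) reaches k
    run_max = path[0]
    for t, x in enumerate(path):
        run_max = max(run_max, x)
        if math.log(run_max / x) >= k:
            return t
    return None

def first_hit_fraction(path, K):
    # first time X_t falls to K times its running maximum
    run_max = path[0]
    for t, x in enumerate(path):
        run_max = max(run_max, x)
        if x <= K * run_max:
            return t
    return None

tau1 = first_hit_reflected(X, k_star)
tau2 = first_hit_fraction(X, K_star)
```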

Reduction of problem (17)
Analogously to the previous subsection, we want to reduce (17) to a one-dimensional optimal stopping problem. Let M̂ 1 and M̂ 2 denote the analogous sets of stopping times. Following the same line of reasoning as in Subsection 4.1, one may obtain the analogue of Lemma 4.1; see Lemma 4.2 below. The only difference is that we express everything in terms of the dual process ξ̂, so that we obtain a one-dimensional optimal stopping problem in (29) that is of the same type as in (24) (a one-dimensional optimal stopping problem for a spectrally negative Lévy process reflected at its supremum). The advantage of this is that once the one-dimensional problem is solved, we can deduce the solution for both (14) and (17). Moreover, the fact that (24) and (29) only differ by switching to the dual essentially says that the problem of predicting the time at which the maximum or minimum is attained is, at least on the level of Lamperti representations, essentially the same.
Analogously to Subsection 4.1, it follows that if the optimal stopping time for (29) is given by ν * = inf{t ≥ 0 : Y log(y) t ≥ k̂ * } for some k̂ * > 0, then τ̂ * := inf{t ≥ 0 : X t ≥ K̂ * inf 0≤u≤t X u } is optimal in (17), where K̂ * := e k̂ * . The remaining task is again to solve (29) and show that the optimal stopping time is indeed given by ν * . This is done in Section 5.

The one-dimensional optimal stopping problem
In this section we solve a separate optimal stopping problem which is set up in such a way that, once solved, it can be used to deduce the solutions of (24) and (29), and hence of (14) and (17) respectively. This section is self-contained and can be read independently of Sections 3 and 4. Therefore, for convenience, we will reuse some of the notation; there should be no confusion.

Setting and formulation of one-dimensional problem
Let us spend some time introducing the notation and formulating the problem. Suppose that Ξ = {Ξ t : t ≥ 0} is an (unkilled) spectrally negative Lévy process defined on a filtered probability space (Ω, F , F := {F t : t ≥ 0}, P̃) satisfying the natural conditions; cf. [6], Section 1.3, p. 39. For convenience we assume without loss of generality that (Ω, F ) = (R [0,∞) , B [0,∞) ), where B is the Borel σ-field on R. The coordinate process on (Ω, F ) is denoted by Y = {Y t : t ≥ 0}. Further, let q ≥ 0 and suppose that Ξ under P̃ is such that lim t↑∞ Ξ t = −∞ whenever q = 0. Also assume that the Lévy measure associated with Ξ has no atoms whenever Ξ is of bounded variation. This is a purely technical condition which ensures that the q-scale functions W (q) associated with Ξ are continuously differentiable on (0, ∞); see (8). Next, let β ∈ R \ {0} be such that Ẽ[e βΞ 1 ] < ∞. This condition is automatically satisfied if β > 0 due to the spectral negativity of Ξ and hence it is only an additional assumption when β < 0. The Laplace exponent is given by φ(θ) := log(Ẽ[e θΞ 1 ]) and its right-inverse is defined as Φ(q) := sup{θ ≥ 0 : φ(θ) = q}. In particular, note that Φ(q) > 0. Moreover, denote by P̃ β the measure obtained by the change of measure dP̃ β /dP̃| F t = e βΞ t −φ(β)t , t ≥ 0.
Finally, for y ≥ 0, let P β y be the law of the process y ∨ Ξ̄ t − Ξ t , t ≥ 0, under P̃ β , where Ξ̄ t := sup 0≤u≤t Ξ u . We are interested in the optimal stopping problem (30) for y ≥ 0 and (q, β) ∈ A, where A := {(q, β) : q > φ(β), or q = 0 and β < 0}, and the set M denotes the set of F-stopping times τ satisfying (31). Note that M is the set of all F-stopping times except when q = 0 and β < 0, in which case (31) is indeed a restriction because φ(β) > 0 due to the assumption that lim t↑∞ Ξ t = −∞.

Solution of one-dimensional problem
Given the underlying Markovian structure of (30), it is reasonable to look for an optimal stopping time of the form τ k := inf{t ≥ 0 : Y t ≥ k} for some k > 0. However, when q = 0 and β < 0, we need to check whether τ k ∈ M.
Lemma 5.1. Let k > 0. If q = 0 and β < 0 (and hence φ(β) > 0), it holds that τ k ∈ M.

Proof. Throughout this proof, let Ξ̄ t := sup 0≤u≤t Ξ u , t ≥ 0, and write τ k,y := inf{t ≥ 0 : y ∨ Ξ̄ t − Ξ t ≥ k} for y ≥ 0. If y ≥ k the assertion is clearly true, and hence suppose that y < k. Using the fact that β < 0 in the second inequality, we have It is shown in Theorem 1 in [2] that the expression on the right-hand side is finite.
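The process appearing in the lemma can be simulated directly: Y_t = y ∨ Ξ̄_t − Ξ_t is the path reflected at its running maximum, and τ_k is its first passage over a level k. The sketch below (illustrative parameters; Ξ taken to be a Brownian motion with drift, which is spectrally negative with no jumps) is not part of the proof.

```python
import math
import random

random.seed(11)
dt, n = 1e-3, 30_000
drift, sigma = -0.7, 1.0  # negative drift, so the reflected process tends to grow

y, k = 0.1, 0.5
Xi, Xi_max = 0.0, 0.0
Y = [max(y, 0.0)]  # Y_0 = y \vee 0 - 0 = y for y >= 0
for _ in range(n):
    Xi += random.gauss(drift * dt, sigma * math.sqrt(dt))
    Xi_max = max(Xi_max, Xi)
    Y.append(max(y, Xi_max) - Xi)

# First passage of the reflected process over the level k.
tau_k = next((t for t, v in enumerate(Y) if v >= k), None)
```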
The next question we address is what the value function associated with the stopping times τ k looks like. To this end, introduce the quantity V k . The next result gives an expression for V k in terms of scale functions.
Proof. Define for η ≥ 0 the functions Now recall from Theorem 8.11 in [13] that the density of the η-potential measure of Y upon leaving [0, k) under P β y is, for y, z ∈ [0, k], given by (33). Here and for the remainder of this section, unless otherwise stated, all derivatives of scale functions will be understood as the right limit of their densities with respect to Lebesgue measure. Using the expression in (33), we see that (34) holds for y ≥ 0. If (q, β) ∈ A is such that q > φ(β), the result follows by setting η = q − φ(β). Hence, the remaining case is when q = 0 and β < 0 (and hence φ(β) > 0). In this case, note that by Lemma 5.1 the relevant expectation is well defined for any w ∈ U := {z ∈ C : Re(z) > −φ(β)}. Now define for w ∈ U the functions g n and g. The functions g n are analytic in U since one can differentiate under the integral sign. Moreover, for w ∈ U we have an estimate whose right-hand side tends to zero as n ↑ ∞, which implies that g n converges uniformly to g in U.
Thus, Weierstrass' theorem shows that g is analytic in U.
Next, we deal with the right-hand side of (34). From the series representation of W (q) (x) provided in the proof of Lemma 3.6 in [12], it is possible to show (after some work) that the right-hand side of (34) is also analytic (on the whole of C). By the identity theorem it then follows that (34) holds for η ∈ U, in particular for real η such that η > −φ(β). Finally, to obtain the result for η = −φ(β), take limits on both sides of (34) and use dominated convergence on the left-hand side and analyticity on the right-hand side. This completes the proof.
Having this semi-explicit form for V k , the next step is to find the "good" threshold k > 0. This is done using the principle of smooth or continuous fit (cf. [16,17,18]) which suggests choosing k such that lim y↑k V ′ k (y) = 0 if Ξ is of unbounded variation and lim y↑k V k (y) = 0 if Ξ is of bounded variation. Note that, although the smooth or continuous fit condition is not necessarily part of the general theory of optimal stopping, it is imposed by the "rule of thumb" outlined in Section 7 of [1].
First assume that Ξ is of unbounded variation. In that case, we know that scale functions are continuously differentiable on (0, ∞). Using (7) and (9), it follows that Letting y tend to k yields Now note that by (7) and (10) we have Similarly, W (q−φ(β))′ β (k) = e −βk (W (q)′ (k) − βW (q) (k)) which is clearly positive if β < 0. If β > 0, this is still true because W (q)′ (z)/W (q) (z) > Φ(q) for z > 0 and Φ(q) > β. In view of (35), we are forced to conclude that Similarly, if Ξ is of bounded variation, we get and hence, using (7) and (9), we infer Summing up, irrespective of the path variation of Ξ, we expect the optimal k > 0 to solve (36) and therefore we need to investigate the equation more closely.
Proof. Using (7), it follows that for k > 0. If (q, β) ∈ A is such that β > 0, then Φ(q) > β and, using (7), The same is of course true if (q, β) ∈ A and β < 0. Additionally, it holds that lim k↑∞ h(k) > 0. Indeed, let z 0 > k 0 be such that f (z) ≥ 1/2 for z ≥ z 0 ; hence for k > z 0 , where in the last equality we have used (7). Again by (7), W (q) (k) = e Φ(q)k W Φ(q) (k), which together with the fact that Φ(q) > β implies that the right-hand side tends to infinity as k ↑ ∞. Combining this with the fact that h(k 0 ) ≤ 0 and the intermediate value theorem shows that there is a unique k * > k 0 such that h(k * ) = 0. This completes the proof.
We are now in a position to formulate our main result of this section.
Theorem 5.4. The solution to (30) is given by (38), with optimal stopping time τ k * , where k * is as in Lemma 5.3.
Proof. Let V be defined as the right-hand side of (38). It is enough to check the following conditions: (i) V (y) ≤ 0 for all y ≥ 0; (ii) the process e −(q−φ(β))t V (Y t ), t ≥ 0, is a P β y -submartingale for all y ≥ 0.
To see why these are sufficient conditions, note that (i) and (ii) together with Fatou's lemma in the second inequality and Doob's stopping theorem in the third inequality show that for τ ∈ M, Since these inequalities are all equalities for τ = τ k * the result follows. The remainder of this proof is devoted to checking conditions (i) and (ii).
Verification of condition (ii): The proof of this appeals to standard techniques and, hence, we only outline the main steps and omit the details.
As for a first step, one may use the Markov property to show that the process is a P β y -martingale for 0 < y < k * . Indeed, for t ≥ 0, the strong Markov property gives from which the desired martingale property follows.
As for a second step, use Doob's optional stopping theorem to deduce that for 0 < k < k * the process e −(q−φ(β))(t∧τ k ) V (Y t∧τ k ), t ≥ 0, is a P β y -martingale for 0 ≤ y < k. Using this in conjunction with the appropriate version of Itô's formula (cf. Theorem 71, Chapter IV of [19]) implies (39), where Γ̃ β is the generator of −Ξ under P̃ β . Finally, applying the appropriate version of Itô's formula one more time to the process e −(q−φ(β))t V (Y t ), t ≥ 0, and using (39) shows that this process is a P β y -submartingale for all y ≥ 0. This finishes the sketch of the proof of (ii).

Proofs of main results
Proof of Theorem 3.3. The result follows by Lemma 4.1 (and what was said just after it) and Theorem 5.4. Specifically, using Theorem 5.4 with Ξ equal to ξ unkilled, β = α (noting in particular that φ(α) < q by assumption), y = log(s/x) and then setting K * := e −k * gives the expression for v(x, s), where in the second equality we changed variables according to u = log(s/z). The expression for v(x, s) in the theorem now follows after an application of (7). As for the optimal constant K * , we see that K * satisfies the stated equation, and the proof is complete.
Proof of Theorem 3.5. Write Θ (x) in place of Θ to emphasise the dependency on the initial position X 0 = x > 0. Self-similarity, and in particular the Lamperti transform, implies that Θ (x) = x α ∫_0^G e^{αξ u} du, where G = sup{t > 0 : ξ t = ξ̄ e }. It follows that E x (Θ) < ∞ for all x > 0 if and only if E 1 (Θ) < ∞. Following standard excursion theory, cf. Chapter 6 of [13] or Chapter VI of [4], making particular use of the fact that the ladder height process of a (killed) spectrally negative Lévy process is a (killed) unit drift, we have (41), where the sum is taken over a Poisson point process of excursions of ξ̄ − ξ from zero, {(t, ǫ t ) : t ∈ I}, where I is the index set of the point process, ǫ t := {ǫ t (s) : s ≤ ς t } and ς t is the excursion length of ǫ t ; moreover, χ := inf{t > 0 : ς t = ∞} in the case that e = ∞ and, otherwise, χ := inf{t > 0 : ς t > e (t) }, where, for each excursion indexed by t > 0, e (t) is an independent copy of the exponential random variable e. Write n for the intensity measure of this Poisson point process of excursions. For the case of a spectrally negative Lévy process, it is well known that χ is exponentially distributed with parameter Φ(q). Note that Φ(q) is strictly positive if ξ drifts to −∞ or ψ(0) < 0, i.e. the process ξ is killed. If we write a = lim p→∞ Φ(p)/p, then it is also known (cf. the computations in Section 6.3 of [13]) that the second expectation on the right-hand side of (41) is equal to aE 1 [ · ], such that both sides are finite (resp. infinite) at the same time. We immediately see that E 1 [exp(αχ)] < ∞ if and only if Φ(q) > α, that is to say, if and only if ψ(α) < 0. Appealing to a method developed in [7] together with Theorem VI.20 and Lemma VII.7 in [4], it can be checked that (43) holds, where P Φ(q) is the law of ξ under P 1 after the change of measure given in (6) with v = Φ(q), and V Φ(q) and V̂ Φ(q) are the renewal measures of the ascending and descending ladder height processes of (ξ, P Φ(q) ) respectively. From, e.g.
[5], it is known that dV Φ(q) (z) = dz on z ≥ 0 and ∫_[0,∞) e −βy dV̂ Φ(q) (y) = β/ψ(β + Φ(q)), β ≥ 0.
Proof of Theorem 3.7. The result follows by Lemma 4.2 (and what was said just after it) and Theorem 5.4. Specifically, using Theorem 5.4 with Ξ equal to ξ̂ unkilled, β = −α (noting in particular that ψ̂(−α) exists by assumption, with φ̂(−α) < q if q > 0), y = log(x/i) and then setting K̂ * := e k̂ * gives v̂(x, i) = −x α log(K̂ * ) log(x/i), where in the second equality we changed variables according to u = log(z/i). The expression for v̂(x, i) in the theorem now follows after an application of (7). As for the optimal constant K̂ * , we see that K̂ * satisfies the stated equation, thus completing the proof. In the analogue of (41) for Θ̂, the quantities χ̂, n̂ and â play the same role as χ, n and a, but now for the process ξ̂. Following the reasoning that leads to (43) we see that, whenever the right-hand side is finite, (44) holds. Note that, since ψ̂(Φ̂(q)) = 0, when the right-hand side is finite, then it is positive valued. Moreover, the left-hand side of (44) is a monotone function of α and hence we see that, when q = 0, E 1 (Θ̂) < ∞ if and only if α < Φ̂(0), i.e. ψ̂(α) < 0. Moreover, if q > 0, then requiring that α < Φ̂(q), i.e. ψ̂(α) < 0, is a sufficient condition for E 1 (Θ̂) < ∞, but a necessary condition would require us to know how the exponent ψ̂ behaves when its domain is extended into the negative half-line.

Examples
In this section we present two examples, one of which shows that our results are consistent with the existing literature.