Lévy processes with finite variance conditioned to avoid an interval

Conditioning Markov processes to avoid a set is a classical problem that has been studied in many settings. In the present article we study whether a Lévy process can be conditioned to avoid an interval and, if so, the path behaviour of the conditioned process. For Lévy processes with finite second moments we show that conditioning is possible and identify the conditioned process as an h-transform of the original killed process. The h-transform is explicit in terms of successive overshoot distributions and is used to prove that the conditioned process diverges to +∞ and to −∞ with positive probabilities.

Conditioning Markov processes to avoid sets is a classical problem. Indeed, suppose (P_x)_{x∈E} is a family of Markov probability measures on the state space E, and that T is the first hitting time of a fixed set. When T is almost surely finite, it is non-trivial to construct and characterise the conditioned process through the natural limiting procedure
lim_{s→∞} P_x(Λ | s + t < T)    (1)
or the randomised version
lim_{q↓0} P_x(Λ, t < e_q | e_q < T),    (2)
for Λ ∈ F_t and x ∈ E. Here, (F_t)_{t≥0} denotes the natural filtration of the underlying Markov process and the e_q are independent exponentially distributed random variables with parameter q > 0.
A classical example is Brownian motion conditioned to avoid the negative half-line. In this case, the limits (1) and (2) lead to a so-called Doob h-transform of the Brownian motion killed on entering the negative half-line, by the positive harmonic function h(x) = x on (0, ∞). This Doob h-transform turns out (see Chapter VI.3 of [19]) to be the Bessel process of dimension 3, which is transient. This example is typical, in that a conditioning procedure leads to a new process which is transient where the original process was recurrent. Extensions of this result have been obtained in several directions, most notably to random walks and Lévy processes. A prominent example with several applications is that of a Lévy process conditioned to stay positive, which was found by Chaumont and Doney [5] using the randomised conditioning (2). In that case, the associated harmonic function h is given by the potential function of the descending ladder height process. Similarly, Bertoin and Doney [2] have shown how to condition a random walk to stay non-negative. Other examples include random walks conditioned to stay in a cone (Denisov and Wachtel [7]), isotropic stable processes conditioned to stay in a cone (Kyprianou et al. [14]), spectrally negative Lévy processes conditioned to stay in an interval (Lambert [16]), subordinators conditioned to stay in an interval (Kyprianou et al. [13]), Lévy processes conditioned to avoid the origin (Pantí [17] and Yano [23]) and self-similar Markov processes conditioned to avoid the origin (Kyprianou et al. [12]). The purpose of this article is to take advantage of the path discontinuities of Lévy processes and to condition them to avoid an interval. In Döring et al. [8] this problem was tackled for strictly stable processes, since their structure as self-similar Markov processes made it possible to deduce the correct harmonic functions. The proofs were based on the so-called deep factorisation (see Kyprianou et al.
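The identification of the h-transform as a Bessel(3) process can be seen at the level of generators; a short standard sketch (not spelled out in the article):

```latex
% Doob h-transform of killed Brownian motion with h(x) = x on (0,\infty).
% Generator of BM killed at 0: \mathcal{L}f = \tfrac12 f''.  The
% h-transformed generator acts on smooth f as
\mathcal{L}^h f(x) \;=\; \frac{1}{h(x)}\,\mathcal{L}(hf)(x)
\;=\; \frac{1}{2x}\,\big(x f(x)\big)''
\;=\; \frac12 f''(x) + \frac{1}{x}\,f'(x),
% which is the generator of a Bessel process of dimension d = 3, since
% BES(d) has drift term \tfrac{d-1}{2x} f'(x).
```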
[11,15]), which analyses stable processes using the Lamperti-Kiu transform. In the present article, we consider Lévy processes with zero mean and finite variance. This assumes less structure on the Lévy process, but at the same time excludes the stable processes, which have infinite second moments. The discrete-time analogue of our problem was considered by Vysotsky [22], who used a Doob h-transform to condition a centred random walk with finite second moment to avoid an interval. One of the harmonic functions we will discover is the analogue of the harmonic function found by Vysotsky for random walks, but the techniques needed are different. Before presenting our results, we introduce the most important definitions and results concerning Lévy processes. More details can be found, for example, in Bertoin [1], Kyprianou [10] or Sato [21].
Lévy processes: A Lévy process ξ is a stochastic process with stationary and independent increments whose trajectories are almost surely right-continuous with left limits (RCLL). For each x ∈ R, we define the probability measure P_x under which the canonical process ξ starts at x almost surely. We write P for the measure P_0. The dual measure P̂_x denotes the law of the so-called dual process −ξ started at x. A Lévy process can be identified through its characteristic exponent Ψ, defined by the equation E[e^{iqξ_t}] = e^{−tΨ(q)}, q ∈ R, which has the Lévy-Khintchine representation
Ψ(q) = iaq + (σ²/2)q² + ∫_R (1 − e^{iqx} + iqx 1_{{|x|<1}}) Π(dx),
where a ∈ R is the so-called centre of the process, σ² ≥ 0 is the variance of the Brownian component, and the Lévy measure Π is a measure on R with no atom at 0 satisfying ∫_R (x² ∧ 1) Π(dx) < ∞.
Our main assumption is: (A) ξ has zero mean and finite variance, and is not a compound Poisson process.
We define T_B = inf{t ≥ 0 : ξ_t ∈ B} for any open or closed set B ⊆ R. This is known to be a stopping time with respect to the right-continuous natural enlargement of the filtration induced by ξ, which we denote by (F_t)_{t≥0}. For certain auxiliary results, we will need to distinguish the two conditions
(B) Π((b − a, ∞)) > 0, that is, ξ can jump over the interval [a, b] from below, and
(B̂) Π((−∞, −(b − a))) > 0, that is, ξ can jump over the interval [a, b] from above.
Killed Lévy processes and h-transforms: For a < b the killed transition measures are defined as
p_t^{[a,b]}(x, dy) = P_x(ξ_t ∈ dy, t < T_{[a,b]}), t ≥ 0, x, y ∈ R \ [a, b].
The corresponding sub-Markov process is called the Lévy process killed in [a, b]. A harmonic function for the killed process is a measurable function h : R \ [a, b] → [0, ∞) satisfying
E_x[h(ξ_t) 1_{{t < T_{[a,b]}}}] = h(x), t ≥ 0, x ∈ R \ [a, b].
A harmonic function taking only strictly positive values is called a positive harmonic function. Thanks to the Markov property, harmonicity is equivalent to (h(ξ_t) 1_{{t < T_{[a,b]}}})_{t≥0} being a P_x-martingale. When h is a positive harmonic function, the associated Doob h-transform is defined via the change of measure
P^h_x(Λ) = (1/h(x)) E_x[h(ξ_t) 1_Λ 1_{{t < T_{[a,b]}}}]    (4)
for Λ ∈ F_t. From Chapter 11 of Chung and Walsh [6], we know that under P^h_x the canonical process is a conservative strong Markov process. In Chapter 11 of Chung and Walsh [6] it is also shown that (4) extends from deterministic times to (F_t)_{t≥0}-stopping times T; that is,
P^h_x(Λ, T < ∞) = (1/h(x)) E_x[h(ξ_T) 1_Λ 1_{{T < T_{[a,b]}}}], Λ ∈ F_T.    (5)
Ladder height processes and potential functions: A crucial ingredient in our analysis is the potential function U_- of the descending ladder height process, which is positive harmonic for a Lévy process killed on the negative half-line. To introduce U_-, some notation is needed. Denote by L the local time at 0 of the Markov process (sup_{s≤t} ξ_s − ξ_t)_{t≥0}, which is also called the local time of ξ at the maximum. Let L^{-1}_t = inf{s > 0 : L_s > t} denote the inverse local time at the maximum and
κ(q) = − log E[e^{−q L^{-1}_1}], q ≥ 0,
the Laplace exponent of L^{-1}. We define H_t = sup_{s ≤ L^{-1}_t} ξ_s, the so-called (ascending) ladder height process. It is well known that H is a subordinator, and we denote by a_+ the drift coefficient of H and by µ_+ its Lévy measure. Under the dual measure P̂, the process L^{-1} is the inverse local time at the minimum, and we denote its Laplace exponent by κ̂.
Still under this dual measure, H is the descending ladder height process, and we define a − and µ − to be its drift coefficient and Lévy measure.
The q-resolvents of H, for q ≥ 0, will be denoted by U^q_+; that is,
U^q_+(dx) = ∫_0^∞ e^{−qt} P(H_t ∈ dx) dt, x ≥ 0.
For q = 0 we abbreviate U_+(dx) = U^0_+(dx), and denote the so-called potential function by U_+(x) = U_+([0, x]), for x ≥ 0. We define U^q_- and U_- in the same way for the descending ladder height process. If ξ is not a compound Poisson process, it is known that U_+ and U_- are continuous.
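As a consistency check on these objects (a standard computation, not from the article, and depending on the chosen normalisation of local time), the Brownian case is completely explicit:

```latex
% Standard Brownian motion, with local time normalised so that
\kappa(q) = \hat\kappa(q) = \sqrt{2q}, \qquad q \ge 0.
% Since Brownian paths are continuous, the ladder height process is the
% pure unit drift H_t = t, whence
U_+^q(\mathrm{d}x) = e^{-qx}\,\mathrm{d}x, \qquad
U_+(x) = U_-(x) = x, \quad x \ge 0,
% recovering the classical harmonic function h(x) = x for Brownian motion
% killed on entering the negative half-line.
```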

Main results
Before stating the main results, some more notation is needed to define our harmonic functions. We first define inductively the sequence of successive times at which the process jumps across a or b: τ_0 = 0 and, for k ≥ 0,
τ_{k+1} = inf{t > τ_k : ξ_t ≤ b} if ξ_{τ_k} > b, τ_{k+1} = inf{t > τ_k : ξ_t ≥ a} if ξ_{τ_k} < a,
while τ_{k+1} = τ_k if ξ_{τ_k} ∈ [a, b]. Second, let K† := inf{k ≥ 1 : τ_k = T_{[a,b]}} be the index of the crossing at which the process hits the given interval, and let
ν^x_k(dy) = P_x(ξ_{τ_k} ∈ dy, K† > k)
be the distribution of the position of ξ after its k-th jump across the interval, for k ≥ 0.
It is important to note that each ν^x_k can be expressed explicitly in terms of the Lévy measures and potential measures of the ladder height processes. Indeed, ν^x_1 is nothing but an overshoot distribution, for which a formula is given in Proposition III.2 of Bertoin [1], using that the overshoot of ξ has the same distribution as the overshoot of the corresponding ladder height subordinator H. Applying the strong Markov property successively yields explicit expressions for all other ν^x_k.
Theorem 2.1. If Assumptions (A) and (B) hold, then the function
h_+(x) = Σ_{k≥0} ∫_{(b,∞)} U_-(y − b) ν^x_{2k}(dy) for x > b, h_+(x) = Σ_{k≥0} ∫_{(b,∞)} U_-(y − b) ν^x_{2k+1}(dy) for x < a,
is a positive harmonic function for ξ killed on entering [a, b], i.e. E_x[h_+(ξ_t) 1_{{t<T_{[a,b]}}}] = h_+(x) for all t ≥ 0 and x ∈ R \ [a, b].
If Assumption (B) is not satisfied, then h + is always harmonic, but may not be positive. To be precise, when (B) fails, h + is positive on (b, ∞) but zero on (−∞, a).
Similarly, under (A) and (B̂), the function
h_-(x) = Σ_{k≥0} ∫_{(−∞,a)} U_+(a − y) ν^x_{2k}(dy) for x < a, h_-(x) = Σ_{k≥0} ∫_{(−∞,a)} U_+(a − y) ν^x_{2k+1}(dy) for x > b,
is positive harmonic as well. As above, when (B̂) fails, h_- remains harmonic, but is positive only on (−∞, a). The harmonic functions h_+ and h_- typically do not have a simple closed form (for an explicit example see Section 3 below). This seemingly limits their applicability, but they are explicit enough to be used for conditioning purposes. We use them below as a tool to prove that conditioning in the sense of (2) works and, as a consequence of general h-transform theory, obtain that the conditioned process is strong Markov.
Additionally, it turns out that the harmonic functions are explicit enough to explain the limiting behavior of trajectories under the conditioned law.
Remark 2.3. Vysotsky [22] considered the analogous problem for a centred random walk S = (S_n)_{n∈N} with finite variance. He derived a harmonic function V which is the discrete analogue of a certain linear combination of h_+ and h_-. Proving harmonicity in the discrete-time situation is less involved, for the following reason. It is enough to show that V(S), killed on entering [a, b], is a discrete-time martingale, for which it suffices to check the martingale property for one time-step. Since, in discrete time, {T_{[a,b]} > 1} = {S_1 ∉ [a, b]}, the computation is direct. The continuous-time situation of Lévy processes is much more delicate, as {t ≤ T_{[a,b]}} does not hold almost surely for any t > 0.
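The one-step computation alluded to in the remark can be spelled out; in random-walk notation (with V denoting Vysotsky's harmonic function for the walk killed on entering [a, b]):

```latex
% Harmonicity for the killed walk reduces to one step, since
% \{T_{[a,b]} > 1\} = \{S_1 \notin [a,b]\}:
\mathbb{E}_x\!\left[ V(S_1)\,\mathbf{1}_{\{S_1 \notin [a,b]\}} \right]
   \;=\; V(x), \qquad x \notin [a,b].
% Iterating with the Markov property then gives
% \mathbb{E}_x\!\left[ V(S_n)\,\mathbf{1}_{\{T_{[a,b]} > n\}} \right] = V(x)
% for every n, i.e. the killed process (V(S_n)\mathbf{1}_{\{T_{[a,b]}>n\}})
% is a martingale.
```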
With the harmonic functions h_+, h_- and their positive linear combinations it is now possible to h-transform the killed process as in (4). The h-transforms P_+ (resp. P_-) are defined through (4) with the positive harmonic function h_+ (resp. h_-). In the sequel we identify the right ways to condition in order to obtain the h-transforms with h_+ and h_-, and then derive the right linear combination of h_+ and h_- that corresponds to conditioning the Lévy process to avoid the interval in the sense of (2). The next proposition gives a probabilistic representation of P^x_+ by conditioning the process to avoid the interval and to lie above it at a late exponential time. To condition the Lévy process to avoid the interval without an additional condition on the late values, a natural guess is an h-transform with a linear combination of h_+ and h_-. Possible skewness of the Lévy process implies that different weights must be chosen for h_+ and h_-. Our proofs show that the right harmonic function is
h := h_+ + C h_-, where C := lim_{q↓0} κ(q)/κ̂(q).    (6)
Note that κ(q) and κ̂(q) behave like √q as q ↓ 0 if ξ oscillates and has finite variance; see for instance Patie and Savov [18], Remark 2.21. Hence, C exists and is strictly positive, and from Corollary 2.2 it follows that h is a positive harmonic function if we assume only (A). The h-transform of ξ killed in [a, b] with h from (6) will be denoted by P. Our main result can now be formulated: conditioning to avoid an interval is always possible for Lévy processes with second moments, and the conditioned law corresponds to the h-transform with h from (6).
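The √q behaviour quoted above can be traced back to the Wiener-Hopf factorisation; a heuristic sketch (standard facts, with c > 0 a constant depending on the normalisation of local time):

```latex
% Wiener-Hopf factorisation evaluated at the spatial origin:
\kappa(q)\,\hat\kappa(q) \;=\; c\,q, \qquad q \ge 0.
% Under (A) the process oscillates with finite variance, so neither
% factor dominates: \kappa(q) \asymp \hat\kappa(q) \asymp \sqrt{q} as
% q \downarrow 0 (Patie and Savov [18], Remark 2.21), whence
C \;=\; \lim_{q \downarrow 0} \frac{\kappa(q)}{\hat\kappa(q)}
% exists in (0,\infty).
```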
Typically the first property analysed for a conditioned process is the long-time behaviour.
It is often the case that conditioning turns a recurrent process into a transient one. Nonetheless, a priori it is completely unclear what the limiting behaviour under P_± and in particular under P is. Processes might oscillate, diverge to +∞ or to −∞, or might even diverge to both infinities with positive probability. The next proposition covers the case P_+: under (A) and (B), ξ drifts to +∞ almost surely under P^x_+. Analogously, assuming (A) and (B̂), one can show that ξ drifts to −∞ almost surely under P^x_-. It remains to consider the behaviour of (ξ, P_x). Our final theorem shows that Lévy processes with second moments conditioned to avoid an interval drift to +∞ and −∞ with (explicit) positive probabilities:
P_x(lim_{t→∞} ξ_t = +∞) = h_+(x)/h(x), P_x(lim_{t→∞} ξ_t = −∞) = C h_-(x)/h(x),
so that, in particular, P_x-almost surely trajectories do not oscillate.
In the recent article [8] it was proved that stable processes conditioned to avoid an interval are transient. Since stable processes have infinite second moments, our new results do not apply, and it remains unclear whether trajectories oscillate or diverge to +∞ and −∞ with positive probabilities. One can even use explicit formulas for the potential functions and the overshoot distributions (see e.g. Rogozin [20]) to show that in the

An explicit example
When ξ is a Lévy process with no drift and two-sided exponential jumps, it is possible to compute the harmonic functions h_+, h_- and h explicitly. Let
ξ_t = σ B_t + Σ_{i=1}^{N_t} Y_i, t ≥ 0,
where σ ≥ 0, (B_t)_{t≥0} is a standard Brownian motion, and Σ_{i=1}^{N_t} Y_i is a compound Poisson process with rate λ > 0 and absolutely continuous jump distribution with density
f_Y(x) = (η/2) e^{−η|x|}, x ∈ R, for some η > 0.
For definiteness, let σ = √2 and λ = 1. The Laplace exponent ψ of ξ, given by E[e^{−θξ_t}] = e^{tψ(θ)}, can be expressed, for θ ∈ (−η, η), as
ψ(θ) = θ²(β² − θ²)/(η² − θ²),
where β = √(η² + 1) > η. Note that ξ oscillates and has finite variance, so (A) holds, and (B) and (B̂) both hold as well. Let
υ(θ) = θ(β + θ)/(η + θ),
which is the Laplace exponent of a subordinator with unit drift, jump rate β − η and exponential jumps of parameter η. Since ψ(θ) = −υ(−θ)υ(θ), the uniqueness of the Wiener-Hopf factorisation [10, Theorem 6.15(iv)] implies that υ and υ̂ = υ are indeed the Laplace exponents of the ascending and descending ladder height subordinators, respectively. Since
1/υ(θ) = (η/β)(1/θ) + ((β − η)/β)(1/(β + θ)),
by [10, equation (5.23)] we can identify the potential measures
U_±(dx) = (η/β) dx + ((β − η)/β) e^{−βx} dx
and the potential functions
U_±(x) = ηx/β + ((β − η)/β²)(1 − e^{−βx}), x ≥ 0.
To find h_+ in closed form we first need to find the measures ν^x_k explicitly. This can in principle be done using the expressions we have just found for U_± and the Lévy measures of the ladder height subordinators, but in fact the overshoot distributions have already been found in Kou and Wang [9], Corollary 3.1. Building on these, one verifies by induction over k (the case k = 0 being clear, and the induction step following from the strong Markov property and the overshoot formulas) that the measures ν^x_{2k} and ν^x_{2k+1} have explicit densities involving the constant c = e^{−η(b−a)}(β − η)/(β + η). Having formulas for U_- and all ν^x_k we can proceed to compute h_+: standard integration makes each summand explicit, and summing the resulting geometric series yields a closed form for h_+; the computation for h_- is symmetric. We used that by symmetry κ = κ̂ and consequently C = lim_{q↓0} κ(q)/κ̂(q) = 1, so that h = h_+ + h_-.
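Since the example is fully explicit, the identities above can be checked numerically. The sketch below is not part of the article; the closed form of U is an assumption obtained by inverting the Laplace transform 1/υ, and η = 1.5 is an arbitrary choice. It verifies ψ(θ) = θ²(β² − θ²)/(η² − θ²), the factorisation ψ(θ) = −υ(−θ)υ(θ), and the Laplace-transform identity θ ∫_0^∞ e^{−θx} U(x) dx = 1/υ(θ):

```python
# Numerical sanity check for the two-sided exponential example
# (sigma = sqrt(2), lambda = 1, jump density (eta/2) e^{-eta|x|}).
import numpy as np
from scipy.integrate import quad

eta = 1.5                              # arbitrary jump parameter
beta = np.sqrt(eta**2 + 1.0)           # beta = sqrt(eta^2 + 1) > eta

def psi(t):
    # Laplace exponent of xi: Brownian part theta^2 plus compound Poisson part.
    return t**2 + (eta**2 / (eta**2 - t**2) - 1.0)

def upsilon(t):
    # Laplace exponent of the ladder height subordinator:
    # unit drift, jump rate beta - eta, exponential jumps of parameter eta.
    return t * (beta + t) / (eta + t)

def U(x):
    # candidate potential function, with Laplace transform 1/upsilon (assumption).
    return eta * x / beta + (beta - eta) * (1.0 - np.exp(-beta * x)) / beta**2

# 1) psi(theta) = theta^2 (beta^2 - theta^2) / (eta^2 - theta^2)
for t in [0.1, 0.5, 1.0]:
    assert abs(psi(t) - t**2 * (beta**2 - t**2) / (eta**2 - t**2)) < 1e-12

# 2) Wiener-Hopf factorisation: psi(theta) = -upsilon(-theta) * upsilon(theta)
for t in [0.1, 0.5, 1.0]:
    assert abs(psi(t) + upsilon(-t) * upsilon(t)) < 1e-12

# 3) theta * int_0^infty e^{-theta x} U(x) dx = 1 / upsilon(theta)
for t in [0.5, 1.0, 2.0]:
    val, _ = quad(lambda x: np.exp(-t * x) * U(x), 0.0, np.inf)
    assert abs(t * val - 1.0 / upsilon(t)) < 1e-7

print("all identities verified")
```

All three checks pass for any η > 0; the third pins down the potential function entering the computation of h_+.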

Proofs
Before going into the proofs, let us discuss the form of the measures ν^x_k defined above. We assume in the theorems that ξ oscillates; hence, all first hitting times that appear are almost surely finite. Keep in mind that on the event {K† > k} the time τ_k is the time of the k-th jump across the interval. By the strong Markov property and ν^x_0(dy) = δ_x(dy), we find the relations
ν^x_{k+1}(dy) = ∫ ν^x_k(dz) ν^z_1(dy)
for x < a. More generally, the strong Markov property also implies the relation
ν^x_{k+l}(dy) = ∫ ν^x_k(dz) ν^z_l(dy)    (13)
for x > b and k, l ∈ N, and the analogous identities hold for x < a. It is important to note that (see e.g. Bertoin [1], Proposition III.2) analytic formulas, referred to as (15) below, exist for the overshoot distributions ν^x_1 for x > b and, analogously, for x < a. Hence, analytic expressions for the ν^x_k exist in the oscillating case, even though these become more involved for large k due to the recursive definition. Before we start with the proof, we need a lemma which is intuitively clear but still requires an argument: let ξ be a Lévy process which is not the negative of a subordinator; then, for all y, z > 0,
P(T_{[z,∞)} < T_{(−∞,−y]}) > 0.
With the Markov property one obtains, for s > 0, a lower bound for this probability; if it were zero, the process could never pass above level z before falling below −y, but this cannot happen unless ξ is the negative of a subordinator. This concludes the proof.
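For the reader's convenience, the overshoot identity of Bertoin [1], Proposition III.2, underlying the formulas for ν^x_1, reads as follows for a subordinator H (our transcription; when the drift is positive, creeping over the level contributes an extra term):

```latex
% Subordinator H with Levy measure \mu and potential measure U;
% first passage T_x = \inf\{t \ge 0 : H_t > x\}.  For 0 \le s \le x < t:
\mathbb{P}\big( H_{T_x-} \in \mathrm{d}s,\; H_{T_x} \in \mathrm{d}t \big)
 \;=\; U(\mathrm{d}s)\,\mu(\mathrm{d}t - s).
% Integrating out the undershoot s gives the overshoot law
% \mathbb{P}(H_{T_x} - x \in \mathrm{d}u)
%   = \int_{[0,x]} U(\mathrm{d}s)\,\mu(x - s + \mathrm{d}u), \quad u > 0.
```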
To prove Proposition 4.1 we will combine two statements. The analogous discrete statements were also used (with different arguments) by Vysotsky [22] to show finiteness of the harmonic function in the discrete case.
Proof. If ξ is the negative of a subordinator, then γ_+ = 0. So assume that ξ is not the negative of a subordinator; in particular, we can apply Lemma 4.2.
We separate the range of x into three regions. First we consider very small x, i.e. the limit of x tending to −∞; then we consider values of x close to a; and finally we treat the remaining values.
If ξ oscillates or drifts to ∞, the bound for x close to −∞ is more involved. Because E[H_1] < ∞, ξ has stationary overshoots in the sense that the weak limit of P_x(ξ_{T_{[a,∞)}} ∈ dy) as x ↓ −∞ exists. It can be expressed in terms of the drift a_+ of (H, P) and its Lévy measure µ_+ with right tail µ̄_+. For the first such statement, for subordinators, see for example Bertoin et al. [4]; for the general version see for example Bertoin and Savov [3]. Since weak convergence is equivalent to pointwise convergence of the distribution function at continuity points, the desired estimate follows for b > a from the explicit formula in (16). Hence, also in this case there exist K < a and γ_1 < 1 such that the bound holds for all x ≤ K. Now we have to treat the case x ∈ (K, a). To this end we distinguish two cases. Case 1: The process ξ is regular upwards. First, we consider the limit as x → a. Since ξ is regular upwards, there is some δ > 0 such that the desired bound holds for x close to a. It remains to consider x ∈ (K, a − δ]. We split the probability in two; for the first term we use the Markov property. Together we obtain the bound for all x ∈ (K, a − δ], and it follows that a constant γ_2 < 1 can be chosen. Case 2: The process ξ is not regular upwards. We split up again, and for the first term we use the Markov property. Together we obtain the bound for all x ∈ (K, a), and from (17) it follows that a constant γ_3 < 1 can be chosen. For the general case (both regular upwards and not) set γ_+ := max(γ_1, γ_2, γ_3) < 1.
Analogously to the lemma before, a corresponding bound holds. The second lemma which we need for the proof of Proposition 4.1 is the following estimate, valid for all x > b.
Proof. We start by showing the first inequality for all K > 0. For that we estimate U_+(y) for y > K with Proposition III.1 of Bertoin [1], which says that there are constants c_1, c_2 ≥ 0 bounding U_+ above and below for all x > 0, where Φ(λ) = E[∫_{[0,∞)} e^{−λH_t} dt] and I(x) = ∫_{(0,x]} µ̄_+(y) dy. Combining these two statements, the inequality follows, by assumption, for all K > 0. The second inequality can be seen from Ê[H_1] < ∞. To prove the claim, let us first split the expectation in two and estimate the first summand using the monotonicity of U_+. Applying the overshoot formula (15), the second summand can be treated in the same way, which gives the bound for all x > b.
Analogously to the lemma above, one can show, in the case that ξ oscillates and E[H_1] < ∞, that for all α ∈ (0, 1) there exists a constant C_-(α) > 0 such that the corresponding bound holds. Now we are ready to combine Lemmas 4.3 and 4.4 to show finiteness of h_+(x). The idea of how to combine them was also used by Vysotsky [22].
Set γ = max(γ_+, γ_-) and note by Lemma 4.3 that a bound holds for x > b and k ≥ 1. We estimate the first term in the same way and, continuing this procedure down to ν^x_0, we see for k ≥ 1 that each summand is bounded by a multiple of γ^k (for k = 0 we obviously get U_-(x − b) as upper bound). In the same way we get the bound for x < a and k ≥ 0 (here the upper bound depends on U_+ because the number of steps is odd). Altogether, h_+ is finite. Let h^q_+ denote the function defined as h_+, but with U_- replaced by U^q_-, the q-potential function of the dual ladder height process. It follows immediately that h^q_+(x) ≤ h_+(x) for all x ∉ [a, b] and, by monotone convergence, that h^q_+ converges pointwise to h_+ as q ↓ 0. Proposition 4.5. Assume (A) and let e_q be independent exponentially distributed random variables with parameter q > 0. Then, for x ∉ [a, b],
P_x(e_q < T_{[a,b]}, ξ_{e_q} > b) ≤ κ̂(q) h_+(x)
and
lim_{q↓0} (1/κ̂(q)) P_x(e_q < T_{[a,b]}, ξ_{e_q} > b) = h_+(x).
To prove this crucial proposition we need a small lemma, which is basically just the strong Markov property: Lemma 4.6. Let s ≥ 0 and k ≥ 0. Then the identities below hold. Proof. We focus on the case x > b and prove the first equality. We use the strong Markov property in the shift-operator formulation, see e.g. Chung and Walsh [6], p. 57. To this end we introduce D := {ω : [0, ∞) → R | ω is RCLL}. The shift operator is a map θ_t : D → D such that X_s ∘ θ_t = X_{t+s}. The strong Markov property states that, for an (F_t)_{t≥0}-stopping time T,
E_x[Y ∘ θ_T | F_T] = E_{ξ_T}[Y]    (21)
holds for all F_∞ := σ(⋃_{t≥0} F_t)-measurable and integrable Y. Here, we set T = τ_{2k} and Y = 1_{{s < T_{(−∞,b]}}}. It is clear that Y is bounded, and that Y is F_∞-measurable can be seen directly. With (21) we obtain the claim for our choice of Y. We used that {ξ_{τ_{2k}} > b} ∈ F_{τ_{2k}} and {τ_{2k} < T_{[a,b]}} ∈ F_{τ_{2k}} ∩ F_{T_{[a,b]}} ⊆ F_{τ_{2k}}, which can be seen by Theorem 1.3.6 of [6]. The remaining claims follow analogously.
Now we continue with the proof of Proposition 4.5, for which we use the identity κ̂(q)U^q_-(x) = P_x(e_q < T_{(−∞,0]}), proved in Kyprianou [10], Section 13.2.1, for a general Lévy process.
Proof of Proposition 4.5. We only consider the case x > b and start by proving the bounds (23). To derive the lower bound we define τ̄_k = min(τ_k, T_{[a,b]}). It follows, in particular, that τ̄_k = τ_k on {K† ≥ k} and τ̄_{k+1} − τ̄_k = 0 on {K† ≤ k}. For the next chain of equalities we use (22), Lemma 4.6 and the lack-of-memory property of e_q. Before proving the bounds of (23) we note three further equalities. The first follows from the definition of τ̄_k and the facts that T_{[a,b]} < ∞ almost surely (because ξ is recurrent under Assumption (A)) and that τ_k diverges to +∞ almost surely. The third is due to the fact that for x > b the process remains above b only on the intervals [τ̄_{2k}, τ̄_{2k+1}). With (25), summing (24) over k yields (23). Since ξ is recurrent, P_x(e_q ≥ T_{[a,b]}) converges to 1 as q ↓ 0; hence, (23) implies the claim.
The key to the proof of Theorem 2.1 are the relations in Proposition 4.5. We use them in a similar way to how Chaumont and Doney [5] proved harmonicity of a certain function for the Lévy process killed on the negative half-line.
Proof of Theorem 2.1. First note that (B) guarantees that h_+(x) is strictly positive for all x ∈ R \ [a, b], which is not the case for x < a when (B) fails. Assumption (B) will not be used in the remainder of the proof. For x ∈ R \ [a, b] and t ≥ 0 we have to show E_x[h_+(ξ_t) 1_{{t<T_{[a,b]}}}] = h_+(x). First we show that the left-hand side is less than or equal to the right-hand side. This can be done by applying Proposition 4.5 in the first step and Fatou's lemma in the second. The last equality follows because, according to Kyprianou [10], Section 13.2.1, lim_{q↓0} q/κ(q) = 0 if ξ oscillates. It remains to show that we can replace the inequality in (26) by an equality. To apply the dominated convergence theorem, we use the bound from Proposition 4.5, valid for all q > 0. Furthermore, we have just seen that the limit of the integrands exists. So we can apply dominated convergence to interchange the limit and the integral.

4.3. Conditioning and h-transforms. The aim of this section is to prove Proposition 2.4 and Theorem 2.5.

Proof of Proposition 2.4.
Integrating out e_q, using Proposition 4.5 and the Markov property, gives the first representation. From Proposition 4.5 we also know that (1/κ̂(q)) P_{ξ_t}(e_q < T_{[a,b]}, ξ_{e_q} > b) ≤ h_+(ξ_t) for all q > 0, and 1_Λ 1_{{t<T_{[a,b]}}} h_+(ξ_t) is integrable since h_+ is harmonic. So we can use dominated convergence to compute lim_{q↓0} P_x(Λ, t < e_q | e_q < T_{[a,b]}, ξ_{e_q} > b), where we use Proposition 4.5 again in the final equality. Hence, conditioning is possible and coincides with the h-transform with h_+, which proves Proposition 2.4.
For the proof of Theorem 2.5 we will use a corollary of Proposition 4.5.
Corollary 4.7. Assume (A) and let e_q be an independent exponentially distributed random variable with parameter q > 0. Then, for x ∉ [a, b], the following two estimates hold. Proof. With Proposition 4.5 and its counterpart for h_- we have P_x(e_q < T_{[a,b]}, ξ_{e_q} > b) ≤ κ̂(q) h^q_+(x) and P_x(e_q < T_{[a,b]}, ξ_{e_q} < a) ≤ κ(q) h^q_-(x), from which the first claim follows. Furthermore, again with Proposition 4.5, we obtain the second claim, and the proof is complete.
Proof of Theorem 2.5. We follow a similar strategy as in the proof of Proposition 2.4. First note that, since lim_{q↓0} κ(q)/κ̂(q) exists, the ratio is bounded for q ∈ (0, 1) by some constant β > 0. Hence, with Corollary 4.7 we obtain a uniform bound for all y ∉ [a, b]. So we use dominated convergence and the second part of Corollary 4.7 to conclude.

4.4. Long-time behaviour. Finally, we analyse the transience behaviour of the conditioned processes constructed in the previous section.
Proof of Proposition 2.6.
Step 1: We show that ξ under P^x_+ is almost surely bounded from below. First note the chain of equalities for x < a: in the first we used the form of ν^x_1 for x < a; in the second we plugged in the definition of h_+(y) for y > b and used Fubini's theorem; in the third we used (13); and in the final one we used the definition of h_+(x) for x < a. Since ξ_{T_{(−∞,c]}} < a for c < a, it follows, for all x ∈ R \ [a, b], that a corresponding identity holds, where we used the strong Markov property (21) in the final equality. The required measurability holds according to Theorem 1.3.6 of Chung and Walsh [6], so we may continue for all x. Now consider x < a and observe the series representation for x < a. Our aim is to interchange the limit and the sum. In order to justify the dominated convergence theorem, it is enough to verify a summable bound, which Proposition 4.1 provides, where c_1, c_3 and γ are the constants from Proposition 4.1 and its proof. It follows that we can interchange the limit and the integral in (29). With the same upper bound for each individual summand we can even move the limit inside the expectation. Since ξ oscillates (which implies τ_k < ∞ P_x-almost surely), we obtain that 1_{{T_{(−∞,c]} ∈ [τ_{2k}, τ_{2k+1})}} converges to 0 almost surely under P_x as c → −∞. Hence, P^x_+(T_{(−∞,c]} < ∞ for all c < a) = 0 for x < a and, by the same argument, we also find that P^x_+(T_{(−∞,c]} < ∞ for all c < a) = 0 for x > b. This finishes Step 1.
Step 2: In the second step we show that ξ is transient under P^x_+, i.e. it spends only finite time in sets of the form [d, a) ∪ (b, c] for d < a and c > b. In fact, we even show that the expected occupation time is finite. To abbreviate, we denote by U_B(x, dy) the potential of (ξ, P_x) killed on entering a Borel set B.
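The finiteness claim can be written out with the h-transform formula; a sketch in our notation (the local boundedness of h_+ is taken from Proposition 4.1; the set B below is the one from Step 2):

```latex
% Expected occupation of B = [d,a) \cup (b,c] under P_+^x, via (4) and Fubini:
\mathbb{E}^{x}_{+}\!\left[\int_0^\infty \mathbf{1}_B(\xi_t)\,\mathrm{d}t\right]
 \;=\; \frac{1}{h_+(x)} \int_B h_+(y)\, U_{[a,b]}(x, \mathrm{d}y),
% where U_{[a,b]}(x,\mathrm{d}y) = \int_0^\infty p_t^{[a,b]}(x,\mathrm{d}y)\,\mathrm{d}t
% is the potential of the process killed in [a,b].  Finiteness follows once
% h_+ is bounded on B and U_{[a,b]}(x, B) < \infty.
```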
To compute the right-hand side we apply Proposition VI.20 of Bertoin [1] for y > b:

It holds analogously that
It follows in particular that the time the process (ξ, P^x_+) spends in sets of the form [d, a) ∪ (b, c] is finite almost surely. Together with the first result, that the process is almost surely bounded from below, and the fact that the process is conservative, it follows that lim_{t→∞} ξ_t = +∞ almost surely under P^x_+. Proof of Theorem 2.7. The proof strategy is similar to the one above. Transience of the conditioned process is verified again by computing the occupation measure, using the representation of the conditioned process as an h-transform. The computation is analogous to (30), using that h = h_+ + C h_- is bounded by Proposition 4.1. Next, recall from the counterpart of Proposition 2.6 for P^x_- that, under (B̂), the process drifts to −∞; we deduce the identity (31) for all x ∈ R \ [a, b] under (B̂). If (B̂) fails, let us check whether (31) holds in this case, too. If x > b, the left-hand side of (31) is 0 (because there are no jumps larger than b − a), as is the right-hand side. For x < a, the measure P^x_- corresponds to the process conditioned to stay below a, which is known to drift to −∞ (see Chaumont and Doney [5]). In particular, P^x_-(T_{(−∞,c]} < ∞) = 1 for c < a, from which we can deduce (31) in the same way as before. So (31) holds for all x ∈ R \ [a, b] under (A) alone.
Again using (5) yields In the proof of Proposition 2.6 we have already seen that

So we get
and, because of transience, the limit of ξ_t exists in {−∞, +∞} almost surely. Analogously one derives P_x(lim_{t→∞} ξ_t = −∞) = C h_-(x)/h(x), and the proof is complete.

Extension to transient Lévy processes
When conditioning a process to avoid an interval, the most interesting case is when the process is recurrent; if it is transient, it may avoid the interval with positive probability, and things become simpler. On the other hand, the conditionings in Proposition 2.4, to avoid the interval while finishing above (or below) it, may still be non-trivial. In this section, we drop Assumption (A), and require only that ξ is not a compound Poisson process and does not oscillate. In particular, we do not assume that ξ has finite second moments; only for the study of h − do we need further conditions.
Without loss of generality, we assume from now on that ξ drifts to +∞, and indicate which of our results still hold and which need modification. Under this assumption, the function h defined by (6) simplifies to h_+. This can be seen from the fact that κ(0) = 0 < κ̂(0), which implies C = lim_{q↓0} κ(q)/κ̂(q) = 0.
5.1. Study of h_+. 5.1.1. Condition (B) holds. Define ℓ(x) := P_x(T_{[a,b]} = ∞) for x ∈ R \ [a, b]. This is easily seen to be harmonic using the strong Markov property:
E_x[ℓ(ξ_t) 1_{{t<T_{[a,b]}}}] = P_x(T_{[a,b]} ∘ θ_t = ∞, t < T_{[a,b]}) = P_x(T_{[a,b]} = ∞) = ℓ(x).
Transience ensures that ℓ is a positive harmonic function. We next show that ℓ is indeed a multiple of h = h_+. To do so we will use the identity κ̂(q)U^q_-(x) = P_x(e_q < T_{(−∞,0]}), where e_q is an independent exponentially distributed random variable with parameter q > 0 (see Kyprianou [10], Section 13.2.1, for a general Lévy process). Since ξ drifts to +∞, we have κ̂(0) > 0, and hence
κ̂(0)U_-(x) = lim_{q↓0} P_x(e_q < T_{(−∞,0]}) = P_x(T_{(−∞,0]} = ∞).
The idea is to separate the two-sided entrance problem into infinitely many one-sided entrance problems and use the strong Markov property to combine them. For x > b, using the strong Markov property, we find a first-step decomposition of ℓ(x). Now we split up P_y(T_{[a,b]} = ∞) in the same manner, i.e., (13) yields the next step. By induction the following series representation is obtained:
ℓ(x) = κ̂(0) Σ_{k≥0} ∫_{(b,∞)} U_-(y − b) ν^x_{2k}(dy) = κ̂(0) h_+(x), x > b.
For x < a a similar computation can be carried out, and we obtain ℓ(x) = κ̂(0) h_+(x) there as well. Theorem 2.1: This is a consequence of the discussion above. Theorem 2.5: Since we condition here on an event of positive probability, the h-transform and the conditioning are related in the standard way, using the strong Markov property and integrating out e_q, for Λ ∈ F_t, t ≥ 0. Proposition 2.4: The conditioning of Proposition 2.4 is equivalent to the conditioning of Theorem 2.5, since the additional condition to stay above the interval at late times vanishes in the limit due to the transience towards +∞. Since h = h_+, the result of Proposition 2.4 follows. Proposition 2.6 and Theorem 2.7: Since the conditioned measure is a restriction of the original one, the long-time behaviour of the conditioned process is identical to that of the original process. Hence, the statements of Proposition 2.6 and Theorem 2.7 hold. 5.1.2. Condition (B) fails. The definition of h_+ in this case simplifies to
h_+(x) = U_-(x − b) for x > b, h_+(x) = 0 for x < a.
This function is plainly not positive everywhere. It is nonetheless harmonic for the process killed on entering [a, b].
The conditionings in Theorem 2.5 and Proposition 2.4 can still be carried out but, as we now prove, the results are somewhat different. Let h↑ : (b, ∞) → [0, ∞) be given by h↑(x) = U_-(x − b), the restriction of h_+ to (b, ∞). As shown by Chaumont and Doney [5], this function is harmonic for the process ξ killed on entering (−∞, b], and the h-transform of this process using h↑ is the process ξ conditioned to avoid (−∞, b]. We will write (P^x_↑)_{x∈(b,∞)} for the probabilities associated with this Markov process. Consider now the conditioning of Proposition 2.4. When x > b the process cannot cross below the set [a, b] and return above it without hitting the set. Therefore, we have that lim_{q↓0} P_x(Λ, t < e_q | e_q < T_{[a,b]}, ξ_{e_q} > b) = lim_{q↓0} P_x(Λ, t < e_q | e_q < T_{(−∞,b]}) = P^x_↑(Λ), the last equality being due to Chaumont and Doney [5]. For x < a, we have P_x(e_q < T_{[a,b]}, ξ_{e_q} > b) = 0 for every q > 0, so the conditioning is not meaningful. In total, the conditioning of Proposition 2.4 reduces to conditioning ξ to avoid (−∞, b].
We turn next to the conditioning in Theorem 2.5. Let us define h↓ : (−∞, a) → [0, ∞) by h↓(x) = U_+(a − x), which is a positive harmonic function for the process killed on entering [a, ∞); h-transforming with h↓ results in the process conditioned to avoid [a, ∞). As before, we write (P^x_↓)_{x∈(−∞,a)} for the probabilities associated with the conditioned process, which is killed at its lifetime ζ. By the same reasoning as in the case where (B) holds, lim_{q↓0} P_x(T_{[a,b]} > e_q) = κ̂(0)h_+(x) = κ̂(0)h↑(x) when x > b; and, when x < a, using the asymptotics of T_{[a,∞)} which we have already seen, we obtain P_x(T_{[a,b]} > e_q) = P_x(T_{[a,∞)} > e_q) ∼ κ(q)U_+(a − x) as q ↓ 0, since ξ cannot jump over [a, b] from below. If x > b and Λ ∈ F_t, the same technique as in the proof of Theorem 2.5 gives rise to the calculation of lim_{q↓0} P_x(Λ, t < e_q | e_q < T_{[a,b]}) = P^x_↑(Λ). Similarly, if x < a, we obtain lim_{q↓0} P_x(Λ, t < e_q | e_q < T_{[a,b]}) = P^x_↓(Λ, t < ζ). This shows that the conditioning from Theorem 2.5 leads not to a single Doob h-transform of a killed Lévy process, but rather to a Markov process which behaves entirely differently depending on whether it is started above or below the interval. The long-time behaviour can be deduced from Chaumont and Doney [5]: the conditioned process tends to +∞ when started above b, and is killed when started below a.

5.2. Study of h_-. This section is kept informal; the claims can be proved by an adaptation of the arguments developed in Section 4. In order to study h_- we need to assume that E[H_1] < ∞ and Ê[H_1] < ∞. Note that here the descending ladder height subordinator has finite lifetime ζ, so we understand Ê[H_1] = Ê[H_1 1_{{1<ζ}}]. The function h_- is merely superharmonic, in the sense that the defining equality holds with "≤" in place of "=". We may still define the superharmonic transform, but the transformed process is now a killed Markov process with lifetime ζ. The dual version of the conditioning of Proposition 2.4 is then given by
(33) P^x_-(Λ, t < ζ) = lim_{q↓0} P_x(Λ, t < e_q | e_q < T_{[a,b]}, ξ_{e_q} < a), x ∈ R \ [a, b],
and gives rise to a killed strong Markov process. This is a generalisation of the subordinator conditioned to stay below a level, as studied in Kyprianou et al. [13].