Exponential functionals of Markov additive processes

We provide necessary and sufficient conditions for convergence of exponential integrals of Markov additive processes. Unlike in the classical Lévy case studied by Erickson and Maller, we have to distinguish between almost sure convergence and convergence in probability. Our proofs rely on recent results on perpetuities in a Markovian environment by Alsmeyer and Buckmann.


Introduction
Given a bivariate Lévy process $(\xi_t, \eta_t)_{t\ge 0}$, the corresponding exponential functional is defined as
\[ \int_{(0,\infty)} e^{-\xi_{t-}}\,\mathrm{d}\eta_t, \]
provided that the integral converges a.s. Necessary and sufficient conditions for this convergence in terms of the Lévy characteristics of $(\xi_t, \eta_t)_{t\ge 0}$ have been given in [15, Thm. 2].
As shown in [25], exponential functionals of Lévy processes describe exactly the stationary distributions of generalized Ornstein-Uhlenbeck processes, a class of processes that stems from physics and nowadays has numerous applications, e.g. in finance and insurance; see e.g. [21,28]. Due to this connection, the resulting importance in applications, and their complexity, exponential functionals have gained a lot of attention from various researchers over the last decades; see e.g. [6,7,9,23,26,27], to name just a few.
Dating back to [18], Markov switching models have become a popular tool in financial mathematics and elsewhere. It is thus natural to study exponential functionals with Markov switching behaviour. In our paper, given a bivariate Markov additive process $(\xi_t, \eta_t, J_t)_{t\ge 0}$ with Markovian component $(J_t)_{t\ge 0}$, we denote the exponential integral of the Markov additive process $(\xi_t, \eta_t, J_t)_{t\ge 0}$ by
\[ E_{(\xi,\eta)}(t) := \int_{(0,t]} e^{-\xi_{s-}}\,\mathrm{d}\eta_s, \quad 0 < t < \infty, \tag{1.2} \]
and, in analogy to the Lévy case, refer to its limit
\[ E^{\infty}_{(\xi,\eta)} := \lim_{t\to\infty} E_{(\xi,\eta)}(t) \tag{1.3} \]
as the exponential functional, whenever it exists.
We prove necessary and sufficient conditions for convergence of $E_{(\xi,\eta)}(t)$ as $t \to \infty$. As it turns out, in contrast to the classical Lévy setting, here we have to distinguish between almost sure convergence and convergence in probability; we also provide an example of an integral that converges in probability but not almost surely. Another somewhat surprising contrast to the classical setting is the fact that $\lim_{t\to\infty} \xi_t = \infty$ a.s. is no longer necessary for convergence of the integral. Thirdly, the possible degenerate behaviour of $E_{(\xi,\eta)}(t)$ allows for much more flexibility compared to the Lévy setting.
Note that exponential functionals of (Markov) additive processes have recently attracted the attention of other researchers as well. Finiteness and tails of the functional (1.3) with η t = t are treated in the recent manuscript [1]. In [30,31] the functional (1.3) with η t = t is studied with an emphasis on moments, while [33] considers similar questions and relations to discrete random structures for the special case of (1.3) for η t = t and ξ being a Markov modulated subordinator. Further, exponential integrals of Markov additive processes (1.2) with η t = t appear in the Lamperti-Kiu representation of real self-similar Markov processes, see e.g. [12,13,24] and references therein.
The paper is organized as follows. Section 2 briefly reviews known results on convergence of exponential integrals of (bivariate) Lévy processes and on perpetuities in a Markovian environment. The class of bivariate Markov additive processes that we shall work with, together with some relevant properties, is introduced in Section 3. In Section 4 we present and prove our main result giving necessary and sufficient conditions for convergence of the exponential integral (1.3), and we discuss the degenerate cases. Finally, Section 5 is devoted to deriving sufficient conditions for convergence of (1.3) that are easier to apply than those from Section 4.
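As a purely illustrative aside, the finite-horizon integral (1.2) can be approximated on a time grid. The two-state chain, the switching rate and the pure-drift components below are hypothetical choices for this sketch, not taken from the paper.

```python
import math
import random

def exponential_integral(T=50.0, dt=1e-3, q=1.0, seed=0):
    """Euler approximation of E_(xi,eta)(T) = int_(0,T] exp(-xi_{s-}) d eta_s
    for a two-state modulating chain; all drifts are illustrative choices."""
    rng = random.Random(seed)
    mu_xi = {0: 2.0, 1: 1.0}    # state-dependent drift of xi (assumption)
    mu_eta = {0: 1.0, 1: 1.0}   # state-dependent drift of eta (assumption)
    j, xi, E, t = 0, 0.0, 0.0, 0.0
    while t < T:
        if rng.random() < q * dt:              # chain switches state at rate q
            j = 1 - j
        E += math.exp(-xi) * mu_eta[j] * dt    # d eta_s = mu_eta(J_s) ds
        xi += mu_xi[j] * dt                    # xi drifts upward, e^{-xi} decays
        t += dt
    return E
```

Since $\xi$ here drifts to $+\infty$ at a rate between 1 and 2, the approximation stabilizes between $1/2$ and $1$ as $T$ grows, illustrating convergence of the exponential integral.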

Markov modulated perpetuities
Recently, Alsmeyer and Buckmann [2] generalized the results from [16] to a Markovian environment. More precisely, they study convergence of
\[ Z_n := \sum_{k=1}^{n} B_k \prod_{\ell=1}^{k-1} A_\ell \tag{2.5} \]
as $n \to \infty$, where $(A_n, B_n)_{n\in\mathbb{N}}$ is a sequence of random vectors in $\mathbb{R}^2$ which is modulated by an ergodic Markov chain $(M_n)_{n\in\mathbb{N}_0}$ with countable state space $S$ and stationary law $\pi$, in the sense that, conditionally on $M_n = j_n \in S$, $n = 0, 1, 2, \ldots$, the random vectors $(A_1, B_1), (A_2, B_2), \ldots$ are independent, and for all $n \in \mathbb{N}$ the conditional law of $(A_n, B_n)$ is temporally homogeneous and depends only on $(j_{n-1}, j_n) \in S^2$. We write $P_j := P(\cdot\,|M_0 = j)$ and $P_\pi := \sum_{j\in S} \pi_j P_j$. Then, under the non-degeneracy conditions
\[ P_\pi(A = 0) = 0 \quad \text{and} \quad P_\pi(B = 0) < 1 \tag{2.6} \]
for a generic copy $(A, B)$ of the $(A_n, B_n)$ under $P_\pi$, it follows from [2, Thm. 3.1] that $Z_n$ converges $P_j$-a.s. if $\lim_{n\to\infty} \prod_{k=1}^{\tau_n(j)} A_k = 0$ $P_\pi$-a.s. and the integral condition (2.7) holds, where the $\tau_n(j)$ are the return times of $(M_n)_{n\in\mathbb{N}_0}$. Moreover $P_\pi(|\prod_{\ell=1}^{\tau_1(j)} A_\ell| \ge 1) < 1$, and hence the denominator in the integral in (2.7) is non-zero. We remark that in [2, Thm. 3.1 and Rem. 3.3] the authors state that (2.7) for some $j \in S$ is equivalent to (2.7) for all $j \in S$. As we were not able to follow their argument in the discrete-time setting, or to derive a similar result in the continuous-time setting, we stick to the stronger assumption here. Interestingly, in contrast to the case of i.i.d. sequences $(A_n, B_n)_{n\in\mathbb{N}}$, if almost sure convergence fails, the perpetuity (2.5) can still converge in probability. More precisely, from [2, Thm. 3.4] we derive, under the assumption (2.6), that $P_j(Z_n \in \cdot)$ for some $j \in S$ converges weakly to some probability measure $Q_j$ as $n \to \infty$ if $\lim_{n\to\infty} \prod_{k=1}^{\tau_n(j)} A_k = 0$ $P_\pi$-a.s. and the condition (2.9) holds. In this case, there exists a random variable $Z_\infty$ such that $Q_j(\cdot) = P_j(Z_\infty \in \cdot)$ and $Z_n \to Z_\infty$ in $P_j$-probability. Moreover, if (2.9) holds for some $j \in S$, convergence in probability is valid for all $j \in S$. Furthermore, if (2.9) is violated and the degeneracy condition (2.10) fails, then $|Z_n| \to \infty$ in $P_\pi$-probability by [2, Thm. 3.4].
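In discrete time, a Markov modulated perpetuity of this type can be sketched numerically as follows; the two-state modulating chain and the transition-dependent values of $(A, B)$ are invented for illustration. With the contraction factors chosen below, the partial sums stabilize quickly.

```python
import random

def markov_perpetuity(n, seed=1):
    """Compute Z_n = sum_{k<=n} B_k * prod_{l<k} A_l, where (A_k, B_k)
    depends on the transition (M_{k-1}, M_k) of a two-state chain.
    The transition law and the (A, B) values are illustrative choices."""
    rng = random.Random(seed)
    M = 0                                    # start the chain in state 0
    Z, prod = 0.0, 1.0                       # prod = A_1 * ... * A_{k-1}
    for _ in range(n):
        M_next = rng.choice([0, 1])          # symmetric jump chain (assumption)
        A = 0.5 if (M, M_next) == (0, 0) else 0.8
        B = 1.0 if M_next == 0 else -0.5
        Z += B * prod                        # add the k-th perpetuity term
        prod *= A                            # update the running product
        M = M_next
    return Z
```

Because every $|A_k| \le 0.8$ here, the running product tends to $0$ geometrically and $Z_n$ converges almost surely, mirroring the role of the product condition above.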

Bivariate Markov additive processes
The theory of Markov additive processes (MAPs) goes back to Çinlar [10,11] and has been enhanced since then by various researchers (see e.g. [5,13,17,22]). In this paper we restrict ourselves to the most popular framework of MAPs, that is, to Markov modulated Lévy processes, similar to the setting in [5] and [22]. We refer to [4] for a standard modern treatment of the topic and set notation as follows.
Let $(J_t)_{t\ge 0}$ be a right-continuous, ergodic, continuous-time Markov chain with finite or countable state space $S \subseteq \mathbb{N}$, intensity matrix $Q = (q_{i,j})_{i,j\in S}$ and stationary law $\pi = (\pi_j)_{j\in S}$. We denote the jump times of $(J_t)_{t\ge 0}$ by $\{T_n, n \in \mathbb{N}_0\}$, with $T_0 := 0$, while $\tau_0(j) := 0$ and $\tau_n(j) := \inf\{T_k > \tau_{n-1}(j) : J_{T_k} = j\}$ are its return times to $j$, and $\tau_n^-(j)$ are the corresponding exit times under $P_j$. The sojourn time set of $(J_t)_{t\ge 0}$ in a state $j \in S$ is denoted by $\mathcal{T}_j := \{t \ge 0 : J_t = j\}$. Let $(\xi_t^{(j)}, \eta_t^{(j)})_{t\ge 0}$, $j \in S$, be bivariate Lévy processes with characteristic triplets $(\gamma^{(j)}, \Sigma^{(j)}, \nu^{(j)})$, where $\gamma^{(j)} = (\gamma_\xi^{(j)}, \gamma_\eta^{(j)})$, $\nu^{(j)}$ denotes the Lévy measure, and $\Sigma^{(j)}$ the Gaussian covariance matrix.
The joint process (ξ t , η t , J t ) t≥0 is a MAP and we refer to (J t ) t≥0 as its Markovian component, while (ξ t , η t ) t≥0 is its additive component. Clearly the marginal processes (ξ t , J t ) t≥0 and (η t , J t ) t≥0 are MAPs as well.
We assume that the introduced processes are defined on a complete filtered probability space (Ω, F , F = (F t ) t≥0 , P) where F is the augmented natural filtration induced by (ξ t , η t , J t ) t≥0 . We write P j := P(·|J 0 = j) and P π = j∈S π j P j , with the corresponding expectations E j and E π defined accordingly.
Note that for simplicity we will sometimes abuse notation and, given $J_t = j$, identify the corresponding state-$j$ processes with each other. Due to the switching Lévy process character of the first summand in the additive component it is not surprising that, given the Markovian component, these components admit a Lévy-Itô-type decomposition (see e.g. [32, Thm. 19.3]). As an example, we decompose $(\eta_t)_{t\ge 0}$ into drift, Gaussian and jump parts, where $(W_t)_{t\ge 0}$ is a standard Brownian motion and the $N_\eta^{(j)}$ are Poisson random measures with intensity measures $\mathrm{d}s\,\nu_\eta^{(j)}(\mathrm{d}x)$, respectively. Using this decomposition it is straightforward to define integration with respect to the additive component of a MAP given its Markovian component.
Another property of the additive components that carries over from Lévy processes, and which will be of importance in our results, is the well-known fact that Lévy processes in $\mathbb{R}$ either drift to $\pm\infty$ or oscillate. To formulate the analogous result for MAPs, we introduce their long-term mean (here for the MAP $(\xi_t, J_t)_{t\ge 0}$), denoted $\kappa_\xi$. Whenever $S$ is finite, $\kappa_\xi$ fully determines the long-term behaviour of $(\xi_t)_{t\ge 0}$ as follows (see [4, Prop. XI.2.10]): if $\kappa_\xi > 0$ then $\lim_{t\to\infty}\xi_t = \infty$, if $\kappa_\xi < 0$ then $\lim_{t\to\infty}\xi_t = -\infty$, and if $\kappa_\xi = 0$ then $(\xi_t)_{t\ge 0}$ oscillates. For countable $S$, as noted in [19, Cor. 2.2], $0 < \kappa_\xi < \infty$ implies $\lim_{t\to\infty} \xi_t/t = \kappa_\xi$ $P_\pi$-a.s., so that in particular $\lim_{t\to\infty} \xi_t = \infty$ $P_\pi$-a.s.
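For a two-state chain with pure-drift $\xi$-components (no Gaussian part and no jumps, which is an assumption of this sketch), the long-term mean reduces to the $\pi$-weighted average of the state-wise drifts, with $\pi$ obtained from $\pi Q = 0$:

```python
def stationary_two_state(a, b):
    """Stationary law of the CTMC with intensity matrix
    Q = [[-a, a], [b, -b]]: the solution of pi Q = 0 with pi summing to 1."""
    return (b / (a + b), a / (a + b))

def long_term_mean(a, b, mu0, mu1):
    """Simplified long-term mean kappa_xi = sum_j pi_j * mu_j, valid under
    the pure-drift assumption (no Brownian part, no jumps) of this sketch."""
    p0, p1 = stationary_two_state(a, b)
    return p0 * mu0 + p1 * mu1
```

For instance, with rates $a = 1$, $b = 3$ and drifts $2$ and $-1$ one gets $\pi = (3/4, 1/4)$ and $\kappa_\xi = 5/4 > 0$, so $\xi_t \to \infty$ in this toy case.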

Main theorem
Recall from (1.3) that, given a bivariate Markov additive process $(\xi_t, \eta_t, J_t)_{t\ge 0}$ with Markovian component $(J_t)_{t\ge 0}$ as introduced above, we denote the exponential integral of $(\xi_t, \eta_t)_{t\ge 0}$ by $E_{(\xi,\eta)}(t)$. The following theorem provides necessary and sufficient conditions for almost sure and weak convergence of $E_{(\xi,\eta)}(t)$ as $t \to \infty$. To formulate the conditions we introduce, with $A_\xi^{(j)}$ from (2.2), the integral conditions (4.1) and (4.2). Theorem 4.1. Assume $\lim_{t\in\mathcal{T}_j, t\to\infty} \xi_t = \infty$ $P_j$-a.s. for some, hence all, $j \in S$.
The proof of this theorem is given in Sections 4.4, 4.5 and 4.6 below, which are devoted to almost sure convergence, convergence in probability, and divergence, respectively. In particular, Lemma 4.10 proves the equivalence of $\lim_{t\in\mathcal{T}_j, t\to\infty} \xi_t = \infty$ $P_j$-a.s. for some and for all $j \in S$.
Remark 4.2. If $S$ is finite and (4.3) is not valid, but $E_{(\xi,\eta)}(t)$ converges in $P_j$-probability, then $E_{(\xi,\eta)}(t)$ also converges $P_\pi$-a.s., i.e. the two types of convergence are equivalent in this case. This is also true in the discrete setting, as argued in [2, Rem. 3.8], and the argument given there carries over to the continuous-time setting studied here: By the above theorem, convergence in probability implies $\lim_{t\in\mathcal{T}_j, t\to\infty} \xi_t = \infty$ a.s. and (4.2). From the proof of the convergence in probability part of Theorem 4.1 (Proposition 4.12 below) we will see that this implies $P_j$-a.s. convergence of the "conflated exponential integrals" $\hat{E}_{(\xi,\eta)}(t)$ for all $j \in S$, as defined in that proof. But for these one easily verifies the claim because $S$ is finite.

Examples
Note that, in contrast to the standard Lévy case, $\xi_t \to \infty$ is not necessary for almost sure convergence of the exponential integral. This is illustrated by the following example.
Example 4.3. Let $S = \mathbb{N}$ and let $(J_t)_{t\ge 0}$ be a continuous-time petal flower Markov chain (see e.g. [3]) with intensity matrix determined by some fixed $q > 0$ and $q_{1,j} = q p_{1,j}$, $j = 2, 3, \ldots$, for transition probabilities $p_{1,j} > 0$, $j \in \mathbb{N}\setminus\{1\}$. Then $(J_t)_{t\ge 0}$ is an irreducible and recurrent Markov process with stationary distribution $\pi$. As additive component we choose $\xi$ and $\eta$ to be conditionally independent with $Y^{(2)}_t \equiv 0$, that is, the second component of $(\xi_t, \eta_t)_{t\ge 0}$ has no common jumps with $(J_t)_{t\ge 0}$. We then directly observe that the assumption of Theorem 4.1 is met. Nevertheless, $(\xi_t)_{t\ge 0}$ does not tend to $\infty$ as $t \to \infty$: since $(\xi_t)_{t\ge 0}$ is constant between two jumps of $(J_t)_{t\ge 0}$, setting $N_t := \sum_{n\ge 1} \mathbb{1}_{\{T_n \le t\}}$ we clearly obtain that the limit superior is infinite. This is consistent with the (only formal!) computation of $\kappa_\xi$. On the other hand, by the Borel-Cantelli lemma we conclude that the limit inferior is finite, and we observe that under $P_1$ the exponential integral converges.
The following example provides a scenario where the exponential integral converges in probability but not almost surely.
Example 4.4. Let $(J_t)_{t\ge 0}$ again be a petal flower Markov chain with intensity matrix determined by some fixed $q > 0$ and $q_{1,j} = q p_{1,j}$, $j = 2, 3, \ldots$, for transition probabilities $p_{1,j} > 0$, $j \in \mathbb{N}\setminus\{1\}$. Then $(J_t)_{t\ge 0}$ is an irreducible and recurrent Markov chain. Further, we assume that the first component of $(\xi, \eta)$ has no common jumps with $(J_t)_{t\ge 0}$. As second component we assume $\nu^0_\eta(\mathrm{d}y) = \nu_\eta^{(0)}(\mathrm{d}y)$, and hence (4.2) is fulfilled by assumption for $j = 0$. Hence the exponential integral converges in $P_j$-probability. Nevertheless, almost sure convergence is impossible, as the integral oscillates, as can be shown again using the Borel-Cantelli lemma.
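The balance equations of a petal flower chain as in the two examples above can be checked numerically for finitely many petals. The labelling below (centre state $0$, uniform return rate $q$ from every petal) is an assumption of this sketch, not taken from the examples.

```python
def petal_flower_Q(q, probs):
    """Intensity matrix of a truncated petal-flower chain: state 0 is the
    centre, q_{0,j} = q * probs[j-1] and q_{j,0} = q for petals j >= 1."""
    n = len(probs) + 1
    Q = [[0.0] * n for _ in range(n)]
    Q[0][0] = -q
    for j, p in enumerate(probs, start=1):
        Q[0][j] = q * p     # centre -> petal j
        Q[j][0] = q         # petal j -> centre
        Q[j][j] = -q
    return Q

def candidate_pi(probs):
    """pi_0 = 1/2, pi_j = probs[j-1] / 2: the stationary law suggested by
    the balance equations pi_j * q = pi_0 * q * probs[j-1] (a sketch)."""
    return [0.5] + [p / 2 for p in probs]
```

Checking $\pi Q = 0$ column by column confirms that half of the stationary mass sits in the centre, regardless of the petal probabilities.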

Degeneracy of E (ξ,η) (t)
Before we prove Theorem 4.1, we discuss the possible degenerate behaviour of $E_{(\xi,\eta)}$ in more detail. This study of degeneracy will rely on a combination of results from [2] and [15].
Recall first that degeneracy in the classical Lévy case $S = \{1\}$ is characterized by (2.4). To study degeneracy for larger state spaces $S$, note that at the jump times of $(J_t)_{t\ge 0}$ we can rewrite $E_{(\xi,\eta)}$ as a perpetuity (4.4) driven by a sequence of random vectors modulated by a Markov chain $(M_n)_{n\in\mathbb{N}}$, namely the discrete-time jump chain of $(J_t)_{t\ge 0}$. W.l.o.g. we assume that $(M_n)_{n\in\mathbb{N}}$ inherits the ergodicity of $(J_t)_{t\ge 0}$ (see the proof of Prop. 5.5 for more details). Then its stationary law $\pi_M$ is equivalent to $\pi$ and, as shown in [2, Eq. (17) and Lemma 4.1], degeneracy of the Markov modulated perpetuity (4.4) in the sense of (2.10) is equivalent to the existence of a unique sequence $\{c_j, j \in S\}$ in $\mathbb{R}$ such that (4.5) holds in our setting. Further, by [2, Prop. 4.6], validity of (4.5) implies (and is thus equivalent to) (4.6) for any $n \in \mathbb{N}$, which in turn is equivalent to (4.3) for all $t \ge 0$, as will be shown in the following proposition.
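Concretely, writing $T_k$ for the successive jump times, the identification of the exponential integral with a perpetuity can be sketched as follows (the explicit form of $(A_k, B_k)$ below is reconstructed from the standard Lévy-case computation and should be read as a sketch):

```latex
E_{(\xi,\eta)}(T_n) \;=\; \sum_{k=1}^{n} B_k \prod_{\ell=1}^{k-1} A_\ell,
\qquad
A_k := e^{-(\xi_{T_k}-\xi_{T_{k-1}})},
\qquad
B_k := \int_{(T_{k-1},T_k]} e^{-(\xi_{s-}-\xi_{T_{k-1}})}\,\mathrm{d}\eta_s,
```

since $e^{-\xi_{T_{k-1}}} = \prod_{\ell=1}^{k-1} A_\ell$ and the increments of the integral over $(T_{k-1}, T_k]$ factor accordingly.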
Proposition 4.5. Assume there exists a sequence {c j , j ∈ S} in R such that (4.6) holds for any n ∈ N. Then (4.3) holds for all t ≥ 0.
Proof. We argue by contradiction: assume there exists a sequence $\{c_j, j \in S\}$ in $\mathbb{R}$ such that (4.6) holds for all $n \in \mathbb{N}$, but for any sequence the identity (4.3) fails for some $t'$, i.e. (4.7) holds. We choose the sequence $\{c_j, j \in S\}$ such that (4.6) holds for all $n$, and then choose $t'$ such that (4.7) holds for the given $\{c_j, j \in S\}$.
We will first show that Assume there exists a sequence {c j , j ∈ S} such that then from (4.9) P π -a.s.
where the two factors on the right-hand side are (conditionally on $(J_t)_{t\ge 0}$) independent, while the left-hand side is a constant. Thus we deduce that $\{\tilde{c}_j, j \in S\} = \{c_j, j \in S\}$, in contradiction to (4.10), such that (4.8) is true. Finally, to prove the assertion of the proposition, note that from (4.9) we have $P_\pi$-a.s.
Conversely, from (4.8) and due to independence we obtain the opposite, which yields the desired contradiction.
Remark 4.6. Note that uniqueness of the sequence {c j , j ∈ S} in (4.6) as shown in [2] directly implies uniqueness of the sequence {c j , j ∈ S} in (4.3).
The following proposition gives a necessary and sufficient condition for a degenerate behaviour of the exponential integral in terms of (ξ t , η t , J t ) t≥0 .
Proof. Recall first that by [29, Thm. II.37], for all $j \in S$ it holds that $e^{-\xi^{(j)}_t} = \mathcal{E}(U^{(j)})_t$ for some Lévy processes $U^{(j)}$ defined in (4.13), where $(\mathcal{E}(U^{(j)})_t)_{t\ge 0}$ denotes the Doléans-Dade stochastic exponential of $(U^{(j)}_t)_{t\ge 0}$, with $\mathcal{E}(U^{(j)})_0 = 1$. (4.14) Now assume (4.3) for all $t \ge 0$. Then for all $j \in S$ we observe immediately, $P_j$-a.s. for $t < T_1$, the identity (4.15), that is, in terms of $U^{(j)}$ from (4.13), and as $(\mathcal{E}(U^{(j)})_t)_{t\ge 0}$ uniquely solves (4.14), this determines $\eta^{(j)}$ for all $t < T_1$, which extends to all $t \ge 0$ due to the Lévy properties. Thus this is a necessary condition for (4.3). Further, if $S = \{1\}$, then the computation leading to (4.15) extends to all $t \ge 0$ and there is nothing more to show. Thus assume that $S$ consists of at least two different states, such that $T_1 < \infty$ $P_\pi$-a.s. due to the recurrence of $(J_t)_{t\ge 0}$. Then from (4.3) we obtain, $P_j$-a.s. for all $j \in S$ with $k = J(T_1)$, the corresponding identity. Finally, to show (4.11), note first that by [29, Thm. II.37] it follows directly from the definition of $(U_t)_{t\ge 0}$ in (4.12) that $\mathcal{E}(U)_t = e^{-\xi_t}$. Together with (4.15) this yields (4.11).
The converse can be shown by direct computation.
Remark 4.8. From Equation (4.11) it follows that in the case $(X_t) \equiv 0$ (that is, $\Delta\xi_{T_n} = 0 = \Delta\eta_{T_n}$ $P_\pi$-a.s. for all $n$) the sequence $\{c_i, i \in S\}$ has to be constant, equal to some $c \in \mathbb{R}$, and (4.11) simplifies to $\eta_t = -c\,U_t = -c\,\mathrm{Log}(e^{-\xi_t})$ for all $t \ge 0$, with $\mathrm{Log}$ denoting the stochastic logarithm; this is equivalent to (2.4) in the Lévy case $S = \{1\}$ (see e.g. [8, Cor. 2.3]). If (4.11) holds and $\Delta\xi_{T_n} = 0$ $P_\pi$-a.s. for all $n$, then $F^{(i,j)}_\eta$ degenerates to an atom at $c_i - c_j$, while if $\Delta\eta_{T_n} = 0$ $P_\pi$-a.s. for all $n$, then $F^{(i,j)}_\xi$ degenerates to an atom at $\log(c_j/c_i)$, which implies that this can only happen if either $c_i > 0$ for all $i \in S$, or $c_i < 0$ for all $i \in S$ (or $c_i \equiv 0$, which would correspond to the trivial and excluded case $\eta \equiv 0$).

Proof of Theorem 4.1: Almost sure convergence
For the proof of the convergence statements in Theorem 4.1 we need the following lemma. Note that the introduced process $\hat{\xi}$ is obtained from $\xi$ by "conflating" the excursions of $\xi$ away from $\mathcal{T}_j$ into single jumps and identifying the $n$-th exit and $n$-th return time of $j$. Other appearing processes will be conflated likewise when this is necessary. Lemma 4.9. Fix $j \in S$ and recall the sojourn time set $\mathcal{T}_j := \{t \ge 0 : J_t = j\}$. Then under $P_j$ the conflated process $(\hat{\xi}_t)_{t\ge 0} := (\hat{\xi}^j_t)_{t\ge 0}$ is a Lévy process with characteristic triplet $(\gamma_\xi^{(j)}, \sigma^2_{\xi^{(j)}}, \hat{\nu})$. Proof. Clearly $\hat{\xi}_0 = 0$ $P_j$-a.s., since $\xi_0 = 0$ $P_j$-a.s. Further, for $t \in \mathcal{T}_j$ the process $\xi_t$ equals in law a Lévy process with triplet $(\gamma_\xi^{(j)}, \sigma^2_{\xi^{(j)}}, \nu_\xi^{(j)})$, and thus $\hat{\xi}$ inherits independent increments and càdlàg paths from $\xi$. Stationarity of the increments follows from a standard property of MAPs, namely [4, Eq. XI.2.1]. Finally, the form of the jump measure $\hat{\nu}$ results from adding the jumps due to conflation, which happen at rate $-q_{j,j}$, to the Lévy measure $\nu_\xi^{(j)}$.
We also observe the following useful solidarity property.
Lemma 4.10. Consider the process (ξ j t ) t≥0 as in Lemma 4.9. Then lim t→∞ξ j t = ∞ P j -a.s. for some j ∈ S if and only if lim t∈T j ,t→∞ ξ t = ∞ P j -a.s. for all j ∈ S.
We now prove the statement on almost sure convergence in Theorem 4.1, that is, we show: Proposition 4.11. Assume $\lim_{t\in\mathcal{T}_j, t\to\infty} \xi_t = \infty$ $P_j$-a.s. for some, hence all, $j \in S$. The exponential integral $E_{(\xi,\eta)}(t)$ converges $P_\pi$-a.s. as $t \to \infty$ to some random variable $E^\infty_{(\xi,\eta)}$ if and only if (4.1) holds for all $j \in S$.
Proof. We start with the "if" statement. Fix any $j \in S$ and let $N_t = \sum_{n\in\mathbb{N}} \mathbb{1}_{\tau_n(j)\le t}$ for $t \ge 0$, and split the integral into two parts $I_1$ and $I_2$ for any $t \ge 0$. Here, $I_1$ is a classical perpetuity, which converges by our assumptions. To find an appropriate bound for $I_2$, note first that it can be controlled via an i.i.d. sequence $(W_n)_{n\in\mathbb{N}}$. We will show that for some $c > 0$
\[ \lim_{t\to\infty} e^{c\tau_{N_t}(j)}\, e^{-\xi_{\tau_{N_t}(j)}}\, W_{N_t} = 0 \quad P_j\text{-a.s.}, \tag{4.17} \]
which is equivalent to stating (4.18). Since $\lim_{t\in\mathcal{T}_j, t\to\infty}\xi_t = \infty$, and since by Lemma 4.9 the conflated process $(\hat{\xi}^j_t)_{t\ge 0}$ is a Lévy process under $P_j$, we may choose $c$ suitably whenever the appearing expectation is finite, and set $c = 1$ otherwise. This then yields (4.18) (in case of infinite expectation using Kesten's trichotomy [20]; also see [3, Proof of Thm. ...]), since $\lim_{t\to\infty} \xi_{\tau_{N_t}(j)}/t > 0$. Hence (4.17) follows and the growth of $|I_2|$ is bounded by $e^{-ct}$, which proves the almost sure convergence under $P_j$. As $j \in S$ was arbitrary, this implies $P_\pi$-a.s. convergence.
For the "only if" statement, assume (4.1) fails; then $|I_2|$ does not converge. Thus, by the conditional independence of $I_1$ and $I_2$ given $(J_t)_{t\ge 0}$, also $E_{(\xi,\eta)}(t)$ does not converge $P_j$-a.s., as we had to show.

Proof of Theorem 4.1: Convergence in probability
In this section we prove: Proposition 4.12. If $\lim_{t\in\mathcal{T}_j, t\to\infty} \xi_t = \infty$ $P_j$-a.s. and (4.2) hold for some $j \in S$, then for all $j \in S$, $E_{(\xi,\eta)}(t) \to E^\infty_{(\xi,\eta)}$ in $P_j$-probability.
Proof. Fix $j \in S$ such that $\lim_{t\in\mathcal{T}_j, t\to\infty}\xi_t = \infty$ and (4.2) hold. Under $P_j$ we split up the exponential integral as in (4.20), where $N_t := N_t(j) := \sum_{n\in\mathbb{N}}\mathbb{1}_{\tau_n(j)\le t}$ and $N^-_t := N^-_t(j) := \sum_{n\in\mathbb{N}}\mathbb{1}_{\tau^-_n(j)\le t}$ count the returns to and exits from $j$ up to time $t$, respectively. Define $\hat{F}_k$ accordingly; then clearly $(\hat{F}_k)_{k\in\mathbb{N}}$ forms an i.i.d. sequence, and the conflated process $(\hat{F}_t)_{t\ge 0}$ (in the same sense as in Lemma 4.9) is a compound Poisson process, since the sojourn times $\tau^-_k(j) - \tau_{k-1}(j)$, i.e. the interarrival times of $(\hat{F}_t)_{t\ge 0}$, are exponentially distributed. Thus for any $t \in \mathcal{T}_j$ the integral can be written in terms of $\mathrm{d}\hat{F}_s$, and the conflated version $(\hat{E}^j_{(\xi,\eta)}(t))$ of this process (which is constant on $\mathcal{T}^c_j$ anyway) is an exponential integral of the bivariate Lévy process $(\hat{\xi}^j_t, \check{\eta}^j_t + \hat{F}_t)_{t\ge 0}$, where $\check{\eta}$ is the variant of $\hat{\eta}$ that has no jumps at conflation times. Since $\hat{\xi}^j_t \to \infty$ a.s. by assumption, we see from [15, Thm. 2] (also see Equation (2.1)) that $(\hat{E}^j_{(\xi,\eta)}(t))$ converges almost surely under $P_j$ if and only if
\[ \int_{(1,\infty)} \frac{\log y}{A_{\hat{\xi}^j}(\log y)}\, |\nu_{\check{\eta}^j+\hat{F}}(\mathrm{d}y)| < \infty, \tag{4.21} \]
which is equivalent to (4.2) by Lemma 4.9. It remains to show that for $t \notin \mathcal{T}_j$ the appearing perturbation term $\int_{(\tau^-_{N_t+1}(j), t]} e^{-\xi_{s-}}\,\mathrm{d}\eta_s$ in (4.20) is bounded appropriately. To see this, consider the corresponding convolution integral, which has a distributional limit by the key renewal theorem [4, Thm. V.4.3]. Hence the perturbation term converges in distribution, and $E_{(\xi,\eta)}(t)$ converges in $P_j$-probability as $t \to \infty$ by Slutsky's theorem, as claimed. Finally, note that convergence under $P_{j'}$ follows due to the positive recurrence of $(J_t)$: after reaching state $j$ the exponential integral converges in probability as shown, while up to $\tau_1(j)$ it cannot diverge, as this would imply divergence under $P_j$ by the same argument.

Proof of Theorem 4.1: Divergence
We will prove divergence of the exponential integral $E_{(\xi,\eta)}$ in the two possible cases separately, and start with: Proposition 4.13. Assume that the degeneracy condition (4.3) fails and $\liminf_{t\in\mathcal{T}_j, t\to\infty} \xi_t < \infty$ for some $j \in S$. Then $|E_{(\xi,\eta)}(t)| \to \infty$ in $P_\pi$-probability.
Proof. As seen in (4.8) in the proof of Proposition 4.5, whenever (4.3) fails we find $u > 0$ such that (4.22) holds for any sequence $\{c_i, i \in S\}$. Now consider the discretized perpetuity $Z^u_n$. Here $(A^u_k, B^u_k)_{k\in\mathbb{N}}$ is a sequence of random vectors that is modulated by an ergodic Markov chain $(J^u_n)_{n\in\mathbb{N}_0}$, a skeleton chain of $(J_t)_{t\ge 0}$. Denoting the $k$-th return time of $(J^u_n)_{n\in\mathbb{N}}$ to $j$ by $\tau^u_k(j)$, we note that the corresponding product does not tend to $0$ a.s. as $k \to \infty$, due to our assumption. Together with (4.22) we thus conclude from [2, Thm. 3.4] that $|Z^u_n| \to \infty$ in $P_{\pi^u}$-probability as $n \to \infty$, where the invariant distribution $\pi^u$ of $(J^u_n)_{n\in\mathbb{N}_0}$ is equivalent to $\pi$. Further, with $n_t := \sup\{n \in \mathbb{N} : nu \le t\}$ and $r_t = t - n_t u \in [0, u)$, under $P_\pi$ we may decompose $E_{(\xi,\eta)}(t)$ via a copy $(Z^u_{n_t})'$ of $Z^u_{n_t}$ that is independent of the past up to time $r_t$. Since $|Z^u_{n_t}| \to \infty$ in $P_\pi$-probability as $t \to \infty$, while $e^{-\xi_{r_t}}$ is bounded away from zero and $\int_{(0,r_t]} e^{-\xi_{s-}}\,\mathrm{d}\eta_s$ is finite, we observe that $|E_{(\xi,\eta)}(t)| \to \infty$ in $P_\pi$-probability.
To complete the proof of Theorem 4.1 it remains to show: Proposition 4.14. Assume that both the degeneracy condition (4.3) and (4.2) fail for all $j \in S$. Then $|E_{(\xi,\eta)}(t)|$ diverges in $P_j$-probability for all $j \in S$.
Proof. Assume (4.2) fails for all $j \in S$. Fixing $j$, we follow the lines of the proof of Proposition 4.12 up to the failure of (4.21), and conclude by [15, Thm. 2] the divergence (4.23) whenever the conflated integral is not degenerate, i.e. if there is no constant $c_j \in \mathbb{R}$ such that
\[ \hat{E}^j_{(\xi,\eta)}(t) = c_j - c_j e^{-\xi_t} \quad \text{for all } t \ge 0 \ P_j\text{-a.s.} \tag{4.24} \]
This follows from the failure of (4.3), as (4.24) is either true for all $j \in S$ or for none. More precisely, (4.24) is equivalent to $\check{\eta}^j_t = -c_j \hat{U}^j_t$, which is a consequence of Proposition 4.7. This in turn is equivalent to $\eta^{(j)}_t = -c_j U^{(j)}_t$ $P_j$-a.s. and (4.25). However, if (4.24) fails for some $j' \ne j$, then (4.25) necessarily fails as well. By the same argument as at the end of the proof of Proposition 4.12, the divergence (4.23) implies divergence of $|E_{(\xi,\eta)}(t)|$ in $P_j$-probability. As $j$ was chosen arbitrarily, this yields the result.

Sufficient conditions
Although Theorem 4.1 provides necessary and sufficient conditions for convergence of $E_{(\xi,\eta)}(t)$, it is hard to apply, as the given conditions are difficult to check. Thus this section aims at additional, easy-to-check conditions for convergence of $E_{(\xi,\eta)}(t)$. In particular, we will formulate conditions in terms of the long-term mean $\kappa_\xi$.
We now treat the two exponential integrals in (5.1) separately. First, to study E (1) we need the following technical lemma.
Following ideas from [15] we now show a.s. convergence of E (1) (t) as t → ∞ under rather weak conditions.
Proof. It is sufficient to prove convergence of the given integral over some interval $(L, \infty)$ with possibly random $L \in [0, \infty)$. To find a suitable $L$, fix some constant $c \in (0, \kappa)$ and set $L := \sup\{t \ge 0 : \xi_{t-} - ct \le 0\}$ if this set is not empty, and $L := 0$ otherwise. Then $L$ is a random variable such that $\xi_t \ge ct$ for all $t > L$, and it remains to consider $\int e^{-\lambda_{s-}}\,\mathrm{d}(W^\eta_s + Y^{s,\eta}_s)$ $P_\pi$-a.s.
By Lemma 5.1, the process $W^\eta_s + Y^{s,\eta}_s$ is a square-integrable martingale with mean $0$ and quadratic variation determined by $\sigma^{2\,(J_s)}_\eta + \int x^2\,\nu^{(J_s)}_\eta(\mathrm{d}x)$. Thus, using Itô's isometry, $t \mapsto \int_{(0,t]} e^{-\lambda_{s-}}\,\mathrm{d}(W^\eta_s + Y^{s,\eta}_s)$ is a martingale with bounded, converging second moments. It therefore converges $P_\pi$-a.s. as $t \to \infty$, which yields the claim.
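The second-moment computation behind this argument can be written out as follows, where $\lambda_{s-} \ge cs$ by the choice of $L$, and $\bar{\sigma}^2$ denotes a uniform bound on the state-wise variance terms (an assumption of this sketch, in the spirit of condition (5.2)):

```latex
\mathbb{E}_\pi\Big[\Big(\int_{(0,t]} e^{-\lambda_{s-}}\,\mathrm{d}\big(W^{\eta}_s+Y^{s,\eta}_s\big)\Big)^{2}\Big]
 \;=\; \mathbb{E}_\pi\Big[\int_0^t e^{-2\lambda_{s-}}
   \Big(\sigma^{2\,(J_s)}_{\eta}+\int_{\mathbb{R}}x^{2}\,\nu^{(J_s)}_{\eta}(\mathrm{d}x)\Big)\,\mathrm{d}s\Big]
 \;\le\; \bar{\sigma}^{2}\int_0^{\infty}e^{-2cs}\,\mathrm{d}s
 \;=\; \frac{\bar{\sigma}^{2}}{2c}.
```

The uniform bound on the second moments is what makes the martingale convergence theorem applicable.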
Remark 5.3. The above obtained sufficient condition for convergence of E (1) is not optimal and only chosen for presentation here as it is easy to check and interpret. If needed, necessary and sufficient conditions for convergence of E (1) could as well be obtained by applying Theorem 4.1 in this case.
Clearly, for $S$ finite, in Proposition 5.2 we can drop the assumptions (5.2) and $\kappa_\xi < \infty$. Nevertheless, for countable $S$, (5.2) is not redundant, as the following example shows.
Then $\xi_t \to \infty$ $P_\pi$-a.s. as $t \to \infty$. Further, we set $\eta_t = \int_{(0,t]} \gamma^{(J_s)}_\eta\,\mathrm{d}s$ with suitably chosen drifts $\gamma^{(j)}_\eta$.
The next proposition gives conditions for almost sure convergence and convergence in probability of the exponential integral $E^{(2)}$ as defined in (5.1).
(i) The exponential integral $E^{(2)}(t)$ converges $P_\pi$-a.s. to a finite random variable as $t \to \infty$ if and only if (5.4) holds for all $j \in S$. (ii) The exponential integral $E^{(2)}(t)$ converges in $P_j$-probability to some random variable $E^{(2),\infty}$ if and only if (5.5) holds for some $j \in S$. Moreover, in this case convergence in $P_j$-probability holds for all $j \in S$.
Proof. W.l.o.g. we may assume that the driving process is not identically $0$, as otherwise $E^{(2)}(t) \equiv 0$ a.s. and there is nothing to show. Let $\{\tilde{T}_n, n \in \mathbb{N}_0\}$ be the jump times of $(Y^{b,\eta}_t + J_t)_{t\ge 0}$, with $\tilde{T}_0 := 0$, and set $\tilde{N}_t = \sum_{n=1}^{\infty} \mathbb{1}_{\tilde{T}_n \le t}$. Then $\{T_n, n \in \mathbb{N}_0\} \subseteq \{\tilde{T}_n, n \in \mathbb{N}_0\}$, and further $\{\tilde{T}_n, n \in \mathbb{N}_0\}$ contains all jump times of $(Y^{b,\eta}_t + Y^{(2)}_t)_{t\ge 0}$. Thus we can reformulate $E^{(2)}$ as a perpetuity (5.6), where $(A_n, B_n)_{n\in\mathbb{N}}$ is a sequence of random vectors modulated by a Markov chain $(\tilde{J}_n)_{n\in\mathbb{N}}$ with state space $S$. Here $\tilde{J}$ is a retarded discrete-time version of $J$ whose return times $\tilde{\tau}_0(j) := 0$ and $\tilde{\tau}_n(j) := \inf\{k > \tilde{\tau}_{n-1}(j) : \tilde{J}_k = j, \tilde{J}_{k-1} \ne j\}$, $j \in S$, fulfil $\tilde{T}_{\tilde{\tau}_n(j)} = \tau_n(j)$, $n \in \mathbb{N}$, $j \in S$. Further, $\tilde{J}$ inherits the positive recurrence from $J$ and is necessarily aperiodic whenever $Y^{b,\eta}_t \not\equiv 0$, which implies that $\tilde{J}$ has positive probability to stay in some state. If $Y^{b,\eta}_t \equiv 0$ and $\tilde{J}$ could be periodic, we artificially add a positive probability to stay in some state(s) and take the corresponding extra jump times into account. Thus w.l.o.g. $\tilde{J}$ is aperiodic and therefore ergodic, and its stationary law $\tilde{\pi}$ is equivalent to $\pi$. In our setting $P_{\tilde{\pi}}(A_n = 0) = 0$ and $P_{\tilde{\pi}}(B_n = 0) < 1$ are clearly fulfilled, and we can apply [2, Thm. 3.1] to prove almost sure convergence of $E^{(2)}$. Here (5.7) holds since $0 < \kappa_\xi < \infty$ implies $\xi_t \to \infty$ $P_\pi$-a.s., and due to the recurrence of $(J_t)_{t\ge 0}$ we have $\lim_{n\to\infty} \tau_n(j) = \infty$ $P_\pi$-a.s. It remains to show the equivalence of (5.4) and the second line of (2.7), which in our setting reads
\[ \int_{(1,\infty)} \frac{\log q}{\int_{(0,\log q)} P_j(\xi_{\tau_1(j)} > x)\,\mathrm{d}x}\; P_j(W_j \in \mathrm{d}q) < \infty \quad \text{for some } j \in S, \tag{5.8} \]
where $W_j$ is given by (2.8). As $0 < \kappa_\xi < \infty$, by dominated convergence and [3, Eq. (10)], with $N(j) \in \mathbb{N}$ such that $T_{N(j)} = \tau_1(j)$, applying Wald's equality twice yields that the denominator in the integral in (5.8) has a uniform upper bound and a uniform, strictly positive lower bound. Thus it can be ignored.
To prove convergence in $P_j$-probability of $E^{(2)}$, we apply [2, Thm. 3.4] to the Markov modulated perpetuity (5.6) and recall that the non-degeneracy condition (2.6) and (5.7) hold under the given conditions. It remains to show the equivalence of the second line of (2.9) to (5.5), which can be done by the same arguments as in the case of almost sure convergence. That this convergence in $P_j$-probability implies convergence in $P_{j'}$-probability for all $j'$ follows as in Proposition 4.12.
Remark 5.6. As an alternative to the given proof of Proposition 5.5, one could also apply Theorem 4.1 in the case $\eta_t = Y^{b,\eta}_t + Y^{(2)}_t$ to obtain similar conditions. We opted for a direct approach here, as the resulting proofs were slightly shorter. The same holds for Proposition 5.7 below.
In the case of a finite state space $S$ we can also give conditions for convergence of $E^{(2)}$ for infinite, well-defined $\kappa_\xi$. Observe that for finite state spaces, and assuming non-degeneracy, the two types of convergence are equivalent, as stated in part (iii) of the following proposition.
To formulate our conditions we introduce $\bar{A}_\xi$, which is in the spirit of $A_\xi$ and $A^j_\xi$ used in Sections 2.1 and 4, yet different.
(i) The exponential integral $E^{(2)}(t)$ converges $P_\pi$-a.s. to a finite random variable as $t \to \infty$ if and only if the condition (5.10), with weight $\log q / \bar{A}_\xi(\log q)$, holds for all $j \in S$. (ii) The exponential integral $E^{(2)}(t)$ converges in $P_j$-probability to some random variable $E^{(2),\infty}$ if and only if the analogous condition (5.11) holds. Moreover, in this case convergence in $P_j$-probability holds for all $j \in S$.
(iii) Given that
\[ P_\pi\big(E^{(2)}(t) = c_{J_0} - c_{J_t} e^{-\xi_t} \ \text{for all } t \ge 0\big) < 1 \tag{5.12} \]
for all sequences $\{c_i, i \in S\}$ in $\mathbb{R}$, the exponential integral $E^{(2)}(t)$ converges $P_\pi$-a.s. as $t \to \infty$ if and only if it converges in $P_j$-probability for some/all $j \in S$.
Proof. We use the same notation as in the proof of Proposition 5.5 and follow its lines up to proving that (5.8) is equivalent to (5.10).
The numerators agree, as shown in the proof of Proposition 5.5, and thus the two expressions only differ in the appearing denominator. If $\kappa_\xi < \infty$, then, as shown in the proof of Proposition 5.5, the denominator appearing in (5.8) can be ignored; the same holds under this assumption for the denominator in (5.10). Thus assume $\kappa_\xi = \infty$, such that $\xi_t$ tends to $\infty$. Let $\{\tilde{T}_n, n \in \mathbb{N}_0\}$ be the jump times of $(Y^{b,\xi}_t + J_t)_{t\ge 0}$, with $\tilde{T}_0 := 0$, such that $\{T_n, n \in \mathbb{N}_0\} \subseteq \{\tilde{T}_n, n \in \mathbb{N}_0\}$ and $\{\tilde{T}_n, n \in \mathbb{N}_0\}$ contains all jump times of $(Y^{b,\xi}_t + X^{(2)}_t)_{t\ge 0}$. Repeating the computation and argument leading to (5.6), we note that this generates another retarded discrete-time version $\tilde{J}$ of $J$, which is w.l.o.g. aperiodic and ergodic with stationary law $\tilde{\pi}$ equivalent to $\pi$, and such that its return times $\tilde{\tau}_n(j)$ satisfy $\tilde{T}_{\tilde{\tau}_n(j)} = \tau_n(j)$. Then
\[ \int_{(0,\log q)} P_j(\xi_{\tau_1(j)} > x)\,\mathrm{d}x = E_j\big[\xi^+_{\tau_1(j)} \wedge \log q\big] \asymp E_\pi\big[\xi^+_{T_1} \wedge \log q\big] \quad \text{for } q \to \infty, \]
by [3, Lemma 8.16], where we use the notation $f(x) \asymp g(x)$ whenever $\liminf_{x\to\infty} f(x)/g(x) > 0$ and $\limsup_{x\to\infty} f(x)/g(x) < \infty$. Since $(W^\xi_t + Y^{s,\xi}_t)_{t\ge 0}$ is a martingale and $\sup_{j\in S} |\gamma^{(j)}_\xi| < \infty$ for finite $S$, the corresponding terms can be compared, which implies the equivalence of (5.8) and (5.10). Again, the proof for convergence in $P_j$-probability can be carried out analogously, and that this implies convergence in $P_j$-probability for all $j$ follows as in Proposition 4.12. Finally, (iii) follows from [2, Rem. 3.8] and applying the results from Section 4.3 to $E^{(2)}$.
Remark 5.8. Note that (5.12) excludes degeneracy of $E^{(2)}$, but this does not necessarily imply non-degeneracy of $E_{(\xi,\eta)}$, or vice versa. This would only be the case if one additionally assumes that $E^{(1)}$ is degenerate, i.e. that there exists a sequence $\{\tilde{c}_j, j \in S\}$ such that
\[ E^{(1)}(t) = \tilde{c}_{J_0} - \tilde{c}_{J_t} e^{-\xi_t} \quad P_\pi\text{-a.s. for all } t \ge 0. \tag{5.13} \]
Indeed, given (5.13), (4.3) is equivalent to the existence of a (unique) sequence $\{\check{c}_j, j \in S\}$ such that
\[ E^{(2)}(t) = \check{c}_{J_0} - \check{c}_{J_t} e^{-\xi_t} \quad P_\pi\text{-a.s. for all } t \ge 0, \]
as can be seen by direct computations.
The following corollary summarizes, by way of example, the results of Propositions 5.2 and 5.5. Similar statements for other scenarios can easily be formulated using the above results.