On Time Reversal of Piecewise Deterministic Markov Processes

We study the time reversal of a general PDMP. The time-reversed process is defined as X*_t = X_(T−t)−, where T is some given time and X_t is a stationary PDMP. We obtain the parameters of the reversed process, such as its jump intensity and jump measure.


Introduction
The aim of this paper is to introduce the time reversal of a general piecewise deterministic Markov process (PDMP). Given a stationary version X_t of such a process we let X*_t = X_(T−t)− and study the characteristics of this reversed process.
The concept of reversing the direction of time for Markov processes can be found already in the works of Kolmogorov ([34,35] and even earlier [51]). The general idea is to study the process X*_t = X_(T−t)−, where T > 0 is either a fixed point in time or a suitably defined random time. However, it is not clear whether the process X*_t is a Markov process at all and, if it is, whether properties like time-homogeneity, the strong Markov property and others hold. Such questions had been answered by the 1970s in several publications on time reversal (see e.g. [43,44,17,54]). Other researchers applied time reversal to special classes of Markov processes, like Lévy processes ([31]), stochastic networks ([33]), birth and death processes ([52]) and Markov chains ([45]).
PDMPs evolve deterministically on an open subset of R^d, interrupted by random jumps that happen either inside the state space or at the boundary. Piecewise deterministic paths can be observed for Markov processes in a variety of applications. We mention risk processes ([1,48,19,20,25]), growth-collapse and stress release models ([9,53,58,12,41]), queueing models ([14,11]), earthquake models ([46]), repairable systems ([38]), storage models ([15,27,28]) and TCP data transmission ([39,40,24]). The mathematical framework that is used in this paper was introduced by Davis [22,21]. Other approaches to PDMPs and related processes can be found in [32,56,50,49], see also [8,18,23,47]. However, it seems that time reversal has not been a subject of detailed study for PDMPs, at least not in a general framework (see [26] for time-reversal arguments for a special subclass). To fill this gap we study the reversed process of a general PDMP, allowing also for what is called a non-empty active boundary, i.e. forced jumps that occur whenever the process hits the boundary of its state space.

This paper is divided into three sections. In the first section, after recalling the definition of PDMPs, we investigate the embedded Markov chains (W_k)_{k=1,2,...} and (Q_k)_{k=1,2,...} obtained by observing the process just before and right after the jumps. Among other things we derive a formula for the stationary distribution of Z_k = (W_k, Q_k). Also in Section 1 we derive some sufficient conditions to ensure that the stationary distribution ν of the original PDMP is absolutely continuous.
Date: October 30, 2018.

In Section 2 we define the reversed Markov process (X*_t)_{t≥0} (we use an asterisk to denote variables related to the reversed process) and show that it is a PDMP. Moreover, we prove that π(A, B) = π*(B, A), where π and π* denote the stationary distributions of Z_k and Z*_k. More specifically, we find the crucial relation
µ_x(dy)(λ(x)ν(dx) + σ(dx)) = µ*_y(dx)(λ*(y)ν(dy) + σ*(dy)),   (1)
where σ denotes the measure that is equal to the average number of visits to the boundary. This formula can be used to derive the jump intensity λ*(x) and the jump measure µ*_x(A) of the reversed process, one of the main aims of the paper. We also rediscover that in our setting the well-known property that the generator A* of X* is the adjoint of A holds. Section 3 is devoted to PDMPs on the real line, that is, PDMPs with state space E ⊆ R. We derive an integral equation for the stationary distribution and study some special cases.
We will impose certain conditions on the process. More specifically, we have conditions (A) to ensure that X_t is a proper PDMP, conditions (B) and (D) to be able to reverse the process, and condition (C) to obtain an absolutely continuous stationary distribution.

Preliminaries.
Throughout the article we use the following notations. We let ℓ^d denote the d-dimensional Lebesgue measure and write µ_1 ≪ µ_2 if a measure µ_1 is absolutely continuous with respect to another measure µ_2. In this case the symbol dµ_1/dµ_2 stands for the Radon-Nikodym derivative. If µ_2 is the Lebesgue measure we simply write µ_1′ instead of dµ_1/dℓ^d. The same notation f′ is used for the derivative of an absolutely continuous function f. Given some measure µ and a set A, we denote by µ|_A the restriction of µ to A. We use the abbreviations E_µ and P_µ for the expectation and probability, given that the process X_t starts with initial probability distribution µ (which will often be the stationary distribution ν). In particular, if µ({x}) = 1 then we write E_x and P_x. Given a set A ⊆ R^d, B(A) indicates the Borel σ-field of subsets of A.
The typical feature of a PDMP is of course its eponymous path that consist of random jumps and piecewise deterministic segments. The jumps are steered by a jump intensity function, allowing the jump times to depend on the current state of the process, and a jump measure, determining the distribution of the destination of the random jumps. Additionally the process is allowed to change its state continuously between the jumps. In general, if ϕ(x, t) denotes the position of the process at time t, given that there were no jumps and that the process started in x, the appropriate condition on ϕ in order to obtain a deterministic time homogeneous Markov process is to form a flow, that is to fulfil the relation ϕ(ϕ(x, t), s) = ϕ(x, s + t) for s, t ≥ 0 (see e.g. [30,49]). If one wants to keep it general, this would be the only restriction to the deterministic paths and no further regularity and continuity conditions have to be imposed.
In this paper we instead follow the more restrictive (and more popular) set-up demonstrated in the book of Davis [22] (see also [18,21,48]), where the deterministic paths of the process are governed by a differential equation as follows.
For differentiable functions f : E_i → R we can represent the integral curve using the Lie derivative X_i f(x) = r(x) · ∇f(x), so that (d/dt)f(ϕ(x, t)) = X_i f(ϕ(x, t)). More generally, we understand the symbol X_i f(x) as a solution of this last equation, allowing f to be only absolutely continuous. The process (X_t)_{t≥0} evolves on a subset of the disjoint union of the E°_i ∪ ∂E_i, and the elements of K represent the outer states of the process. In order to give a complete specification of the actual state space of the process we distinguish between the following subsets of the boundary ∂E_i: first, the so-called active boundary of points that can be reached from within, and secondly, what we call (in lack of a more appropriate term) the passive boundary, the set of points on the boundary from where points in E°_i can be reached. Of course, as the asterisk indicates, the latter set will serve as the active boundary of the reversed process. Before we proceed we agree on the following convention: throughout the paper we omit the outer states, e.g. we write x ∈ E instead of (x, i) ∈ E, or X f(x) instead of X_i f(x). Only if the notation is necessary to avoid ambiguity will we indicate the outer states.
As explained in the introduction, the process X_t jumps at certain random times (T_i)_{i=1,2,...}. This will either happen from within E (voluntary jumps) or when the process hits the active boundary Γ (forced jumps). The times at which forced jumps occur are denoted by (T^+_i)_{i=1,2,...}. Let τ(x) = inf{t ≥ 0 : ϕ(x, t) ∈ Γ} denote the first time the latter happens, given that the process starts in x ∈ E and no jumps occurred (with the usual convention that the infimum is equal to ∞ if there is no such t). Two random mechanisms determine the jumps. First, the jump intensity is a non-negative continuous function λ : E → [0, ∞), with the interpretation that the probability of a jump during [t, t + h], given that the process is in the state x, is λ(x)h + o(h), and the probability of more than one jump is o(h) as h → 0. Formally this follows from the continuity of λ and from the representation of the probability distribution of the first jump time T_1 given by
P_x(T_1 > t) = Λ_x(t),   t < τ(x),
where we use the abbreviation Λ_y(t) = exp(−∫_0^t λ(ϕ(y, u)) du). Secondly, the jump measure µ_x(A) determines the probability of a jump from x ∈ E into a measurable set A ∈ B(E). Let N_t = sup{n : T_n ≤ t} denote the number of jumps (forced and voluntary) and N^+_t = sup{n : T^+_n ≤ t} the number of forced jumps occurring during [0, t].
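The first jump time with survival function Λ_x(t) can be simulated by thinning: candidate events of a dominating Poisson process are accepted with probability λ(ϕ(x, t))/λ_max. The following sketch is our own illustration (not from the paper), assuming the intensity along the flow is bounded by λ_max on the horizon considered; the sanity check uses a constant intensity, for which T_1 is exponential.

```python
import math
import random

def first_jump_time(x, lam, phi, lam_max, t_max=50.0, rng=random):
    """Draw T1 with P(T1 > t) = exp(-int_0^t lam(phi(x, u)) du) by thinning,
    assuming lam(phi(x, t)) <= lam_max on [0, t_max]."""
    t = 0.0
    while t < t_max:
        t += rng.expovariate(lam_max)          # candidate point of a rate-lam_max Poisson process
        if rng.random() < lam(phi(x, t)) / lam_max:
            return t                           # accept with probability lam/lam_max
    return math.inf                            # no jump before t_max

# sanity check: a constant intensity 2 gives Exp(2) first jump times (mean 1/2)
rng = random.Random(42)
phi = lambda x, t: x                           # trivial flow for the check
lam = lambda x: 2.0
samples = [first_jump_time(0.0, lam, phi, lam_max=4.0, rng=rng) for _ in range(20000)]
mean = sum(samples) / len(samples)
```

For a state-dependent intensity one only has to supply the actual flow ϕ and a valid bound λ_max.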
Throughout we assume the following standard conditions (cf. [22]). Additionally we assume further conditions to ensure that we can define a proper reversed process and to be able to derive formulas for the parameters of the reversed process. In what follows we define, for any set B ∈ B(E ∪ Γ),
∂_h B = {x ∈ E : ϕ(x, t) ∈ B for some 0 ≤ t ≤ h}
to denote the set of locations from which the process can reach B within h time units.

Condition B.
In this paper we always assume the following conditions.
Remark 1. Condition (B_1) assumes the existence of a stationary distribution of the process. This will be crucial for the definition of the time reversal in Section 2. Giving appropriate conditions for a general PDMP to possess a stationary distribution, to be ergodic, positive recurrent or Harris recurrent is a difficult if not impossible task. Different efforts have been made to solve these problems for special cases ([36,58,18]). We take the liberty of staying away from these intricate questions and simply assume our process to possess a unique stationary distribution, being aware of the difficulties that we leave untouched.
Remark 2. Condition (B_5) prevents the process from cascading near the boundary, i.e. from jumping more and more often from Γ into smaller and smaller neighbourhoods of Γ.
Remark 3. With these conditions holding there are no hybrid jumps, that is, we do not allow the situation where there is an x ∈ E such that X t jumps with a certain probability p(x) once it reaches x and with probability 1 − p(x) it stays on the flow. This restriction becomes important for the reversed process.
We will also need the following substitution rules. Let τ(x, y) = t if y = ϕ(x, t) for some t ≥ 0 and τ(x, y) = ∞ otherwise. Then τ(x, y) represents the time the process needs to run from x to y if no jumps occur.
Let Φ^y_x = {ϕ(x, t) : 0 ≤ t ≤ τ(x, y)} and Φ^+_x = {ϕ(x, t) : 0 ≤ t < τ(x)} denote the curve segments starting from x and ending at y and at the active boundary, respectively. Then we can rewrite a time integral along the flow as a line integral over Φ^y_x:
∫_0^{τ(x,y)} f(ϕ(x, t)) dt = ∫_{Φ^y_x} f(u)/|r(u)| du.
It follows in particular that τ(x, y) = ∫_{Φ^y_x} 1/|r(u)| du. Let M denote the class of measurable functions f : E → R and M̄ the class of those members f ∈ M for which t → f(ϕ(x, t)) is absolutely continuous (so f is absolutely continuous along the flow). We define the linear operator
Qf(x) = ∫_E (f(y) − f(x)) µ_x(dy),
where A, given by Af(x) = X f(x) + λ(x)Qf(x) on its domain D(A), is the full generator of the Markov process X_t, so that for f ∈ D(A) the process f(X_t) − ∫_0^t Af(X_s) ds is a martingale.
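As a concrete illustration of the substitution rule, take the one-dimensional flow with r(u) = 4 − u that reappears in the example of Section 3; the passage time between two points on the flow is then an elementary integral:

```latex
\tau(x,y) \;=\; \int_{\Phi_x^y} \frac{du}{|r(u)|}
        \;=\; \int_x^y \frac{du}{4-u}
        \;=\; \log\frac{4-x}{4-y}, \qquad x \le y < 4,
```

and indeed ϕ(x, τ(x, y)) = 4 − (4 − x)e^{−τ(x,y)} = y.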

Stationarity.
The process X_{t−} will hit the active boundary Γ at certain times (T^+_n)_{n≥0} and is then forced to jump.
Many of the upcoming results of this paper are based on the following crucial relation. Taking expectations in (6) with respect to ν, and taking into account that the expectation of the martingale term vanishes, we obtain the identity (8) (see [22, Theorem 34.19]). Equation (8) is the usual starting point for finding expressions for ν in terms of the parameters λ and µ_x of the process. In general, however, there is not much hope of finding explicit formulas, and only for a few PDMP models is the stationary distribution ν explicitly known. Even in the one-dimensional case often all that can be done is to derive integro-differential equations for ν (see Section 3 for examples), and even these equations give rise to challenging problems themselves. Another interesting question is whether it is possible to further describe the measure σ and express it in terms of the stationary measure ν on E°. In fact, σ is determined by the values of ν on arbitrarily small neighbourhoods of Γ, and for B ∈ B(Γ) the measure σ(B) is given by the limit probability of finding X_t within a short distance of B (measured in the time needed to reach B).

Proposition 1. For B ∈ B(Γ) the measure σ(B) is given by the limit
σ(B) = lim_{h→0} ν(∂_h B)/h.
Proof. Application of (8) to a suitable sequence of test functions f_h yields the claim. As h → 0 the integrals on the right tend to zero by dominated convergence, where we use condition (B_2) and ν(Γ) = 0.

The embedded processes.
To gain insight into the behaviour of the process it is useful to study the continuous-time process (X_t)_{t≥0} sampled at the jump times (T_i)_{i=1,2,...}. To this end we define the two-dimensional process Z_k = (W_k, Q_k), where W_k = X_{T_k−} and Q_k = X_{T_k} denote the embedded discrete-time Markov processes, obtained by observing X_t just before and right after the jumps. Note that, due to the forced jumps, W_k takes values in E′ = E ∪ Γ. If X_t is stationary, then W_k and Q_k are usually still not stationary, and vice versa. However, if W_k is stationary then so is Q_k. If ν is the stationary distribution of X_t, then we will show how to find a stationary distribution for Z_k.
For A ∈ B(E′) and B ∈ B(E), let p(h; A, B) denote the probability that (under ν) X_0 ∈ A, X_h ∈ B and the process jumps during the time interval [0, h].

Proposition 2. For all A ∈ B(E′) and B ∈ B(E) the limit of p(h; A, B)/h exists as h → 0 and is given by (14).

Proof. Suppose first that B is open. The probability p(h; A, B) is a sum of the probabilities of the following four events.
1) X_0 ∈ A is not close to the boundary and a jump occurs within h time units, say at time s; moreover, X_s ∈ ∂_{h−s}B and no further jumps occur, so that the process ends up in B at time h, as desired.
2) X_0 ∈ ∂_h(A ∩ Γ), an unforced jump occurs at time s before the process reaches Γ, X_s ∈ ∂_{h−s}B and no further jumps occur. Since τ(x) → 0 and ν(∂_h(A ∩ Γ)) → 0 as h → 0, this contribution is negligible by similar arguments as before.
3) X_0 ∈ ∂_h(A ∩ Γ), the process reaches Γ at time τ(x), X_s ∈ ∂_{h−s}B and no further jumps occur.
Adding up the contributions of these events completes the proof for the case where B is open. The general case follows using classical arguments.
It follows immediately that a stationary distribution of W_k (the state of the process just before the jump) is given by
π_W(A) = ξ^{−1} ( ∫_A λ(x) ν(dx) + σ(A) ),   A ∈ B(E′),
where ξ = ∫_E λ(x) ν(dx) + σ(Γ) is the normalizing constant. The restricted measure π_W|_E is absolutely continuous with respect to ν with Radon-Nikodym derivative proportional to λ, whereas on the active boundary π_W|_Γ is a constant multiple of the measure σ. Note that for an empty active boundary and a constant jump rate the above relation shows that the PASTA property (Poisson Arrivals See Time Averages), π_W(A) = ν(A), holds. By Theorem 1 a stationary distribution for the observations Q_k of the process right after the jumps is given by
π_Q(B) = ξ^{−1} ∫_{E′} µ_x(B) (λ(x) ν(dx) + σ(dx)),   B ∈ B(E),
confirming the results in [22,18]. Note that the formula for π_Q follows from (14) after conditioning on the jump size.
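The PASTA property can be checked by simulation. The following sketch is our own illustration; the linear-decay flow r(x) = −x with unit-rate jumps of size Exp(1) is an assumed toy model, not from the paper. It compares the time average of the process with the average of the pre-jump observations W_k (both are 1 in the stationary regime):

```python
import math
import random

# Toy PDMP: flow r(x) = -x (exponential decay), constant jump rate 1,
# upward jumps of size Exp(1); the stationary mean equals 1.
rng = random.Random(3)
n_burn, n = 5000, 200000
w = 1.0            # pre-jump observation W_k
sum_w = 0.0        # sum of pre-jump observations
area = 0.0         # integral of X_t over time
total_t = 0.0
for k in range(n_burn + n):
    q = w + rng.expovariate(1.0)             # post-jump value Q_k = W_k + Exp(1)
    g = rng.expovariate(1.0)                 # inter-jump time (constant rate 1)
    w = q * math.exp(-g)                     # decay along the flow until the next jump
    if k >= n_burn:
        sum_w += w
        area += q * (1.0 - math.exp(-g))     # integral of q e^{-s} over the interval
        total_t += g
mean_jump = sum_w / n                        # average of the pre-jump observations
mean_time = area / total_t                   # time average of the process
```

With a state-dependent jump rate the two averages would in general differ, which is exactly what the λ-weighting in π_W expresses.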

Absolute continuity of the stationary measure.
It is interesting to ask under which conditions ν is an absolutely continuous measure (w.r.t. Lebesgue measure). This property, jointly with absolute continuity of the jump measure or of the stationary measure π_Q, simplifies the identification of the parameters of the reversed process. For one-dimensional models on the real line ν was found to be absolutely continuous under very mild conditions, see [58,9,10,36,37,18]. This is in accordance with countless observations of absolutely continuous stationary distributions for PDMP-type stochastic models in the literature (e.g. [3,12,13,27,6,5,40]). However, in general, i.e. in higher dimensions, it seems very hard to give necessary and sufficient criteria. We introduce one more condition on the process X_t.
Remark 5. The condition is satisfied in the vast majority of cases. The reason to introduce it is the following useful converse to (15). If E_{π_Q}(T_1) < ∞ then the stationary measure ν can be reconstructed from π_Q by an argument from regeneration theory. Note that under π_Q the time until the first jump occurs forms a cycle; hence, by [3, Corollary VII 1.4] (cf. [18, equation (12)]),
ν(f) = E_{π_Q}( ∫_0^{T_1} f(X_s) ds ) / E_{π_Q}(T_1)
for bounded continuous functions f.
Recall that the state space E consists of a disjoint union of components E_i, i ∈ K. The following proposition gives a criterion for absolute continuity of ν on one of these E_i, i ∈ K.

Proposition 3. Suppose that, in addition to Condition (C), r(x) ≠ 0 for all x ∈ E_i. If d(i) = 1 then ν is absolutely continuous on E_i. In general, ν is absolutely continuous on E_i if π_Q is absolutely continuous there and r is continuously differentiable.
Proof. It follows from (16) that
ν(A) ≤ c E_{π_Q}( ∫_0^{T_1} 1_{X_s ∈ A} ds )   (17)
with some constant c > 0. If d(i) = 1 and r(x) ≠ 0, then ℓ^1(A ∩ Φ^+_x) = 0 for all Lebesgue null sets A ∈ B(R) and in particular E_x(∫_0^{T_1} 1_{X_s ∈ A} ds) = 0. It follows from (17) that ν(A) = 0, showing that ν is absolutely continuous. In the general case, i.e. if d(i) ≥ 1, suppose that π_Q is absolutely continuous. We have the bound (18), where C_A = {x ∈ E_i : ∫_0^{τ(x)} 1_{ϕ(x,s) ∈ A} ds > 0}. If we can show that C_A is a Lebesgue null set, then it follows that π_Q(C_A) = 0, since π_Q is absolutely continuous. But then the right-hand side of (18) is zero and absolute continuity of ν follows from (16). To show that ℓ^{d(i)}(C_A) = 0, suppose the converse; then (19) holds. If r is continuously differentiable then ζ_t : x → ϕ(x, −t) is also continuously differentiable (e.g. [29, p. 95]). It follows that ζ_t maps Lebesgue null sets to Lebesgue null sets (see [57, Proposition 26.3]) and that the integral ∫_{E_i} 1_{s < τ(x), ϕ(x,s) ∈ A} dx, being an integral over the indicator of the image ζ_s(A) of A, must be zero, contradicting (19). Hence ℓ^{d(i)}(C_A) = 0 and ν is absolutely continuous.
Remark 6. The condition on r can be relaxed. In fact it is enough to require that ζ_t fulfils a weaker condition guaranteeing what is called Lusin's condition (N), that is, that ζ_t maps null sets to null sets ([57, Proposition 26.2]).

Definition.
We now assume X_t to be stationary, that is, the process starts with initial distribution ν and then has the same distribution for all t ≥ 0. We pick a fixed time T > 0 and define the reversed process for t ∈ [0, T] by X*_t = X_{(T−t)−} (we indicate variables belonging to the reversed process with an asterisk). Then X*_t is a right-continuous stationary stochastic process with state space E and initial distribution ν. Obviously the active boundary of the reversed process is given by Γ* and, conversely, the passive boundary is now Γ. It is known (see [43] and [44, Theorem 2.1.1]) that X*_t, constructed in this way, is again a time-homogeneous Markov process. Obviously X*_t inherits from X_t the property of having piecewise deterministic paths with random jumps and no explosions. So X*_t would in fact be a PDMP if we could show that X*_t has a regular jump intensity λ*, in the sense that the conditions (A_1) and (A_2) for λ are fulfilled also for λ* (this is not self-evident, see the example below). Therefore we have to impose two more conditions on the process X_t: namely, π_Q has to be absolutely continuous on E° (allowing a mass on Γ* = E \ E°), with a locally integrable Radon-Nikodym derivative, which in Proposition 4 below will be seen to yield the jump intensity function of the reversed process.

Condition D.
From now on we assume:

Proposition 4. Under Condition (D) the reversed process is a PDMP, fulfilling the conditions (A_1)-(A_4), with intensity function given by λ*(x) = ξ β(x).
Proof. According to (11), the probability p*(h; B, E′) that X*_t ∈ B ⊆ E° and that the process has a voluntary jump during [t, t + h] can be computed explicitly. On the other hand, by construction of X*_t, p*(h; B, E′) is equal to the probability p(h; E′, B), yielding λ*(x) = ξ β(x). Condition (D) ensures that λ* fulfils (A_1). Moreover the condition (A_2), i.e. P(T*_1 < ∞) = 1, certainly holds. Hence λ*(x) is a proper intensity function for the PDMP X*_t.
It follows that (B_2) always holds for the reversed process.
Suppose that jumps can go from Γ to 0 and 1/2 only, each with probability 1/2. It is not difficult to show that the stationary measure is absolutely continuous, but π_Q is certainly not. What happens is that the reversed process X* is not a proper PDMP, since with positive probability the process is forced to jump when reaching 1/2.

The parameters of the reversed process.
The following theorem shows that the stationary distribution of the reversed Markov chain Z*_k = (W*_k, Q*_k) is given by the measure π, but with reversed arguments.

Theorem 2. For all A ∈ B(E′) and B ∈ B(E) we have π(A, B) = π*(B, A).
The terms in the parentheses in (22) coincide with the stationary distributions of W_k and W*_k respectively. Hence we obtain the shorter representation
µ_x(dy) π_W(dx) = µ*_y(dx) π_{W*}(dy).
In particular the stationary distributions of the embedded Markov chains correspond to each other: π_{Q*} = π_W and π_{W*} = π_Q.
Frequently the jump measure µ_x is absolutely continuous with respect to some other measure µ for every x ∈ E′. If this is the case, the same is true for the jump measure of the reversed process, as the following corollary of Theorem 2 shows.
The jump intensity of the reversed process fulfils (25). For the boundary measure we have σ* ≪ µ|_{Γ*}, and (26) holds.

Remark 9. If µ_x is absolutely continuous for every x ∈ E′ and r is continuously differentiable, then by Proposition 3 the stationary measure ν is absolutely continuous, which simplifies most calculations.
Proof. It follows from (15) that the stationary distribution of Q_k is absolutely continuous w.r.t. µ, with derivative given by (23). By Theorem 2, equation (23) and the fact that π_{W*} = π_Q, relation (24) follows. From (22) and (24), finally, (25) and (26) can be deduced.
Then the Radon-Nikodym derivative is given by the corresponding density. For example, if µ is the Lebesgue measure, then µ_x is the uniform distribution on B_x.
Suppose that the jumps are of the form Q_i = g_{W_i}(B_i), where the mappings g_x are differentiable with det Dg_x ≠ 0 (here D denotes the Jacobian), and B_1, B_2, ... are i.i.d. random variables with values in C = {z ∈ R^d : g_x(z) ∈ E for all x ∈ E} (we assume that E is such that C is non-empty) and with an absolutely continuous distribution with density f. Then µ_x is absolutely continuous. Consider, for example, the frequent situation where Q_i = W_i + B_i, so that the jumps are random translations. Then g_x(y) = x + y, C = {z : z + x ∈ E for all x ∈ E} and µ_x is absolutely continuous with density f(y − x).

According to the classic literature on reversed Markov processes (see [43,44]) one would expect the generator A* of X* to be the adjoint operator of A, that is,
∫ g Af dν = ∫ f A*g dν
for bounded functions f, g in D(A) ∩ D(A*). This is indeed true, and in the case of PDMPs this result takes the following form.

Proof. Using the representation Af(x) = X f(x) + λ(x)Qf(x), the claim follows.

Introduction.
In this section we assume that X_t has values in R and that t → ϕ(x, t) is strictly increasing, that is, X f(x) = r(x)f′(x) with some locally Lipschitz-continuous function r : E → (0, ∞). The state space is an interval E = [w, γ), where it is allowed to have either w = −∞ (then E = (−∞, γ)) or γ = ∞ (then E = [w, ∞)) or both (in which case E = R). We assume that τ(x) < ∞ for some (and then all) x ∈ E whenever γ < ∞, so that {γ} can be reached in finite time (otherwise one could transform the state space to obtain the γ = ∞ case). Hence the active boundary is empty if γ = ∞ and is equal to {γ} otherwise. The following figure shows typical sample paths for the case where λ(x) = 1 (N_t is a Poisson process), r(x) = 4 − x, i.e. ϕ(x, t) = 4 − (4 − x)e^{−t}, γ = 2 (so Γ = {2} and E = (−∞, 2)) and µ_x(dy) = e^{y−x} dy for y < x, i.e. the process jumps from x to x − Z, where Z is exponential with mean one.
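Sample paths of this example can be generated directly; the following sketch simulates the process with λ(x) = 1, ϕ(x, t) = 4 − (4 − x)e^{−t}, forced jumps at γ = 2 and downward jump sizes Exp(1). The time to reach the boundary from x is τ(x) = log((4 − x)/2).

```python
import math
import random

def simulate_path(x0=0.0, horizon=20.0, seed=1):
    """Sample path of the example PDMP: lambda(x) = 1, phi(x, t) = 4 - (4 - x) e^{-t},
    forced jumps at the boundary gamma = 2, jump sizes Exp(1) downwards."""
    rng = random.Random(seed)
    t, x = 0.0, x0
    states = [x]
    while t < horizon:
        tau = math.log((4.0 - x) / 2.0)        # time to reach the boundary 2 from x
        s = min(rng.expovariate(1.0), tau)     # voluntary or forced jump, whichever first
        t += s
        x = 4.0 - (4.0 - x) * math.exp(-s)     # follow the flow up to the jump time
        x -= rng.expovariate(1.0)              # jump down by an Exp(1) amount
        states.append(x)
    return states

path = simulate_path()
```

By construction every post-jump state lies strictly below the boundary γ = 2.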
Conditions for the existence of a stationary distribution ν can be found in [58,36]. The boundary measure σ is determined by the single value σ({γ}), which can be calculated according to (9). The following theorem generalizes known formulas for the stationary density.

Theorem 4. The stationary distribution ν is absolutely continuous on [w, γ). The density ν′ of ν fulfils the integro-differential equation
r(x)ν′(x) = ∫_{(x,γ]} θ(x, z) (λ(z)ν′(z) dz + σ(dz)),   (31)
where θ(x, z) = µ_z([w, x]).

Proof. Suppose that f ∈ M̄_b. Then f is absolutely continuous and f(y) − f(x) = ∫_x^y f′(u) du, with a measurable f′ : E° → R. Hence, using Fubini's theorem, the terms in (8) can be rewritten as integrals over f′. It follows, applying Fubini once again, that ν is absolutely continuous with density ν′ fulfilling (31).

The reversed process.
Given a PDMP on the real line and a fixed time T > 0, we define X*_t = X_{(T−t)−} as before. As in the general case, under Condition (D) this reversed process is a PDMP. Its deterministic paths are decreasing, with r*(x) = −r(x), and the active boundary is {w} if w > −∞. In contrast to the higher-dimensional setting, in the present situation the integro-differential equation (31) for ν enables us to express λ* more explicitly.
Remark 12. In practical applications one may use the following sufficient conditions to ensure that the conditions (A 1 )-(D 1 ) hold.
Condition (D_2), local integrability of λ*, has to be checked in each particular case. If, for example, µ′_x is bounded and 1/ν′(z) is locally integrable, then (D_2) is fulfilled (see equation (33)).

Theorem 5.
If µ_x is absolutely continuous for all x ∈ E with density µ′_x and r is absolutely continuous, then so is ν′ (with derivative ν″), and if ν′(x) ≠ 0, then (34)-(36) hold.

Proof. From Theorem 4 it follows that ν is absolutely continuous, and by (25) and (15) we obtain an expression for λ*. Hence from (31) it follows that (34) holds, which proves (34). The relation (35) follows from Proposition 5, and the proof of (36) is similar.
Remark 13. It follows from (34) that λ*(x) − λ(x) has the same sign as (log(r(x)ν′(x)))′. Suppose that we observe the stationary process in state x. If (log(r(x)ν′(x)))′ is positive (negative), then it is more likely (less likely) to find a jump in the near past than to find a jump in the near future.
Let us assume that the processes X_t, W_k, Q_k have been continued to the left. This can easily be achieved by setting X_t equal to Y*_{−t} for t < 0, where Y* is the time reversal of an independent version (Y_t)_{t≥0} of the PDMP. Then Q_{−1} denotes the state of X_t just after the last jump before time 0.

Corollary 1.
If µ_x is absolutely continuous for all x ∈ E, then (38) holds.

Proof. Dividing (34) by ν′(u) and integrating, and using r*(x) = −r(x), we obtain an identity which by (4) is equivalent to (38).
We assume in the following examples that the conditions (A_1)-(B_4) and Condition (D) are satisfied. See Remark 12 for sufficient conditions.

Renewal age process.
Consider the classic renewal process N_t with renewal times T_1, T_2, .... If the distribution of T_1 is absolutely continuous with distribution function F(x) and density f(x), then the backward recurrence time X_t = t − T_{N_t} is a PDMP with r(x) = 1 and λ(x) equal to the hazard rate f(x)/(1 − F(x)). The associated jump measure is, independently of x, concentrated in zero, and the active boundary is empty (hence w = 0 and γ = ∞). It follows from (31) that, provided that E(T_1) is finite, the density of the stationary distribution fulfils
ν′(x) = ∫_x^∞ (f(z)/(1 − F(z))) ν′(z) dz.
This leads to the well-known fact that ν is equal to the equilibrium or stationary excess distribution of F:
ν′(x) = (1 − F(x))/E(T_1).
The reversed process X*_t is the forward recurrence time process of the same renewal process. Applying Theorem 2 we see that λ*(y) = 0 for y ∈ E and that, upon integration, σ*({0}) = 1/E(T_1) follows (the average number of visits to 0), and hence µ*_0(dx) = f(x) dx, which is not surprising: the upward jumps of the reversed process are just i.i.d. jumps with distribution F. Hence, for the trivial case of the renewal process our formulas yield the correct results.

Generalized TCP window size process.
In [40] a generalized model for the TCP window size has been studied (see also [2,24,16,39,4,42]). We assume that X_t is a process with λ(x) = λx^β, r(x) = rx^α, β > α − 1 and µ_x((0, y]) = (y/x)^γ for 0 < y ≤ x, which means that Q_k = U_k^{1/γ} · W_k, where the U_k are independent random variables with uniform distribution on (0, 1). In other words, at jump times the process is multiplied by the random number U_k^{1/γ}. Using Theorem 4 it can be shown that under the condition β > α − 1 a unique stationary distribution ν exists, with density
ν′(x) = C x^{γ−α} exp(−λx^{β−α+1}/(r(β − α + 1)))
(see (27) in [40] for the normalizing constant C). Surprisingly, the jump intensity of the reversed process turns out to be of the same form as λ. As long as β > 0 the reversed process X* cannot reach 0 in finite time. If β ≤ 0 then the process could reach 0, but since α − 1 < β ≤ −1, it follows that λ* is not integrable on [0, ε) for any ε: the process will almost surely (and by construction even surely) jump before it reaches 0. The jump measure of the reversed process is of a different form than the one of the original process: it is supported on y ≥ x, so that we obtain a Weibull-type distribution for the jumps. If α = β then the upward jumps of X* follow an exponential distribution with mean r/λ, and the reversed process has additive jumps whereas the original process has multiplicative jumps (this has been observed in [55]).

Jumps independent of the current state.
Suppose that µ_z(A) = µ(A) for all z ∈ E and some absolutely continuous measure µ on [w, γ). Then it follows from (31) that
r(x)ν′(x) = µ([w, x]) ( ∫_x^γ λ(z)ν′(z) dz + σ({γ}) ),
and hence, using ν′(w) = 0, the density ν′ can be computed explicitly. It then follows from (34) and (41) that λ* is obtained directly, which is nothing else than (25), since π_Q = µ here. Note that ξ can be found from the normalizing condition ν(E) = 1.
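The renewal age example above admits a simple numerical sanity check (our own illustration; the uniform renewal distribution is an assumed choice, not from the paper). For F uniform on (0, 1) the equilibrium density is ν′(x) = 2(1 − x), so P(age ≤ 1/2) = 3/4; simulating the backward recurrence time at a large time horizon should reproduce this:

```python
import random

def backward_recurrence_time(t, rng):
    """Age t - T_{N_t} of a renewal process with U(0,1) renewal intervals, at time t."""
    s = 0.0
    while True:
        nxt = s + rng.random()        # next renewal epoch, gaps ~ U(0, 1)
        if nxt > t:
            return t - s              # time elapsed since the last renewal before t
        s = nxt

rng = random.Random(7)
n, t = 20000, 100.0
hits = sum(backward_recurrence_time(t, rng) <= 0.5 for _ in range(n))
frac = hits / n                       # Monte Carlo estimate of P(age <= 1/2)
```

The empirical fraction should be close to 0.75 rather than to the naive value F(1/2) = 1/2, illustrating the size-biasing built into the equilibrium distribution.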

Reflection.
Suppose that X_t is a real-valued PDMP with decreasing paths, with the additional feature that once the process reaches 0 it stays in 0 for an exponentially distributed time with mean 1/λ(0). We assume that r(x) < 0 for all x > 0, so that 0 can actually be reached, and moreover that the time to reach zero has finite mean. Technically this process is not a PDMP on the real line. Instead the process can be modelled as a PDMP with two outer states. We let E°_1 = (0, ς] with active boundary Γ_1 = {0} and introduce a second component E°_2 which includes the point {0}. Also let r(x) = 0 for all x ∈ E°_2. Then the state space of X_t is {(x, i) | x ∈ E°_i, i ∈ {1, 2}}. Once X_t hits (0, 1) it jumps to (0, 2) and stays there until it jumps back to E°_1. However, we avoid this cumbersome notation and just write E = [0, ς] and Γ = {0}, thereby allowing E ∩ Γ ≠ ∅. Note that the reversed process X*_t, after jumping to 0, stays there for an exponentially distributed sojourn time until it starts increasing again.

Theorem 6. The stationary distribution ν is absolutely continuous on the interval (0, ς). The density ν′ of ν fulfils the integro-differential equation (42). The parameters of the reversed process are given in Theorem 5 applied to −X_t, with γ = 0 and w = −ς.

Fig.: Risk process
The workload (also virtual waiting time) process of a Markovian single-server queue is defined as the amount of work left at the server at time t. This is a special case of the reflected process with ς = ∞, where r(x) = −1, λ(x) = λ is constant and µ_z((x, ∞]) = 1 − F(x − z) for z ≤ x, where F is a probability distribution on (0, ∞). We assume that ρ = λ∫_0^∞ (1 − F(u)) du < 1, so that it can be shown that a unique stationary distribution ν exists. It then follows from (42), after integrating, that ν′ satisfies the renewal-type equation (43). Equation (43) can be solved, at least for numerical purposes, in the form of an infinite series of convolutions (the Pollaczek-Khintchine formula in the queueing context). Alternatively one can take Laplace transforms and find an equivalent relation for the transform.
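The series-of-convolutions solution can be made concrete in the M/M/1 special case, where each convolution term is an Erlang distribution and a closed form is available for comparison. The exponential service distribution (rate µ) is our simplifying assumption for illustration; the paper treats a general F.

```python
import math

def erlang_cdf(n, rate, x):
    # P(sum of n Exp(rate) variables <= x); the empty sum (n = 0) is 0, so the cdf is 1
    if n == 0:
        return 1.0
    term, s = 1.0, 1.0                       # k = 0 term of sum_{k<n} (rate x)^k / k!
    for k in range(1, n):
        term *= rate * x / k
        s += term
    return 1.0 - math.exp(-rate * x) * s

def pk_workload_cdf(lam, mu, x, terms=200):
    """Pollaczek-Khintchine series for the M/M/1 stationary workload:
    P(V <= x) = (1 - rho) sum_n rho^n F_e^{*n}(x), with F_e = Exp(mu)."""
    rho = lam / mu
    return (1.0 - rho) * sum(rho ** n * erlang_cdf(n, mu, x) for n in range(terms))

# compare the truncated series with the known closed form 1 - rho e^{-mu(1-rho)x}
lam, mu = 0.7, 1.0
errors = [abs(pk_workload_cdf(lam, mu, x) - (1.0 - (lam / mu) * math.exp(-(mu - lam) * x)))
          for x in (0.5, 1.0, 3.0)]
```

The truncation error of the series is of order ρ^200, far below floating-point resolution here.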
Provided that F is absolutely continuous with density function f, Theorem 5 implies (44).

Remark 14. It is interesting to look for conditions under which the reversed process has a constant jump rate. We have λ*(x) = λ + C for some constant C if and only if ν″(x)/ν′(x) = C, that is, if ν′(x) = De^{Cx} is an exponential density with an additional mass at zero. In that case the jump density of the reversed process depends only on y − x. If we interpret g(y) = (λ/(λ + C)) e^{Cy} f(y) as the density of the workload of an M/G/1 queue, then it follows from the fact that ν is a mixture of an exponential distribution with a mass at zero (see e.g. [3, Theorem 9.1]) that g is also exponential, with mean 1/β and β = λ − C.
Remark 15. Specializing Remark 13: if we ask ourselves whether it is more probable, when we observe X_t in steady state, to have a claim in the near future than to have a claim in the near past, (44) shows that λ*(x) > λ if log ν′ is increasing at x and λ*(x) < λ if log ν′ is decreasing at x (no matter which claim size distribution is present).