The Sticky Lévy Process as a solution to a Time Change Equation

ABSTRACT. Stochastic Differential Equations (SDEs) were originally devised by Itô to provide a path-wise construction of diffusion processes. A less explored approach to represent them is through Time Change Equations (TCEs) as put forth by Doeblin. TCEs are a generalization of Ordinary Differential Equations driven by random functions. We present a simple example where TCEs have some advantage over SDEs. We represent sticky Lévy processes as the unique solution to a TCE driven by a Lévy process with no negative jumps. The solution is adapted to the time-changed filtration of the Lévy process driving the equation. This is in contrast to the SDE describing sticky Brownian motion, which is known to have no adapted solutions, as first proved by Chitashvili. A known consequence of such non-adaptedness for SDEs is that certain natural approximations to the solution of the corresponding SDE do not converge in probability, even though they do converge weakly. Instead, we provide strong approximation schemes for the solution of our TCE (by adapting Euler's method for ODEs), whenever the driving Lévy process is strongly approximated.


INTRODUCTION AND STATEMENT OF THE RESULTS
Feller's discovery of sticky boundary behavior for Brownian motion on [0, ∞) (in [Fel52, Fel54]) is, undoubtedly, a remarkable achievement. The discovery is inscribed in the problem of describing every diffusion process on [0, ∞) that behaves as a Brownian motion up to the time it first hits 0. See [EP14] for a historical account and [IM63] for probabilistic intuitions and constructions. We now consider a definition of sticky Lévy processes associated to Lévy processes which only jump upwards (also known as Spectrally Positive Lévy Processes, abbreviated SPLPs). General information on SPLPs can be consulted in [Ber96, Ch. VII].
Definition 1. Let X be a SPLP and let X^0 stand for X killed upon reaching zero. An extension of X^0 is a càdlàg strong Markov process Z with values in [0, ∞) such that X and Z have the same law if killed upon reaching 0. We say that Z is a Lévy process with sticky boundary at 0 based on X (or a sticky Lévy process for short) if Z is an extension of X^0 for which 0 is regular and instantaneous and which spends positive time at zero; in other words, if Z_0 = 0 then ∫_0^t I(Z_s = 0) ds > 0 for every t > 0. Consider now the SDE (1) describing sticky Brownian motion, where B is a standard Brownian motion, the stickiness parameter γ is strictly positive and I denotes the indicator function. This equation has no strong solutions, which means that any process satisfying (1) involves some randomness additional to that of the Brownian motion B. This result was conjectured by Skorohod and initially proved by R. Chitashvili in [Chi89] (later published as [Chi97]) and in [War97].
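For concreteness, the SDE (1) for sticky Brownian motion reads, in the form standard in the literature cited above (our reconstruction, with the γ-convention matching the surrounding text):

```latex
Z_t = z + \int_0^t I(Z_s > 0)\, dB_s + \gamma \int_0^t I(Z_s = 0)\, ds, \qquad t \ge 0. \tag{1}
```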
More recent proofs can be found in [EP14, Bas14] and [HCA17]. In contrast to the representation of sticky Brownian motion as a solution to an SDE, we propose a representation of any SPLP with a sticky boundary as a solution to a TCE. The particularity of our representation is that it does not require any randomness additional to that generated by the Lévy process driving the equation. In the Lévy process case, a fundamental hypothesis to construct sticky Lévy processes will be that the sample paths have unbounded variation on any interval. Equivalently, we can assume that either there is a Gaussian component or the sum of jumps is absolutely divergent (i.e. ∑_{s≤t} |X_s − X_{s−}| = ∞ almost surely for some t > 0).
Theorem 1. Let X be a SPLP adapted to a right-continuous and complete filtration (F_t, t ≥ 0). Assume that the sample paths of X have unbounded variation. Given a parameter γ > 0 and a point z ≥ 0, there exists a unique pair of stochastic processes Z = (Z_t, t ≥ 0) and C = (C_t, t ≥ 0) satisfying

(2) Z_t = z + X_{C_t} + γ ∫_0^t I(Z_s = 0) ds, where C_t = ∫_0^t I(Z_s > 0) ds,

for every t ≥ 0. For the unique pair (Z, C) verifying Equation (2), it holds that C is an (F_t)-time change and that Z is adapted to the time-changed filtration (F̃_t, t ≥ 0) given by F̃_t = F_{C_t}. Furthermore, Z is a sticky Lévy process based on X.
This result attempts to honor the memory of Wolfgang Doeblin, the pioneer of TCEs: for historical reasons that can be consulted in [BY02], the representation of diffusion processes suggested by Doeblin using TCEs is less known than the one given by Kiyosi Itô via SDEs. In particular, the region of applicability of TCEs has not been as carefully delineated as the one for SDEs. Note, however, that TCEs a priori do not even need the notion of a stochastic integral to be stated and, as shown in [CPGUB17, CPGUB13], TCEs have much better stability properties than SDEs.
To explain the unbounded variation assumption, note that it implies that the Dini derivatives of X are infinite (as proved originally in [Rog68]; see [AHUB20] for an extension and further applications). In other words, at any given stopping time T (such as the hitting time of zero), we have

limsup_{h→0+} (X_{T+h} − X_T)/h = ∞ and liminf_{h→0+} (X_{T+h} − X_T)/h = −∞.

This will aid in proving that 0 is regular and instantaneous for Z. The following (counter)example also indirectly shows its relevance: the equation (2) driven by the bounded variation process X = −Id admits no solutions, as discussed after Theorem 2 below.

The difficulty with a time-change equation such as (2) is the discontinuity of the indicator functions of (0, ∞) and of {0}. The success in its analysis follows from an explicit description of a solution in terms of reflection in the sense of Skorohod. This is done for a deterministic version of (2) in Proposition 3 of Section 2.2.

Sticky Lévy processes are a one-parameter family of processes built from the trajectories of X and are part of the notion of recurrent extensions of X^0 analyzed in [RUB22] in terms of three non-negative constants and a measure on (0, ∞). Such processes are called SPLPs (with values) in [0, ∞). As in Feller's result, these parameters describe the domain of the infinitesimal generator L of the corresponding recurrent extension. A possible boundary condition describing such a domain is given by

γ f′(0+) = L f(0+)

for some constant γ > 0. In the Brownian case, this condition corresponds to the so-called sticky Brownian motion with stickiness parameter γ. Generalizing the Brownian case, we will compute the boundary condition for the generator of the sticky Lévy process of Theorem 1 in Section 3.3. Generator considerations are also relevant to explain the assumption that X has no negative jumps: the generator L of such a Lévy process acts on functions defined on R, but immediately makes sense on functions only defined on [0, ∞). This last assertion is not true for the generator of a Lévy process with jumps of both signs.
Our second main result exposes a positive consequence of the adaptedness of the solution to the TCE (2). In [Bas14], a system equivalent to the SDE (1) is studied. In particular, it is shown that the nonexistence of strong solutions prevents the convergence in probability of certain natural approximations to the solutions of the corresponding SDE, even though they converge weakly. In contrast, we present a simple (albeit strong!) approximation scheme for the solution to the TCE (2). To establish such a convergence result, we start from an approximation to the Lévy process X which drives the TCE (2).
Theorem 2. Let X be a SPLP with unbounded variation. Let (Z, C) denote the unique solution to the TCE (2). Consider a sequence (X^n, n ≥ 1) of processes with càdlàg paths, such that each X^n is the piecewise constant extension of some discrete-time process defined on N/n and starts at 0. Suppose that X^n → X in the Skorohod topology, either weakly or almost surely. Let (z_n, n ≥ 1) be a sequence of non-negative real numbers converging to a point z. Consider the processes C^n and Z^n defined by (3) and (4). Then C^n → C uniformly on compact sets and Z^n → Z in the Skorohod topology. The type of convergence will be weak or almost sure, depending on the type of convergence of (X^n, n ≥ 1).
Observe that the above procedure corresponds to an Euler-type approximation for the solution to the TCE (2). If we consider the same equation but now driven by a process for which we could not guarantee the existence of a solution, our approximation scheme might converge but the limit might not be a solution, as shown in the following simple but illustrative example. Let X = −Id, z = 0 and γ = 1. Then the approximations proposed in (3) and (4) reduce to

C^n((k+1)/n) = C^n(k/n) + (1/n) I(Z^n(k/n) > 0) and Z^n(k/n) = k/n − 2C^n(k/n)

for each k ∈ N. These sequences converge to C*(t) = t/2 and Z* = 0, but clearly such processes do not satisfy the TCE (2). In general, TCEs are very robust under approximations; the failure to converge to a solution here is related to the fact that the equation just considered actually admits no solutions, as commented in a previous paragraph.
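This computation is easy to check numerically. The following sketch (our own illustration; it implements the Euler-type recursion just described in exact integer arithmetic, in units of 1/n) confirms that the iterates converge to C*(t) = t/2 and Z* = 0:

```python
def euler_counterexample(n, t_max=1.0):
    """Euler-type iterates for the TCE driven by X = -Id with z = 0 and
    gamma = 1, so that Z^n(k/n) = k/n - 2 C^n(k/n).  We work in units of
    1/n so that the indicator I(Z > 0) is evaluated exactly."""
    k_max = int(n * t_max)
    c = 0  # n * C^n(k/n), an integer
    z = 0  # n * Z^n(k/n), an integer
    for k in range(k_max):
        if z > 0:            # C^n grows by 1/n exactly when Z^n(k/n) > 0
            c += 1
        z = (k + 1) - 2 * c  # Z^n((k+1)/n) = (k+1)/n - 2 C^n((k+1)/n)
    return c / n, z / n

C, Z = euler_counterexample(10_000)
print(C, Z)  # -> 0.5 0.0, i.e. C*(1) = 1/2 and Z*(1) = 0
```

The limit pair fails to satisfy the TCE: plugging C*(t) = t/2 and Z* = 0 into (2) gives −t/2 + t = t/2 ≠ 0.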
Weak approximation results for sticky Brownian motion or for Lévy processes of the sticky type have been given in [Yam94] and [HL81]. In the latter reference, reflecting Brownian motion is used, while in the former, an SDE representation is used. In [BRHC20], the reader will find an approximation of sticky Brownian motions by discrete space Markov chains and by diffusions in deep-well potentials, as well as a numerical study and many references regarding applications. In particular, we find there the following phrase, which highlights why Theorem 2 is surprising: "... there are currently no methods to simulate a sticky diffusion directly: there is no practical way to extend existing methods for discretizing SDEs based on choosing discrete time steps, such as Euler-Maruyama or its variants ... to sticky processes ...". It is argued there that the Markov chain approximation can be extended to multiple sticky Brownian motions. In the setting of multiple sticky Brownian motions, one can consult [BR20] and [RS15]. We are only aware of one strong approximation of sticky Brownian motion, in terms of time-changed embedded simple and symmetric random walks, in [Ami91].
The rest of this paper is structured as follows. We split the proof of Theorem 1 into several parts. In Section 2 we explore a deterministic version of the TCE (2), which is applied in Section 2.1 to show a monotonicity property, the essential ingredient to show uniqueness and convergence of the proposed approximation scheme (Section 2.3). In Section 2.2, we obtain conditions for the existence of the unique solution to the deterministic version of the TCE (2). The purpose of Section 3.1 is to apply the deterministic analysis to prove existence and uniqueness of the solution to the TCE (2) and the approximation Theorem 2. Then, in Section 3.2, we verify that the unique process satisfying the TCE (2) is adapted to the time-changed filtration and that it is a sticky Lévy process. Finally, in Section 3.3, using stochastic calculus instead of Theorem 2 from [RUB22], we analyze the boundary behavior of the solution to the proposed TCE to describe the infinitesimal generator of a sticky Lévy process.

DETERMINISTIC ANALYSIS
Following the ideas from [CPGUB13] and [CPGUB17], we start by considering a deterministic version of the TCE (2).
We will prove that every solution to the corresponding equation satisfies a monotonicity property, which will be the key in the proof of uniqueness. Assume that Z solves the TCE (2) almost surely. Hence, its paths satisfy an equation of the type

(5) h(t) = f(c(t)) + g(t), where c(t) = ∫_0^t I(h(s) > 0) ds,

where f : [0, ∞) → R is a càdlàg function without negative jumps starting at some non-negative value and g is a non-decreasing càdlàg function. (Indeed, we can take as f a typical sample path of t ↦ z + X_t − γt and g(t) = γt.) Recall that, f being càdlàg, we can define the jump of f at t, denoted Δf(t), as f(t) − f(t−). By a solution to (5), we might refer either to the function h (from which c is immediately constructed) or to the pair (h, c).
We first verify the non-negativity of the function h.
Proposition 1. Let f and g be càdlàg and assume that Δf ≥ 0, g is non-decreasing and f(0) + g(0) ≥ 0. Then, every solution h to the TCE (5) is non-negative. Furthermore, if g is strictly increasing, the function c given by c(t) = ∫_0^t I(h(s) > 0) ds is also strictly increasing.

Proof. Let h be a solution to (5) and suppose that it takes negative values. Note that h(0) = f(0) + g(0) ≥ 0 and that h is càdlàg without negative jumps. Hence, h reaches (−∞, 0) continuously. The right continuity of f (and then of h) ensures the existence of some non-degenerate interval on which h is negative. Fix ε > 0 small enough to ensure that τ defined by τ = inf{t ≥ 0 : h < 0 on (t, t + ε)} is finite. (Note that, with this definition and the fact that h reaches negative values continuously, we have h(τ) = 0.) Given that h is negative on a right neighborhood of τ, c is constant there, and then, for t in that neighborhood,

0 > h(t) = f(c(τ)) + g(t) ≥ f(c(τ)) + g(τ) = h(τ) = 0.

Hence, h is non-negative.
Assume now that g is strictly increasing. By definition, c is non-decreasing. We prove that c is strictly increasing by contradiction: assume that c(t) = c(s) for some s < t. Then, h = 0 on (s, t) and, by working on a smaller interval, we can assume that h(s) = h(t) = 0. However, we then get

0 = h(t) = f(c(s)) + g(t) > f(c(s)) + g(s) = h(s) = 0.

The contradiction implies that c is strictly increasing.
If f_−(t) = f(t−), note that the above result and (a slight modification of) its proof also hold for solutions to the inequality

h(t) ≥ f_−(c(t)) + g(t), where c(t) = ∫_0^t I(h(s) > 0) ds,

whenever f and g satisfy the hypotheses of Proposition 1. These inequalities are natural when studying the stability of solutions to (5) and will come up in the proof of Theorem 2.
2.1. Monotonicity and Uniqueness. The following comparison result for the solutions to Equation (5) will be the key idea in the uniqueness proof of Theorem 1. Moreover, we take it up again in Section 2.3, where it also plays an essential role in the approximation of sticky Lévy processes.
Proposition 2. Let (f_1, g_1) and (f_2, g_2) be pairs of functions such that f_i and g_i are càdlàg, Δf_i ≥ 0, g_i is strictly increasing and f_i(0) + g_i(0) ≥ 0. Suppose that f_1 ≤ f_2 and g_1 ≤ g_2. If h_1 and h_2 satisfy

h_i(t) = f_i(c_i(t)) + g_i(t), where c_i(t) = ∫_0^t I(h_i(s) > 0) ds,

for i = 1, 2, then we have the inequality c_1 ≤ c_2. In particular, Equation (5) has at most one solution when g is strictly increasing.
Proof. Fix ε > 0 and define c_ε(t) = c_2(ε + t). Set

τ = inf{t ≥ 0 : c_1(t) > c_ε(t)}.

To get a contradiction, suppose that τ < ∞. The continuity of c_1 and c_ε guarantees that c_1(τ) = c_ε(τ) and that c_1 is bigger than c_ε at some point t of every right neighborhood of τ. At such points, we have the inequality

(6) c_1(t) > c_ε(t).

The assumptions about g_1 and g_2 imply that g_1(τ) < g_2(ε + τ). Therefore

h_2(ε + τ) = f_2(c_2(ε + τ)) + g_2(ε + τ) ≥ f_1(c_1(τ)) + g_2(ε + τ) > f_1(c_1(τ)) + g_1(τ) = h_1(τ) ≥ 0.

Thanks to the right continuity of h_2, we can choose t close enough to τ such that h_2(ε + s) > 0 for every s ∈ [τ, t). Going back to the inequality (6), we see that

c_ε(t) = c_ε(τ) + (t − τ) ≥ c_1(τ) + (t − τ) ≥ c_1(t),

which is a contradiction. Therefore τ = ∞ and we conclude the announced result by letting ε → 0.
In particular, if (h_1, c_1) and (h_2, c_2) are two solutions to (5) (driven by the same functions f and g), then the above monotonicity result (applied twice) implies c_1 = c_2 and therefore h_1 = h_2.

2.2. Existence. The following variant of a well-known result of Skorohod (cf. [RY99, Chapter VI, Lemma 2.1]) will be helpful to verify the existence of the unique solution to the TCE (5).
Lemma 1. Let f : [0, ∞) → R be a càdlàg function with non-negative jumps and f(0) ≥ 0. Then there exists a unique pair of functions (r, l) defined on [0, ∞) which satisfies: r = f + l, r is non-negative, l is a non-decreasing continuous function that increases only on the set {s : r(s) = 0} and such that l(0) = 0. Moreover, the function l is given by

l(t) = − inf_{s≤t} (f(s) ∧ 0).

Note that the lack of negative jumps of f is fundamental to obtain a continuous function l. With the above lemma, we can give a deterministic existence result for Equation (5).
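On a discrete grid, the map of Lemma 1 admits a one-pass implementation (a sketch of our own, for a sampled path f with f(0) ≥ 0): l is the running record −inf_{s≤t}(f(s) ∧ 0) and r = f + l.

```python
def skorohod_reflection(f):
    """Discrete Skorohod reflection (Lemma 1): given samples f[0], f[1], ...
    with f[0] >= 0, return (r, l) where l[k] = -min(0, min_{j<=k} f[j])
    and r = f + l is the reflected (non-negative) path."""
    r, l = [], []
    running_inf = 0.0
    for x in f:
        running_inf = min(running_inf, x)  # inf_{s<=t} (f(s) ^ 0)
        l.append(0.0 - running_inf)
        r.append(x + l[-1])
    return r, l

r, l = skorohod_reflection([1.0, -0.5, 0.25, -1.0, 2.0])
print(r)  # [1.0, 0.0, 0.75, 0.0, 3.0]
print(l)  # [0.0, 0.5, 0.5, 1.0, 1.0]
```

Note that l increases exactly at the indices where r returns to 0, as the lemma prescribes.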
Proposition 3. Assume that f is càdlàg, Δf ≥ 0 and f(0) ≥ 0. Let (r, l) be the pair of functions of Lemma 1 applied to f. If {t ≥ 0 : r(t) = 0} has Lebesgue measure zero then, for every γ > 0, there exists a solution h to

(7) h(t) = f(c(t)) + γ ∫_0^t I(h(s) = 0) ds, where c(t) = ∫_0^t I(h(s) > 0) ds.

Equivalently, in terms of Equation (5), the function h satisfies

(8) h(t) = f_γ(c(t)) + γt, where c(t) = ∫_0^t I(h(s) > 0) ds

and f_γ(t) = f(t) − γt.
Proof. Applying Lemma 1 to f, we deduce the existence of a unique pair of functions (r, l) satisfying r(t) = f(t) + l(t), with r a non-negative function and l a continuous non-decreasing function such that l(0) = 0 and

(9) ∫_0^t I(r(s) > 0) dl(s) = 0.

To construct the solution to the deterministic TCE (7), let us consider the continuous and strictly increasing function a defined by a(t) = t + l(t)/γ for every t ≥ 0. Denote its inverse by c and consider the composition h = r ∘ c. The hypothesis on f implies that ∫_0^t I(r(s) = 0) ds = 0 for all t. Therefore, since r is non-negative, we get

t = ∫_0^t I(r(s) > 0) ds.

Substituting the deterministic time t for c(t) in the previous expression and using that c is the inverse of a, we have

c(t) = ∫_0^{c(t)} I(r(s) > 0) ds = ∫_0^t I(h(s) > 0) ds.

Finally, the definition of a and its continuity imply l(t) = γ(a(t) − t), so that

l(c(t)) = γ(t − c(t)).

Hence, the identity h(t) = r(c(t)) can be written as

h(t) = f(c(t)) + γ(t − c(t)),

as we wanted.
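The constructive proof above translates directly into a grid algorithm (a sketch under our own discretization conventions: piecewise-constant inversion of a, so the output is only a first-order approximation of h = r ∘ c):

```python
import bisect

def solve_deterministic_tce(f, dt, gamma):
    """Sketch of Proposition 3's construction on the grid t_k = k*dt:
    reflect f (Lemma 1) to get (r, l), set a(t) = t + l(t)/gamma,
    approximate c = a^{-1}, and return h = r o c together with c."""
    # Skorohod reflection: l(t) = -inf_{s<=t}(f(s) ^ 0), r = f + l
    r, l, running_inf = [], [], 0.0
    for x in f:
        running_inf = min(running_inf, x)
        l.append(0.0 - running_inf)
        r.append(x + l[-1])
    # a is continuous and strictly increasing, hence invertible
    a = [k * dt + l[k] / gamma for k in range(len(f))]
    h, c = [], []
    for k in range(len(f)):
        t = k * dt
        j = bisect.bisect_right(a, t) - 1  # largest j with a[j] <= t
        c.append(j * dt)                   # grid approximation of c(t)
        h.append(r[j])                     # h(t) = r(c(t))
    return h, c

# When f never forces reflection (e.g. f non-decreasing from 0), l = 0,
# a = Id, and the solution is h = f itself:
h, c = solve_deterministic_tce([0.0, 0.1, 0.2, 0.3], dt=0.1, gamma=1.0)
print(h)  # [0.0, 0.1, 0.2, 0.3]
```

Note that the hypothesis that {t : r(t) = 0} is Lebesgue-null is essential; for a driver such as f = −Id the reflected path is identically 0 and the construction no longer yields a solution of (7).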
2.3. Approximation. It is our purpose now to discuss a simple method to approximate the solution to the TCE (7). Among the large number of existing discretization schemes, we choose a widely used method, an adaptation of Euler's method for ODEs. Again, the key to the proof relies deeply on our monotonicity result.
Proposition 4. Let f be càdlàg and satisfy Δf ≥ 0 and f(0) ≥ 0. Assume that Equation (7), or equivalently (8), admits a unique solution, denoted by (h, c). Let (f_n) be a sequence of càdlàg functions which converge to f and let f̄_n = f_n − γ⌊n·⌋/n. Let c_n and h_n be given by c_n(0) = c_n(0−) = 0,

(10) h_n(t) = f̄_n(c_n(⌊nt⌋/n)) + γ⌊nt⌋/n and c_n(t) = ∫_0^t I(h_n(s) > 0) ds.

Then h_n → h in the Skorohod J_1 topology and c_n → c uniformly on compact sets.
Note that Propositions 2 and 3 give us conditions for the existence of a unique solution, which is the main assumption in the above proposition. Also, h_n is piecewise constant on [(k − 1)/n, k/n) and, therefore, c_n is piecewise linear on [(k − 1)/n, k/n] and, at the endpoints of this interval, c_n takes values in N/n. Hence, c_n(⌊tn⌋/n) = ⌊n c_n(t)⌋/n.
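For concreteness, here is a sketch of the scheme as we read (10) (names and discretization details are ours): h_n is frozen on each interval [k/n, (k+1)/n) and c_n advances at unit rate exactly while the frozen value is positive.

```python
def euler_tce(f, n, gamma, t_max):
    """Euler-type scheme for (8) with driver f (a callable): at each grid
    time k/n, evaluate h_n = f(c_n) - gamma*c_n + gamma*k/n and let c_n
    grow by 1/n exactly when h_n > 0.  Returns grid values of (h_n, c_n)."""
    k_max = int(n * t_max)
    c = 0.0
    hs, cs = [], [0.0]
    for k in range(k_max + 1):
        # On the grid, floor(n t)/n = k/n and c_n(k/n) lies in N/n,
        # so f_bar_n(c_n) + gamma*k/n reduces to the line below.
        h = f(c) - gamma * c + gamma * (k / n)
        hs.append(h)
        if k < k_max:
            c += (1.0 / n) if h > 0 else 0.0
            cs.append(c)
    return hs, cs

# Driver f = z + X with X = 0 and z = 1: h stays near 1, c grows at unit rate.
hs, cs = euler_tce(lambda u: 1.0, n=100, gamma=1.0, t_max=1.0)
print(round(hs[-1], 9), round(cs[-1], 9))  # -> 1.0 1.0
```

Driving the same routine with f(u) = −u reproduces the counterexample of the introduction, with c_n(1) ≈ 1/2.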
The proof of Proposition 4 is structured as follows: we prove that the sequence (c_n, n ≥ 1) is relatively compact. Given a subsequence (c_{n_j}, j ≥ 1) that converges to a certain limit c*, we see that ((c_{n_j}, h_{n_j}), j ≥ 1) also converges and its limit is given by (c*, h*), where h* = f_γ ∘ c* + γ Id and we recall that f_γ = f − γ Id. A slight modification of the proof of Proposition 2 implies that the limit (c*, h*) does not depend on the choice of the subsequence (n_j, j ≥ 1), and consequently the whole sequence converges.

Proof of Proposition 4. Since γ Id is continuous, our hypothesis f_n → f implies that f̄_n → f − γ Id. (Since addition is not a continuous operation on Skorohod space, as in [Bil99, Ex. 12.2], we need to use Theorem 4.1 in [Whi80] or Theorem 12.7.3 in [Whi02].) Fix t_0 > 0. Note that Equation (10) can be written as

c_n(t) = ∫_0^t I( f̄_n(c_n(⌊ns⌋/n)) + γ⌊ns⌋/n > 0 ) ds.

This guarantees that the functions c_n are Lipschitz continuous with Lipschitz constant equal to 1. Hence they are non-decreasing, equicontinuous and uniformly bounded on [0, t_0]. It follows from the Arzelà-Ascoli Theorem that (c_n, n ≥ 1) is relatively compact. Let (c_{n_j}, j ≥ 1) be a subsequence which converges uniformly in the space of continuous functions on [0, t_0], and let us call the limit c*, which is non-decreasing and continuous. Actually, c* is 1-Lipschitz continuous, so that c*(t) − c*(s) ≤ t − s for s ≤ t. This is a fundamental fact which will be relevant to proving that c = c*. Since c_{n_j}(⌊n_j t⌋/n_j) = ⌊n_j c_{n_j}(t)⌋/n_j for every t ≥ 0, we can write h_{n_j} = f̄_{n_j} ∘ c_{n_j} + γ⌊n_j ·⌋/n_j. We now prove that h_{n_j} → h* := f_γ ∘ c* + γ Id as j → ∞. (If a proof is needed, note that Proposition 3.6.5 in [EK86] can be used to identify the accumulation points.) Arguing as in Proposition 1, we see that f_γ− ∘ c* + γ Id is non-negative and that c* is strictly increasing. Since c* is continuous and strictly increasing, Theorem 13.2.2 in [Whi02, p. 430] implies that the composition operation is continuous at (f_γ, c*), so that f̄_{n_j} ∘ c_{n_j} → f_γ ∘ c*. Since γ Id is continuous, we conclude that h_{n_j} → h*, as asserted.
Another application of Fatou's lemma gives

c*(t) ≥ ∫_0^t I(h*(s) > 0) ds for every t ≥ 0.

Now, arguing as in the monotonicity result of Proposition 2, we get c ≤ c*.
Let us obtain the converse inequality c* ≤ c by a small adaptation of the proof of the aforementioned proposition, which then finishes the proof of Proposition 4. Let ε > 0, define c̄(t) = c(ε + t) and let τ = inf{t ≥ 0 : c*(t) > c̄(t)}. If τ < ∞, note that c*(τ) = c̄(τ) and, in every right neighborhood of τ, there exists t such that c*(t) > c̄(t). At τ, observe that

h(ε + τ) = f_γ(c̄(τ)) + γ(ε + τ) = h*(τ) + γε ≥ γε > 0.

Thanks to the right continuity of the right-hand side, there exists a right neighborhood of τ on which h(· + ε) is strictly positive and on which, by definition of c, c̄ grows linearly. Let t belong to that right neighborhood and satisfy c*(t) > c̄(t). Since c* is 1-Lipschitz continuous, we then obtain the contradiction

c*(t) ≤ c*(τ) + (t − τ) = c̄(τ) + (t − τ) = c̄(t) < c*(t).

Hence, τ = ∞ and therefore c* ≤ c̄. Since this inequality holds for any ε > 0, we deduce that c* ≤ c.
The above implies that c* = c and consequently h* = h. In other words, the limits c* and h* do not depend on the subsequence (n_j, j ≥ 1), and so we conclude the convergence of the whole sequence ((c_n, h_n), n ≥ 1) to the unique solution to the TCE (8).

APPLICATION TO STICKY LÉVY PROCESSES
The aim of this section is to apply the deterministic analysis of the preceding section to prove Theorems 1 and 2. The easy part is to obtain existence, uniqueness and approximation, while the Markov property and the fact that the solution Z to Equation (2) is a sticky Lévy process require some extra (probabilistic) work. We tackle the existence and uniqueness assertions in Theorem 1 and prove Theorem 2 in Subsection 3.1. Then, we prove the strong Markov property of solutions to Equation (2) in Subsection 3.2. This allows us to prove that solutions are sticky Lévy processes, thus finishing the proof of Theorem 1, but leaves open the precise computation of the stickiness parameter (or, equivalently, the boundary condition for its infinitesimal generator). We finally obtain the boundary condition in Subsection 3.3. We could use the excursion analysis of [RUB22] to obtain the boundary condition, but decided to include a different proof via stochastic analysis to make the two works independent.
3.1. Existence, Uniqueness and Approximation. We now turn to the proof of the existence and uniqueness assertions in Theorem 1.
Proof of Theorem 1, Existence and Uniqueness. Note that uniqueness for Equation (2) is immediate from Proposition 2, replacing the càdlàg function f by the paths of z + X − γ Id and taking g = γ Id.
To get existence, note that applying Lemma 1 to the paths of X, we deduce the existence of a unique pair of processes (R, L) satisfying R_t = z + X_t + L_t, with R a non-negative process and L a continuous process with non-decreasing paths such that L_0 = 0 and ∫_0^t I(R_s > 0) dL_s = 0. In fact, we have the explicit representation

(12) L_t = − inf_{s≤t} ((z + X_s) ∧ 0).

Note that R corresponds to the process X reflected at its infimum, which has been widely studied as a part of the fluctuation theory of Lévy processes (cf. [Ber96, Ch. VI, VII], [Bin75] and [Kyp14]).
From the explicit description of the process L given in (12), it follows (taking z = 0 for simplicity) that P(R_t = 0) = P(X_t = X̲_t), where X̲_t = inf_{s≤t}(X_s ∧ 0). Similarly, we write X̄_t = sup_{s≤t}(X_s ∨ 0). Proposition 3 from [Ber96, Ch. VI] ensures that the pairs of variables (X_t − X̲_t, −X̲_t) and (X̄_t, X̄_t − X_t) have the same distribution under P. Consequently,

P(R_t = 0) = P(X_t − X̲_t = 0) = P(X̄_t = 0).

The unbounded variation of X guarantees that 0 is regular for (−∞, 0) and for (0, ∞) (as mentioned, this result can be found in [Rog68] and has been extended in [AHUB20]). Hence, for any t > 0, X̄_t > 0 almost surely. We deduce that P(X̄_t = 0) = 1 − P(X_s > 0 for some s ≤ t) = 0. Thus, almost surely, the set {t ≥ 0 : R_t = 0} has zero Lebesgue measure. Therefore, we can apply Proposition 3 to deduce the existence of solutions to Equation (2).
Let us now pass to the proof of Theorem 2.
Proof of Theorem 2. As stated in Theorem 2, we allow the convergence X^n → X to be weak or almost sure. Using Skorohod's representation theorem, we may assume that it holds almost surely on some suitable probability space. The desired result follows immediately from Proposition 4 by considering the paths of f_γ = z + X − γ Id and f̄_n = z_n + X^n − γ⌊n·⌋/n.

3.2. Measurability details and the strong Markov property. In order to complete the proof of Theorem 1, it remains to verify the adaptedness of the unique solution to the TCE (2) to the time-changed filtration (F̃_t, t ≥ 0) and that such a solution is, in fact, a sticky Lévy process based on X. This is the objective of the current section, which ends the proof of Theorem 1.
By construction, the mapping t ↦ C_t is continuous and strictly increasing. Furthermore, given that C is the inverse of the map t ↦ t + L_t/γ, we can write

{C_t ≤ s} = {t ≤ s + L_s/γ} ∈ F_s

for every t ≥ 0. In other words, the random time C_t is an (F_s)-stopping time, since the filtration is right-continuous. Therefore the process C is an (F_s)-time change and Z is adapted to the time-changed filtration (F̃_t, t ≥ 0). In this sense we say that Z exhibits no randomness additional to that of the original Lévy process. This contrasts with the SDE describing sticky Brownian motion (cf. [War97, Theorem 1]).
Let us verify that the unique solution Z to (2) is an extension of the killed process X^0. By construction, we see that if Z_0 = z > 0, then Z equals X until they both reach zero. Hence Z and X have the same law if killed upon reaching zero. Let now Z be the unique solution of (2) with Z_0 = z = 0. The concrete construction which proves existence for (2) in Section 2.2 shows that

∫_0^t I(Z_s = 0) ds = t − C_t = L_{C_t}/γ.

We have already argued that the unbounded variation hypothesis implies that L_t > 0 for any t > 0, and therefore L_∞ > 0 almost surely. As above, recalling that C is the inverse of Id + L/γ, we see that C_∞ = ∞. We conclude that L_{C_∞} > 0 almost surely, so that Z spends positive time at zero. We will now use the unbounded variation of X to guarantee the regular and instantaneous character of 0 for Z. By construction, the unique solution Z to the TCE (2) is the process X reflected at its infimum, run with the continuous and strictly increasing time change C, that is,

Z_t = R_{C_t}.

Since 0 is regular for (−∞, 0) thanks to the unbounded variation hypothesis (meaning that X visits (−∞, 0) immediately upon reaching 0), we conclude the regularity of 0 for Z. Similarly, given the regularity of 0 for (0, ∞) for X, we have

inf{t > 0 : Z_t > 0} = 0 almost surely on {Z_0 = 0}.

Thus, 0 is an instantaneous point.
To conclude the proof of Theorem 1, it now remains to prove the strong Markov property. From the construction of the unique solution to the TCE (2), we deduce the existence of a measurable mapping F_s that maps the paths of the Lévy process X and the initial condition z to the unique solution to the TCE (2) evaluated at time s, that is, Z_s = F_s(X, z) for s ≥ 0. Let T be an (F̃_t)-stopping time. Approximating T by a decreasing sequence of (F̃_t)-stopping times (T_n, n ≥ 1) taking only finitely many values, we see that C_T is an (F_t)-stopping time. From the TCE (2), we deduce that

Z_{T+s} = Z_T + (X_{C_{T+s}} − X_{C_T}) + γ ∫_T^{T+s} I(Z_r = 0) dr.

Consider the processes C̃, X̃ and Z̃ given by C̃_s = C_{T+s} − C_T, X̃_s = X_{C_T + s} − X_{C_T} and Z̃_s = Z_{T+s}, respectively. We can write the last equation as

Z̃_s = Z_T + X̃_{C̃_s} + γ ∫_0^s I(Z̃_r = 0) dr,

and C̃ satisfies C̃_s = ∫_0^s I(Z̃_r > 0) dr for s ≥ 0. In other words, Z̃ is a solution to the TCE (2) driven by X̃ with initial condition Z_T. Consequently Z̃_s = F_s(X̃, Z_T). Note that X̃ has the same distribution as X and is independent of F̃_T. Hence, the conditional law of Z̃ given F̃_T is that of F_·(X, z) evaluated at z = Z_T. (One could appeal to Lemma 8.7 in [Kal21, p. 169] if needed.) This allows us to conclude that Z is a strong Markov process and concludes the proof of Theorem 1.

3.3. Stickiness and martingales.
In this section we aim at describing the boundary condition of the infinitesimal generator of the sticky Lévy process Z of Theorem 1 by proving the following result.
Proposition 5. Let X be a Lévy process with unbounded variation and no negative jumps and let L be its infinitesimal generator. For a given z ≥ 0, let Z be the unique (strong Markov) process satisfying the time-change equation (2). Then, for every f : [0, ∞) → R which is of class C^{2,b} and which satisfies the boundary condition γ f′(0+) = L f(0+), the process M defined by

M_t = f(Z_t) − f(z) − ∫_0^t L f(Z_s) ds

is a martingale.

Theorem 2 from [RUB22] describes the domain of the infinitesimal generator of any recurrent extension of X^0 (which is proved to be a Feller process) by means of three non-negative constants p_c, p_d, p_κ and a measure µ on (0, ∞). To describe such parameters, we note a couple of important facts about the unique solution to (2). By construction, we can see that it leaves 0 continuously. Indeed, if we consider the left endpoint g of some excursion interval of Z, then C_g is the left endpoint of some excursion interval of the process reflected at its infimum, R. Thanks to Proposition 2 from [RUB22], such excursions start at 0, so Z leaves 0 continuously. Thus, from [RUB22], p_c > 0 and µ = 0. Note also that Z has infinite lifetime because R has it and C is bounded by the identity function, so p_κ = 0. Finally, since Z spends positive time at 0, we have p_d > 0. Theorem 2 from [RUB22] then ensures that every function f in the domain of the infinitesimal generator of Z satisfies a boundary condition expressed in terms of these parameters, which in our case reduces to γ f′(0+) = L f(0+).

Our proof of Proposition 5 does not require the results from [RUB22]. The main intention is to give an application of stochastic calculus, since we recall that a classical computation of the infinitesimal generator for Lévy processes is based on Fourier analysis (cf. [Ber96]). Regarding the generator L, recall that it can be applied to C^{2,b} functions such as f and that L f is continuous (an explicit expression is forthcoming). The lack of negative jumps implies that L f is defined even if f is only defined and C^{2,b} on an open set containing [0, ∞).
Proof of Proposition 5. Let Z be the unique solution to the TCE (2) driven by the SPLP X. Itô's formula for semimartingales [Pro04, Chapter II, Theorem 32] guarantees that, for every function f ∈ C^2_0[0, ∞),

(14) f(Z_t) = f(Z_0) + ∫_0^t f′(Z_{s−}) dZ_s + (1/2) ∫_0^t f″(Z_{s−}) d[Z, Z]^c_s + ∑_{s≤t} (Δf(Z_s) − f′(Z_{s−}) ΔZ_s).

In order to analyze this expression, we recall the so-called Lévy-Itô decomposition, which describes the structure of any Lévy process in terms of three independent auxiliary Lévy processes, each with a different type of path behaviour. Consider the Poisson point process N of the jumps of X, given by N([0, t] × A) = #{s ≤ t : ΔX_s ∈ A}. Denote by ν the characteristic measure of N, which is called the Lévy measure of X and fulfills the integrability condition ∫_{(0,∞)} (1 ∧ x²) ν(dx) < ∞. Then, we write the Lévy-Itô decomposition as X = X^(1) + X^(2) + X^(3), where X^(1)_t = bt + σB_t with B a Brownian motion independent of N, diffusion coefficient σ² ≥ 0 and drift b; X^(2) is the compound Poisson process consisting of the sum of the large jumps of X; and X^(3), built from the compensated small jumps, is a square-integrable martingale.
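In display form, the decomposition just described reads (a standard formulation, consistent with the conventions above):

```latex
X_t \;=\; \underbrace{\,bt + \sigma B_t\,}_{X^{(1)}_t}
  \;+\; \underbrace{\int_0^t \!\!\int_{[1,\infty)} x \, N(ds,dx)}_{X^{(2)}_t}
  \;+\; \underbrace{\int_0^t \!\!\int_{(0,1)} x \,\bigl(N(ds,dx) - \nu(dx)\,ds\bigr)}_{X^{(3)}_t}.
```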
Assuming the Lévy-Itô decomposition of X and using the next result, whose proof is postponed, we will see that (14) can be rewritten in the form (15), in which the time-changed stochastic integrals are collected into a single square-integrable martingale M.
Lemma 2. Let C be an (F_t)-time change whose paths are continuous and locally bounded. Let X be a right-continuous local martingale with respect to (F_t, t ≥ 0). Then the time-changed process X ∘ C is a right-continuous local martingale with respect to the time-changed filtration (F̃_t, t ≥ 0).

Lemma 2 ensures that the time-changed process (σB + X^(3)) ∘ C remains a local martingale. According to Theorem 20 from [Pro04, Chapter II], local martingales are preserved under stochastic integration provided that the integrand process is adapted and càglàd. Consequently the stochastic integral

M = (f′ ∘ Z_−) · ((σB + X^(3)) ∘ C)

is a local martingale. Thanks to Corollary 27.3 from [Pro04, Chapter II], we know that a necessary and sufficient condition for a local martingale to be a square-integrable martingale is that its quadratic variation is integrable. Let us verify that E[[M, M]_t] < ∞ for every t ≥ 0. Theorem 10.17 from [Jac79] implies that the quadratic variation of the time-changed process coincides with the time change of the quadratic variation. Given that the Brownian motion B is independent of X^(3), the quadratic variation of σB + X^(3) at time t is σ²t plus the sum of the squared jumps of X^(3) up to t, whose expectation grows linearly in t; since C_t ≤ t and f′ is bounded, the integrability follows. This verifies the decomposition (15). Later we will deal with the last term of this decomposition.
Coming back to Itô's formula (14), we need to calculate the term corresponding to the integral with respect to the continuous part of the quadratic variation of Z. First, since Z = z + X ∘ C + γ(Id − C), we decompose that variation as

[Z, Z]_s = [X ∘ C, X ∘ C]_s + 2γ [X ∘ C, Id − C]_s + γ² [Id − C, Id − C]_s

for every s ≥ 0. The first term is [X, X]_{C_s}. Given the finite variation of γ(Id − C) and the continuity of C, Theorem 26.6 from [Kal02] implies that, almost surely, the other two terms are zero.

Making the change of variable r = C_s, the sum of the jumps in (14) can be written as

(16) ∑_{s≤t} (Δf(Z_s) − f′(Z_{s−}) ΔZ_s) = ∑_{r≤C_t} (Δf(Z_{A_r}) − f′(Z_{A_r−}) ΔX_r),

where A denotes the inverse of C. We claim that A is an (F̃_t)-time change. Indeed, splitting into the cases r < t and r ≥ t, we see that {A_t ≤ s} ∩ {C_s ≤ r} = {t ≤ C_s ≤ r} ∈ F_r for any r ≥ 0. Exercise 1.12 from [RY99, Chapter V] ensures that the time-changed filtration (F̃_{A_t}, t ≥ 0) is in fact (F_t, t ≥ 0). Thus, for any continuous function g, the process (g(Z_{A_t−}), t ≥ 0) is (F_t)-predictable. We return to (15) to put together the sum of the jumps in (16) and the stochastic integral (f′ ∘ Z_−) · (X^(2) ∘ C). For this purpose, it is convenient to rewrite the last integral as (f′ ∘ Z_− ∘ A ∘ C) · (X^(2) ∘ C) and apply Lemma 10.18 from [Jac79]. Collecting terms, one recognizes the generator L of X acting on C^{2,b} functions; the extended generator of X^0 is given by L f on C^{2,b} functions f on [0, ∞) which vanish (together with their derivatives) at 0 and ∞. Note that L f(z) is bounded. Define L f(0) as

L f(0) = L f(0+) − γ f′(0+), where L f(0+) = b f′(0+) + (σ²/2) f″(0+) + ∫_{(0,∞)} ( f(x) − f(0) − f′(0+) x I(x ∈ (0, 1)) ) ν(dx).
Given that L f(0) = L f(0+) − γ f′(0+), collecting the martingale terms M + M^C we deduce that, if a function f ∈ C²[0, ∞) satisfies the boundary condition L f(0) = 0, or equivalently γ f′(0+) = L f(0+), then f(Z_t) − f(z) − ∫_0^t L f(Z_s) ds is a martingale. By hypothesis, the last term is bounded by a linear function of t, so that E[f(Z_t)] is differentiable at zero and the derivative equals L f(z).
We conclude this section with the proof of Lemma 2.
Proof (Lemma 2). Let (β_n, n ≥ 1) be a localizing sequence for X, so that β_n → ∞ as n → ∞ and, for each n ≥ 1, the process X^{β_n} I(β_n > 0) is a uniformly integrable martingale. Keeping the notation A for the inverse of C, we will prove that (A(β_n), n ≥ 1) is a sequence of (F̃_t)-stopping times that localizes X ∘ C. The stopping time property follows by observing that

{A(β_n) ≤ t} = {β_n ≤ C_t} ∈ F_{C_t} = F̃_t.

Hence (X ∘ C)^{A(β_n)} is an (F̃_t)-martingale. Moreover, A(β_n) → ∞ as n → ∞ since C ≤ Id.
Note also that [Z, Z]_s = [X, X]_{C_s} for every s ≥ 0, so that the continuous-part term in (14) reduces to (σ²/2) ∫_0^t f″(Z_{s−}) (1 − I(Z_s = 0)) ds. Now we analyze the last term on the right-hand side of (14), which corresponds to the jump part. Let us note that the discontinuities of f ∘ Z derive from the discontinuities of Z, which are caused by the jumps of X ∘ C; in other words, {s ≤ t : |Δf(Z_s)| > 0} ⊆ {s ≤ t : ΔZ_s > 0} = {s ≤ t : Δ(X ∘ C)_s > 0}.