Approximating diffusion reflections at elastic boundaries

We show a probabilistic functional limit result for one-dimensional diffusion processes that are reflected at an elastic boundary which is a function of the reflection local time. Such processes are constructed as limits of a sequence of diffusions which are discretely reflected by small jumps at an elastic boundary, with reflection local times being approximated by $\varepsilon$-step processes. The construction yields the Laplace transform of the inverse local time for reflection. Processes and approximations of this type play a role in finite fuel problems of singular stochastic control.


Introduction
The classical Skorokhod problem is that of reflecting a path at a boundary. It is a standard tool to construct solutions to SDEs with reflecting boundary conditions. The fundamental example is Brownian motion with values in [0, ∞) being reflected at a constant boundary at zero, solved by Skorokhod [Sko61]. Starting with Tanaka [Tan79], well-known generalizations concern diffusions in multiple dimensions with normal or oblique reflection at the boundary of a given (time-invariant) domain in Euclidean space satisfying certain smoothness or other regularity conditions, cf. e.g. [LS84, DI93]. Other generalizations admit an a-priori given but time-dependent boundary, see for instance [NÖ10]. Our contribution is a functional limit result for reflection at a boundary which is a function of the reflection local time L, for general one-dimensional diffusions X. Because of the mutual interaction between boundary and diffusion, see Figure 1a, we call the boundary elastic.

Figure 1: The elastically reflected diffusion. (a) X against real time t. (b) X against local time L.

Such elastic boundaries typically appear in solutions to singular control problems of finite fuel type, where the optimal control is the reflection local time that keeps a diffusion process within a no-action region, cf. Karatzas and Shreve [KS86]. In order to construct the control explicitly (pathwise via Skorokhod's lemma), finite fuel studies typically assume that the dynamics of the diffusion can be expressed without reference to the control (see e.g. [Kob93, EKK91]). This is different from our setup, where the non-linear mutual interdependence between diffusion and control (local time) subverts a direct construction by Skorokhod's lemma, already for OU processes [WG03, Remark 1]. We relate to a concrete application in the context of optimal liquidation of a financial asset position in Remark 3.4.
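For orientation, the classical pathwise construction via Skorokhod's lemma mentioned above can be sketched in code. The following is a minimal sketch (not from the source; function name hypothetical) of the Skorokhod map for a discretized path reflected at the constant boundary zero, where the reflecting term is L_t = sup_{s≤t} (−Y_s)⁺:

```python
def skorokhod_reflect(y):
    """Skorokhod map at the constant boundary zero: given a sampled path y with
    y[0] >= 0, return (x, l) with x = y + l >= 0, l non-decreasing, and l
    increasing only when x = 0."""
    x, l = [], []
    running = 0.0
    for v in y:
        running = max(running, -v)   # l_t = sup_{s<=t} (-y_s)^+
        l.append(running)
        x.append(v + running)        # reflected path x = y + l
    return x, l
```

For the elastic boundaries studied here, no such explicit one-shot formula is available, which is precisely what motivates the approximation scheme below.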
A natural idea for approximation is to proxy 'infinitesimal' reflections by small ε-jumps ∆L^ε, thereby inducing jumps of the elastic reflection boundary, see Figure 2. This allows us to express excursion lengths of the approximating diffusion X^ε in terms of independent hitting times of continuous diffusions, which naturally leads to an explicit expression (3.9) for the Laplace transform of the inverse local time of X. In our singular control context, L^ε is asymptotically optimal at first order if L is optimal, see Remark 3.4. Our main result is Theorem 3.2. We prove ucp-convergence of (X^ε, L^ε) to (X, L) by showing in Section 4 tightness of the approximation sequence (X^ε, L^ε)_ε and using Kurtz and Protter's notion of uniformly controlled variations (UCV), introduced in [KP91].
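The ε-jump scheme just described can be sketched numerically. The following Euler–Maruyama simulation is an illustrative sketch only (not the authors' implementation); the Ornstein–Uhlenbeck-type drift and the linear boundary g used in the example are hypothetical choices:

```python
import math
import random

def simulate_eps_reflection(b, sigma, g, eps, dt=1e-4, T=1.0, seed=0):
    """Sketch of the discretely reflected diffusion (X^eps, L^eps): between
    jumps X follows dX = b(X) dt + sigma(X) dW (Euler-Maruyama); whenever X
    reaches the elastic boundary g(L), it jumps down by eps and L jumps up
    by eps, so the boundary g(L) retracts away from X."""
    rng = random.Random(seed)
    L = eps                      # initial jump: Delta L_0 = eps ...
    X = g(0.0) - eps             # ... moves X from g(0) to g(0) - eps
    path = [(0.0, X, L)]
    t = 0.0
    while t < T:
        dW = rng.gauss(0.0, math.sqrt(dt))
        X += b(X) * dt + sigma(X) * dW
        t += dt
        if X >= g(L):            # boundary hit: discrete reflection
            X = g(L) - eps
            L += eps
        path.append((t, X, L))
    return path
```

On the recorded grid points, X^ε always stays strictly below the current boundary g(L^ε), and L^ε is a non-decreasing ε-step process, mirroring the construction in (3.1)–(3.2).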

Elastic reflection: Model and notation
We consider a filtered probability space (Ω, F, (F_t)_{t≥0}, P) with a one-dimensional (F_t)-Brownian motion W and filtration (F_t) satisfying the usual conditions of right-continuity and completeness. Let σ : R → (0, ∞) and b : R → R be Lipschitz-continuous and such that the diffusion with generator G = ½σ²(x) d²/dx² + b(x) d/dx is regular and recurrent. Moreover, let X be a (b, σ)-diffusion with reflection at an elastic boundary. This means that, for a given non-decreasing g ∈ C¹([0, ∞)), the pair (X, L) satisfies

dX_t = b(X_t) dt + σ(X_t) dW_t − dL_t,  X_0 = g(0),   (2.1)
X_t ≤ g(L_t) for all t ≥ 0,   (2.2)

with the reflection local time L being a continuous non-decreasing process that only grows when X is at the (local-time-dependent) boundary g(L), i.e. ∫_0^∞ 1_{{X_t < g(L_t)}} dL_t = 0.
Note that the reflecting boundary is not deterministic in real time and space coordinates. Instead, the boundary g(L), at which the diffusion X is being reflected, is elastic in the sense that it is itself a stochastic process which retracts when being hit, cf. Figure 1b. Strong existence and uniqueness of (X, L) follow from classical results (cf. Remark 3.3) and are also an outcome of our explicit construction below, see Lemma 4.9.
We are particularly interested (see Remark 3.4) in the inverse local time τ_ℓ := inf{t > 0 | L_t > ℓ}, ℓ ≥ 0. Here L, as in (2.2), is the symmetric local time of the continuous semimartingale X at a given level a ∈ R, i.e. L_t = lim_{ε↓0} (1/2ε) ∫_0^t 1_{(a−ε,a+ε)}(X_s) d⟨X, X⟩_s. We denote by H_y the first hitting time of a point y by a (b, σ)-diffusion, and write H_{x→y} for the hitting time when the diffusion starts in x. Note that P[H_{x→y} < ∞] = 1 for all x, y by our assumption that the diffusion is regular and recurrent.
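The defining limit of the symmetric local time can be illustrated numerically. The following is a sketch (not from the source) estimating (1/2ε) ∫_0^T 1_{(a−ε,a+ε)}(X_s) d⟨X⟩_s for a simulated standard Brownian path, where σ ≡ 1 so that d⟨X⟩_s = ds; the band widths and the time grid are arbitrary illustrative choices:

```python
import math
import random

def occupation_local_time(a, eps, dt=1e-4, T=1.0, seed=42):
    """Estimate L_T ~ (1/2 eps) * int_0^T 1_{(a-eps, a+eps)}(X_s) d<X>_s
    for a standard Brownian path X (so d<X>_s = ds), on an Euler grid."""
    rng = random.Random(seed)
    x, occ = 0.0, 0.0
    n = int(T / dt)
    for _ in range(n):
        if a - eps < x < a + eps:   # time spent in the band around level a
            occ += dt
        x += rng.gauss(0.0, math.sqrt(dt))
    return occ / (2.0 * eps)
```

Shrinking the band width ε (on the same simulated path) should give roughly the same value, consistent with the limit defining L.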

Approximation by small ε-reflections
We construct solutions to (2.1)–(2.2) and derive an explicit representation (3.9) of the Laplace transform of the inverse local time at boundary g by approximating reflection by jumps in the system of SDEs (3.1)–(3.2): as soon as the process X^ε hits the boundary, it is reflected by a jump of fixed size ε > 0. We will speak of L^ε as a discrete local time, as it approximates L in the sense of Theorem 3.2. Since the target reflected diffusion X starts at the boundary g, we now have X^ε_0 = g(0) − ε after an initial jump ∆L^ε_0 = ε away from X^ε_{0−} := g(0).
Proof (of Lemma 3.1). It suffices to show τ_∞ = ∞ (a.s.). To this end, let g_∞ := lim_n g(nε) ∈ R ∪ {∞}. In the case g_∞ < ∞, one can find x, y ∈ R with g_∞ − ε < x < y < g_∞. By recurrence of (b, σ)-diffusions, we have (a.s.) finite times τ^y_0 := inf{t > 0 | X^ε_t = y}, τ^x_n := inf{t > τ^y_{n−1} | X^ε_t = x}, τ^y_n := inf{t > τ^x_n | X^ε_t = y}, for n ∈ N. The durations τ^y_n − τ^x_n, n ∈ N, of upcrossings of the interval [x, y] are i.i.d. by the strong Markov property of the time-homogeneous diffusion. Moreover, X^ε is continuous on all [[τ^x_n, τ^y_n]]. By the law of large numbers, (1/n) Σ_{i=1}^n exp(−λ(τ^y_i − τ^x_i)) converges almost surely for n → ∞ to the Laplace transform E_x[exp(−λH_y)], λ ≥ 0, of the time H_y for hitting y by the (b, σ)-diffusion started at x. This expectation is strictly less than 1 for λ > 0, as H_y > 0 P_x-a.s. for y > x, whereas on the event {τ_∞ < ∞} the upcrossing durations would tend to zero and the limit of (1/n) Σ_{i=1}^n exp(−λ(τ^y_i − τ^x_i)) would equal 1; this contradiction shows τ_∞ = ∞ a.s.

Next, let τ̃_n := inf{t > τ_{n−1} | X^ε_{t−} = g((n−1)ε)} for n ≥ 1, so that τ_{n−1} < τ̃_n ≤ τ_n and X^ε_{τ̃_n−} = g((n−1)ε) = X^ε_{(τ_{n−1})−}. For ℓ = kε with k ∈ N, τ^ε_{kε} is the k-th jump time of X^ε and L^ε within the period (0, ∞). Between consecutive jump times the approximating process X^ε is a continuous (b, σ)-diffusion, and τ^ε_ℓ − τ^ε_{ℓ−ε} is the length of the k-th excursion of X^ε away from the boundary. Note that this excursion length is independent of F^ε_{τ^ε(ℓ−)} and its (conditional) distribution is that of the hitting time H_{g(ℓ−ε)−ε → g(ℓ)}.

The Laplace transform of first hitting times H_{x→z} is well known, see e.g. [RW00, V.50]: for x < z and λ > 0,

E[exp(−λ H_{x→z})] = Φ_{λ,−}(x) / Φ_{λ,−}(z),   (3.5)

where the functions Φ_{λ,±} are uniquely determined, up to a constant factor, as the increasing (Φ_{λ,−}) respectively decreasing (Φ_{λ,+}) positive solutions Φ of the differential equation GΦ = λΦ. Since we assume the boundary function g to be non-decreasing, only Φ_{λ,−} is of interest for our purpose.
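For constant coefficients the solutions Φ_{λ,±} of GΦ = λΦ are explicit, which gives a concrete instance of the hitting-time Laplace transform. The sketch below (our illustration, not from the source) treats b(x) ≡ μ, σ(x) ≡ σ, for which the increasing positive solution is Φ_{λ,−}(x) = exp(γ₊x) with γ₊ = (−μ + √(μ² + 2λσ²))/σ²:

```python
import math

def phi_minus(x, lam, mu=0.0, sigma=1.0):
    """Increasing positive solution of (sigma^2/2) Phi'' + mu Phi' = lam Phi
    for constant coefficients: Phi(x) = exp(gamma_plus * x)."""
    gamma = (-mu + math.sqrt(mu * mu + 2.0 * lam * sigma * sigma)) / (sigma * sigma)
    return math.exp(gamma * x)

def hitting_laplace(x, z, lam, mu=0.0, sigma=1.0):
    """Laplace transform E[exp(-lam * H_{x->z})] = Phi_{lam,-}(x) / Phi_{lam,-}(z),
    for hitting an upper level z > x."""
    return phi_minus(x, lam, mu, sigma) / phi_minus(z, lam, mu, sigma)
```

For standard Brownian motion (μ = 0, σ = 1) this recovers the classical formula E[exp(−λH_{x→z})] = exp(−√(2λ)(z − x)).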
Due to independence of Brownian increments over disjoint time intervals, the Laplace transform of the inverse local time can be calculated from the sum of (independent) excursion lengths at the (discrete) local times ℓ_n := εn as, for ℓ ∈ εN,

E[exp(−λτ^ε_ℓ)] = ∏_{k=1}^{ℓ/ε} E[exp(−λ H_{g(ℓ_{k−1})−ε → g(ℓ_k)})].

Therefore, by (3.5), we obtain

E[exp(−λτ^ε_ℓ)] = ∏_{k=1}^{ℓ/ε} Φ_{λ,−}(g(ℓ_{k−1})−ε) / Φ_{λ,−}(g(ℓ_k)).   (3.8)

Intuitively, this already suggests the formula (3.9) when taking ε → 0.
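The discrete product over excursions and its ε → 0 limit can be compared numerically. The sketch below is our illustration (not from the source): it assumes the limiting Laplace transform has the integral form exp(−∫_0^ℓ (1+g′(v)) Φ′(g(v))/Φ(g(v)) dv) suggested by the telescoping product, and the Brownian choice Φ(x) = exp(√(2λ)x) with a linear boundary g is a hypothetical example:

```python
import math

def discrete_laplace(phi, g, eps, ell):
    """Product over excursions: prod_k phi(g(l_{k-1}) - eps) / phi(g(l_k)),
    l_k = k*eps, i.e. the discrete Laplace transform of tau^eps_ell."""
    K = int(round(ell / eps))
    p = 1.0
    for k in range(1, K + 1):
        p *= phi(g((k - 1) * eps) - eps) / phi(g(k * eps))
    return p

def limit_laplace(phi, dphi, g, dg, ell, n=10000):
    """Candidate limit: exp(-int_0^ell (1 + g'(v)) phi'(g(v))/phi(g(v)) dv),
    evaluated by the midpoint rule."""
    h = ell / n
    s = sum((1.0 + dg((i + 0.5) * h)) * dphi(g((i + 0.5) * h)) / phi(g((i + 0.5) * h))
            for i in range(n))
    return math.exp(-s * h)
```

For Brownian motion with λ = 2 (so Φ(x) = e^{2x}) and g(ℓ) = 1 + ℓ/2, both expressions reduce to exp(−2(ℓ + g(ℓ) − g(0))), so they agree even before the limit.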
Theorem 3.2. The approximations (X^ε_t, L^ε_t)_{t≥0} from (3.1)–(3.2) converge uniformly on compacts in probability for ε → 0 to a pair (X_t, L_t)_{t≥0} of continuous adapted processes with non-decreasing L, which is the unique strong solution (globally on [0, ∞)) of the reflected SDE (2.1)–(2.2). The inverse local time τ_ℓ := inf{t > 0 | L_t > ℓ} has the Laplace transform

E[exp(−λτ_ℓ)] = exp( −∫_0^ℓ (1 + g′(v)) Φ′_{λ,−}(g(v)) / Φ_{λ,−}(g(v)) dv ),  λ > 0,   (3.9)

where Φ_{λ,−} is the (up to a constant factor) unique positive increasing solution of the differential equation GΦ = λΦ, for G denoting the generator of the (b, σ)-diffusion.
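Heuristically, the limiting Laplace transform arises from the telescoping product of hitting-time Laplace transforms over the ε-excursions. The following computation is our sketch of the passage ε → 0 (with the shorthands Φ := Φ_{λ,−} and ℓ_k := kε), not quoted from the source:

```latex
\begin{aligned}
\log \mathbb{E}\bigl[e^{-\lambda \tau^{\varepsilon}_{\ell}}\bigr]
  &= \sum_{k=1}^{\ell/\varepsilon}
     \Bigl(\log \Phi\bigl(g(\ell_{k-1})-\varepsilon\bigr)
          -\log \Phi\bigl(g(\ell_{k-1})+\varepsilon\, g'(\ell_{k-1})+o(\varepsilon)\bigr)\Bigr) \\
  &= -\sum_{k=1}^{\ell/\varepsilon} \varepsilon\,\bigl(1+g'(\ell_{k-1})\bigr)\,
     \frac{\Phi'\bigl(g(\ell_{k-1})\bigr)}{\Phi\bigl(g(\ell_{k-1})\bigr)} + o(1)
   \;\xrightarrow[\;\varepsilon\to 0\;]{}\;
   -\int_0^{\ell} \bigl(1+g'(v)\bigr)\,
     \frac{\Phi'\bigl(g(v)\bigr)}{\Phi\bigl(g(v)\bigr)}\,dv .
\end{aligned}
```

The first equality uses g(ℓ_k) = g(ℓ_{k−1}) + εg′(ℓ_{k−1}) + o(ε) for g ∈ C¹; the sum is then a Riemann sum for the integral.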
Proof. Existence and uniqueness of (X, L) are shown in Lemma 4.9 below. Corollary 4.10 gives uniform convergence in probability. By dominated convergence, the right-hand side of equation (3.8) converges to the right-hand side of (3.9) as ε → 0. For the left-hand side, it suffices to prove weak convergence τ^ε_ℓ ⇒ τ_ℓ as ε → 0 for all ℓ ≥ 0. This is done in Corollary 4.11 below.
Remark 3.3. Existence and uniqueness for (X, L) can also be concluded from classical results, cf. [DI93] (suitably extended to unbounded domains), by considering the pair (X, L) as a degenerate diffusion in R² with oblique reflection in direction (−1, +1) at a smooth boundary, see Figure 1b. This uses an iteration argument involving the Skorokhod map and yields another approximation by a sequence of continuous processes. Yet, these do not satisfy the target diffusive dynamics inside the domain, except at the limiting fixed point (unless (b, σ) are constant). In contrast, (X^ε, L^ε) adheres to the same dynamics as (X, L) between jump times, cf. (2.1) and (3.1), is Markovian, and has a natural interpretation.
Remark 3.4. An application example for (3.9) and elastically reflected diffusions is the optimal execution of the sale of a financial asset position when liquidity is stochastic, see [BBF18]. A large trader with adverse price impact seeks to maximize expected proceeds from selling θ risky assets in an illiquid market. His trading strategy A (predictable, càdlàg, non-decreasing) affects the asset price via an increasing price-impact function f, for Brownian motions (B, W) with correlation ρ; the gains to be maximized in expectation are denoted G_∞(A). The optimal strategy turns out to be the local time L of a reflected Ornstein-Uhlenbeck process X (with b(x) := ρσσ − βx and σ(x) = σ > 0) at a suitable elastic boundary g, as in (2.1)–(2.2), see [BBF18, Section 3]. After a change-of-measure argument, one can write the expected proceeds from such strategies as E[G_∞(L)] = ∫_0^θ f(g(ℓ)) E[e^{−δτ_ℓ}] dℓ. To find the optimal free boundary g, one can then apply (3.9) to express the proceeds as a functional of the boundary g, and optimize over all admissible boundaries by solving a calculus-of-variations problem. This is key to the proof in [BBF18]. The discrete local time L^ε has a natural interpretation as a step process which approximates the continuous optimal strategy L by small block trades, as would be realistic in an actual implementation, with identical (no-)action region. The approximation is asymptotically optimal for the control problem: indeed, straightforward calculations similar to the derivation of (3.8) show that L^ε is asymptotically optimal at first order as ε → 0.
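The proceeds functional E[G_∞(L)] = ∫_0^θ f(g(ℓ)) E[e^{−δτ_ℓ}] dℓ can be evaluated numerically once the Laplace transform of τ_ℓ is known. The sketch below is purely illustrative: instead of the OU dynamics of [BBF18], it substitutes the Brownian-motion closed form E[e^{−δτ_ℓ}] = exp(−√(2δ)(ℓ + g(ℓ) − g(0))) (a simplifying stand-in), and the impact function f and boundaries g are hypothetical choices:

```python
import math

def expected_proceeds(f, g, delta, theta, n=2000):
    """Sketch of E[G_inf(L)] = int_0^theta f(g(l)) E[exp(-delta * tau_l)] dl,
    using the Brownian stand-in E[exp(-delta * tau_l)] = exp(-c*(l + g(l) - g(0)))
    with c = sqrt(2*delta); midpoint-rule quadrature with n panels."""
    c = math.sqrt(2.0 * delta)
    h = theta / n
    total = 0.0
    for i in range(n):
        l = (i + 0.5) * h
        laplace = math.exp(-c * (l + g(l) - g(0.0)))   # discount of the l-th sale
        total += f(g(l)) * laplace * h
    return total
```

Comparing the value of this functional across candidate boundaries g is a crude numerical analogue of the calculus-of-variations step in [BBF18].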

Tightness and convergence
To show convergence of (τ^ε_ℓ)_ε, we will prove that the pairs of càdlàg processes (X^ε, L^ε) form a tight sequence as ε → 0. Applying weak convergence theory for SDEs by Kurtz and Protter [KP96], we show that any limit point (for ε → 0) satisfies (2.1) and (2.2). Uniqueness in law for solutions of (2.1)–(2.2) will then allow us to conclude Theorem 3.2.
Let (ε_n)_{n∈N} be a sequence with ε_n → 0 and consider the sequence (X^{ε_n}, L^{ε_n})_n. To show tightness, we will apply the following criterion due to Aldous.

Proposition 4.1 (Aldous' criterion). A sequence (Y^n)_n of càdlàg processes with values in E is tight if:
(a) The sequences (J_T(Y^n))_n and (Y^n_0)_n are tight (in R, resp. E) for any T ∈ (0, ∞), with J_T(Y^n) := sup_{0<t≤T} |Y^n_t − Y^n_{t−}| denoting the largest jump until time T.
(b) For any T ∈ (0, ∞) and ε_0, η > 0 there exist δ_0 > 0 and n_0 ∈ N such that for all n ≥ n_0, all (discrete) Y^n-stopping times τ̂ ≤ T and all δ ∈ (0, δ_0] we have P[|Y^n_{τ̂+δ} − Y^n_{τ̂}| ≥ ε_0] ≤ η.

To get tightness one needs to control both the jump size and, regarding (L^{ε_n})_n, the frequency of jumps. As we are considering processes with jumps of size ±ε_n → 0, only the latter is not yet clear. To this end, the next lemma provides a technical bound on X^{ε_n} and L^{ε_n}, while a second lemma bounds the probability that X^{ε_n} (respectively L^{ε_n}) performs N_n jumps within a time interval of fixed length.

Lemma 4.2. For every n ∈ N we have X^{ε_n} ≤ Y on [0, ∞) a.s., where Y denotes the continuous (b, σ)-diffusion driven by the same Brownian motion and started at g(0).

Proof. Consider a continuous (b, σ)-diffusion Y that starts at time t = 0 at g(0). For n ∈ N and k = 0, 1, 2, …, let ℓ(n, k) := kε_n. By induction over k, using comparison for diffusion SDEs, cf. [KS91, Theorem 5.2.18], one obtains that (a.s.) X^{ε_n}_t ≤ Y_t for t ∈ [0, τ^{ε_n}_{ℓ(n,k)}) for all k ≥ 1, and hence X^{ε_n} ≤ Y on [0, ∞) (a.s.), because lim_{k→∞} τ^{ε_n}_{ℓ(n,k)} = ∞ for any n by Lemma 3.1.

Lemma 4.3. Fix T ∈ (0, ∞), ε_0, η > 0, and set N_n := ⌈ε_0/ε_n⌉. Then there exist δ > 0 and n_0 ∈ N such that for every bounded stopping time τ̂ ≤ T we have P[J^{ε_n}_{τ̂,δ} ≥ N_n] ≤ η for all n ≥ n_0, where J^{ε_n}_{τ̂,δ} := inf{k | L^{ε_n}_{τ̂} + kε_n ≥ L^{ε_n}_{τ̂+δ}} is the number of jumps of X^{ε_n}, respectively L^{ε_n}, in time ]]τ̂, τ̂ + δ]].
Proof. We will first find an estimate for the jump count probability for arbitrary but fixed δ > 0, n ∈ N, N_n ∈ N and τ̂ ≤ T. Only in part 2) of the proof will we consider (N_n)_{n∈N} as stated, to study the limit n → ∞. More precisely, we will show in part 1) that, given F_τ̂, for every λ > 0 there exists k_{n,λ} ∈ {0, 1, …, N_n − 1} such that for x_n := g(L^{ε_n}_{τ̂} + ε_n k_{n,λ}),

P[J^{ε_n}_{τ̂,δ} ≥ N_n | F_τ̂] ≤ e^{λδ} ( Φ_{λ,−}(x_n − ε_n) / Φ_{λ,−}(x_n) )^{N_n − 1}.   (4.1)

1) In this part, fix arbitrary δ > 0, n ∈ N, N_n ∈ N and τ̂ ≤ T. We enumerate the jumps and estimate the sum of excursion lengths by δ. Let ℓ_k := L^{ε_n}_{τ̂} + kε_n be the (discrete) local time at the k-th jump after time τ̂. If X^{ε_n} has at least N_n jumps in the interval ]]τ̂, τ̂ + δ]], it performs at least N_n − 1 complete excursions (cf. (3.4)), so that, noting that τ^{ε_n}_{L^{ε_n}_t − ε_n} ≤ t < τ^{ε_n}_{L^{ε_n}_t} (for all t ≥ 0) and ℓ_{N_n−1} + ε_n ≤ L^{ε_n}_{τ̂+δ}, we have

P[J^{ε_n}_{τ̂,δ} ≥ N_n | F_τ̂] ≤ P[ Σ_{k=1}^{N_n−1} H_k ≤ δ | F_τ̂ ],

with equality in distribution conditionally on F_τ̂, for H_k being conditionally independent and distributed as H_{g(ℓ_{k−1})−ε_n → g(ℓ_k)}. Clearly, ℓ_k is F_τ̂-measurable. By the Laplace transform (3.5) of H_k and the Markov inequality, we get for λ > 0

P[ Σ_{k=1}^{N_n−1} H_k ≤ δ | F_τ̂ ] ≤ e^{λδ} ∏_{k=1}^{N_n−1} E[e^{−λH_k} | F_τ̂] ≤ e^{λδ} ( Φ_{λ,−}(x_n − ε_n) / Φ_{λ,−}(x_n) )^{N_n−1},

where x_n := g(ℓ_k) for the index k = k_{n,λ} attaining the maximum.
2) For given δ > 0 and τ̂ ≤ T, let us now consider the sequence N_n = ⌈ε_0/ε_n⌉, n ∈ N. To investigate the limit n → ∞, first observe that by Taylor expansion

Φ_{λ,−}(x − ε_n) / Φ_{λ,−}(x) = 1 − ε_n Φ′_{λ,−}(x)/Φ_{λ,−}(x) + ε_n r(x, ε_n),

where r(·, ε_n) → 0 uniformly on compacts for ε_n → 0. Since τ̂ + δ ≤ T + δ is bounded, Lemma 4.2 yields a constant M ∈ R such that P[∃n : x_n > M] ≤ η/2 for the x_n from above. On the event {∀n : x_n ∈ I} with compact I := [g(0), M], we have uniform convergence of r(x_n, ε_n) and thereby obtain a lim sup bound (in n) for the right-hand side of (4.1). By compactness of I and Dini's theorem there exists λ = λ_{ε_0,η,M} such that for δ := 1/λ the right-hand side of (4.1) is eventually at most η/2 on the event {x_n ≤ M for all n}. Together with equation (4.1) and P[∃n : x_n > M] ≤ η/2, this completes the proof.
Next we show boundedness of (X^{ε_n})_n, needed in Lemma 4.6 to prove tightness.

Lemma 4.5. For all T ∈ (0, ∞) and η > 0 there exists M ∈ (0, ∞) such that P[sup_{t∈[0,T]} |X^{ε_n}_t| > M] ≤ η for all n ∈ N.

Proof. By Lemma 4.2, for every n ∈ N the process X^{ε_n} on [0, T] is bounded from above by a constant M with probability at least 1 − η/2. It remains to show that it is also bounded from below with high probability. To this end, we construct a process Y that is a lower bound for all X^{ε_n} and then argue for Y. For ε̄ := sup_n ε_n, consider a (b, σ)-diffusion Y which is discretely reflected by jumps of size −ε̄ at a constant boundary c := g(0) − ε̄, with Y_0 = y := g(0) − 2ε̄. Such a Y is a special case of (3.1)–(3.2) for a constant boundary function, and between its jumps it is a continuous (b, σ)-diffusion starting in y. Now, for fixed n and ε := ε_n, note that X^ε_{τ^ε_{mε}} = g((m−1)ε) − ε ≥ c ≥ Y_{τ^ε_{mε}} by monotonicity of g. As τ^ε_{mε} → ∞ for m → ∞ by Lemma 3.1, induction over the inverse (discrete) local times τ^ε_{mε}, m ∈ N, together with comparison [KS91, Thm. 5.2.18], yields X^ε ≥ Y between jump times; since X^ε_0 ≥ Y_0, the induction start holds. As τ^Y_k → ∞ for k → ∞ by Lemma 3.1, we get X^{ε_n} ≥ Y on [0, ∞) for all n. So it suffices to show P[inf_{t∈[0,T]} Y_t < −M] < η/2 for some M, which directly follows from the càdlàg property of Y.
Lemma 4.6 (Tightness of the reflected diffusion approximations). The sequence (X εn ) n of càdlàg processes from (3.1) and (3.2) satisfies Aldous' criterion and thus is tight.
Proof. Condition (a) of Proposition 4.1 holds. To verify part (b), let η > 0, T ∈ (0, ∞), and let τ̂ ≤ T be a stopping time. By Lemma 4.5, |X^{ε_n}_{τ̂}| is, with probability at least 1 − η/4, bounded by some constant M (not depending on n and τ̂).

1) Let us first consider the case of a downward displacement, X^{ε_n}_{τ̂+δ} ≤ X^{ε_n}_{τ̂} − ε_0, whose probability we bound uniformly for all n large enough. We estimate it in (4.4) using the probability of a down-crossing, in time δ, of intervals [x − ε_0, x − 2ε̄] by a continuous diffusion. Covering the range of starting points x by finitely many intervals [y_k, y_{k+1}] in (4.5) then allows us to choose δ > 0 sufficiently small.
To this end, choose ε̄ ≤ ε_0/4 and n large enough such that ε_n ≤ ε̄, and let (Y^ξ_t)_{t≥0} be the (b, σ)-diffusion w.r.t. the Brownian motion (W_{τ̂+t} − W_{τ̂})_{t≥0} with Y^ξ_0 = ξ − 2ε̄, which is discretely reflected by jumps of size −ε̄ at a constant boundary at level ξ − ε̄. Global existence and uniqueness of (Y^ξ, K^ξ) follow as in the proof of Lemma 3.1. By comparison arguments and induction as in the proof of Lemma 4.5, one verifies Y^ξ_t ≤ X^{ε_n}_{τ̂+t} for t ∈ [0, ∞); indeed, [KS91, Theorem 5.2.18] applies by induction between the jump times τ_k of the pair (Y^ξ_·, X^{ε_n}_{τ̂+·}). Using Y^ξ_δ ≤ X^{ε_n}_{τ̂+δ} and the strong Markov property of Y^ξ w.r.t. (F_{τ̂+t})_{t≥0}, we obtain the bound (4.3). By construction Y^ξ depends on n and τ̂ (through ξ), while the right-hand side of (4.3) does not. Thus one only needs to bound the probability of an (ε_0 − 2ε̄)-displacement of diffusions Y^x with starting points x − 2ε̄ from a compact set, which are reflected (by (−ε̄)-jumps) at constant boundaries x − ε̄. By the arguments in the proof of Lemma 4.3 we obtain the estimate (4.5), since for the event under consideration the process Y^x would have to move at least once (within at most N occasions) continuously from x − 2ε̄ to x − ε_0. Let d := (ε_0 − 2ε̄)/2 ≥ ε_0/4 > 0 and K := ⌈2M/d⌉, with points y_k, k = 0, …, K, partitioning [−M, M] into intervals of length at most d. For a sufficiently small δ = δ_1 ∈ (0, δ_0] the right-hand side of (4.5) can be made smaller than η/4. The above holds for all n such that ε_n ≤ ε̄, meaning that there is some n_0 such that it holds for all n ≥ n_0. Note that δ_1 only depends on T (via M and K) and on n_0, but not on n. Hence, for all δ ∈ (0, δ_1], all n ≥ n_0 and all τ̂ ≤ T we obtain (4.6).

2) For the alternative second case X^{ε_n}_{τ̂+δ} ≥ X^{ε_n}_{τ̂} + ε_0, an analogous comparison from above yields the estimate (4.7); as in (4.5) we find a δ_2 > 0 such that for all δ ∈ (0, δ_2] the right side of (4.7) is bounded by η/4. Hence P[X^{ε_n}_{τ̂+δ} ≥ X^{ε_n}_{τ̂} + ε_0] ≤ η/2, so with (4.6), Proposition 4.1 applies.
Now, to prove joint tightness of (X εn , L εn ) n , we can utilize the fact that both processes satisfy Aldous' criterion and that their jump times and jump magnitudes are identical.
Proof. In view of Proposition 4.1, choose the space E := R² equipped with the Euclidean norm |·| and let Y^n := (X^{ε_n}, L^{ε_n}) ∈ D([0, ∞), E). Then Y^n_0 = (−ε_n, ε_n) and J_T(Y^n) = √2 ε_n form tight sequences in E and R, respectively. Furthermore, X^{ε_n} and L^{ε_n} jump at the same times with jumps of the same absolute size, so condition (b) of Proposition 4.1 for Y^n follows from the corresponding estimates for the coordinate processes. Hence Y^n also satisfies Aldous's criterion and therefore is tight.
Tightness only implies weak convergence of a subsequence. It remains to show (in Lemma 4.9) that every limit point satisfies (2.1) and (2.2), and that uniqueness in law holds. The latter will follow from pathwise uniqueness results for SDEs with reflection, while for the former we apply results from [KP96] on weak convergence of SDEs. For that purpose, note that the approximating local times form a good sequence of semimartingales (cf. [KP96, Definition 7.3]), as shown in the following lemma.

Lemma 4.8. The sequence (L^{ε_n})_n is of uniformly controlled variation and thus good.
Proof. Let δ := sup_n ε_n. Then all processes L^{ε_n} have jumps of size at most δ < ∞. Fix some α > 0. By tightness, there exists some C ∈ R such that P[L^{ε_n}_α > C] ≤ 1/α. So the stopping time τ_{n,α} := inf{t ≥ 0 | L^{ε_n}_t > C} satisfies P[τ_{n,α} ≤ α] = P[L^{ε_n}_α > C] ≤ 1/α. Moreover, by monotonicity of L^{ε_n}, the total variation of the stopped process satisfies E[∫_0^{t∧τ_{n,α}} |dL^{ε_n}_s|] = E[L^{ε_n}_{t∧τ_{n,α}}] ≤ C + δ for all t ≤ α, which gives the condition of uniformly controlled variation.

We have now gathered all necessary results to prove convergence of our approximating diffusion and local time to the continuous counterpart.
Proof. By Prokhorov's theorem, tightness of (X^{ε_n}, L^{ε_n}, W)_n implies weak convergence of a subsequence to some limit point, (X^{ε_{n_k}}, L^{ε_{n_k}}, W)_k ⇒ (X̄, L̄, W̄) ∈ D([0, ∞), R³). Continuity of (X̄, L̄) is clear, since ε_n → 0 is the maximum jump size. First we prove that (X̄, L̄) satisfies the asserted SDEs; afterwards, we prove uniqueness of the limit point. To ease notation, let w.l.o.g. the subsequence (n_k) be (n).
By [KP96, Theorem 8.1] we get that (X̄, L̄) satisfies (2.1) for the semimartingale W̄. That W̄ is a Brownian motion follows from standard arguments, cf. [NÖ10, proof of Theorem 1.9]. As D([0, ∞), R³) is separable, we find by an application of the Skorokhod representation theorem that L̄ is non-decreasing and X̄_t ≤ g(L̄_t) for all t ≥ 0, P-a.s., because these properties already hold for (X^{ε_n}, L^{ε_n}).
To prove that L̄ grows only at times t with X̄_t = g(L̄_t), we approximate the indicator function by continuous functions (parametrized by δ > 0) and pass to the limit.

Proof (of Corollary 4.11). Convergence L^{ε_n} ⇒ L implies L^{ε_n}_t ⇒ L_t at all continuity points of L, i.e. at all points, hence P[τ^{ε_n}_ℓ ≤ t] = P[L^{ε_n}_t ≥ ℓ] → P[L_t ≥ ℓ] = P[τ_ℓ ≤ t].
This completes the proof of Theorem 3.2.