On the Optimal Stopping of a One-dimensional Diffusion

We consider a one-dimensional diffusion which solves a stochastic differential equation with Borel-measurable coefficients in an open interval. We allow for the endpoints to be inaccessible or absorbing. Given a Borel-measurable function $r$ that is uniformly bounded away from 0, we establish a new analytic representation of the $r$-potential of a continuous additive functional of the diffusion. We also characterise the value function of an optimal stopping problem with general reward function as the unique solution of a variational inequality (in the sense of distributions) with appropriate growth or boundary conditions. Furthermore, we establish several other characterisations of the solution to the optimal stopping problem, including a generalisation of the so-called "principle of smooth fit".

in the interior int I = ]α, β[ of a given interval I ⊆ [−∞, ∞], where b, σ : int I → R are Borel-measurable functions and W is a standard one-dimensional Brownian motion. We allow for the endpoints α and β to be inaccessible or absorbing. Given a Borel-measurable function $r : I \to \mathbb{R}_+$ that is uniformly bounded away from 0, we establish a new analytic representation of the r(·)-potential of a continuous additive functional of X. Furthermore, we derive a complete characterisation of differences of two convex functions in terms of appropriate r(·)-potentials, and we show that a function $F : I \to \mathbb{R}_+$ is r(·)-excessive if and only if it is the difference of two convex functions and $-\bigl(\tfrac{1}{2}\sigma^2 F'' + bF' - rF\bigr)$ is a positive measure. We use these results to study the optimal stopping problem that aims at maximising the performance index (2) over all stopping times τ, where $f : I \to \mathbb{R}_+$ is a Borel-measurable function that may be unbounded. We derive a simple necessary and sufficient condition for the value function v of this problem to be real-valued. In the presence of this condition, we show that v is the difference of two convex functions, and we prove that it satisfies the variational inequality (3) in the sense of distributions, where $\overline{f}$ identifies with the upper semicontinuous envelope of f in the interior int I of I. Conversely, we derive a simple necessary and sufficient condition for a solution to (3) to identify with the value function v. Furthermore, we establish several other characterisations of the solution to the optimal stopping problem, including a generalisation of the so-called "principle of smooth fit". In our analysis, we also make a construction that is concerned with pasting weak solutions to (1) together at appropriate hitting times, which is an issue of fundamental importance to dynamic programming.

Introduction
We consider the one-dimensional diffusion X that satisfies the SDE (1) in the interior int I = ]α, β[ of a given interval I ⊆ [−∞, ∞]. We assume that b, σ : int I → R are Borel-measurable functions satisfying appropriate local integrability and non-degeneracy conditions ensuring that (1) has a weak solution that is unique in the sense of probability law up to a possible explosion time at which X hits the boundary {α, β} of I (see Assumption 1 in Section 2). If the boundary point α (resp., β) is inaccessible, then the interval I is open from the left (resp., open from the right), while, if α (resp., β) is not inaccessible, then it is absorbing and the interval I is closed from the left (resp., closed from the right).
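Although the analysis in this paper is purely analytic, it can help intuition to simulate the diffusion in a concrete special case. The sketch below (our illustration, not part of the paper) samples paths of the SDE dX_t = b(X_t) dt + σ(X_t) dW_t for the hypothetical choice b(x) = b0·x and σ(x) = s0·x (geometric Brownian motion), for which int I = ]0, ∞[ and both endpoints are inaccessible; the exact solution is used instead of an Euler scheme so that the simulated paths remain in the open interval.

```python
import numpy as np

# Illustrative sketch (not from the paper): sample paths of
# dX_t = b(X_t) dt + sigma(X_t) dW_t for the hypothetical choice
# b(x) = b0*x, sigma(x) = s0*x (geometric Brownian motion), for which
# int I = ]0, oo[ and both endpoints are inaccessible.  We use the exact
# solution X_t = x exp((b0 - s0^2/2) t + s0 W_t) rather than an Euler
# scheme, so the simulated paths never leave the open interval.

def gbm_paths(x, b0, s0, T, n_steps, n_paths, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
    W = np.cumsum(dW, axis=1)                      # Brownian motion on the grid
    t = dt * np.arange(1, n_steps + 1)
    return x * np.exp((b0 - 0.5 * s0**2) * t + s0 * W)

paths = gbm_paths(x=1.0, b0=0.02, s0=0.3, T=1.0, n_steps=500, n_paths=1000)
assert paths.shape == (1000, 500)
assert (paths > 0).all()    # paths stay in int I = ]0, oo[
```

The parameter values b0, s0 are arbitrary choices for illustration only.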
In the presence of Assumption 1, a weak solution to (1) can be obtained by first time-changing a standard one-dimensional Brownian motion and then making an appropriate state space transformation. This construction can be used to prove all of the results that we obtain by first establishing them under the assumption that the diffusion X identifies with a standard one-dimensional Brownian motion. However, such an approach would hardly simplify the formalism because the data b (resp., σ) appear in all of the analysis exclusively (resp., mostly) through the operators L, $L_{ac}$ defined by (36)-(37) below. Furthermore, deriving the general results, which are important because many applications assume specific functional forms for the data b and σ, by means of this approach would require several time changes and state space transformations, which would lengthen the paper significantly.
Given a point z ∈ int I, we denote by $L^z$ the right-sided local time process of X at level z (see Revuz and Yor [32, Section VI.1] for the precise definition of $L^z$ and its properties). Also, we denote by B(J) the Borel σ-algebra on any given interval J ⊆ [−∞, ∞]. With each signed Radon measure µ on (int I, B(int I)) such that $\sigma^{-2}$ is locally integrable with respect to |µ|, we associate the continuous additive functional $A^\mu$ defined by (4), where $T_\alpha$ (resp., $T_\beta$) is the first hitting time of α (resp., β). It is worth noting that (4) provides a one-to-one correspondence between the continuous additive functionals of the Markov process X and the signed Radon measures on (int I, B(int I)) (see Theorem X.2.9, Corollary X.2.10 and the comments on Section 2 at the end of Chapter X in Revuz and Yor [32, Section X.2]). We also consider a discounting rate function $r : I \to \mathbb{R}_+$, which we assume to be Borel-measurable, uniformly bounded away from 0 and such that a suitable local integrability condition holds (see Assumption 2 in Section 2), and we define $\Lambda_t \equiv \Lambda_t(X) = \int_0^t r(X_s) \, ds$.
Given a signed Radon measure µ on (int I, B(int I)), we consider the r(·)-potential of the continuous additive functional $A^\mu$, which is defined by (6). We recall that a function F : int I → R is the difference of two convex functions if and only if its left-hand side derivative $F'_-$ exists and its second distributional derivative is a measure, and we define the measure LF by

$$LF(dx) = \tfrac{1}{2} \sigma^2(x) \, F''(dx) + b(x) F'_-(x) \, dx - r(x) F(x) \, dx .$$

In the presence of a general integrability condition ensuring that the potential $R_\mu$ is well-defined, we show that it is the difference of two convex functions, that the measures $LR_\mu$ and −µ are equal, and that $R_\mu$ admits the analytic representation (7), where C > 0 is an appropriate constant, p : int I → R is the scale function of X, and ϕ, ψ : int I → ]0, ∞[ are $C^1$ functions, with derivatives that are absolutely continuous with respect to the Lebesgue measure, which span the solution space of the ODE

$$\tfrac{1}{2} \sigma^2(x) g''(x) + b(x) g'(x) - r(x) g(x) = 0$$

and are such that ϕ (resp., ψ) is decreasing (resp., increasing) (see Theorem 6). If the signed measure $\mu_h$ is absolutely continuous with respect to the Lebesgue measure with Radon-Nikodym derivative given by a function h, then the potential $R_{\mu_h}$ admits the expressions in (8) (see Corollary 8 for this and other related results). Conversely, we show that, under a general growth condition, a difference of two convex functions F : int I → R is such that (a) both limits $\lim_{y \downarrow \alpha} F(y)/\varphi(y)$ and $\lim_{y \uparrow \beta} F(y)/\psi(y)$ exist, (b) F admits the characterisation (9), and (c) an appropriate form of Dynkin's formula holds true (see Theorem 7). With a view to optimal stopping, we use these results to show that a function $F : I \to \mathbb{R}_+$ is r(·)-excessive if and only if it is the difference of two convex functions and −LF is a positive measure (see Theorem 9 for the precise result). If r is constant, then the general theory of Markov processes implies the existence of a transition kernel $u_r$ such that $R_\mu(x) = \int_{]\alpha,\beta[} u_r(x,s) \, \mu(ds)$ (see Meyer [27] and Revuz [31]). If X is a standard Brownian motion, then this kernel admits a well-known explicit expression (see Revuz and Yor [32, Theorem X.2.8]).
The general expression for this kernel provided by (7) is one of the contributions of this paper. On the other hand, the identity in (8) is well-known and can be found in several references (e.g., see Borodin and Salminen [8, II.4.24]). Also, Johnson and Zervos [20] prove that the potential given by (6) admits the analytic expression (7) and show that the measures $LR_\mu$ and −µ are equal when both of the endpoints α and β are assumed to be inaccessible. The representation of differences of two convex functions given by (9) is also new. Such a result is important for the solution of one-dimensional infinite-time-horizon stochastic control as well as optimal stopping problems using dynamic programming. Indeed, the analysis of several explicitly solvable problems involves such a representation among its assumptions. For constant r, Salminen [34] considered more general one-dimensional linear diffusions than the one given by (1) and used Martin boundary theory to show that every r-excessive function admits a representation that is similar to, but much less straightforward than, the one in (9). Since a function on an open interval is the difference of two convex functions if and only if it is the difference of two excessive functions (see Çinlar, Jacod, Protter and Sharpe [11]), the representation derived by Salminen [34] can be extended to differences of two convex functions. However, it is not straightforward to derive such an extension of the representation in Salminen [34] from (9) or vice versa when the underlying diffusion satisfies (1) and r is constant.
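For a standard Brownian motion and a constant discounting rate r > 0, the kernel $u_r$ is classical: $u_r(x, y) = e^{-\sqrt{2r}|x-y|}/\sqrt{2r}$, and the potential of an absolutely continuous measure with density h then satisfies $-\tfrac{1}{2}R'' + rR = h$. The following numerical sketch (ours, with a Gaussian test function as an assumed choice) checks this relationship by quadrature: we pick a smooth candidate potential R, compute h by hand, and verify that integrating h against the kernel recovers R.

```python
import numpy as np

# Sanity-check sketch (not part of the paper's proofs): for standard
# Brownian motion and constant r > 0, the kernel has the classical form
#     u_r(x, y) = exp(-sqrt(2 r)|x - y|) / sqrt(2 r).
# If R is smooth and decaying and we set h := -R''/2 + r R, then
# R(x) = \int u_r(x, y) h(y) dy should be recovered by quadrature.

r = 0.5
k = np.sqrt(2.0 * r)

def R_exact(x):
    return np.exp(-x**2)

def h(y):
    # h = -R''/2 + r R for R(y) = exp(-y^2), computed by hand:
    # R'' = (4 y^2 - 2) exp(-y^2), so -R''/2 = (1 - 2 y^2) exp(-y^2)
    return (1.0 - 2.0 * y**2 + r) * np.exp(-y**2)

y = np.linspace(-12.0, 12.0, 6001)
dy = y[1] - y[0]
x = np.linspace(-2.0, 2.0, 41)
u = np.exp(-k * np.abs(x[:, None] - y[None, :])) / k
R_quad = (u * h(y)[None, :]).sum(axis=1) * dy   # Riemann sum; tails negligible

assert np.max(np.abs(R_quad - R_exact(x))) < 1e-3
```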
The result that a function F is r(·)-excessive if and only if it is the difference of two convex functions and −LF is a positive measure is perhaps the simplest possible characterisation of excessive functions because it involves only derivative operators. In fact, we show that this result is equivalent to the characterisations of excessive functions derived by Dynkin [15] and Dayanik [12] (see Corollary 10).
We use the results that we have discussed above to analyse the optimal stopping problem that aims at maximising the performance criterion given by (2) over all stopping times τ, assuming that the reward function f is a positive Borel-measurable function that may be unbounded (see Assumption 2 in Section 2). We first prove that the value function v is the difference of two convex functions and satisfies the variational inequality (3) in the sense of distributions, where $\overline{f}$ is defined by (10) (see Definition 1 and Theorem 12.(I)-(II) in Section 6). This result provides simple criteria for deciding which parts of the interval I must be subsets of the so-called waiting region. Indeed, the derived regularity of v implies that all points at which the reward function f is discontinuous, as well as all "minimal" intervals in which f cannot be expressed as the difference of two convex functions (e.g., intervals throughout which f has the regularity of a Brownian sample path), should be parts of the closure of the waiting region. Similarly, the support of the measure $(Lf)^+$ in all intervals in which Lf is well-defined should also be a subset of the closure of the waiting region. We then establish a verification theorem that is the strongest one possible because it involves only the optimal stopping problem's data. In particular, we derive a simple necessary and sufficient condition for a solution w to (3) in the sense of distributions to identify with the problem's value function (see Theorem 13.(I)-(II)).
These results establish a complete characterisation of the value function v in terms of the variational inequality (3). Indeed, they imply that the restriction of the optimal stopping problem's value function v to int I identifies with the unique solution to the variational inequality (3), in the sense of distributions, that satisfies the boundary conditions (29). Also, it is worth stressing the precise nature of these boundary conditions: the limits on their left-hand sides are taken from inside the interior int I of I, and they indeed exist, while the limsups on their right-hand sides are taken from inside I itself, so that, if, e.g., α is absorbing, then we are faced with either of two possible boundary behaviours at α. Furthermore, v admits the characterisation (11) for all x ∈ int I (see Theorem 13.(III)). In fact, this characterisation can be used as a verification theorem as well (see also the discussion further below).
In the generality that we consider, an optimal stopping time might not exist (see Examples 1-4 in Section 8). Moreover, the hitting time $\tau_\star$ of the so-called "stopping region", which is defined by (12), may not be optimal (see Examples 2 and 4). In particular, Example 2 shows that $\tau_\star$ may not be optimal and that an optimal stopping time may not exist at all unless f satisfies appropriate boundary / growth conditions. Also, Example 4 reveals that $\tau_\star$ is not in general optimal if $f \neq \overline{f}$. In Theorem 12.(III), we obtain a simple sequence of ε-optimal stopping times if f is assumed to be upper semicontinuous, and we show that $\tau_\star$ is an optimal stopping time if f satisfies an appropriate growth condition. Building on the general theory, we also consider a number of related results and characterisations. In particular, we obtain a generalisation of the so-called "principle of smooth fit" (see part (III) of Corollaries 15, 16 and 17 in Section 7).
In view of the version of Dynkin's formula (98) in Corollary 8, we can see that, if h is any function such that $R_{\mu_h}$ given by (8) is well-defined, then the identity (13) holds true. Therefore, all of the results on the optimal stopping problem that we consider generalise most trivially to account for the apparently more general optimal stopping problem associated with (13). The various aspects of optimal stopping theory have been developed in several monographs, including Shiryayev [35], Friedman [17, Chapter 16], Krylov [23], Bensoussan and Lions [7], El Karoui [16], Øksendal [28, Chapter 10] and Peskir and Shiryaev [30]. In particular, the solution of optimal stopping problems using classical solutions to variational inequalities has been extensively studied (e.g., see Friedman [17, Chapter 16], Krylov [23] and Bensoussan and Lions [7]). Results in this direction typically make strong regularity assumptions on the problem data (e.g., the diffusion coefficients are assumed to be Lipschitz continuous). To relax such assumptions, Øksendal and Reikvam [29] and Bassan and Ceci [4] have considered viscosity solutions to the variational inequalities associated with the optimal stopping problems that they study. Closer to the spirit of this paper, Lamberton [24] proved that the value function of the finite version of the problem we consider here satisfies its associated variational inequality in the sense of distributions.
Relative to the optimal stopping problem that we consider here when r is constant, Dynkin [14] and Shiryaev [35, Theorem 3.3.1] prove that the value function v identifies with the smallest r-excessive function that majorises the reward function f if f is assumed to be lower semicontinuous. Also, Shiryaev [35, Theorem 3.3.3] proves that the stopping time $\tau_\star$ defined by (12) is optimal if f is assumed to be continuous and bounded, while Salminen [34] establishes the optimality of $\tau_\star$ assuming that the smallest r-excessive majorant of f exists and f is upper semicontinuous. Later, Dayanik and Karatzas [13] and Dayanik [12], the latter of whom also considers random discounting instead of discounting at a constant rate r, addressed the solution of the optimal stopping problem by means of a certain concave characterisation of excessive functions. In particular, they established a generalisation of the so-called "principle of smooth fit" that is similar to, though not the same as, the one we derive here.
There are numerous special cases of the general optimal stopping problem we consider that have been explicitly solved in the literature. Such special cases have been motivated by applications or have been developed as illustrations of various general techniques. In all cases, their analysis relies on some sort of a verification theorem. Existing verification theorems for solutions using dynamic programming and variational inequalities typically make strong assumptions that are either tailor-made or difficult to verify in practice. For instance, Theorem 10.4.1 in Øksendal [28] involves Lipschitz as well as uniform integrability assumptions, while Theorem I.2.4 in Peskir and Shiryaev [30] assumes the existence of an optimal stopping time, for which a sufficient condition is provided by Theorem I.2.7. Alternatively, they assume that the so-called stopping region is a set of a simple specific form (e.g., see Rüschendorf and Urusov [33] or Gapeev and Lerche [18]).
Using martingale and change of measure techniques, Beibel and Lerche [5, 6], Lerche and Urusov [26] and Christensen and Irle [10] developed an approach to determining an optimal stopping strategy at any given point in the interval I. Similar techniques have also been extensively used by Alvarez [1, 2, 3], Lempa [25] and references therein. To fix ideas, we consider the following representative cases that can be associated with any given initial condition x ∈ I. If there exists a point $d_1 > x$ such that (14) holds true, then $v(x) = C_1 \psi(x)$ and the first hitting time of $\{d_1\}$ is optimal. Alternatively, if there exist points κ ∈ ]0, 1[ and $c_2 < x < d_2$ such that (15) holds true, then $v(x) = \kappa C_2 \psi(x) + (1 - \kappa) C_2 \varphi(x)$ and the first hitting time of $\{c_2, d_2\}$ is optimal. On the other hand, if x is a global maximiser of the function f/(Aψ + Bϕ), for some A, B ≥ 0, then x is in the stopping region and v(x) = f(x). It is straightforward to see that the conclusions associated with each of these cases follow immediately from the representation (11) of the value function v (see also Corollary 14 and part (II) of Corollaries 15, 16 and 17). Effectively, this approach, which is summarised by (11), is a verification theorem of a local character. Indeed, its application invariably involves "guessing" the structure of the waiting and the stopping regions. Also, e.g., (14) on its own does not allow for any conclusions for initial conditions $x > d_1$ (see Example 5). It is also worth noting that, if f is $C^1$, then this approach is effectively the same as an application of the so-called "principle of smooth fit": first order conditions at $d_1$ (resp., $c_2$, $d_2$) and (14) (resp., (15)) yield the same equations for $d_1$, $C_1$ (resp., $c_2$, $d_2$, κ, $C_2$) as the ones that the "principle of smooth fit" yields (see also the generalisations in part (III) of Corollaries 15, 16 and 17).
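As a concrete illustration of the first of the representative cases above (our example; the geometric Brownian motion model and all parameter values are assumptions, not taken from the paper), take dX_t = b X_t dt + σ X_t dW_t with constant discounting rate r > b and reward f(x) = (x − K)⁺. Then ψ(x) = xⁿ, where n > 1 is the positive root of ½σ²n(n − 1) + bn − r = 0, and maximising f/ψ yields the threshold d₁ = nK/(n − 1) in closed form, which a grid search reproduces.

```python
import numpy as np

# Worked example (our illustration, not the paper's): perpetual call-type
# reward f(x) = (x - K)^+ on geometric Brownian motion with constant
# data b, sigma and discounting rate r > b.  Here psi(x) = x^n, where
# n > 1 is the positive root of (sigma^2/2) n (n - 1) + b n - r = 0, and
# the candidate threshold maximises f/psi: d1 = n K / (n - 1).

b, sigma, r, K = 0.02, 0.3, 0.05, 1.0
a = 0.5 * sigma**2
n = (-(b - a) + np.sqrt((b - a)**2 + 4.0 * a * r)) / (2.0 * a)
assert n > 1.0                      # guaranteed by r > b

d1_closed = n * K / (n - 1.0)

# Numerical maximisation of x -> f(x)/psi(x) over a fine grid
xs = np.linspace(K + 1e-6, 20.0, 200001)
ratio = (xs - K) / xs**n
d1_grid = xs[np.argmax(ratio)]
assert abs(d1_grid - d1_closed) < 1e-3

# v(x) = C1 psi(x) for x below the threshold, with C1 = f(d1)/psi(d1);
# the smooth-fit condition C1 * psi'(d1) = f'(d1) holds automatically.
C1 = (d1_closed - K) / d1_closed**n
assert abs(C1 * n * d1_closed**(n - 1.0) - 1.0) < 1e-9
```

The final assertion is exactly the first order condition for d₁, illustrating the remark above that, for C¹ rewards, this approach coincides with the "principle of smooth fit".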
In stochastic analysis, a filtration can be viewed as a model for an information flow. Such an interpretation gives rise to the following modelling issue. Consider an observer whose information flow identifies with a filtration $(\mathcal{H}_t)$. At an $(\mathcal{H}_t)$-stopping time τ, the observer gets access to an additional information flow, modelled by a filtration $(\mathcal{G}_t)$, that "switches on" at time τ. In this context, we construct a filtration that aggregates the two information sources available to such an observer (see Theorem 19). Building on this construction, we address the issue of pasting weak solutions to (1), or, more generally, the issue of pasting stopping strategies for the optimal stopping problem that we consider, at an appropriate stopping time (see Theorem 20 and Corollary 21). Such a rather intuitive result is fundamental to dynamic programming and has been assumed by several authors in the literature (e.g., see the proof of Proposition 3.2 in Dayanik and Karatzas [13]).
The paper is organised as follows. In Section 2, we develop the context within which the optimal stopping problem that we study is defined and we list all of the assumptions we make. Section 3 is concerned with a number of preliminary results that are mostly of a technical nature. In Section 4, we derive the representation (7) for r(·)-potentials and the characterisation (9) of differences of two convex functions, as well as a number of related results. In Section 5, we consider analytic characterisations of r(·)-excessive functions, while, in Section 6, we establish our main results on the optimal stopping problem that we consider. In Section 7, we present several ramifications of our general results on optimal stopping, including a generalisation of the "principle of smooth fit". In Section 8, we consider a number of illustrative examples. Finally, we develop the theory concerned with pasting weak solutions to (1) in the Appendix.

The underlying diffusion and the optimal stopping problem
We consider a one-dimensional diffusion with state space an interval I of the form (16), for some endpoints −∞ ≤ α < β ≤ ∞. Following Definition 5.20 in Karatzas and Shreve [21, Chapter 5], a weak solution to the SDE (1) in the interval I is a collection $S_x = (\Omega, \mathcal{F}, \mathcal{F}_t, P_x, W, X)$ such that $(\Omega, \mathcal{F}, \mathcal{F}_t, P_x)$ is a filtered probability space satisfying the usual conditions and supporting a standard one-dimensional $(\mathcal{F}_t)$-Brownian motion W and a continuous $(\mathcal{F}_t)$-adapted I-valued process X. The process X satisfies the integrated form of (1), localised at $T_{\bar\alpha} \wedge T_{\bar\beta}$, for all t ≥ 0 and α < ᾱ < x < β̄ < β, $P_x$-a.s. Here, as well as throughout the paper, we denote by $T_y$ the first hitting time of the set {y}, which is defined by $T_y = \inf \{ t \geq 0 \mid X_t = y \}$, with the usual convention that inf ∅ = ∞. The actual choice of the interval I from among the four possibilities in (16) depends on the choice of the data b and σ through the resulting properties of the explosion time $T_\alpha \wedge T_\beta$ at which the process X hits the boundary {α, β} of the interval I. If the boundary point α (resp., β) is inaccessible, i.e., if $P_x (T_\alpha < \infty) = 0$ (resp., $P_x (T_\beta < \infty) = 0$) for all x ∈ int I, then the interval I is open from the left (resp., open from the right). If α (resp., β) is not inaccessible, then it is absorbing and the interval I is closed from the left (resp., closed from the right); in particular, $X_t = \alpha$ for all $t \geq T_\alpha$ on $\{T_\alpha < \infty\}$ (resp., $X_t = \beta$ for all $t \geq T_\beta$ on $\{T_\beta < \infty\}$). The following assumption ensures that the SDE (1) has a weak solution in I, as described above, which is unique in the sense of probability law (see Theorem 5.15 in Karatzas and Shreve [21, Chapter 5]).
This assumption also implies that, given c ∈ int I fixed, the scale function p, given by

$$p(x) = \int_c^x \exp \Bigl( - \int_c^\xi \frac{2 b(s)}{\sigma^2(s)} \, ds \Bigr) d\xi , \qquad \text{(22)}$$

is well-defined, and that the speed measure m on (int I, B(int I)) is a Radon measure. At this point, it is worth noting that Feller's test for explosions provides necessary and sufficient conditions that determine whether the solution of (1) hits one or the other or both of the boundary points α, β in finite time with positive probability (see Theorem 5.29 in Karatzas and Shreve [21, Chapter 5]). We consider the optimal stopping problem whose value function v is given by the supremum of the performance criterion (2) over all stopping strategies, where the discounting factor Λ is defined by (5) in the introduction, and the set of all stopping strategies $\mathcal{T}_x$ is the collection of all pairs $(S_x, \tau)$ such that $S_x$ is a weak solution to (1), as described above, and τ is an associated $(\mathcal{F}_t)$-stopping time.
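To illustrate the scale function numerically (a sketch; the coefficients b(x) = b0·x, σ(x) = s0·x are our hypothetical choice, for which a closed form is available), the following code evaluates the double integral $p(x) = \int_c^x \exp(-\int_c^\xi 2b(s)/\sigma^2(s)\,ds)\,d\xi$ directly and compares it with the closed form.

```python
import numpy as np

# Sketch: numerical computation of the scale function
#     p(x) = int_c^x exp( - int_c^xi 2 b(s)/sigma^2(s) ds ) d xi
# for the hypothetical choice b(x) = b0 x, sigma(x) = s0 x on ]0, oo[.
# With g = 2 b0 / s0^2 != 1, the closed form is
#     p(x) = c ((x/c)^(1-g) - 1) / (1 - g).

b0, s0, c = 0.02, 0.3, 1.0
g = 2.0 * b0 / s0**2

def p_closed(x):
    return c * ((x / c)**(1.0 - g) - 1.0) / (1.0 - g)

def p_numeric(x, n=200001):
    xi = np.linspace(c, x, n)
    dxi = xi[1] - xi[0]
    integrand = 2.0 * (b0 * xi) / (s0 * xi)**2   # = 2 b(s) / sigma^2(s)
    # inner integral, computed cumulatively by the trapezoid rule
    inner = np.concatenate(
        ([0.0], np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * dxi)))
    outer = np.exp(-inner)
    # outer integral by the trapezoid rule
    return float(np.sum(0.5 * (outer[1:] + outer[:-1]) * dxi))

assert abs(p_numeric(2.0) - p_closed(2.0)) < 1e-5
```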
We make the following assumption, which also implies the identity in (24).
In the presence of Assumptions 1 and 2, there exists a pair of functions ϕ, ψ : I → $\mathbb{R}_+$ that are $C^1$ with absolutely continuous first derivatives and such that ϕ (resp., ψ) is strictly decreasing (resp., increasing), and (26)-(27) hold for every solution $S_x$ to (1). Also, if α is absorbing, then ϕ(α) := $\lim_{x \downarrow \alpha} \varphi(x)$ < ∞ and ψ(α) := $\lim_{x \downarrow \alpha} \psi(x)$ exists in $\mathbb{R}_+$; if β is absorbing, then ϕ(β) := $\lim_{x \uparrow \beta} \varphi(x)$ exists in $\mathbb{R}_+$ and ψ(β) := $\lim_{x \uparrow \beta} \psi(x)$ < ∞; and, if α (resp., β) is inaccessible, then $\lim_{x \downarrow \alpha} \varphi(x) = \infty$ (resp., $\lim_{x \uparrow \beta} \psi(x) = \infty$). An inspection of these facts reveals that, in all cases, the conclusion (28) holds true. The functions ϕ and ψ are classical solutions to the homogeneous ODE

$$\tfrac{1}{2} \sigma^2(x) g''(x) + b(x) g'(x) - r(x) g(x) = 0 ,$$

and they satisfy

$$\varphi(x) \psi'(x) - \varphi'(x) \psi(x) = C p'(x) \qquad \text{(33)}$$

for some constant C > 0, where p is the scale function defined by (22). Furthermore, given any solution $S_x$ to (1), the processes $e^{-\Lambda_t} \varphi(X_t)$ and $e^{-\Lambda_t} \psi(X_t)$ are local martingales.
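For a hypothetical geometric Brownian motion special case (b(x) = b0·x, σ(x) = s0·x with constant r > 0; our choice, not the paper's), ϕ and ψ can be written down explicitly and the facts just listed can be checked numerically: ϕ(x) = x^m and ψ(x) = x^n, with m < 0 < n the roots of ½σ²k(k − 1) + bk − r = 0, solve the homogeneous ODE, and ϕψ' − ϕ'ψ is a constant positive multiple of p'(x) = x^(−2b0/s0²).

```python
import numpy as np

# Sketch for the hypothetical model b(x) = b0 x, sigma(x) = s0 x, r const:
# phi(x) = x^m (decreasing) and psi(x) = x^n (increasing), where m < 0 < n
# solve (s0^2/2) k (k - 1) + b0 k - r = 0.  We check the ODE and the
# Wronskian-type identity phi psi' - phi' psi = (n - m) p'(x).

b0, s0, r = 0.02, 0.3, 0.05
a = 0.5 * s0**2
disc = np.sqrt((b0 - a)**2 + 4.0 * a * r)
m = (-(b0 - a) - disc) / (2.0 * a)
n = (-(b0 - a) + disc) / (2.0 * a)
assert m < 0 < n

xs = np.linspace(0.5, 5.0, 1001)

def ode_residual(k):
    g, g1, g2 = xs**k, k * xs**(k - 1), k * (k - 1) * xs**(k - 2)
    return np.max(np.abs(0.5 * (s0 * xs)**2 * g2 + b0 * xs * g1 - r * g))

assert ode_residual(m) < 1e-10 and ode_residual(n) < 1e-10

phi, psi = xs**m, xs**n
phi1, psi1 = m * xs**(m - 1), n * xs**(n - 1)
wronskian = phi * psi1 - phi1 * psi           # = (n - m) x^(m + n - 1)
ratio = wronskian / xs**(-2.0 * b0 / s0**2)   # p'(x) = x^(-2 b0 / s0^2)
assert np.max(np.abs(ratio - (n - m))) < 1e-9
```

The last assertion uses the fact that m + n − 1 = −2b0/s0² (the sum of the roots of the quadratic), so the ratio is constant, in line with the identity involving the scale function stated above.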
The existence of these functions and the properties that we have listed can be found in several references, including Borodin and Salminen [8].

Preliminary considerations
Throughout this section, we assume that a weak solution $S_x$ to (1) has been associated with each initial condition x ∈ int I. We first need to introduce some notation. To this end, we recall that, if g : int I → R is a function that is the difference of two convex functions, then its left-hand side first derivative $g'_-$ exists and is a function of finite variation, and its second distributional derivative g''(dx) is a measure. We denote by

$$g''(dx) = g''_{ac}(x) \, dx + g''_s(dx) \qquad \text{(35)}$$

the Lebesgue decomposition of the second distributional derivative g''(dx) into the measure $g''_{ac}(x) \, dx$ that is absolutely continuous with respect to the Lebesgue measure and the measure $g''_s(dx)$ that is singular with respect to the Lebesgue measure. Note that the function $g''_{ac}$ identifies with the "classical" sense second derivative of g, which exists Lebesgue-a.e.. In view of these observations and notation, we define the measure Lg on (int I, B(int I)) and the function $L_{ac} g$ : int I → R by

$$Lg(dx) = \tfrac{1}{2} \sigma^2(x) \, g''(dx) + b(x) g'_-(x) \, dx - r(x) g(x) \, dx \qquad \text{(36)}$$

and

$$L_{ac} g(x) = \tfrac{1}{2} \sigma^2(x) g''_{ac}(x) + b(x) g'_-(x) - r(x) g(x) . \qquad \text{(37)}$$

Given a Radon measure µ on (int I, B(int I)) such that $\sigma^{-2}$ is locally integrable with respect to |µ|, we consider the continuous additive functional $A^\mu$ defined by (4) in the introduction. Given any $t < T_\alpha \wedge T_\beta$, $A^\mu_t$ is well-defined and real-valued because $\alpha < \inf_{s \leq t} X_s < \sup_{s \leq t} X_s < \beta$ and the process $L^z$ increases only on the set $\{X_s = z\}$. Also, since $L^z$ is an increasing process, $A^\mu$ (resp., $-A^\mu$) is an increasing process if µ (resp., −µ) is a positive measure. The following result is concerned with various properties of the process $A^\mu$ that we will need.

Lemma 1 Let µ be a Radon measure on (int I, B(int I)) such that $\sigma^{-2}$ is locally integrable with respect to |µ|, consider any increasing sequence of real-valued Borel-measurable functions $(\zeta_n)$ on I such that (38) holds, and denote by $\mu_n$ the measure defined by (39). Then $A^{|\mu|}$ is a continuous increasing process, and (40)-(41) hold true.

Proof. The process $A^{|\mu|}$ is continuous and increasing because this is true for the local time process $L^z$ for all z ∈ I.
Also, (40) can be seen by a simple inspection of the definition (4) of $A^\mu$. To prove (41), we have to show that, given any x ∈ int I, (42) holds true. To this end, we note that (38) and the monotone convergence theorem imply the corresponding convergence of the associated integrals. Also, we use the integration by parts formula to calculate the relevant expressions. In view of these observations and the monotone convergence theorem, the required identities follow. Combining these results with the fact that the positive processes $I^{(n)}$ are increasing, we can see that $I^{(n)}_{T_\alpha \wedge T_\beta} \leq I_{T_\alpha \wedge T_\beta}$ for all n ≥ 1. It follows that $\lim_{n \to \infty} I^{(n)}_{T_\alpha \wedge T_\beta} = I_{T_\alpha \wedge T_\beta}$, which, combined with the monotone convergence theorem, implies (42), and the proof is complete.
We will need the results derived in the following lemma, the proof of which is based on the Itô-Tanaka-Meyer formula.
Lemma 2 If F : int I → R is a function that is the difference of two convex functions, then the following statements are true:
(I) The increasing process $A^{|LF|}$ is real-valued, and (46) holds true.
(II) If F is $C^1$ with first derivative that is absolutely continuous with respect to the Lebesgue measure, i.e., if $LF(dx) = L_{ac} F(x) \, dx$ in the notation of (36)-(37), then (47) holds true.
Proof. In view of the Lebesgue decomposition of the second distributional derivative F''(dx) of F as in (35) and the occupation times formula, the Itô-Tanaka-Meyer formula yields the corresponding expansion of $F(X_t)$. Combining this expression with the definition (37) of $L_{ac}$, we can see that (49) holds. Using the occupation times formula once again and the definitions (36)-(37) of L, $L_{ac}$, we can see that (50) holds. The validity of the Itô-Tanaka-Meyer and occupation times formulae and (49)-(50) imply that the process $A^{LF}$ is well-defined and real-valued. Also, (46) follows from the definition (5) of the process Λ, (49)-(50) and an application of the integration by parts formula. If $LF(dx) = L_{ac} F(x) \, dx$, then the definition of $A^{LF}$ and the occupation times formula imply the corresponding identity, and (47) follows.
The next result is concerned with a form of Dynkin's formula that the functions ϕ, ψ satisfy as well as with a pair of expressions that become useful when explicit solutions to special cases of the general optimal stopping problem are explored (see Section 7).

Lemma 3
The functions ϕ, ψ introduced by (26), (27) satisfy (51) for all stopping times τ and all points ᾱ < x < β̄ in I. Furthermore, (52) and (53) hold true.
Proof. Combining (46) with the fact that Lϕ = 0, we can see that (54) holds, where M is the associated local martingale. In view of (28) and the fact that the positive function ϕ is decreasing, we can see that $\sup_{y \in [\bar\alpha, \bar\beta]} \varphi(y) < \infty$. Therefore, the stopped process $M^{T_{\bar\alpha} \wedge T_{\bar\beta}}$ is a uniformly integrable martingale because it is a uniformly bounded local martingale. It follows that $E_x \bigl[ M_{\tau \wedge T_{\bar\alpha} \wedge T_{\bar\beta}} \bigr] = 0$, and (54) implies the first identity in (51). The second identity in (51) can be established using similar arguments. Finally, (52) and (53) follow immediately once we observe that they are equivalent to a system of equations that holds true thanks to (51) for τ ≡ ∞.
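The proof above rests on the (local) martingale property of $e^{-\Lambda_t} \varphi(X_t)$ and $e^{-\Lambda_t} \psi(X_t)$. In a hypothetical geometric Brownian motion special case (dX = b0 X dt + s0 X dW with constant r; our choice for illustration), where ψ(x) = xⁿ with n the positive root of ½s0²n(n − 1) + b0 n − r = 0, this property can be confirmed both in closed form and by Monte Carlo:

```python
import numpy as np

# Sketch: for the hypothetical model dX = b0 X dt + s0 X dW with constant
# r, psi(x) = x^n makes e^{-r t} psi(X_t) a martingale, since
# E[e^{-r t} X_t^n] = x^n exp(t ((s0^2/2) n (n - 1) + b0 n - r)) = x^n,
# the exponent vanishing by the defining quadratic for n.

b0, s0, r = 0.02, 0.3, 0.05
a = 0.5 * s0**2
n = (-(b0 - a) + np.sqrt((b0 - a)**2 + 4.0 * a * r)) / (2.0 * a)

x, t = 1.5, 1.0
exact = x**n * np.exp(t * (a * n * (n - 1.0) + b0 * n - r))
assert abs(exact - x**n) < 1e-12     # exponent vanishes by construction

# Monte Carlo confirmation with a fixed seed
rng = np.random.default_rng(1)
Z = rng.normal(size=400000)
XT = x * np.exp((b0 - a) * t + s0 * np.sqrt(t) * Z)
mc = np.exp(-r * t) * np.mean(XT**n)
assert abs(mc - x**n) / x**n < 0.01  # agreement within Monte Carlo error
```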
We conclude this section with a necessary and sufficient condition for the value function of our optimal stopping problem to be finite.
Lemma 4 Consider the optimal stopping problem formulated in Section 2, and let $\overline{f}$ be defined by (10) in the introduction. If

$$\limsup_{y \downarrow \alpha} \frac{\overline{f}(y)}{\varphi(y)} < \infty \quad \text{and} \quad \limsup_{y \uparrow \beta} \frac{\overline{f}(y)}{\psi(y)} < \infty , \qquad \text{(55)}$$

then the value function v is real-valued and satisfies

$$\limsup_{y \downarrow \alpha} \frac{v(y)}{\varphi(y)} = \limsup_{y \downarrow \alpha} \frac{\overline{f}(y)}{\varphi(y)} \quad \text{and} \quad \limsup_{y \uparrow \beta} \frac{v(y)}{\psi(y)} = \limsup_{y \uparrow \beta} \frac{\overline{f}(y)}{\psi(y)} . \qquad \text{(56)}$$

If any of the conditions in (55) is not true, then v(x) = ∞ for all x ∈ int I.
Proof. If (55) is true, then we can see that (57) holds for all x, y ∈ I.
In view of (34), the processes $e^{-\Lambda_t} \varphi(X_t)$ and $e^{-\Lambda_t} \psi(X_t)$ are positive supermartingales. It follows that, given any stopping strategy $(S_x, \tau) \in \mathcal{T}_x$, the corresponding performance index is bounded accordingly. To show the first identity in (56), we note that (57) implies a corresponding estimate. Combining this calculation with (31), we obtain an inequality which implies that $\limsup_{y \downarrow \alpha} v(y)/\varphi(y) \leq \limsup_{y \downarrow \alpha} \overline{f}(y)/\varphi(y)$. The reverse inequality follows immediately from the fact that v ≥ f. The second identity in (56) can be established using similar arguments.
If the problem data is such that the first limit in (55) is infinite, then we consider any initial condition x ∈ int I and any sequence $(y_n)$ in I such that $y_n < x$ for all n ≥ 1 and $\lim_{n \to \infty} \overline{f}(y_n)/\varphi(y_n) = \infty$. We can then see that v(x) = ∞, where $S_x$ is any solution to (1). Similarly, we can see that v(x) = ∞ if the second limit in (55) is infinite.

r(·)-potentials and differences of two convex functions

Throughout this section, we assume that a weak solution $S_x$ to (1) has been associated with each initial condition x ∈ int I. Accordingly, whenever we consider a stopping time τ, we refer to a stopping time of the filtration in the solution $S_x$. We first characterise the limiting behaviour at the boundary of I of a difference of two convex functions on int I, and we show that such a function satisfies Dynkin's formula under appropriate assumptions.
Lemma 5 Consider any function F : int I → R that is a difference of two convex functions and is such that (58) holds true. Then both of the limits $\lim_{y \downarrow \alpha} F(y)/\varphi(y)$ and $\lim_{y \uparrow \beta} F(y)/\psi(y)$ exist.
(III) Suppose that F satisfies (60) and (61). If x ∈ int I is an initial condition such that (60) is true, then (62) holds for every stopping time τ; in the last identity there, we adopt the appropriate conventions.
Proof. Throughout the proof, τ denotes any stopping time. Recalling (46) in Lemma 2, we write the corresponding decomposition (63), where M is the associated stochastic integral. We consider any decreasing sequence $(\alpha_n)$ and any increasing sequence $(\beta_n)$ such that $\alpha < \alpha_n < x < \beta_n < \beta$ for all n ≥ 1, $\lim_{n \to \infty} \alpha_n = \alpha$ and $\lim_{n \to \infty} \beta_n = \beta$. Also, we define the associated stopping times, where we adopt the usual convention that inf ∅ = ∞, and we note that the definition and the construction of a weak solution to (1) (see Definition 5.5.20 in Karatzas and Shreve [21]) imply that these stopping times satisfy (66). The function $F'_-$ is locally bounded because it is of finite variation. Therefore, we can use Itô's isometry to see that the stopped process $M^{\tau \wedge \tau_\ell(\alpha_m, \beta_n)}$ is a uniformly integrable martingale. Combining this observation with (63), we can see that (68) holds. In view of (66) and the local boundedness of F, we can pass to the limit using the dominated convergence theorem, the last identity following thanks to (26)-(27).
Proof of (I). If −LF is a positive measure, then $-A^{LF} = A^{-LF} = A^{|LF|}$ is an increasing process. Therefore, we can use (66), (64) and the monotone convergence theorem to calculate the limit as m, n → ∞ of the corresponding expectations. Combining this with assumption (58), the associated inequalities and (68) for τ ≡ ∞, we can see that the claimed limits exist.
Proof of (II). We now fix any initial condition x ∈ int I such that (60) is true, and we assume that the sequence $(\alpha_m)$ has been chosen so that (71) holds. In light of (40) in Lemma 1 and (66), we can see that the dominated convergence theorem implies the corresponding convergence. The continuity of F and (58) imply that there exists a constant $C_1 > 0$ providing the required bound. Also, (34) implies that the processes $e^{-\Lambda_t} \varphi(X_t)$ and $e^{-\Lambda_t} \psi(X_t)$ are positive supermartingales; therefore, the dominated convergence theorem applies once again. In view of these results, we can pass to the limit m → ∞ in (68) to obtain (72)-(74), the second equality following by an application of the dominated convergence theorem. These identities prove that the limit $\lim_{y \downarrow \alpha} F(y)/\varphi(y)$ exists because $(\alpha_m)$ is an arbitrary sequence satisfying (71) and the function F/ϕ is continuous.
Proving that the limit lim y↑β F (y)/ψ(y) exists follows from similar symmetric arguments.

Proof of (III). The event {T α < ∞} has strictly positive probability if and only if α is an absorbing boundary point, in which case, (28) and (61) imply that lim y↓α F (y) = 0. In view of this observation and a similar one concerning the boundary point β, we can see that the first identity in (62) holds true. Finally, the second identity in (62) follows immediately once we combine (61) with (69) and (72)–(74).
The assumptions of the previous lemma involve the measure LF that we can associate with a function on int I that is the difference of two convex functions. We now address the following inverse problem: given a signed measure µ on (int I, B(int I)), determine a function F on int I such that F is the difference of two convex functions and LF = −µ. Plainly, the solution to this problem is not unique because Lϕ = Lψ = 0. In view of this observation, the solution R µ that we now derive, which identifies with the r(·)-potential of the continuous additive functional A µ , is "minimal" in the sense that it has the limiting behaviour captured by (80).
for all x ∈ I, if and only if for all α < ᾱ < β̄ < β and all x ∈ I. In the presence of these integrability conditions, the function R µ : int I → R defined by where C > 0 is the constant appearing in (33), identifies with the r(·)-potential of A µ , namely, it is the difference of two convex functions, and LR µ (dx) = −µ(dx) and LR |µ| (dx) = −|µ|(dx). Furthermore,

Proof. First, we note that, if the integrability condition (75) is true for some x ∈ I, then it is true for all x ∈ I. If µ is a measure on (int I, B(int I)) satisfying (75), then the function R µ given by (77) is well-defined, it is the difference of two convex functions, and it satisfies the corresponding identity in (79). To see these claims, we consider the left-continuous function H : int I → R given by H(γ) = 0 and where γ is any constant in int I. Given any points ᾱ, β̄ ∈ int I such that ᾱ < γ < β̄, we can use the integration by parts formula to see that for all x ∈ [ᾱ, β̄]. It follows that the function R µ defined by (77) admits the expression for all α < ᾱ ≤ x ≤ β̄ < β. This result, the left-continuity of H and (33) imply that for all α < ᾱ ≤ x ≤ β̄ < β. Furthermore, we can see that the restriction of the measure (R µ ) ′′ in (]ᾱ, β̄[, B(]ᾱ, β̄[)) has Lebesgue decomposition that is given by in the notation of (35). Combining these expressions with (81)–(82) and the definition (22) of the scale function p, we can see that the restrictions of the measures LR µ and −µ in (]ᾱ, β̄[, B(]ᾱ, β̄[)) are equal. It follows that the measures LR µ and −µ on (int I, B(int I)) are equal because ᾱ < β̄ were arbitrary points in int I. Similarly, we can check that the function R |µ| that is defined by (77) with |µ| in place of µ is the difference of two convex functions and satisfies the corresponding identity in (79).
To proceed further, we consider any Radon measure µ on (int I, B(int I)). Given monotone sequences (α n ) and (β n ) as in (64), we define if σ 2 (z) ≥ 1 n and α n ≤ z ≤ β n , σ 2 (z), if σ 2 (z) < 1 n and α n ≤ z ≤ β n , and we consider the sequence of measures (µ n ) that are defined by (39). The functions R |µn| , defined by (77) with |µ n | in place of µ, are real-valued and satisfy Combining this calculation with (31), we can see that R |µn| satisfies the corresponding limits in (80). Since −LR |µn| = |µ n | = |LR µn | is a positive measure, part (I) of Lemma 5 implies that This identity, the fact that LR |µn| = −|µ n | and (40) imply that the function R |µn| that is defined as in (77) satisfies Since the sequence of functions (ζ n ) is monotonically increasing to the identity function, the monotone convergence theorem implies that If (75) is satisfied, then σ −2 is locally integrable with respect to |µ|, namely, the first condition in (76) holds true, thanks to the continuity of the functions ϕ, ψ and p ′ . In this case, (41) in Lemma 1 and (83) imply that because (ζ n ) satisfies (38). Combining this result with (84) and the fact that (75) implies that R |µ| (x) < ∞, we can see that Conversely, combining (84) and (85), we can see that R |µ| (x) < ∞, and (75) follows. If µ satisfies the integrability conditions (75)–(76), then the function R µ given by (77) is well-defined and real-valued. Furthermore, it satisfies (78) thanks to (40), (86) with µ + and µ − in place of |µ|, and the linearity of integrals.
To establish (80), we consider any sequences (α n ), (β n ) as in (64), and we calculate the third identity following from (62) for τ = T αm ∧ T βn . Since R |µ| is a positive function, each of the two limits on the right-hand side of this expression is equal to 0. We can therefore see that the first of these limits implies that which proves that lim y↓α R |µ| (y)/ϕ(y) = 0 because (α m ) was arbitrary. We can show that lim y↑β R |µ| (y)/ψ(y) = 0 using similar arguments. Finally, the function |R µ | satisfies the corresponding limits in (80) because |R µ | ≤ R |µ| .
The result we have just established and Lemma 5 imply the following representation of differences of two convex functions that involves the operator L and the functions ϕ, ψ.

Proof.
In the presence of the assumption that LF satisfies (75).

The measure LF and the potential R −LF play central roles in the characterisation of differences of two convex functions that we have established above. The following result is concerned with the potential R −LF when LF is absolutely continuous with respect to the Lebesgue measure.
Corollary 8 Consider any function h : I → R that is locally integrable with respect to the Lebesgue measure, and let µ h be the measure on (int I, B(int I)) defined by If µ h satisfies the equivalent integrability conditions (75)–(76), which are equivalent to then the function R µ h : int I → R defined by (77) or, equivalently, by admits the probabilistic expression This function, as well as the function R̃ µ h defined by is C 1 with absolutely continuous first derivative and satisfies the ODE The functions R µ h and R̃ µ h satisfy where for every stopping time τ and all initial conditions x ∈ int I, in which expression, R µ h (α) = 0 (resp., R µ h (β) = 0) if α (resp., β) is absorbing, consistently with (96)–(97).
To prove (95), we first note that In view of the definition (5) of Λ, we can see that, if α is absorbing, then otherwise, this expectation is plainly 0. Similarly, we can see that and (95) follows.
5 Analytic characterisations of r(·)-excessive functions

The following is the main result of this section.
Theorem 9 A function F : I → R + is r(·)-excessive, namely, it satisfies for all stopping times τ and all initial conditions x ∈ I, if and only if the following statements are both true: (I) the restriction of F in the interior int I of I is the difference of two convex functions and the associated measure −LF on (int I, B(int I)) is positive; (II) if α (resp., β) is an absorbing boundary point, then F (α) ≤ lim inf y∈int I, y↓α F (y) (resp., F (β) ≤ lim inf y∈int I, y↑β F (y)).
Proof. First, we consider any function F : I → R + with the properties listed in (I)-(II).
The assumption that −LF is a positive measure implies that −A LF = A −LF is an increasing process. Therefore, (88) in Theorem 7 implies that, given any points ᾱ < x < β̄ in I and any stopping time τ such that ᾱ = α and τ = τ ∧ T α (resp., β̄ = β and τ = τ ∧ T β ) if α (resp., β) is absorbing, the second inequality following from the assumption that F satisfies the inequalities in (II). If α (resp., β) is inaccessible, then we can pass to the limit ᾱ ↓ α (resp., β̄ ↑ β) using Fatou's lemma to obtain (99) thanks to the choices of ᾱ and β̄ that we have made. It follows that F is r(·)-excessive.
To establish the reverse implication, we first show that an r(·)-excessive function is lower semicontinuous and its restriction in int I is continuous. Given an initial condition x ∈ int I and a point y ∈ I, we can use (99) to calculate This calculation and the continuity of the functions ϕ, ψ imply that F (x) ≥ lim sup y→x F (y), which proves that F is upper semicontinuous in int I. The same arguments but with points x ∈ I and y ∈ int I and their roles reversed imply that It follows that F (x) ≤ lim inf y∈int I, y→x F (y), and the lower semicontinuity of F in I has been established. In particular, part (II) of the theorem holds true.
To prove that an r(·)-excessive function satisfies (I), we define the function F q by where q > 0 is a constant, and we note that If we consider the change of variables u = qt, then we can see that In view of (102), the continuity properties of the function F and the continuity of the process X, this expression implies that Given its definition in (101), Corollary 8 implies that the function F q is C 1 with absolutely continuous first derivative and that it satisfies the ODE in the interior of I. In view of (102), we can see that This inequality implies that where p is the scale function of the diffusion X, which is defined by (22).
To proceed further, we introduce the antiderivatives A 1 and A 2 of a function g that is locally integrable in I, which are defined by respectively, where c ∈ I is a fixed point that we can take to be the same as the point appearing in the definition (22) of the scale function p. Inequality (104) then implies that the function F q /p ′ − A 1 ((1/p ′ ) ′ F q ) − A 2 ((2rF q )/(σ 2 p ′ )) is concave, which, combined with (103), implies that the function G := F/p ′ − A 1 ((1/p ′ ) ′ F ) − A 2 ((2rF )/(σ 2 p ′ )) is concave. The concavity of G and the equality imply that F/p ′ is absolutely continuous and This expression shows that F ′ has finite variation. Furthermore, taking distributional derivatives, we can see that which proves that F has the properties listed in part (I) thanks to the concavity of G.
Proof. Given a measure µ on (int I, B(int I)), we mean that −µ is a positive measure whenever we write µ(dx) ≤ 0 in the proof below. In view of Theorem 9, the result will follow if we show that either of the functions given by (105), (106) is well-defined, real-valued and increasing if and only if the restriction of F in int I is the difference of two convex functions and LF ≤ 0. To this end, we note that the functions given by (105), (106) are well-defined and real-valued if and only if F ′ − exists and is real-valued, in which case, The function −D − ψ/ϕ (F/ϕ) is increasing if and only if its first distributional derivative is a positive measure, namely, if and only if the second distributional derivative of F is a measure and In view of the definition (22) of the scale function p and the fact that p and C are both strictly positive, we can see that this is true if and only if

6 The solution of the optimal stopping problem

Before addressing the main results of the section, we prove that the value function v is excessive.
Lemma 11 Consider the optimal stopping problem formulated in Section 2 and suppose that its value function is real-valued. The value function v is r(·)-excessive, i.e., for all initial conditions x ∈ I and every stopping strategy where f is given by (10).
Proof. To prove the r(·)-excessivity of v, we first show that v is continuous in int I and lower semicontinuous in I. To this end, we consider any points x, y ∈ int I. Given the stopping strategy (S x , T y ) ∈ T x and any stopping strategy (S y , τ ) ∈ T y , we denote by (Ŝ x ,τ ) a stopping strategy that is as in Corollary 21, so that Since (S y , τ ) is arbitrary, we can use the dominated convergence theorem to see that this inequality implies that which proves that v is upper semicontinuous in int I.
Repeating the same arguments with the roles of the points x, y ∈ int I reversed, we can see that lim inf If both α and β are absorbing, then we can use (26)–(29) to calculate lim inf while, if α is absorbing and β is inaccessible, then lim inf If β is absorbing, then we can see that lim inf x∈int I, x↑β v(x) ≥ v(β) similarly. It follows that v is lower semicontinuous in I.
To show that v satisfies (107), we consider any stopping strategy (S x , τ ) ∈ T x . We assume that X τ 1 {τ <Tα∧T β } takes values in a finite set {a 1 , . . . , a n } ⊂ int I. For each i = 1, . . . , n, we consider an ε-optimal strategy (S ε a i , τ ε i ) ∈ T a i . If we denote by (S ε x , τ ε ) ∈ T x a stopping strategy that is as in Corollary 21, then where the last inequality follows from the fact that f (X Tα∧T β ) = v(X Tα∧T β ) and the ε-optimality of the strategies (S ε a i , τ ε i ). Since ε > 0 is arbitrary, it follows that and (107) follows in this case. Now, we consider any stopping strategy (S x , τ ) ∈ T x , and we define where (a n ) is any sequence that is dense in int I. This sequence of stopping times is such that Therefore, lim n→∞ τ n ∧ T α ∧ T β = τ ∧ T α ∧ T β . Our analysis above has established that (107) holds true for each of the stopping strategies (S x , τ n ) ∈ T x . Combining this observation with Fatou's lemma and the fact that v is lower semicontinuous, we can see that which establishes (107). Finally, we note that the continuity properties of v and the inequality v ≥ f imply that v ≥ f . This observation and the r(·)-excessivity of v imply that where L is defined by (36) and f is defined by (10).
We now prove that the value function v satisfies the variational inequality (109) in the sense of this definition. Also, we establish sufficient conditions for the existence of ε-optimal as well as optimal stopping strategies. It is worth noting that the requirements (118)-(119) are not really needed: the only reason we have adopted them is to simplify the exposition of the proof.
Theorem 12 Consider the optimal stopping problem formulated in Section 2. The following statements are true. (I) If the problem data is such that f (y) = ∞, for some y ∈ I, or lim sup then v(x) = ∞ for all x ∈ I; otherwise, v(x) < ∞ for all x ∈ I.
(II) If the problem data is such that then the value function v satisfies the variational inequality (109) in the sense of Definition 1, and v(α) = f (α) (resp., v(β) = f (β)) if α (resp., β) is absorbing.
(III) Suppose that (113) is true and that f = f . Given an initial condition x ∈ int I, consider any monotone sequences (α n ), (β n ) in I such that if α is absorbing and f (α) = lim sup y↓α f (y), then α n = α for all n ≥ 1, and if β is absorbing and f (β) = lim sup y↑β f (y), then β n = β for all n ≥ 1.
Also, let S x be any weak solution to (1), and define the associated stopping times Then Furthermore, the stopping strategy and, given any stopping strategy (S x , τ ) ∈ T x , if x ∈ int I, lim y∈int I, y↓α v(y), if α is absorbing and x = α, lim y∈int I, y↑β v(y), if β is absorbing and x = β.
We can establish the second identity in (114) similarly.
With each initial condition x ∈ int I, we associate any sequence of stopping strategies (see (108) in Lemma 11). If α is absorbing and α < α n (see (118)), then we may assume without loss of generality that τ ℓ < T α , P ℓ x -a.s. To see this claim, suppose that α is absorbing and α < α n , which is the case when f (α) < lim sup y↓α f (y). Since In view of this observation and the dominated convergence theorem, we can see that If P ℓ x (τ ℓ = T α ) > 0, then the right-hand side of this identity is strictly positive, and there exists k ≥ 1 such that Given such a k, we can see that and the claim follows. Similarly, we may assume that τ ℓ < T β , P ℓ x -a.s., if β is absorbing and β n < β.
In light of the above observations and (118)–(119), we can use the monotone convergence theorem to calculate which implies that, for all ℓ ≥ 1, there exists n ℓ such that It follows that, if we define then the stopping strategy (S ℓ In view of (127) and (131), we can see that The first term on the right-hand side of this identity is clearly positive, while, the second one is positive because −Lv is a positive measure and −A Lv is an increasing process (see also (40) in Lemma 1). This observation and (132)–(133) imply that

Proof of (II). To prove that v satisfies the variational inequality (109) in the sense of Definition 1, and thus complete the proof of part (II) of the theorem, we have to show that (112) holds true because v ≥ f and −Lv is a positive measure. To this end, we consider any interval [ᾱ, β̄] ⊆ {x ∈ int I | v(x) > f (x)} and we note that there exists ξ > 0 such that because the restrictions of v − f and v in int I are lower semicontinuous and continuous, respectively. In view of this observation, we can see that These inequalities and (134) imply that The first of these limits implies that Now, (127) and (131) In view of (135)–(137), we can pass to the limit as ℓ → ∞ to obtain the second identity following from (52)–(53) in Lemma 3. Since this identity is true for all x ∈ ]ᾱ, β̄[ and Lϕ = Lψ = 0, it follows that the restriction of the measure Lv to ]ᾱ, β̄[ vanishes, which establishes (112).

Proof of (III). We now assume that f = f and we consider the stopping times τ ⋆ and τ ⋆ n that are defined by (120) on any given weak solution S x to (1). In view of (127) and the fact that v satisfies (112), we can see that Combining this result with the identities we obtain (121).
To establish the optimality of (S x , τ ⋆ ) if f = f and (122)–(123) are satisfied, we first note that if α is inaccessible, then Similarly, if β is inaccessible, then In view of (118)–(119) and (123), we can see that, if α (resp., β) is absorbing, then α n = α (resp., β n = β) and In light of these observations and the monotone convergence theorem, we can see that and the optimality of (S x , τ ⋆ ) follows thanks to (121).
It is straightforward to see that the variational inequality (109) does not have a unique solution. In the previous result, we proved that the value function v satisfies (109) as well as the boundary/growth conditions (114). We now establish a converse result, namely a verification theorem, which shows that v is the minimal solution to (109).
Theorem 13 Consider the optimal stopping problem formulated in Section 2 and suppose that (113) holds true. The following statements are true. Proof. A function w : int I → R + that is as in the statement of part (I) of the theorem satisfies all of the requirements of Theorem 7. Therefore, if I is not open and we identify w with its extension on I that is given by w(α) = lim y↓α w(y) (resp., w(β) = lim y↑β w(y)) if α (resp., β) is absorbing, then for every stopping strategy (S x , τ ) ∈ T x , where (α n ), (β n ) are any monotone sequences in I satisfying (116). Combining this identity with the fact that −Lw is a positive measure, which implies that −A Lw is an increasing process, we can see that This inequality and Fatou's lemma imply that which, combined with the inequality w ≥ f , proves that v(x) ≤ w(x).
If the function w satisfies (138) as well, then we choose any monotone sequences (α n ), (β n ) as in (116)-(119) and we note that (128)-(129) hold true with the extension of w on I considered at the beginning of the proof in place of v. If we consider the stopping strategies then we can see that (130) with w in place of v and (141) imply that It follows that v(x) ≥ w(x) thanks to (108) in Lemma 11, which, combined with the inequality v(x) ≤ w(x) that we have established above, implies that v(x) = w(x).
To show part (III) of the theorem, we first note that, given any constants A, B ∈ R, the function Aϕ + Bψ satisfies the variational inequality (109) if and only if Aϕ + Bψ ≥ f . Combining this observation with part (I) of the theorem, we can see that v(x) is less than or equal to the right-hand side of (139). To establish the reverse inequality, we first use (51) in Lemma 3 and (127) with τ ≡ ∞ to obtain for all points ᾱ < x̄ < β̄ in int I and all constants A, B ∈ R. Also, we fix any point x ∈ int I and we consider any monotone sequences (α n ), (β n ) in int I such that α n < x < β n for all n ≥ 1, lim n→∞ α n = α and lim n→∞ β n = β. If we define then we can check that A n ϕ(α n ) + B n ψ(α n ) = v(α n ) and A n ϕ(β n ) + B n ψ(β n ) = v(β n ), and observe that the identity Also, given any y ∈ ]β n , β[, we can see that (143) with ᾱ = α n , x̄ = β n and β̄ = y yields v(y) − A n ϕ(y) − B n ψ(y) E βn e −Λ Ty 1 {Ty<Tα n } = E βn Tα n ∧Ty 0 e −Λu dA Lv u , which implies that A n ϕ(y) + B n ψ(y) ≥ v(y) for all y ∈ ]β n , β[.
Combining these results with (31), we can see that ≥ 0 for all n ≥ 1.
If we consider any sequence (n ℓ ) such that lim ℓ→∞ A n ℓ exists, then the positivity of the constants A n , B n and (145) imply that lim ℓ→∞ B n ℓ also exists and that both limits are positive and finite. In particular, (145) and (146) and v(y) ≤ lim ℓ→∞ A n ℓ ϕ(y) + lim ℓ→∞ B n ℓ ψ(y) for all y ∈ int I \ {x}.
It follows that v(x) is greater than or equal to the right-hand side of (139). The existence of constants Ã, B̃ such that the identity in (140) It follows that Ãϕ(y) + B̃ψ(y) ≥ v(y) ≥ f (y) because −A Lv = A −Lv is a continuous increasing process. We can show that Ãϕ(y) + B̃ψ(y) ≥ f (y) for all y ∈ ]α, c], if ]α, c] ≠ ∅, similarly, and the inequality in (140) has been established.
7 Ramifications including a generalisation of the "principle of smooth fit"

Throughout the section, we assume that (113) is true, so that the value function is real-valued, and that f = f . We can express the so-called waiting region W as a countable union of pairwise disjoint open intervals because it is an open subset of int I. In particular, we write where and we adopt the usual convention that ]c, c[ = ∅ for c ∈ I ∪ {α, β}. Since the measure Lv does not charge the waiting region W, for some constants A ℓ and B ℓ . Our first result in this section is concerned with a characterisation of the value function if the problem data is such that W = int I. Example 1 in Section 8 provides an illustration of this case.
We next study the special case that arises when a portion of the general problem's value function has the features of the value function of a perpetual American call option, which has been extensively studied in the literature.

Corollary 15
Consider the optimal stopping problem formulated in Section 2, and suppose that (113) is true and f = f . If W ℓ = ]α, d ℓ [, for some ℓ ≥ 1 and d ℓ ∈ int I, then and

Proof. The identities in (151) follow immediately from the fact that v(x) is given by (149) for all x ∈ W ℓ = ]α, d ℓ [, (31) and (114). The first two inequalities in (152) are trivial. Given any x ∈ ]α, d ℓ [, the fact that v(x) is given by (149) and part (III) of Theorem 13 imply that A ℓ ϕ(y) + B ℓ ψ(y) ≥ f (y) for all y ∈ int I, and the last inequality in (152) follows.
Using similar symmetric arguments, we can establish the following result that arises in the context of a perpetual American put option.
Corollary 16 Consider the optimal stopping problem formulated in Section 2, and suppose that (113) is true and f = f . If W ℓ = ]c ℓ , β[, for some ℓ ≥ 1 and c ℓ ∈ int I, then and The final result in this section focuses on a special case in which a component of the waiting region W has compact closure in int I, which is a case that can arise in the context of the valuation of a perpetual American straddle option.

Corollary 17
Consider the optimal stopping problem formulated in Section 2, and suppose that (113) is true and f = f . If W ℓ = ]c ℓ , d ℓ [, for some ℓ ≥ 1 and c ℓ , d ℓ ∈ int I, then and Proof. The expressions in (155) follow immediately from the continuity of the value function. The first two inequalities in (156) are a consequence of the definition of the waiting region W, while the last one is an immediate consequence of part (II) of Theorem 13.
Our final result is concerned with a generalisation of the "principle of smooth fit".

Corollary 18
Consider the optimal stopping problem formulated in Section 2, and suppose that (113) is true and f = f . Also, consider any point y ∈ int I such that y /∈ W. If f admits right and left-hand derivatives at y, then f ′ + (y) ≤ v ′ + (y) ≤ v ′ − (y) ≤ f ′ − (y).

Proof. The inequality v ′ + (y) ≤ v ′ − (y) is an immediate consequence of the fact that Lv ≤ 0. The inequalities f ′ + (y) ≤ v ′ + (y) and v ′ − (y) ≤ f ′ − (y) follow from the fact that v − f has a local minimum at y.
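Assembling the two observations in the proof gives the full chain of one-sided derivative inequalities; the following LaTeX restatement also records the classical smooth-fit consequence, which follows immediately from the chain although it is not stated as part of the corollary:

```latex
f'_+(y) \;\le\; v'_+(y) \;\le\; v'_-(y) \;\le\; f'_-(y).
```

In particular, if $f$ is differentiable at $y$, then $f'_+(y) = f'_-(y)$ squeezes all four one-sided derivatives to a common value, so $v$ is differentiable at $y$ with $v'(y) = f'(y)$: the classical principle of smooth fit.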

Examples
We assume that an appropriate weak solution S x to (1) has been associated with each initial condition x ∈ int I in all of the examples that we discuss in this section. The following example shows that an optimal stopping time may not exist if (122) is not satisfied. In this example, the stopping region I \ W is empty.
Example 1 Suppose that I = ]0, ∞[ and X is a geometric Brownian motion, so that for some constants b and σ. Also, suppose that r is a constant. In this case, it is well-known that where m < 0 < n are the solutions to the quadratic equation (1/2)σ 2 k(k − 1) + bk − r = 0. In this context, if the reward function f is given by for some constants κ, λ > 0, then where (α j ) and (β j ) are any sequences in ]0, ∞[ such that α j < x < β j for all j, lim j→∞ α j = 0 and lim j→∞ β j = ∞. In particular, there exists no optimal stopping time.
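The exponents m < 0 < n solve the standard characteristic quadratic (1/2)σ²k(k − 1) + bk − r = 0 of geometric Brownian motion. A minimal numerical sketch of its roots (the parameter values below are illustrative assumptions, not taken from the example):

```python
import math

def gbm_exponents(b, sigma, r):
    """Roots m < 0 < n of (1/2) sigma^2 k (k - 1) + b k - r = 0.

    Rewriting the quadratic as a k^2 + c k - r = 0 with a = sigma^2 / 2
    and c = b - sigma^2 / 2, the product of the roots is -r/a < 0, so
    for every r > 0 one root is negative and the other positive.
    """
    a = 0.5 * sigma ** 2
    c = b - 0.5 * sigma ** 2
    disc = math.sqrt(c * c + 4.0 * a * r)  # discriminant, > 0 for r > 0
    return (-c - disc) / (2.0 * a), (-c + disc) / (2.0 * a)

# Illustrative parameters (assumed, not from the text).
m, n = gbm_exponents(b=0.05, sigma=0.3, r=0.1)
assert m < 0 < n
```

With these roots, x^m and x^n are the decreasing and increasing solutions of the associated second-order ODE, in line with the well-known fact recalled in the example.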
The next example shows that an optimal stopping time may not exist if (122) is not satisfied, while the stopping region I \ W is not empty.

Example 2
In the context of the previous example, suppose that the reward function f is given by In view of straightforward considerations, we can see that v(x) = x n for all x > 0.
In this case, i.e., τ ⋆ is the first hitting time of {1}, and where (β j ) is any sequence in ]x, ∞[ such that lim j→∞ β j = ∞.
The following example shows that an optimal stopping time may not exist if (123) is not satisfied. In this example, the stopping region I \ W is empty.
Example 3 Suppose that I = R + , X is a standard one-dimensional Brownian motion starting from x > 0 and absorbed at 0 and r is a constant. In this case, we can see that and .
If the reward function f is given by In particular, v(x) = lim j→∞ E x e −r(Tα j ∧T β j ) f (X Tα j ∧T β j ) for all x > 0, where (α j ), (β j ) are any sequences in ]0, ∞[ satisfying (160), and there exists no optimal stopping time.
The following example shows that an optimal stopping time may not exist if f = f . In particular, the first hitting time τ ⋆ of the stopping region I \ W may not be optimal.
Example 4 Suppose that X is a standard Brownian motion, namely, I = R and dX t = dW t , and that r = 1/2. In this context, it is straightforward to verify that ϕ(x) = e −x and ψ(x) = e x .
Also, consider the reward function Given an initial condition x < 1 and an associated solution S x to (1), we note that defines an (F t )-stopping time because we have assumed that the filtration in S x satisfies the usual conditions. However, X τ ⋆ = 1, P x -a.s., and E x e −rτ ⋆ f (X τ ⋆ ) = e x−1 < v(x) for all x < 1.
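A quick numerical sanity check of this example's ingredients; the hitting-time identity E_x[e^{−rT_1}] = ψ(x)/ψ(1) for x < 1, which underlies the value e^{x−1}, is the standard one for one-dimensional diffusions (a sketch, not part of the example's argument):

```python
import math

# Example 4 setup: standard Brownian motion, r = 1/2,
# phi(x) = exp(-x), psi(x) = exp(x).
r = 0.5
phi = lambda x: math.exp(-x)
psi = lambda x: math.exp(x)

# Both functions solve (1/2) F'' - r F = 0; verify via central differences.
h = 1e-4
d2 = lambda F, x: (F(x + h) - 2.0 * F(x) + F(x - h)) / h ** 2
for F in (phi, psi):
    for x in (-1.0, 0.0, 2.0):
        assert abs(0.5 * d2(F, x) - r * F(x)) < 1e-5

# For x < 1, E_x[exp(-r T_1)] = psi(x) / psi(1) = exp(x - 1),
# matching the value e^{x-1} attained by the (non-optimal) time tau*.
x = 0.3
assert abs(psi(x) / psi(1.0) - math.exp(x - 1.0)) < 1e-12
```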
In view of these considerations, we can see that there is no optimal stopping time for initial conditions x < 1.
The final example that we consider illustrates that a characterisation such as the one provided by (152) in Corollary 15 has a local rather than global character.

Example 5
In the context of the previous example, we consider the reward function and we note that the calculation implies that the function f /ψ is strictly increasing in ] − ∞, 0[ and strictly decreasing in ]0, ∞[. A first consideration of the associated optimal stopping problem suggests that the value function v could identify with the function u given by In particular, we can check that u(x)/u(y) ≥ min {ϕ(x)/ϕ(y), ψ(x)/ψ(y)} for all x, y ∈ R.
However, the function u is not excessive because where δ 0 is the Dirac probability measure that assigns mass 1 to the set {0}, which implies that Lu([c, d]) > 0 for all 1 ≤ c < d ≤ 2, and suggests that [1, 2] should be a subset of the waiting region W. In this example, the value function v is given by where a l = 1 + √2 + 2 ln(√2 − 1) ≃ 0.651 and a r = 1 + √2 ≃ 2.414.
These values for the boundary points a l , a r arise from the requirements that a l ∈ ]0, 1[, a r > 2 and v should be C 1 at a l and a r (see Corollary 18), which are associated with the system of equations a l = a r + 2 ln(a r − 2) and a_r^4 − 4a_r^3 + 4a_r^2 − 1 ≡ (a_r − 1)^2 (a_r − 1 − √2)(a_r − 1 + √2) = 0.
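The quoted boundary values can be checked numerically against this system of equations (a small sanity check using only the standard library):

```python
import math

# Closed-form values quoted in the example.
a_r = 1.0 + math.sqrt(2.0)                                          # ~ 2.414
a_l = 1.0 + math.sqrt(2.0) + 2.0 * math.log(math.sqrt(2.0) - 1.0)   # ~ 0.651

# First equation of the system: a_l = a_r + 2 ln(a_r - 2).
assert abs(a_l - (a_r + 2.0 * math.log(a_r - 2.0))) < 1e-12

# Second equation: a_r^4 - 4 a_r^3 + 4 a_r^2 - 1 = 0, which factorises as
# (a_r - 1)^2 (a_r - 1 - sqrt(2)) (a_r - 1 + sqrt(2)) = 0.
assert abs(a_r ** 4 - 4.0 * a_r ** 3 + 4.0 * a_r ** 2 - 1.0) < 1e-9

# Decimal approximations stated in the text.
assert abs(a_l - 0.651) < 1e-3
assert abs(a_r - 2.414) < 1e-3
```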
In particular, we can check that the function given by the right-hand side of (161) satisfies all of the requirements of the verification Theorem 13.(II) and therefore identifies with the value function v.

Appendix: pasting weak solutions of SDEs
The next result is concerned with aggregating two filtrations, one of which "switches on" at a stopping time of the other one.
Theorem 19 Consider a measurable space (Ω, F ) and two filtrations (H t ), (G t ) such that H ∞ ∪ G ∞ ⊆ F . Also, suppose that (G t ) is right-continuous and let τ be an (H t )-stopping time. If we define then (F t ) is a filtration such that F τ+t = H τ+t ∨ G t for all t ≥ 0 (163) and F t∧τ = H t∧τ ∨ G 0 for all t ≥ 0. (164)
Proof. First, it is straightforward to verify that H t ⊆ F t and G 0 ⊆ F t for all t ≥ 0.
To prove that (F t ) is indeed a filtration, we consider any times u < t and any event A ∈ F u . Using the definition of F u , we can see that It follows that A ∈ F t . To establish (163), we first show that G t ⊆ F τ+t , which amounts to proving that, given any t ≥ 0 and A ∈ G t , To this end, we note that Also, given any s, u ≥ 0 such that s ∈ ]u − t, u], while, given any s, u ≥ 0 such that s ∈ [0, u − t], These observations and the definition (162) of (F t ) imply that (166) holds true and G t ⊆ F τ+t . Combining this result with the fact that H τ+t ⊆ F τ+t , which follows from (165), we can see that H τ+t ∨ G t ⊆ F τ+t . To prove that F τ+t ⊆ H τ+t ∨ G t , we consider any A ∈ F τ+t and we show that Since A ∩ {τ + t ≤ ū} ∈ F ū for all ū ≥ 0, A ∩ {τ ≤ ū} ∈ F ū+t for all ū ≥ 0. Combining this observation with the definition (162) of (F t ), we can see that A ∩ {τ ≤ ū} ∩ {s ≤ τ } ∈ H ū+t ∨ G ū+t−s for all ū ≥ 0 and s ≤ ū + t.
In view of this result, we can see that, given any u > t, n .

It follows that
the equality being true thanks to the right-continuity of (G t ). Combining this result with the fact that which follows from (168) for ū = s = 0, we obtain (167).
To prove (164), we first note that (165) implies that H t∧τ ∨ G 0 ⊆ F t∧τ . To establish the reverse inclusion, we consider any A ∈ F t∧τ and we show that A ∩ {t ∧ τ ≤ u} ∈ H u ∨ G 0 for all u ≥ 0.
In view of this observation, we can see that A ∩ {ju/n ≤ τ ≤ (j + 1)u/n} ∈ H u ∨ G u/n for all u ∈ ]0, t[.

It follows that
H u ∨ G u/n = H u ∨ G 0 for all u ∈ ]0, t[ because (G t ) is right-continuous. In particular, this implies that Combining this observation with the fact that which follows from (169) for ū = s = t, we can see that From (170)–(171), it follows that A ∈ H t∧τ ∨ G 0 .
The following result is concerned with the pasting of two stopping strategies, in particular, two weak solutions to (1), at an appropriate stopping time.