1 Formulation of the Problems

The main aim of this paper is to present closed-form solutions to the discounted optimal double stopping problems with the values:

$$\begin{aligned} {\overline{V}}_1&= \sup _{\tau \le \zeta } E \Big [ e^{- r \tau } \, \Big ( L_1 \, X_{\tau } - \min _{0 \le t \le \tau } X_t \Big ) + e^{- r \zeta } \, \Big ( \max _{0 \le t \le \zeta } X_t - K_1 \, X_{\zeta } \Big ) \Big ] \end{aligned}$$
(1)

and

$$\begin{aligned} {\overline{V}}_2&= \sup _{\tau \le \zeta } E \Big [ e^{- r \tau } \, \Big ( \max _{0 \le t \le \tau } X_t - K_2 \, X_{\tau } \Big ) + e^{- r \zeta } \, \Big ( L_2 \, X_{\zeta } - \min _{0 \le t \le \zeta } X_t \Big ) \Big ] \end{aligned}$$
(2)

for some given constants \(L_i \ge 1 \ge K_i > 0\), for \(i = 1, 2\). Here, for a precise formulation of the problem, we consider a probability space \((\Omega , \mathcal{F}, P)\) with a standard Brownian motion \(B = (B_t)_{t \ge 0}\). We assume that the process \(X = (X_t)_{t \ge 0}\) is defined by:

$$\begin{aligned} X_t = x \, \exp \Big ( \big ( \mu - {\sigma ^2}/{2} \big ) \, t + \sigma \, B_t \Big ) \end{aligned}$$
(3)

so that it solves the stochastic differential equation:

$$\begin{aligned} dX_t = \mu \, X_t \, dt + \sigma \, X_t \, dB_t \quad (X_0=x) \end{aligned}$$
(4)

where \(\mu < r\), \(r > 0\) and \(\sigma > 0\) are given constants, and \(x > 0\) is fixed. In our application, the process X describes the current state of technological progress, which changes over time due to the active process of research and development in a branch of the industry, where r is the discount rate. The running maximum \(S = (S_t)_{t \ge 0}\) and running minimum \(Q = (Q_t)_{t \ge 0}\) of the process X, defined by:

$$\begin{aligned} S_t = s \vee \max _{0 \le u \le t} X_u \quad \text {and} \quad Q_t = q \wedge \min _{0 \le u \le t} X_u \end{aligned}$$
(5)

for arbitrary \(0 < q \le x \le s\), respectively, can be interpreted as the best and the worst market valuation of the technology achieved so far. Suppose that the suprema in (1) and (2) are taken over all stopping times \(\tau\) and \(\zeta\) with respect to the natural filtration \((\mathcal{F}_t)_{t \ge 0}\) of the process X, and the expectations there are taken with respect to the risk-neutral probability measure P. In this case, the values of (1) and (2) can be interpreted as the rational (or no-arbitrage) values of (perpetual) real lookback compound options with present values, which are linear in the running maximum or minimum of X, as well as sunk cost investment amounts, which are constant or linear in X, in the Black-Merton-Scholes model, respectively (see, e.g. [Dixit and Pindyck (1994); Chapter X] for examples of standard compound real options).
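For illustration, the geometric Brownian motion in (3) and its running extrema in (5) can be simulated exactly on a time grid, since the log-increments of X are Gaussian. The sketch below uses placeholder parameter values (not taken from the paper), and the deterministic choice \(\tau = \zeta = T\) is of course suboptimal, so the resulting Monte Carlo estimate is only a crude lower bound for the value in (1).

```python
import numpy as np

# Placeholder parameters (assumptions, not from the paper): mu < r, sigma > 0.
r, mu, sigma, x0 = 0.05, 0.02, 0.3, 1.0
L1, K1 = 1.1, 0.9
T, n_steps, n_paths = 1.0, 500, 2000
dt = T / n_steps

rng = np.random.default_rng(0)
# Exact simulation of X from (3): log-increments are i.i.d. Gaussian.
dlogX = (mu - sigma**2 / 2) * dt + sigma * np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))
X = x0 * np.exp(np.cumsum(dlogX, axis=1))
X = np.concatenate([np.full((n_paths, 1), x0), X], axis=1)

# Running maximum S and minimum Q from (5), started at s = q = x0,
# so that 0 < Q_t <= X_t <= S_t holds pathwise.
S = np.maximum.accumulate(X, axis=1)
Q = np.minimum.accumulate(X, axis=1)

# Discounted payoffs of the two legs of (1), both exercised at the fixed time T.
lower_bound = np.mean(np.exp(-r * T) * (L1 * X[:, -1] - Q[:, -1])
                      + np.exp(-r * T) * (S[:, -1] - K1 * X[:, -1]))
print(lower_bound)
```

Since \(L_1 \ge 1 \ge K_1\) and \(Q_t \le X_t \le S_t\), both legs are positive pathwise, so the estimate is strictly positive.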

The problem (1), whose dual is (2), is a typical valuation problem for financial lookback options. It has, however, a broader interpretation related to the capital budgeting of real investment decisions, that is, real lookback options. In particular, by utilising the approach presented in this paper, decision makers are able to quantify the financial value of investments in new promising technologies, as well as of the policy mechanisms that can be used to incentivise such investments. More specifically, we have in mind that the current state of technological progress is observable and is described by the process X, whereas r is the discount rate.

In order to adopt a new technology at time \(t \ge 0\), a firm is required to pay sunk costs which are proportional to the value of the technology in the market, denoted by \(X = (X_t)_{t \ge 0}\). This implies that more valuable developments are also associated with higher investment costs (for example, due to competition for suppliers). Typically, however, there is a time lag between a technological breakthrough and the actual realisation of the full potential of a new technology. The latter often requires additional investment in complementary technologies, changes in business processes or proper infrastructure (see Brynjolfsson and Hitt (2003) and Brynjolfsson et al. (2017)). In our problem, this feature is encapsulated by a second option available to the firm upon adoption of the new technology, which we call the commercialisation option. More specifically, by paying the sunk costs \(K_1 X\), the firm is able to realise the full potential of the technology, given by the running maximum S of the process X. These sunk costs are proportional to the current market value of the technology’s potential due to, for example, the need to develop technology-specific infrastructure. Ideally, the firm would like to enter the market and adopt the technology when the cost of acquisition is low, and then undertake further investments once the technology is valuable enough. In other words, a firm would like to identify future winners early on, as once the market identifies the winning technologies, the costs of their adoption will be high. Hence, if the firm realises the potential of a technology too late, it pays larger sunk costs and delays further investments in developing this technology. Consequently, some emerging technologies reach their productive potential later than might be desirable from a social welfare point of view.
In this paper, we propose a valuation framework that makes it possible to quantify the value associated with implementing a specific incentive mechanism inspired by financial lookback options, which stimulates innovation by reducing the firm’s regret of missing out on investment opportunities. This mechanism takes the form of an investment cost subsidy, which is equal to the difference between the current market value of the technology, given by X, and the minimal value Q it has achieved so far. Then, upon the technology adoption, the firm receives the value \(L_1 X\), which is proportional to its current value X, whereas the costs paid by the firm are equal to Q, which reduces its regret associated with failing to time the market.

From the derived closed-form solutions we conclude that, under such a support mechanism, firms have an incentive to adopt a technology when X is moving away from its running minimum Q. In this case, the technology is more valuable. Upon adoption, the following two cases can occur. If the technology is sufficiently promising, then the firm will wait with commercialisation. This happens because the probability that a new, higher maximum S will be reached soon is large and, thus, there is a larger potential for a higher payoff, which induces the firm to wait with commercialisation. However, if the market value is small relative to its running maximum S, then the firm commercialises the technology immediately after adoption, as it is now unlikely that the technology will improve enough in comparison to its best performance achieved so far to warrant waiting for a higher maximum. These results show that double lookback options allow a social planner to subsidise the most efficient technologies without having to pick winners ex ante. Rather, the benefit of such a subsidy is a direct support for the realised winners, that is, the most desirable technologies as evidenced by market value, which could result in a considerable welfare increase.

Discounted optimal stopping problems for certain reward functionals depending on the running maxima and minima of continuous Markov (diffusion-type) processes were initiated by Shepp and Shiryaev (1993) and further developed by Pedersen (2000); Guo and Shepp (2001); Gapeev (2007); Guo and Zervos (2010); Peskir (2012, 2014); Glover et al. (2013); Rodosthenous and Zervos (2017); Gapeev (2019, 2020); Gapeev et al. (2021); Gapeev and Li (2021); Gapeev and Al Motairi (2021); Gapeev (2022) among others. The main feature in the analysis of such optimal stopping problems was that the normal-reflection conditions hold for the value functions at the diagonals of the state spaces of the multi-dimensional continuous Markov processes having the initial processes and the running extrema as their components. It was shown, by using the maximality principle for solutions of optimal stopping problems established by Peskir (1998), which is equivalent to the superharmonic characterisation of the value functions, that the optimal stopping boundaries are characterised by the appropriate extremal solutions of certain (systems of) first-order nonlinear ordinary differential equations. Other optimal stopping problems in models with spectrally negative Lévy processes and their running maxima were studied by Asmussen et al. (2003); Avram et al. (2004); Ott (2013); Kyprianou and Ott (2014) among others.

We further consider the problems of (1) and (2) as the associated double (two-step) optimal stopping problems of (6) and (7) for the three-dimensional continuous Markov processes having the process X as well as its running maximum S and minimum Q as their state space components. The resulting problems turn out to be necessarily three-dimensional in the sense that they cannot be reduced to optimal stopping problems for Markov processes of lower dimensions. The original optimal double stopping problems are reduced to the appropriate sequences of single optimal stopping problems, which are solved as the equivalent free-boundary problems for the value functions satisfying the smooth-fit conditions at the optimal stopping boundaries and the normal-reflection conditions at the edges of the state space of the three-dimensional processes. Multiple (multi-step) optimal stopping problems for one-dimensional diffusion processes have recently drawn considerable attention in the related literature. Duckworth and Zervos (2000) studied an investment model with entry and exit decisions alongside a choice of the production rate for a single commodity. The initial valuation problem was reduced to a double (two-step) optimal stopping problem which was solved through the associated dynamic programming differential equation. Carmona and Touzi (2008) derived a constructive solution to the problem of pricing of perpetual swing contracts, the recall components of which could be viewed as contingent claims with multiple exercises of American type, using the connection between optimal stopping problems and their associated Snell envelopes. Carmona and Dayanik (2008) then obtained a closed-form solution to a multiple (multi-step) optimal stopping problem for a general linear regular diffusion process and a general payoff function.
The problem of pricing of American compound standard put and call options in the classical Black-Merton-Scholes model was explicitly solved in Gapeev and Rodosthenous (2014a). The same problem in the more general stochastic volatility framework was studied by Chiarella and Kang (2009), where the associated two-step free-boundary problems for partial differential equations were solved numerically, by means of a modified sparse grid approach.

The rest of the paper is organised as follows. In Sect. 2, we embed the original problems with the values \({\overline{V}}_i\), for \(i = 1, 2\), in (1) and (2) into the optimal multiple stopping problems for the value functions \(V^*_i(x, s, q)\), for \(i = 1, 2\), in (6) and (7) for the three-dimensional continuous Markov process (X, S, Q) defined in (3) and (5), respectively. It is shown that the optimal exercise times \(\tau ^*_1(S, Q)\) and \(\tau ^*_2(S, Q)\) are the first times at which the process X reaches some upper or lower boundaries \(b^*(S, Q)\) or \(a^*(S, Q)\) depending on the current values of the processes S and Q, respectively. In Sect. 3, we derive closed-form expressions for the candidate value functions for \(V^*_i(x, s, q)\), for \(i = 1, 2\), as solutions to the equivalent free-boundary problems and apply the normal-reflection conditions at the edges of the three-dimensional state space for (X, S, Q) to characterise the candidate optimal stopping boundaries for \(b^*(S, Q)\) and \(a^*(S, Q)\) as the minimal and maximal solutions of the appropriate first-order nonlinear ordinary differential equations, respectively. In Sect. 4, by applying the change-of-variable formula with local time on surfaces from Peskir (2007), it is verified that the resulting solutions to the free-boundary problems provide the expressions for the value functions and the optimal stopping boundaries for the underlying asset price process in the original problems. In Sect. 5, we recall the explicit solutions of the inner optimal stopping problems with the value functions \(U^*_1(x, s)\) and \(U^*_2(x, q)\) from (72). The main results of the paper are stated in Theorem 4.1. The resulting method is presented in Corollary 4.2 and described in Remark 4.3.

2 Preliminaries

In this section, we describe the structure of the three-dimensional optimal stopping problems of (1) and (2), which are related to the pricing problems for real double lookback options with floating sunk costs, and formulate the equivalent free-boundary problems.

2.1 The Two-step Optimal Stopping Problems

It is seen that the problems of (1) and (2) can naturally be embedded into the optimal double stopping problems for the (time-homogeneous strong) Markov process \((X, S, Q)=(X_t, S_t, Q_t)_{t \ge 0}\) defined in (3) and (5) with the values:

$$\begin{aligned}&{\overline{V}}_1 = \sup _{\tau \le \zeta } E \big [ e^{- r \tau } \, (L_1 \, X_{\tau } - Q_{\tau }) + e^{- r \zeta } \, (S_{\zeta } - K_1 \, X_{\zeta }) \big ] \end{aligned}$$
(6)

and

$$\begin{aligned}&{\overline{V}}_2 = \sup _{\tau \le \zeta } E \big [ e^{- r \tau } \, (S_{\tau } - K_2 \, X_{\tau }) + e^{- r \zeta } \, (L_2 \, X_{\zeta } - Q_{\zeta }) \big ] \end{aligned}$$
(7)

for some \(L_i \ge 1 \ge K_i > 0\), for \(i = 1, 2\), fixed, where the suprema are taken over all stopping times \(\tau\) and \(\zeta\) with respect to the filtration \((\mathcal{F}_t)_{t \ge 0}\). In this case, by virtue of the strong Markov property of the process (X, S, Q), the original problems of (6) and (7) can be reduced to the optimal stopping problems with the values:

$$\begin{aligned}&{\overline{V}}_i = \sup _{\tau } E \big [ e^{- r \tau } \, G_i(X_{\tau }, S_{\tau }, Q_{\tau }) \big ] \end{aligned}$$
(8)

where the suprema are taken over all stopping times \(\tau\) of the process (X, S, Q), and we set:

$$\begin{aligned} G_1(x, s, q) = L_1 \, x - q + U^*_1(x, s) \quad \text {and} \quad G_2(x, s, q) = s - K_2 \, x + U^*_2(x, q) \end{aligned}$$
(9)

for some \(L_1 \ge 1 \ge K_2 > 0\) fixed, respectively. Here, the functions \(U^*_1(x, s)\) and \(U^*_2(x, q)\) represent the values of the optimal stopping problems formulated in (72), where the optimal stopping times \(\eta ^*_i\), for \(i = 1, 2\), have the form of (73), for some boundaries \(0< g^*(s) \equiv \lambda _* s < s\) and \(h^*(q) \equiv \nu _* q> q > 0\) determined in Corollary 5.1 below.
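The two-step reduction described above (solve the inner problem for \(U^*_i\) first, and then the outer problem with \(U^*_i\) folded into the reward as in (9)) can be illustrated on a discrete-time binomial approximation. The sketch below is purely illustrative and is not the paper's method, which is closed-form: the CRR-style tree, the finite horizon, and all parameter values are assumptions, and only the case \(i = 1\), with inner payoff \(S - K_1 X\) and outer payoff \(L_1 X - Q\), is shown.

```python
import numpy as np
from functools import lru_cache

# Illustrative parameters (assumptions, not from the paper): mu < r.
r, mu, sigma, dt = 0.05, 0.02, 0.3, 0.01
u = float(np.exp(sigma * np.sqrt(dt)))   # up-move factor, with d = 1/u
d = 1.0 / u
p = (np.exp(mu * dt) - d) / (u - d)      # up-probability matching the drift mu
disc = float(np.exp(-r * dt))
N = 40                                    # finite horizon as a proxy for perpetuity
x0, L1, K1 = 1.0, 1.1, 0.9

@lru_cache(maxsize=None)
def U(t, k, m):
    """Inner value: optimally stop the payoff S - K1*X.
    k and m are the log-levels of X and of its running maximum S."""
    x, s = x0 * u**k, x0 * u**m
    stop = s - K1 * x
    if t == N:
        return stop
    cont = disc * (p * U(t + 1, k + 1, max(m, k + 1)) + (1 - p) * U(t + 1, k - 1, m))
    return max(stop, cont)

@lru_cache(maxsize=None)
def V(t, k, m, j):
    """Outer value: optimally stop L1*X - Q plus the inner value, as in (9).
    j is the log-level of the running minimum Q."""
    x, q = x0 * u**k, x0 * u**j
    stop = L1 * x - q + U(t, k, m)
    if t == N:
        return stop
    cont = disc * (p * V(t + 1, k + 1, max(m, k + 1), j)
                   + (1 - p) * V(t + 1, k - 1, m, min(j, k - 1)))
    return max(stop, cont)

print(V(0, 0, 0, 0))  # value started at x = s = q = x0, cf. (6)
```

By construction, each value dominates its immediate-stopping branch, so \(U \ge s - K_1 x\) and \(V \ge L_1 x - q + U\) at every node.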

2.2 The Outer Optimal Stopping Problems

Let us first transform the rewards in the expressions of (8) and (9) with the aim of formulating the associated optimal stopping problems. For this purpose, we use standard arguments based on an application of Itô’s formula (see, e.g. [Liptser and Shiryaev (2001); Theorem 4.4] or [Revuz and Yor (1999); Chapter II, Theorem 3.2]) to show that the infinitesimal operator \(\mathbb L\) of the process (X, S, Q) from (4) and (5) acts on an arbitrary function V(x, s, q) from the class \(C^{2,1,1}\) on E according to the rule:

$$\begin{aligned}&(\mathbb LV)(x, s, q) = \mu \, x \, \partial _x V(x, s, q) + \frac{\sigma ^2 x^2}{2} \, \partial _{xx} V(x, s, q) \;\;\; \text {in} \;\;\; 0< q< x < s \end{aligned}$$
(10)

while we should also assume that:

$$\begin{aligned}&\partial _q V(x, s, q) = 0 \quad \text {at} \quad 0< x = q< s \quad \text {and} \quad \partial _s V(x, s, q) = 0 \quad \text {at} \quad 0< q < x = s \end{aligned}$$
(11)

in order to have the operator \(\mathbb L\) well-defined at the diagonals \(d_1\) and \(d_2\) introduced in Subsection 2.3 below, respectively (see, e.g. [Peskir (1998); Subsection 3.1]). We first recall from the results of Beibel and Lerche (1997); Pedersen (2000) and Guo and Shepp (2001) (as well as Gapeev (2020)) on the expressions in (85)-(86) for the value functions \(U^*_1(x, s)\) and \(U^*_2(x, q)\) in (72), which solve the free-boundary problems in (74)-(80), that the processes \(e^{- r t} U^*_1(X_t, S_t)\) and \(e^{- r t} U^*_2(X_t, Q_t)\) admit the representations:

$$\begin{aligned}e^{- r t} \, U^*_1(X_t, S_t) = & \ U^*_1(x, s) + \int _0^t e^{- r u} \, (\mathbb LU^*_1 - r \, U^*_1) (X_u, S_u) \, I \big ( X_u< S_u \big ) \, du \\&+ \int _0^t e^{- r u} \, \partial _s U^*_1(X_u, S_u) \, I \big ( X_u = S_u \big ) \, dS_u \\&+ \int _0^t e^{- r u} \, \partial _x U^*_1(X_u, S_u) \, I \big ( X_u < S_u \big ) \, dB_u \end{aligned}$$
(12)

and

$$\begin{aligned}e^{- r t} \, U^*_2(X_t, Q_t) =& \ U^*_2(x, q) + \int _0^t e^{- r u} \, (\mathbb LU^*_2 - r \, U^*_2)(X_u, Q_u) \, I \big ( X_u> Q_u \big ) \, du \\&+ \int _0^t e^{- r u} \, \partial _q U^*_2(X_u, Q_u) \, I \big ( X_u = Q_u \big ) \, dQ_u \\&+ \int _0^t e^{- r u} \, \partial _x U^*_2(X_u, Q_u) \, I \big ( X_u > Q_u \big ) \, dB_u \end{aligned}$$
(13)

where the stochastic integrals with respect to the standard Brownian motion \(B = (B_t)_{t \ge 0}\) are continuous square-integrable martingales. Let us now apply Itô’s formula to the processes \(e^{- r t} G_i(X_t, S_t, Q_t)\), for \(i = 1, 2\), to obtain:

$$\begin{aligned}e^{- r t} \, G_1(X_t, S_t, Q_t) =& \ G_1(x, s, q) \\&+ \int _0^t e^{- r u} \, H_1(X_u, S_u, Q_u) \, I \big ( Q_u< X_u < S_u \big ) \, du \\&- \int _0^t e^{- r u} \, I \big ( X_u = Q_u \big ) \, dQ_u + N^{1}_t \end{aligned}$$
(14)

with

$$\begin{aligned}H_1(x, s, q) =& \ (\mathbb LG_1 - r G_1) (x, s, q) \equiv \big ( r \, q - (r - \mu ) \, L_1 \, x \big ) \, I \big (x > g^*(s) \big ) \\&- \big ( r \, (s - q) + (r - \mu ) \, (L_1 - K_1) \, x \big ) \, I \big ( x \le g^*(s) \big ) \end{aligned}$$
(15)

and

$$\begin{aligned}e^{- r t} \, G_2(X_t, S_t, Q_t) =& \ G_2(x, s, q) \\&+ \int _0^t e^{- r u} \, H_2(X_u, S_u, Q_u) \, I \big ( Q_u< X_u < S_u \big ) \, du \\&+ \int _0^t e^{- r u} \, I \big ( X_u = S_u \big ) \, dS_u + N^2_t \end{aligned}$$
(16)

with

$$\begin{aligned}H_2(x, s, q) =& \ (\mathbb LG_2 - r G_2) (x, s, q) \equiv \big ( (r - \mu ) \, K_2 \, x - r \, s \big ) \, I \big ( x < h^*(q) \big ) \\& - \big ( r \, (s - q) + (r - \mu ) \, (L_2 - K_2) \, x \big ) \, I \big (x \ge h^*(q) \big ) \end{aligned}$$
(17)

for each \(0< q< x < s\), and all \(t \ge 0\), where \(I(\cdot )\) denotes the indicator function. Here, \(\mathbb L\) is the infinitesimal operator of the process (X, S, Q) having the form of (10)-(11) above, and the processes \(N^{i} = (N^{i}_t)_{t \ge 0}\), for \(i = 1, 2\), defined by:

$$\begin{aligned} N^{i}_t&= \int _0^t e^{-r u} \, \partial _x G_i(X_u, S_u, Q_u) \, I \big ( Q_u< X_u < S_u \big ) \, \sigma \, X_u \, dB_u \end{aligned}$$
(18)

for all \(t \ge 0\), are continuous square-integrable martingales under the probability measure P. It also follows from the explicit expressions in (3)-(4) for the process X under the assumption \(\mu < r\) as well as from the properties of the partial derivatives \(\partial _x G_i(x, s, q)\), for \(i = 1, 2\), and the structure of the other processes included in the expressions of (14) and (16) that the processes \(N^i\), for \(i = 1, 2\), defined in (18) are uniformly integrable. Note that the processes S and Q may change their values only at the times when \(X_t = S_t\) and \(X_t = Q_t\), for \(t \ge 0\), respectively, and such times accumulated over the infinite horizon form sets of zero Lebesgue measure, so that the indicators in the expressions of (14) and (16) as well as (18) can be ignored (see also the proof of Theorem 4.1 below for more explanations and references). Then, inserting \(\tau\) in place of t and applying Doob’s optional sampling theorem (see, e.g. [Liptser and Shiryaev (2001); Chapter III, Theorem 3.6] or [Revuz and Yor (1999); Chapter II, Theorem 3.2]) to the expressions in (14) and (16), we get that the equalities:

$$\begin{aligned}&E \big [ e^{- r \tau } \, G_1(X_{\tau }, S_{\tau }, Q_{\tau }) \big ] = G_1(x, s, q) + E \bigg [ \int _0^{\tau } e^{- r u} \, H_1(X_u, S_u, Q_u) \, du - \int _0^{\tau } e^{- r u} \, dQ_u \bigg ] \end{aligned}$$
(19)

and

$$\begin{aligned}&E \big [ e^{- r \tau } \, G_2(X_{\tau }, S_{\tau }, Q_{\tau }) \big ] = G_2(x, s, q) + E \bigg [ \int _0^{\tau } e^{- r u} \, H_2(X_u, S_u, Q_u) \, du + \int _0^{\tau } e^{- r u} \, dS_u \bigg ] \end{aligned}$$
(20)

hold, for any stopping time \(\tau\) with respect to the filtration \((\mathcal{F}_t)_{t \ge 0}\). Hence, taking into account the expressions in (19) and (20), we conclude that the optimal stopping problems with the values of (8) are equivalent to the optimal stopping problems with the value functions:

$$\begin{aligned}&V^*_1(x, s, q) = \sup _{\tau } E_{x, s, q} \bigg [ \int _0^{\tau } e^{- r u} \, H_1(X_u, S_u, Q_u) \, du - \int _0^{\tau } e^{- r u} \, dQ_u \bigg ] \end{aligned}$$
(21)

and

$$\begin{aligned}&V^*_2(x, s, q) = \sup _{\tau } E_{x, s, q} \bigg [ \int _0^{\tau } e^{- r u} \, H_2(X_u, S_u, Q_u) \, du + \int _0^{\tau } e^{- r u} \, dS_u \bigg ] \end{aligned}$$
(22)

where the functions \(H_i(x, s, q)\), for \(i = 1, 2\), are defined in (15) and (17), for \((x, s, q) \in E\), respectively. Here, we denote by \(E_{x, s, q}\) the expectation with respect to the probability measure \(P_{x, s, q}\) under which the three-dimensional (time-homogeneous strong Markov) process (X, S, Q) starts at \((x, s, q) \in E\), and by \(E = \{ (x, s, q) \in \mathbb R^3 \, | \, 0 < q \le x \le s \}\) the state space of (X, S, Q). We further obtain solutions to the optimal stopping problems in (21) and (22) and verify below that the value functions \(V^*_i(x, s, q)\), for \(i = 1, 2\), are the solutions of the problems in (8), and thus, give the solutions of the original multiple optimal stopping problems in (1) and (2), under \(s = q = x\).
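The integrands \(H_1\) and \(H_2\) in (15) and (17) are piecewise linear in \((x, s, q)\) and straightforward to code. In the sketch below, the inner boundaries take the linear forms \(g^*(s) = \lambda s\) and \(h^*(q) = \nu q\) stated above, but the numerical values of \(\lambda\) and \(\nu\) (as well as of the model parameters) are placeholders rather than the constants \(\lambda_*\) and \(\nu_*\) determined in Corollary 5.1.

```python
# Placeholder parameters (assumptions, not from the paper): mu < r, L_i >= 1 >= K_i.
r, mu = 0.05, 0.02
L1, K1, L2, K2 = 1.1, 0.9, 1.1, 0.9
lam, nu = 0.7, 1.4  # hypothetical boundary slopes: 0 < lam < 1 < nu

def H1(x, s, q):
    """Integrand of (21), translated from (15)."""
    if x > lam * s:                                   # x > g*(s)
        return r * q - (r - mu) * L1 * x
    return -(r * (s - q) + (r - mu) * (L1 - K1) * x)  # x <= g*(s)

def H2(x, s, q):
    """Integrand of (22), translated from (17)."""
    if x < nu * q:                                    # x < h*(q)
        return (r - mu) * K2 * x - r * s
    return -(r * (s - q) + (r - mu) * (L2 - K2) * x)  # x >= h*(q)

# Consistent with part (i) of Subsection 2.3: H1 > 0 exactly on the set
# g*(s) < x <= r*q/((r - mu)*L1), e.g. at (x, s, q) = (0.95, 1.0, 0.9).
print(H1(0.95, 1.0, 0.9), H1(0.5, 1.0, 0.4), H2(1.2, 1.5, 1.0))
```

The sign pattern of these functions drives the structure of the continuation regions discussed in Subsection 2.3.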

It follows from the general theory of optimal stopping problems for Markov processes (see, e.g. [Peskir and Shiryaev (2006); Chapter I, Subsection 2.2]) that the continuation and stopping regions of the optimal stopping problems in (8) have the form:

$$\begin{aligned} C^*_i&= \big \{ (x, s, q) \in E \; \big | \; V^*_i(x, s, q) > 0 \big \} \quad \text {and} \quad D^*_i = \big \{ (x, s, q) \in E \; \big | \; V^*_i(x, s, q) = 0 \big \} \end{aligned}$$
(23)

for every \(i = 1, 2\), respectively. It is seen from the results of Theorem 4.1 below that the value function \(V^*_i(x, s, q)\) is continuous, so that the set \(C^*_i\) is open and \(D^*_i\) is closed, for every \(i = 1, 2\).

2.3 The Structure of Optimal Stopping Times

Let us now specify the structure of the optimal stopping times in the outer optimal stopping problems of (21) and (22).

  (i)

    It follows from the structure of the second integrals of (21) and (22) as well as the fact that the process S is increasing and the process Q is decreasing that it is not optimal to exercise the outer parts of the contracts (or exercise the double lookback options for the first time), whenever the appropriate integrands are positive. In other words, the diagonals \(d_{1} = \{ (x, s, q) \in \mathbb R^3 \, | \, 0< x = q < s \}\) and \(d_{2} = \{ (x, s, q) \in \mathbb R^3 \, | \, 0< q < x = s \}\) belong to the continuation regions \(C^*_{1}\) and \(C^*_{2}\) in (23), respectively. Moreover, it follows from the structure of the first integrals of (21) and (22) that it is not optimal to exercise the outer parts of the contracts (or exercise the double lookback options for the first time) when the inequalities \(H_i(X_t, S_t, Q_t) \ge 0\), for \(i = 1, 2\), hold, which are equivalent to \(0< g^*(S_t) < X_t \le r Q_t/((r - \mu ) L_1)\) and \(0< r S_t/((r - \mu ) K_2) \le X_t < h^*(Q_t)\) with \(Q_t< X_t < S_t\), for all \(t \ge 0\), respectively. In other words, these facts mean that the set \(\{ (x, s, q) \in E \, | \, 0< q \vee g^*(s) < x \le r q/((r - \mu ) L_1) \wedge s \}\) belongs to the continuation region \(C^*_{1}\), while the set \(\{ (x, s, q) \in E \, | \, 0< q \vee r s/((r - \mu ) K_2) \le x < h^*(q) \wedge s \}\) belongs to the continuation region \(C^*_{2}\) in (23).

  (ii)

    We now observe that it follows from the definition of the process (X, S, Q) in (3) and (5) and the structure of the rewards in (21) and (22) that, for each \(s > 0\) fixed, there exist \(0 < q \le x\) such that x is sufficiently close to s and the point (x, s, q) belongs to the stopping region \(D^*_i\), for every \(i = 1, 2\). Moreover, for each \(q > 0\) fixed, there exist \(0 < x \le s\) such that x is sufficiently close to q and the point (x, s, q) belongs to \(D^*_i\), for \(i = 1, 2\). By virtue of arguments similar to the ones applied in [Dubins et al. (1993); Subsection 3.3] and [Peskir (1998); Subsection 3.3], these properties can be explained by the fact that the costs of waiting until the process X, coming from such a large \(x > 0\), decreases to the current value of the running minimum process Q, or, coming from such a small \(x > 0\), increases to the current value of the running maximum process S, may be too large due to the presence of the discounting factors in the reward functionals of (21) and (22). Furthermore, by virtue of the asymptotic distributional properties of the running maximum S and minimum Q from (5) of the geometric Brownian motion X from (3)-(4) on infinitesimally small time intervals (see, e.g. [Dubins et al. (1993); Subsection 3.3] for similar arguments applied to the running maxima of Bessel processes, [Peskir (1998); Proposition 2.1] for similar properties of the running maxima of general diffusion processes, and [Gapeev and Li (2021); Theorem 2.1, Part (i)] for similar properties of the running maxima and minima of geometric Brownian motions), it follows that the reward functionals in (21) and (22) infinitesimally increase when \(X_t = Q_t\) or \(X_t = S_t\), for each \(t \ge 0\).
    This fact also shows that all points (x, s, q) from the diagonals \(d_1 = \{ (x, s, q) \in \mathbb R^3 \, | \, 0< x = q < s \}\) and \(d_2 = \{ (x, s, q) \in \mathbb R^3 \, | \, 0< q < x = s \}\) belong to the continuation regions \(C^*_i\), for \(i = 1, 2\), in (23), respectively.

    On the one hand, if we take some \((x, s, q) \in D^*_{1}\) from (23) such that \(x > (1 \vee r/((r - \mu ) L_1)) q\) and use the fact that the process (X, S, Q) started at some \((x', s, q)\) such that \(0< (1 \vee r/((r - \mu ) L_1)) q< x< x' < s\) passes through the point \((x, s', q)\), for some \(s' \ge s\), before hitting the plane \(d_1 = \{ (x, s, q) \in \mathbb R^3 \, | \, 0< x = q < s \}\), then the representation of (19) for the reward functional in (21) implies that \(V^*_1(x', s, q) \le V^*_1(x, s, q) = 0\) holds, so that \((x', s, q) \in D^*_{1}\). Moreover, if we take some \((x, s, q) \in D^*_{2}\) from (23) such that \(0< x < (r/((r - \mu ) K_2) \wedge 1) s\) and use the fact that the process (X, S, Q) started at some \((x'', s, q)\) such that \(0< q< x''< x < (r/((r - \mu ) K_2) \wedge 1) s\) passes through the point \((x, s, q'')\), for some \(0 < q'' \le q\), before hitting the diagonal \(d_2 = \{ (x, s, q) \in \mathbb R^3 \, | \, 0< q < x = s \}\), then the representation of (20) for the reward functional in (22) implies that \(V^*_2(x'', s, q) \le V^*_2(x, s, q) = 0\) holds, so that \((x'', s, q) \in D^*_2\). Thus, we may conclude that the stopping regions \(D^*_{i}\), for \(i = 1, 2\), from (23) consist of right-hand and left-hand parts, respectively.

    On the other hand, if we take some \((x, s, q) \in C^*_{1}\) from (23) and use the fact that the process (X, S, Q) started at (x, s, q) passes through some point \((x'', s'', q)\) such that \(0< q< x''< x < s \le s''\) before hitting the plane \(d_1\), then the representation of (19) for the reward functional in (21) implies that \(V^*_1(x'', s, q) \ge V^*_1(x, s, q) > 0\) holds, so that \((x'', s, q) \in C^*_{1}\). Moreover, if we take some \((x, s, q) \in C^*_{2}\) from (23) and use the fact that the process (X, S, Q) started at (x, s, q) passes through some point \((x', s, q')\) such that \(0< q' \le q< x< x' < s\) before hitting the plane \(d_2\), then the representation of (20) for the reward functional in (22) implies that \(V^*_2(x', s, q) \ge V^*_2(x, s, q) > 0\) holds, so that \((x', s, q) \in C^*_{2}\).

  (iii)

    We may therefore conclude that there exist functions \(b^*(s, q)\) and \(a^*(s, q)\) such that the inequalities \(H_i(x, s, q) < 0\), for \(i = 1, 2\), hold, for \((x, s, q) \in E\) such that \(x \ge b^*(s, q)\) or \(x \le a^*(s, q)\), respectively. In this respect, the continuation regions \({C}^*_i\), for \(i = 1, 2\), in (23) have the form:

    $$\begin{aligned}&C^*_1 = \big \{ (x, s, q) \in E \; \big | \; x < b^*(s, q) \big \} \quad \text {and} \quad C^*_2 = \big \{ (x, s, q) \in E \; \big | \; x > a^*(s, q) \big \} \end{aligned}$$
    (24)

    while the stopping regions \({D}^*_i\), for \(i = 1, 2\), in (23) are given by:

    $$\begin{aligned}&D^*_1 = \big \{ (x, s, q) \in E \; \big | \; x \ge b^*(s, q) \big \} \quad \text {and} \quad D^*_2 = \big \{ (x, s, q) \in E \; \big | \; x \le a^*(s, q) \big \}. \end{aligned}$$
    (25)

    (Figs. 1 and 2 illustrate computer drawings of the optimal stopping boundaries \(b^*(s, q)\) and \(a^*(s, q)\).)

  (iv)

    Let us finally clarify the location of the boundaries \(b^*(s, q)\) and \(a^*(s, q)\) in relation to the optimal stopping boundaries \(g^*(s)\) and \(h^*(q)\) from (73) for the optimal stopping problems with the value functions \(U^*_1(x, s)\) and \(U^*_2(x, q)\) in (72) below. For this purpose, we use the notation for the functions \(F_i(x, s, q)\), for \(i = 1, 2\), from (36) and (37) below. Suppose that either the inequality \(b^*(s, q) < h^*(q)\) or \(a^*(s, q) > g^*(s)\) holds, for some \(0< q< g^*(s) < s\) and \(0< q< h^*(q) < s\). In this case, for each point (x, s, q) such that \(x \in (b^*(s, q), h^*(q))\) and \(0< q< x \le g^*(s) < s\), we would have \(L_1 x - q + U^*_1(x, s) > L_1 x - q + s - K_1 x \equiv s - q + (L_1 - K_1) x = V^*_1(x, s, q) + F_1(x, s, q)\) contradicting the fact that \(L_1 x - q + U^*_1(x, s) \le V^*_1(x, s, q) + F_1(x, s, q)\), for all \((x, s, q) \in E\). Also, for each point (x, s, q) such that \(x \in (g^*(s), a^*(s, q))\) and \(0< q< h^*(q) \le x < s\), we would have \(s - K_2 x + U^*_2(x, q) > s - K_2 x + L_2 x - q \equiv s - q + (L_2 - K_2) x = V^*_2(x, s, q) + F_2(x, s, q)\) contradicting the fact that \(s - K_2 x + U^*_2(x, q) \le V^*_2(x, s, q) + F_2(x, s, q)\), for all \((x, s, q) \in E\). Hence, we may conclude that the inequalities \(b^*(s, q) \ge h^*(q)\) and \(a^*(s, q) \le g^*(s)\) should hold, for all \(0< q< g^*(s) < s\) and \(0< q< h^*(q) < s\), respectively.

  (v)

    Recall that the problem of (1) can be interpreted as the combined problem of finding the optimal time to adopt a new technology in the presence of an investment cost subsidy and then to determine when it is optimal to commercialise it. The subsidised technology adoption occurs when the process X hits the boundary \(b^*(S, Q)\) from above, implying that the first investment occurs when X moves away from its running minimum Q and the technology becomes more valuable. Upon adoption, the following two cases can occur. If the technology is sufficiently promising, that is, if X is sufficiently large, so that \(X > g^*(S)\) holds, then the firm will wait with commercialisation. This is because the probability that a new, higher maximum S will be reached soon is large, and thus, there is a larger potential for a higher payoff which induces the firm to wait with commercialisation. If, however, X is relatively small, then the firm commercialises the technology immediately after adoption, as it is now unlikely that the technology will improve enough in comparison to its best performance achieved so far to warrant waiting for a higher maximum.

Fig. 1

A computer drawing of the optimal exercise boundary \(b^*(s, q)\)

Fig. 2

A computer drawing of the optimal exercise boundary \(a^*(s, q)\)

2.4 The Free-boundary Problems

In order to find analytic expressions for the unknown value functions \(V^*_i(x, s, q)\), for \(i = 1, 2\), from (21) and (22) with the unknown boundaries \(b^*(s, q)\) and \(a^*(s, q)\) from (24)-(25), let us use the results of the general theory of optimal stopping problems for Markov processes (see, e.g. [Peskir and Shiryaev (2006); Chapter IV, Section 8]) as well as of optimal stopping problems for maximum processes (see, e.g. [Peskir and Shiryaev (2006); Chapter V, Sections 15–20] and references therein). We can therefore reduce the optimal stopping problems of (21) and (22) to the equivalent free-boundary problems:

$$\begin{aligned} ({\mathbb L} V_1 - r \, V_1)(x, s, q) = - H_1(x, s, q) \quad&\text {for} \quad q< x < b(s, q) \end{aligned}$$
(26)
$$\begin{aligned} ({\mathbb L} V_2 - r \, V_2)(x, s, q) = - H_2(x, s, q) \quad&\text {for} \quad a(s, q)< x < s \end{aligned}$$
(27)
$$\begin{aligned} V_1(x, s, q) \big |_{x = b(s, q)-} = 0, \quad&V_2(x, s, q) \big |_{x = a(s, q)+} = 0 \end{aligned}$$
(28)
$$\begin{aligned} \partial _x V_1(x, s, q) \big |_{x = b(s, q)-} = 0, \quad&\partial _x V_2(x, s, q) \big |_{x = a(s, q)+} = 0 \end{aligned}$$
(29)
$$\begin{aligned} \partial _q V_1(x, s, q) \big |_{x = q+} = 1, \quad&\partial _s V_2(x, s, q) \big |_{x = s-} = -1 \end{aligned}$$
(30)
$$\begin{aligned} V_1(x, s, q) = 0 \quad \text {for} \quad x > b(s, q), \quad&V_2(x, s, q) = 0 \quad \text {for} \quad x < a(s, q) \end{aligned}$$
(31)
$$\begin{aligned} V_1(x, s, q) > 0 \quad \text {for} \quad x < b(s, q), \quad&V_2(x, s, q) > 0 \quad \text {for} \quad x > a(s, q) \end{aligned}$$
(32)
$$\begin{aligned} ({\mathbb L} V_1 - r \, V_1)(x, s, q)< - H_1(x, s, q) \quad&\text {for} \quad x > b(s, q) \end{aligned}$$
(33)
$$\begin{aligned} ({\mathbb L} V_2 - r \, V_2)(x, s, q) < - H_2(x, s, q) \quad&\text {for} \quad x < a(s, q) \end{aligned}$$
(34)

where the instantaneous-stopping as well as the smooth-fit and normal-reflection conditions of (28)-(30) are satisfied, for each \(0< q < s\). Observe that the superharmonic characterisation of the value function (see, e.g. [Peskir and Shiryaev (2006); Chapter IV, Section 9]) implies that \(V^*_i(x, s, q)\), for \(i = 1, 2\), are the smallest functions satisfying (26)-(27) with (28) and (31)-(32) with the boundaries \(b^*(s, q)\) and \(a^*(s, q)\), respectively. Note that the inequalities in (33)-(34) follow directly from the arguments of parts (i) and (ii) of Subsection 2.3 above.

3 Solutions to the Free-boundary Problems

In this section, we obtain closed-form expressions for the value functions \(V^*_i(x, s, q)\), for \(i = 1, 2\), in (21) and (22) associated with the perpetual real double lookback options on maxima and minima and derive first-order nonlinear ordinary differential equations for the optimal stopping boundaries \(b^*(s, q)\) and \(a^*(s, q)\) from (24)-(25) as solutions to the free-boundary problems in (26)-(34).

3.1 The Candidate Value Functions

We first observe that the general solutions of the second-order ordinary differential equations in (10) + (26)-(27) have the form:

$$\begin{aligned} V_i(x, s, q) = C_{i,1}(s, q) \, x^{\gamma _1} + C_{i,2}(s, q) \, x^{\gamma _2} - F_i(x, s, q) \end{aligned}$$
(35)

for every \(i = 1, 2\), with the particular solutions:

$$\begin{aligned} F_1(x, s, q)&= \big ( L_1 \, x - q \big ) \, I \big ( x > g^*(s) \big ) + \big ( s - q + (L_1 - K_1) \, x \big ) \, I \big ( x \le g^*(s) \big ) \end{aligned}$$
(36)

and

$$\begin{aligned} F_2(x, s, q)&= \big ( s - K_2 \, x \big ) \, I \big ( x < h^*(q) \big ) + \big ( s - q + (L_2 - K_2) \, x \big ) \, I \big ( x \ge h^*(q) \big ) \end{aligned}$$
(37)

for all \(0 < q \le x \le s\), respectively. Here, \(C_{i, j}(s, q)\), for \(i, j = 1, 2\), are some continuously differentiable functions, and the numbers \(\gamma _j\), \(j = 1, 2\), are given by:

$$\begin{aligned} \gamma _{j} = \frac{1}{2} - \frac{\mu }{\sigma ^2} - (-1)^j \sqrt{ \bigg ( \frac{1}{2} - \frac{\mu }{\sigma ^2} \bigg )^2 + \frac{2r}{\sigma ^2}} \end{aligned}$$
(38)

so that \(\gamma _2< 0< 1 < \gamma _1\), as well as the identity:

$$\begin{aligned} \frac{\gamma _1}{\gamma _1 - 1} \, \frac{\gamma _2}{\gamma _2 - 1} = \frac{r}{r - \mu } \end{aligned}$$
(39)

holds. Then, by applying the conditions from (28)-(30) to the function in (35), we get that the equalities:

$$\begin{aligned}&C_{1,1}(s, q) \, b^{\gamma _1}(s, q) + C_{1,2}(s, q) \, b^{\gamma _2}(s, q) = F_1(b(s, q), s, q) \end{aligned}$$
(40)
$$\begin{aligned}&C_{1,1}(s, q) \, \gamma _1 \, b^{\gamma _1}(s, q) + C_{1,2}(s, q) \, \gamma _2 \, b^{\gamma _2}(s, q) = b(s, q) \, \partial _x F_1(b(s, q), s, q) \end{aligned}$$
(41)
$$\begin{aligned}&C_{2,1}(s, q) \, a^{\gamma _1}(s, q) + C_{2,2}(s, q) \, a^{\gamma _2}(s, q) = F_2(a(s, q), s, q) \end{aligned}$$
(42)
$$\begin{aligned}&C_{2,1}(s, q) \, \gamma _1 \, a^{\gamma _1}(s, q) + C_{2,2}(s, q) \, \gamma _2 \, a^{\gamma _2}(s, q) = a(s, q) \, \partial _x F_2(a(s, q), s, q) \end{aligned}$$
(43)
$$\begin{aligned}&\partial _q C_{1, 1}(s, q) \, q^{\gamma _1} + \partial _q C_{1, 2}(s, q) \, q^{\gamma _2} = 0 \end{aligned}$$
(44)
$$\begin{aligned}&\partial _s C_{2, 1}(s, q) \, s^{\gamma _1} + \partial _s C_{2, 2}(s, q) \, s^{\gamma _2} = 0 \end{aligned}$$
(45)

are satisfied, for \(0< q < s\), where the functions \(F_i(x, s, q)\), for \(i = 1, 2\), are defined in (36)-(37).
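As a quick numerical sanity check (with illustrative parameter values, not taken from the paper), one can verify that the exponents in (38) are the roots of the characteristic equation \((\sigma^2/2) \gamma (\gamma - 1) + \mu \gamma - r = 0\) associated with \(\mathbb L - r\), and that they satisfy the ordering and the identity in (39):

```python
import math

# Illustrative parameters satisfying mu < r, r > 0, sigma > 0
mu, r, sigma = 0.02, 0.05, 0.3

# Exponents gamma_j from (38)
half = 0.5 - mu / sigma**2
disc = math.sqrt(half**2 + 2 * r / sigma**2)
gamma1, gamma2 = half + disc, half - disc

# Both are roots of the characteristic equation of L - r
for g in (gamma1, gamma2):
    assert abs(0.5 * sigma**2 * g * (g - 1) + mu * g - r) < 1e-12

# Ordering gamma2 < 0 < 1 < gamma1 and the identity in (39)
assert gamma2 < 0 < 1 < gamma1
lhs = (gamma1 / (gamma1 - 1)) * (gamma2 / (gamma2 - 1))
assert abs(lhs - r / (r - mu)) < 1e-12
```

The identity in (39) follows from Vieta's formulas, since \(\gamma_1 \gamma_2 = -2r/\sigma^2\) and \(\gamma_1 + \gamma_2 = 1 - 2\mu/\sigma^2\).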

Now, by solving the system of equations in (40)-(41), we obtain that the candidate value function admits the representation:

$$\begin{aligned} V_1(x, s, q; b(s, q))&= C_{1, 1}(s, q; b(s, q)) \, x^{\gamma _1} + C_{1, 2}(s, q; b(s, q)) \, x^{\gamma _2} - F_1(x, s, q) \end{aligned}$$
(46)

for \(q \le x < b(s, q)\), where

$$\begin{aligned} C_{1, j}(s, q; b(s, q)) = \frac{\gamma _{3-j} (L_1 b(s, q) - q) - L_1 b(s, q)}{(\gamma _{3-j} - \gamma _{j}) b^{\gamma _{j}}(s, q)} \end{aligned}$$
(47)

for \(0< g^*(s) \le q < s\), and

$$\begin{aligned} C_{1, j}(s, q; b(s, q)) = \frac{\gamma _{3-j} (s - q + (L_1 - K_1) b(s, q)) - (L_1 - K_1) b(s, q)}{(\gamma _{3-j} - \gamma _{j}) b^{\gamma _{j}}(s, q)} \end{aligned}$$
(48)

for \(0< q< g^*(s) < s\), and every \(j = 1, 2\). Also, by solving the system of equations in (42)-(43), we obtain that the candidate value function admits the representation:

$$\begin{aligned} V_2(x, s, q; a(s, q))&= C_{2, 1}(s, q; a(s, q)) \, x^{\gamma _1} + C_{2, 2}(s, q; a(s, q)) \, x^{\gamma _2} - F_2(x, s, q) \end{aligned}$$
(49)

for \(a(s, q) < x \le s\), where

$$\begin{aligned} C_{2, j}(s, q; a(s, q)) = \frac{\gamma _{3-j} (s - K_2 a(s, q)) + K_2 a(s, q)}{(\gamma _{3-j} - \gamma _{j}) a^{\gamma _{j}}(s, q)} \end{aligned}$$
(50)

for \(0< q < s \le h^*(q)\), and

$$\begin{aligned} C_{2, j}(s, q; a(s, q)) = \frac{\gamma _{3-j} (s - q + (L_2 - K_2) a(s, q)) - (L_2 - K_2) a(s, q)}{(\gamma _{3-j} - \gamma _{j}) a^{\gamma _{j}}(s, q)} \end{aligned}$$
(51)

for \(0< q< h^*(q) < s\), and every \(j = 1, 2\).
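The instantaneous-stopping and smooth-fit properties of these candidate coefficients can also be checked numerically. The sketch below (hypothetical parameter values and a hypothetical boundary point b, chosen for illustration only) evaluates \(C_{1,j}\) from (47), in the case where \(F_1(x, s, q) = L_1 x - q\), and confirms that both the value and the slope of the power part match those of \(F_1\) at \(x = b\):

```python
import math

# Illustrative parameters (assumed values, not from the paper)
mu, r, sigma, L1, q = 0.02, 0.05, 0.3, 1.2, 1.0
b = 1.5  # a hypothetical boundary point b(s, q) > q

half = 0.5 - mu / sigma**2
disc = math.sqrt(half**2 + 2 * r / sigma**2)
gamma = {1: half + disc, 2: half - disc}

# Coefficients C_{1,j} from (47), case where F_1(x) = L1*x - q
def C1(j):
    k = 3 - j
    return (gamma[k] * (L1 * b - q) - L1 * b) / ((gamma[k] - gamma[j]) * b**gamma[j])

# Value matching as in (40): C11 b^g1 + C12 b^g2 = F_1(b) = L1*b - q
val = C1(1) * b**gamma[1] + C1(2) * b**gamma[2]
assert abs(val - (L1 * b - q)) < 1e-10

# Smooth fit: derivative of the power part at x = b equals d/dx F_1(b) = L1
der = C1(1) * gamma[1] * b**(gamma[1] - 1) + C1(2) * gamma[2] * b**(gamma[2] - 1)
assert abs(der - L1) < 1e-10
```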

Moreover, by means of straightforward computations, it can be deduced from the expression in (46) with (36) that the first-order and second-order partial derivatives \(\partial _x V_1(x, s, q; b(s, q))\) and \(\partial _{xx} V_1(x, s, q; b(s, q))\) of the function \(V_1(x, s, q; b(s, q))\) take the form:

$$\begin{aligned} \partial _x V_1(x, s, q; b(s, q))=& \ C_{1,1}(s, q; b(s, q)) \, \gamma _1 \, x^{\gamma _1-1}+ C_{1,2}(s, q; b(s, q)) \, \gamma _2 \, x^{\gamma _2-1} \\&- L_1 \, I \big ( x > g^*(s) \big )- (L_1 - K_1) \, I \big ( x \le g^*(s) \big ) \end{aligned}$$
(52)

and

$$\begin{aligned}\partial _{xx} V_1(x, s, q; b(s, q)) =& \ C_{1,1}(s, q; b(s, q)) \, \gamma _1 (\gamma _1 - 1) \, x^{\gamma _1-2} \\& + C_{1,2}(s, q; b(s, q)) \, \gamma _2 (\gamma _2 - 1) \, x^{\gamma _2-2} \end{aligned}$$
(53)

on the interval \(q< x < b(s, q)\), for each \(0< q < s\). Also, by means of straightforward computations, it can be deduced from the expression in (49) with (37) that the first-order and second-order partial derivatives \(\partial _x V_2(x, s, q; a(s, q))\) and \(\partial _{xx} V_2(x, s, q; a(s, q))\) of the function \(V_2(x, s, q; a(s, q))\) take the form:

$$\begin{aligned} \begin{aligned} \partial _x V_2(x, s, q; a(s, q))= & \ C_{2,1}(s, q; a(s, q)) \, \gamma _1 \, x^{\gamma _1-1} + C_{2,2}(s, q; a(s, q)) \, \gamma _2 \, x^{\gamma _2-1} \\&+ K_2 \, I \big ( x < h^*(q) \big ) - (L_2 - K_2) \, I \big ( x \ge h^*(q) \big ) \end{aligned} \end{aligned}$$
(54)

and

$$\begin{aligned} \begin{aligned}\partial _{xx} V_2(x, s, q; a(s, q)) = & \ C_{2,1}(s, q; a(s, q)) \, \gamma _1 (\gamma _1 - 1) \, x^{\gamma _1-2} \\& + C_{2,2}(s, q; a(s, q)) \, \gamma _2 (\gamma _2 - 1) \, x^{\gamma _2-2} \end{aligned} \end{aligned}$$
(55)

on the interval \(a(s, q)< x < s\), for each \(0< q < s\).

3.2 The Candidate Stopping Boundaries

In order to derive first-order nonlinear ordinary differential equations for the candidate boundary functions, we further assume that the functions \(b(s, q)\) and \(a(s, q)\) are continuously differentiable. Then, applying the condition of (44) to the functions \(C_{1, j}(s, q; b(s, q))\), for \(j = 1, 2\), in (47)-(48), we obtain the equalities:

$$\begin{aligned}&\partial _q b(s, q) = \frac{\gamma _2 (q/b(s, q))^{\gamma _1} - \gamma _1 (q/b(s, q))^{\gamma _2}}{\gamma _1 \gamma _2 (q/b(s, q) - (r - \mu )L_1/r) ((q/b(s, q))^{\gamma _1} - (q/b(s, q))^{\gamma _2})} \end{aligned}$$
(56)

for \(0< g^*(s) \le q < s\), and

$$\begin{aligned}&\partial _q b(s, q) = \frac{\gamma _2 (q/b(s, q))^{\gamma _1} - \gamma _1 (q/b(s, q))^{\gamma _2}}{\gamma _1 \gamma _2 ((s - q)/b(s, q) + (r - \mu )(L_1 - K_1)/r) ((q/b(s, q))^{\gamma _1} - (q/b(s, q))^{\gamma _2})} \end{aligned}$$
(57)

for \(0< q< g^*(s) < s\). Here, by virtue of the structure of the equation in (56), we have \(b(s, q) = \nu _* q\) with \(\nu _* > 1\) from (84), for all \(q > g^*(s)\) and each \(s > 0\). Note that the candidate value function \(V_1(x, s, q; b(s, q))\) in (46) with (47)-(48) is (strictly) increasing in \(b(s, q)\), so that we should take the candidate stopping boundary \(b(s, q)\) as the minimal solution of the first-order nonlinear ordinary differential equation in (57) located above the plane \(d_1 = \{ (x, s, q) \in \mathbb R^3 \, | \, 0< x = q < s \}\).
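The ray structure of solutions to (56) can be illustrated numerically: substituting \(b(s, q) = \nu q\) into (56) makes its right-hand side a constant, so \(\nu\) must solve the arithmetic equation obtained by equating that constant to \(\nu\). The sketch below (illustrative parameter values; the bracket for the bisection is chosen by inspection and the equation (84) itself is not reproduced here) locates such a root \(\nu > 1\):

```python
import math

# Illustrative parameters (assumed), satisfying mu < r and L1 >= 1
mu, r, sigma, L1 = 0.02, 0.05, 0.3, 1.2

half = 0.5 - mu / sigma**2
disc = math.sqrt(half**2 + 2 * r / sigma**2)
g1, g2 = half + disc, half - disc

def rhs56(rho):
    """Right-hand side of (56), written as a function of rho = q/b."""
    num = g2 * rho**g1 - g1 * rho**g2
    den = g1 * g2 * (rho - (r - mu) * L1 / r) * (rho**g1 - rho**g2)
    return num / den

# On the ray b = nu*q the equation (56) reduces to rhs56(1/nu) = nu;
# solve for nu > 1 by bisection on a bracket with a sign change
lo, hi = 2.0, 10.0
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if rhs56(1.0 / mid) > mid:
        lo = mid
    else:
        hi = mid
nu = 0.5 * (lo + hi)
assert nu > 1 and abs(rhs56(1.0 / nu) - nu) < 1e-8
```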

Similarly, applying the condition of (45) to the functions \(C_{2, j}(s, q; a(s, q))\), for \(j = 1, 2\), in (50)-(51), we obtain the equalities:

$$\begin{aligned}&\partial _s a(s, q) = \frac{\gamma _2 (s/a(s, q))^{\gamma _1} - \gamma _1 (s/a(s, q))^{\gamma _2}}{\gamma _1 \gamma _2 (s/a(s, q) - (r - \mu ) K_2/r) ((s/a(s, q))^{\gamma _1} - (s/a(s, q))^{\gamma _2})} \end{aligned}$$
(58)

for \(0< q < s \le h^*(q)\), and

$$\begin{aligned}&\partial _s a(s, q) = \frac{\gamma _2 (s/a(s, q))^{\gamma _1} - \gamma _1 (s/a(s, q))^{\gamma _2}}{\gamma _1 \gamma _2 ((s - q)/a(s, q) + (r - \mu )(L_2 - K_2)/r) ((s/a(s, q))^{\gamma _1} - (s/a(s, q))^{\gamma _2})} \end{aligned}$$
(59)

for \(0< q< h^*(q) < s\). Here, by virtue of the structure of the equation in (58), we have \(a(s, q) = \lambda _* s\) with \(0< \lambda _* < 1\) from (82), for all \(0< q < s \le h^*(q)\). Note that the candidate value function \(V_2(x, s, q; a(s, q))\) in (49) with (50)-(51) is (strictly) decreasing in \(a(s, q)\), so that we should take the candidate stopping boundary \(a(s, q)\) as the maximal solution of the first-order nonlinear ordinary differential equation in (59) located below the plane \(d_2 = \{ (x, s, q) \in \mathbb R^3 \, | \, 0< q < x = s \}\).

3.3 The Minimal and Maximal Admissible Solutions \(b^*(s, q)\) and \(a^*(s, q)\)

We further consider the minimal and maximal admissible solutions of first-order nonlinear ordinary differential equations as the smallest and largest possible solutions \(b^*(s, q)\) and \(a^*(s, q)\) of the equations in (57) and (59), which satisfy the inequalities \(0< q < h^*(q) \le b^*(s, q) \le s\) and \(0< q \le a^*(s, q) \le g^*(s) < s\), for all \(0< q< g^*(s) < s\) and \(0< q< h^*(q) < s\). By virtue of the classical results on the existence and uniqueness of solutions for first-order nonlinear ordinary differential equations, we may conclude that these equations admit (locally) unique solutions, because their right-hand sides represent (locally) continuous functions in \((s, q, b(s, q))\) and \((s, q, a(s, q))\) and (locally) Lipschitz functions in \(b(s, q)\) and \(a(s, q)\), for each \(0< q< g^*(s) < s\) and \(0< q< h^*(q) < s\) fixed (see also [Peskir (1998); Subsection 3.9] for similar arguments based on the analysis of other first-order nonlinear ordinary differential equations). Then, it can be shown by means of technical arguments based on Picard’s method of successive approximations that there exist unique solutions \(b(s, q)\) and \(a(s, q)\) to the equations in (57) and (59) started at some points \((q_0, s, q_0)\) and \((s_0, s_0, q)\), for each \(0< q_0< g^*(s) < s\) and \(0< q< h^*(q) < s_0\) fixed (see also [Graversen and Peskir (1998); Subsection 3.2] and [Peskir (1998); Example 4.4] for similar arguments based on the analysis of other first-order nonlinear ordinary differential equations).

Hence, in order to construct the appropriate functions \(b^*(s, q)\) and \(a^*(s, q)\) which satisfy the equations in (57) and (59) and stay strictly above and below the appropriate diagonals \(d_1 = \{ (x, s, q) \in E \, | \, 0< x = q < s \}\) and \(d_2 = \{ (x, s, q) \in E \, | \, 0< q < x = s \}\), respectively, we construct sequences of solutions satisfying such properties and intersecting \(d_1\) and \(d_2\) (see also [Peskir (2014); Subsection 3.5] (among others) for a similar procedure applied to solutions of other first-order nonlinear ordinary differential equations). For this purpose, for any decreasing and increasing sequences \((q_{l})_{l \in \mathbb N}\) and \((s_{l})_{l \in \mathbb N}\), such that \(0< q_l< g^*(s) < s\) and \(0< q< h^*(q) < s_l\), we can construct the sequences of solutions \(b_{l}(s, q)\) and \(a_{l}(s, q)\), for \(l \in \mathbb N\), to the equations in (57) and (59), for all \(0< q < q_l\) and \(s > s_l\), such that \(b_{l}(s, q_l) = q_{l}\) and \(a_{l}(s_l, q) = s_{l}\) hold, for each \(0< q_l< g^*(s) < s\) and \(0< q< h^*(q) < s_l\), and every \(l \in \mathbb N\). It follows from the structure of the equations in (57) and (59) that the inequalities \(\partial _q b_{l}(s, q_l) < 1\) and \(\partial _s a_{l}(s_l, q) < 1\) should hold for the derivatives of the corresponding functions, for each \(0< q_l< g^*(s) < s\) and \(0< q< h^*(q) < s_l\), and every \(l \in \mathbb N\) (see also [Pedersen (2000); pages 979-982] for the analysis of solutions of another first-order nonlinear differential equation).
Observe that, by virtue of the uniqueness of solutions mentioned above, we know that any two curves \(q \mapsto b_{l}(s, q)\) and \(q \mapsto b_{m}(s, q)\), as well as \(s \mapsto a_{l}(s, q)\) and \(s \mapsto a_{m}(s, q)\), cannot intersect, for each \(0< q< g^*(s) < s\) and \(0< q< h^*(q) < s\), and \(l, m \in \mathbb N\) such that \(l \ne m\). Thus, we see that the sequence \((b_{l}(s, q))_{l \in \mathbb N}\) is decreasing and the sequence \((a_{l}(s, q))_{l \in \mathbb N}\) is increasing, so that the limits \(b^*(s, q) = \lim _{l \rightarrow \infty } b_{l}(s, q)\) and \(a^*(s, q) = \lim _{l \rightarrow \infty } a_{l}(s, q)\) exist, for each \(0< q < s\), respectively. We may therefore conclude that \(b^*(s, q)\) and \(a^*(s, q)\) provide the minimal and maximal solutions to the equations in (57) and (59) such that \(b^*(s, q) > q\) and \(a^*(s, q) < s\) hold, for all \(0< q< g^*(s) < s\) and \(0< q< h^*(q) < s\).
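The construction of the minimal solution by solutions started at decreasing diagonal points can be sketched numerically. The following is a rough Euler scheme for (57) under illustrative parameter values (the small off-diagonal offset is an ad hoc regularisation of the singularity of the right-hand side at \(q = b\)); it exhibits the claimed monotonicity of the sequence of solutions at a fixed point:

```python
import math

# Illustrative parameters (assumed): mu < r, L1 >= 1 >= K1 > 0, fixed s
mu, r, sigma, L1, K1, s = 0.02, 0.05, 0.3, 1.2, 0.8, 2.0

half = 0.5 - mu / sigma**2
disc = math.sqrt(half**2 + 2 * r / sigma**2)
g1, g2 = half + disc, half - disc

def rhs57(q, b):
    """Right-hand side of (57), with rho = q/b."""
    rho = q / b
    num = g2 * rho**g1 - g1 * rho**g2
    den = g1 * g2 * ((s - q) / b + (r - mu) * (L1 - K1) / r) * (rho**g1 - rho**g2)
    return num / den

def b_l(q_start, q_end, h=1e-4):
    """Euler integration of (57) backwards in q from the diagonal point
    b(q_start) = q_start (offset slightly off the diagonal) down to q_end."""
    q, b = q_start, q_start * (1.0 + 1e-3)
    while q > q_end:
        b -= h * rhs57(q, b)  # step dq = -h, so db = -h * rhs57
        q -= h
    return b

# Solutions started at decreasing diagonal points cannot intersect and
# form a decreasing sequence at the fixed point q = 1.0
b1, b2, b3 = b_l(1.5, 1.0), b_l(1.4, 1.0), b_l(1.3, 1.0)
assert b3 < b2 < b1 and b3 > 1.0
```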

Moreover, since the right-hand sides of the first-order nonlinear ordinary differential equations in (57) and (59) are (locally) Lipschitz in (s, q), respectively, one can deduce by means of Gronwall’s inequality that the functions \(b_{l}(s, q)\) and \(a_{l}(s, q)\), for each \(l \in \mathbb N\), are continuous, so that the functions \(b^*(s, q)\) and \(a^*(s, q)\) are continuous too, for \(0< q< g^*(s) < s\) and \(0< q< h^*(q) < s\). The appropriate maximal admissible solutions of first-order nonlinear ordinary differential equations and the associated maximality principle for solutions of optimal stopping problems, which is equivalent to the superharmonic characterisation of the payoff functions, were established in Peskir (1998) and further developed in Graversen and Peskir (1998), Pedersen (2000), Guo and Shepp (2001), Gapeev (2007), Guo and Zervos (2010), Peskir (2012, 2014), Glover et al. (2013), Ott (2013), Kyprianou and Ott (2014), Gapeev and Rodosthenous (2014b, 2016a, 2016b), Rodosthenous and Zervos (2017), and Gapeev et al. (2021), among other subsequent papers (see also [Peskir and Shiryaev (2006); Chapter I; Chapter V, Section 17] for other references).

4 Main Results and Proofs

In this section, based on the facts proved above, we formulate and prove the main result of the paper. Observe that, by means of the change-of-measure arguments from Shepp and Shiryaev (1994) and Gapeev (2019), the problems of (6)-(7) can be reduced to appropriate optimal stopping problems for the two-dimensional Markov process \((S/X, Q/X) = (S_t/X_t, Q_t/X_t)_{t \ge 0}\). However, we follow the classical approach initiated in Shepp and Shiryaev (1993) and solve them as three-dimensional optimal stopping problems.

Theorem 4.1

Let the process \((X, S, Q)\) be given by (3)-(4) and (5) with \(\sigma > 0\), \(\mu < r\), and \(r > 0\). Then, the value functions of the optimal stopping problems in (21) and (22), for some \(L_i \ge 1 \ge K_i > 0\), for \(i = 1, 2\), fixed, admit the representations:

$$\begin{aligned} V^*_1(x, s, q) = {\left\{ \begin{array}{ll} V_1(x, s, q; b^*(s, q)), &{} \text {if} \quad q \le x < b^*(s, q), \\ 0, &{} \text {if} \quad x \ge b^*(s, q), \end{array}\right. } \end{aligned}$$
(60)

and

$$\begin{aligned} V^*_2(x, s, q) = {\left\{ \begin{array}{ll} V_2(x, s, q; a^*(s, q)), &{} \text {if} \quad a^*(s, q)< x \le s, \\ 0, &{} \text {if} \quad 0 < x \le a^*(s, q), \end{array}\right. } \end{aligned}$$
(61)

while the optimal stopping times have the form:

$$\begin{aligned} \tau ^*_1 = \inf \big \{ t \ge 0 \; \big | \; X_t \ge b^*(S_t, Q_t) \big \} \quad \text {and} \quad \tau ^*_2 = \inf \big \{ t \ge 0 \; \big | \; X_t \le a^*(S_t, Q_t) \big \} \end{aligned}$$
(62)

where the candidate value functions and boundaries are specified as follows:

  1. (i)

    the function \(V_1(x, s, q; b^*(s, q))\) is given by (46) with (47)-(48), where the boundary \(b^*(s, q)\) satisfying the inequality \(b^*(s, q) \ge h^*(q)\) represents the minimal solution of the first-order nonlinear ordinary differential equation in (57) such that \(b^*(s, q) > q\), for \(0< q< g^*(s) < s\), while \(b^*(s, q) = h^*(q) \equiv \nu _* q\) with \(\nu _* > 1\) from (84), for \(g^*(s) \le q < s\);

  2. (ii)

    the function \(V_2(x, s, q; a^*(s, q))\) is given by (49) with (50)-(51), where the boundary \(a^*(s, q)\) satisfying the inequality \(a^*(s, q) \le g^*(s)\) represents the maximal solution of the first-order nonlinear ordinary differential equation in (59) such that \(a^*(s, q) < s\), for \(0< q< h^*(q) < s\), while \(a^*(s, q) = g^*(s) \equiv \lambda _* s\) with \(0< \lambda _* < 1\) from (82), for \(q < s \le h^*(q)\).

Recall that we can put \(s = q = x\) to obtain the values of the original perpetual real floating-cost double lookback call-put and put-call option pricing problems of (1) and (2) from the values of the double optimal stopping problems of (6) and (7), which are equivalent to the sequence of single optimal stopping problems of (21)-(22) and (72). Note that, since both parts of the assertion stated above are proved using similar arguments, we only give a proof for the case of the three-dimensional single optimal stopping problem of (22), which is related to the outer perpetual real lookback put-call options.

Proof

In order to verify the assertion of part (ii) stated above, it remains for us to show that the function defined in the right-hand side of (61) coincides with the value function in (22) and that the stopping time \(\tau ^*_2\) in (62) is optimal with the boundary \(a^*(s, q)\) being the solution of the system in (42)-(43)+(45) specified in (49)-(51) with (58)-(59). For this purpose, let us denote by \(V_2(x, s, q)\) the right-hand side of the expression in (61) associated with \(a^*(s, q)\). Then, it is shown by means of straightforward calculations from the previous section that the function \(V_2(x, s, q)\) solves the right-hand system of (26)-(34). Recall that the function \(V_2(x, s, q)\) is \(C^{2,1,1}\) on the closure \({\overline{C}}_{2}\) of \(C_{2}\) and is equal to 0 on \(D_{2}\), which are defined as \({\overline{C}}^*_{2}\), \(C^*_{2}\) and \(D^*_{2}\) in (24) and (25) with \(a(s, q)\) instead of \(a^*(s, q)\), respectively. Hence, taking into account the assumption that the boundary \(a^*(s, q)\) is (at least piecewise) continuously differentiable, for all \(0< q < s\), by applying the change-of-variable formula from [Peskir (2007); Theorem 3.1] to the process \(e^{- r t} V_2(X_t, S_t, Q_t)\) (see also [Peskir and Shiryaev (2006); Chapter II, Section 3.5] for a summary of the related results and further references), we obtain the expression:

$$\begin{aligned}e^{- r t} \, V_2(X_t, S_t, Q_t) = & \ V_2(x, s, q) + M^{2}_t \\&+ \int _0^t e^{- r u} \, (\mathbb LV_2 - r V_2)(X_u, S_u, Q_u) \, I \big ( Q_u \vee a^*(S_u, Q_u)< X_u < S_u \big ) \, du \\&+ \int _0^t e^{- r u} \, \partial _s V_2(X_u, S_u, Q_u) \, I \big ( X_u = S_u \big ) \, dS_u \\&+ \int _0^t e^{- r u} \, \partial _q V_2(X_u, S_u, Q_u) \, I \big ( X_u = Q_u \big ) \, dQ_u \end{aligned}$$
(63)

for all \(t \ge 0\). Here, the process \(M^{2} = (M^{2}_t)_{t \ge 0}\) defined by:

$$\begin{aligned} M^2_t&= \int _0^t e^{- r u} \, \partial _x V_2(X_u, S_u, Q_u) \, I \big ( Q_u< X_u < S_u \big ) \, \sigma X_u \, dB_u \end{aligned}$$
(64)

is a continuous local martingale with respect to the probability measure \(P_{x, s, q}\). Note that, since the time spent by the process \((X, S, Q)\) at the boundary surface \(\{ (x, s, q) \in E \, | \, x = a(s, q) \}\) as well as at the diagonals \(d_1 = \{ (x, s, q) \in \mathbb R^3 \, | \, 0 < x = q \le s \}\) and \(d_2 = \{ (x, s, q) \in \mathbb R^3 \, | \, 0 < q \le x = s \}\) is of zero Lebesgue measure (see, e.g. [Borodin and Salminen (2002); Chapter II, Section 1]), the indicators in the second line of the formula in (63) as well as in the expression of (64) can be ignored. Moreover, since the component Q decreases only when the process \((X, S, Q)\) is located on the diagonal \(d_1 = \{ (x, s, q) \in \mathbb R^3 \, | \, 0< x = q < s \}\), while the component S increases only when the process \((X, S, Q)\) is located on the diagonal \(d_2 = \{ (x, s, q) \in \mathbb R^3 \, | \, 0< q < x = s \}\), the indicators appearing in the last two lines of (63) can be set equal to one. Finally, we observe from the expressions in (49) and (50)-(51) that the function \(V_2(x, s, q)\) does not actually depend on the variable q, and thus, the partial derivative \(\partial _q V_2(x, s, q)\) is equal to 0 in the region \(\{ (x, s, q) \in E \, | \, 0< q < x \le s \le h^*(q) \}\). Therefore, since the diagonal \(d_1\) lies outside the region \(\{ (x, s, q) \in E \, | \, 0< q < h^*(q) \le x \le s \}\), we may conclude that the last integral in (63) is actually equal to zero.

It follows from straightforward calculations and the arguments of the previous section that the function \(V_2(x, s, q)\) satisfies the second-order ordinary differential equation in (27), which together with the right-hand conditions of (28)-(29) and (31) as well as the fact that the inequality in (34) holds imply that the inequality \((\mathbb LV_2 - r V_2)(x, s, q) \le - H_2(x, s, q)\) is satisfied, for all \((x, s, q) \in E\) such that \(0< q< x < s\) and \(x \ne a^*(s, q)\). Moreover, we observe directly from the expressions in (49) with (50)-(51) as well as (54)-(55) that the value function \(V_2(x, s, q)\) is convex and increases from zero, because its first-order partial derivative \(\partial _x V_2(x, s, q)\) is positive and increases from zero, while its second-order partial derivative \(\partial _{xx} V_2(x, s, q)\) is positive, on the interval \(q \vee a^*(s, q) < x \le s\). Thus, we may conclude that the right-hand inequality in (32) holds, which together with the right-hand conditions of (28)-(29) and (31) imply that the inequality \(V_2(x, s, q) \ge 0\) is satisfied, for all \((x, s, q) \in E\). Let \((\varkappa _n)_{n \in \mathbb N}\) be the localising sequence of stopping times for the process \(M^2\) from (64) such that \(\varkappa _n = \inf \{t \ge 0 \, | \, |M^2_t| \ge n \}\), for each \(n \in \mathbb N\). It therefore follows from the expression in (63) that the inequalities:

$$\begin{aligned}\int _0^{\tau \wedge \varkappa _n} e^{- r u} \, H_2(X_u, S_u, Q_u) \, du &+ \int _0^{\tau \wedge \varkappa _n} e^{- r u} \, dS_u \le \ e^{- r (\tau \wedge \varkappa _n)} \, V_2(X_{\tau \wedge \varkappa _n}, S_{\tau \wedge \varkappa _n}, Q_{\tau \wedge \varkappa _n}) \\&+ \int _0^{\tau \wedge \varkappa _n} e^{- r u} \, H_2(X_u, S_u, Q_u) \, du \\&+ \int _0^{\tau \wedge \varkappa _n} e^{- r u} \, dS_u \le \ V_2(x, s, q) + M^2_{\tau \wedge \varkappa _n} \end{aligned}$$
(65)

hold, for any stopping time \(\tau\) with respect to the natural filtration of \((X, S, Q)\) and each \(n \in \mathbb N\) fixed. Then, taking the expectation with respect to \(P_{x, s, q}\) in (65), by means of Doob’s optional sampling theorem, we get:

$$\begin{aligned}E_{x, s, q} \bigg [ \int _0^{\tau \wedge \varkappa _n} e^{- r u} \, H_2(X_u, S_u, Q_u) \, du &+ \int _0^{\tau \wedge \varkappa _n} e^{- r u} \, dS_u \bigg ] \\ \le & \ E_{x, s, q} \bigg [ e^{- r (\tau \wedge \varkappa _n)} \, V_2(X_{\tau \wedge \varkappa _n}, S_{\tau \wedge \varkappa _n}, Q_{\tau \wedge \varkappa _n}) \\&+ \int _0^{\tau \wedge \varkappa _n} e^{- r u} \, H_2(X_u, S_u, Q_u) \, du + \int _0^{\tau \wedge \varkappa _n} e^{- r u} \, dS_u \bigg ] \\\le & \ V_2(x, s, q) + E_{x, s, q} \big [ M^2_{\tau \wedge \varkappa _n} \big ] = V_2(x, s, q) \end{aligned}$$
(66)

for all \((x, s, q) \in E\) and each \(n \in \mathbb N\). Hence, letting n go to infinity and using Fatou’s lemma, we obtain from the expressions in (66) that the inequalities:

$$\begin{aligned}E_{x, s, q} \bigg [ \int _0^{\tau } e^{- r u} \, H_2(X_u, S_u, Q_u) \, du &+ \int _0^{\tau } e^{- r u} \, dS_u \bigg ] \le \ E_{x, s, q} \bigg [ e^{- r \tau } \, V_2(X_{\tau }, S_{\tau }, Q_{\tau }) \\& + \int _0^{\tau } e^{- r u} \, H_2(X_u, S_u, Q_u) \, du \\&+ \int _0^{\tau } e^{- r u} \, dS_u \bigg ] \le \ V_2(x, s, q) \end{aligned}$$
(67)

hold, for any stopping time \(\tau\) and all \((x, s, q) \in E\).

We now prove the fact that the boundary \(a^*(s, q)\) specified above is optimal. By virtue of the fact that the function \(V_2(x, s, q)\) from the right-hand side of the expression in (61) associated with the boundary \(a^*(s, q)\) satisfies the equation of (27) and the right-hand condition of (28), and taking into account the structure of \(\tau ^*_2\) in (62), it follows from the expression in (63) that the equalities:

$$\begin{aligned}E_{x, s, q} \bigg [ \int _0^{\tau ^*_2 \wedge \varkappa _n} e^{- r u} \, H_2(X_u, S_u, Q_u) \, du &+ \int _0^{\tau ^*_2 \wedge \varkappa _n} e^{- r u} \, dS_u \bigg ] = E_{x, s, q} \bigg [ e^{- r (\tau ^*_2 \wedge \varkappa _n)} \, V_2(X_{\tau ^*_2 \wedge \varkappa _n}, S_{\tau ^*_2 \wedge \varkappa _n}, Q_{\tau ^*_2 \wedge \varkappa _n}) \\&+ \int _0^{\tau ^*_2 \wedge \varkappa _n} e^{- r u} \, H_2(X_u, S_u, Q_u) \, du \\&+ \int _0^{\tau ^*_2 \wedge \varkappa _n} e^{- r u} \, dS_u \bigg ] = V_2(x, s, q) + E_{x, s, q} \big [ M^2_{\tau ^*_2 \wedge \varkappa _n} \big ] = V_2(x, s, q) \end{aligned}$$
(68)

hold, for all \((x, s, q) \in E\) and each \(n \in \mathbb N\). Observe that, by virtue of the arguments from [Shepp and Shiryaev (1993); pages 635–636], the property:

$$\begin{aligned} E_{x, s, q} \Big [ \sup _{t \ge 0} e^{- r (\tau ^*_2 \wedge t)} \, G_2(X_{\tau ^*_2 \wedge t}, S_{\tau ^*_2 \wedge t}, Q_{\tau ^*_2 \wedge t}) \Big ] \le (1 + L_2) \, E_{x, s, q} \Big [ \sup _{t \ge 0} S_{\tau ^*_2 \wedge t} \Big ] < \infty \end{aligned}$$
(69)

holds, where the function \(G_2(x, s, q)\) is defined in (9), for all \((x, s, q) \in E\). We also note that the variable \(e^{- r \tau ^*_2} V_2(X_{\tau ^*_2}, S_{\tau ^*_2}, Q_{\tau ^*_2})\) is finite on the event \(\{ \tau ^*_2 = \infty \}\), and recall from the arguments of Beibel and Lerche (1997) and Pedersen (2000) that the property \(P_{x, s, q}(\tau ^*_2 < \infty ) = 1\) holds, for all \((x, s, q) \in E\). Hence, letting n go to infinity and using the right-hand condition of (28), we can apply the Lebesgue dominated convergence theorem to the expression of (68) to obtain the equality:

$$\begin{aligned}&E_{x, s, q} \bigg [ \int _0^{\tau ^*_2} e^{- r u} \, H_2(X_u, S_u, Q_u) \, du + \int _0^{\tau ^*_2} e^{- r u} \, dS_u \bigg ] = V_2(x, s, q) \end{aligned}$$
(70)

for all \((x, s, q) \in E\), which together with the inequalities in (67) directly implies the desired assertion. We finally recall from part (iii) of Subsection 2.3 above, which follows from standard comparison arguments applied to the value functions of the appropriate optimal stopping problems, that the inequality \(a^*(s, q) \le g^*(s)\) should hold for the optimal stopping boundary, for \(0< q< h^*(q) < s\). Thus, taking into account the fact that \(a^*(s, q) = g^*(s) \equiv \lambda _* s\) with \(0< \lambda _* < 1\) from (82), for \(q < s \le h^*(q)\), we may conclude that the inequality \(a^*(s, q) \le g^*(s)\) holds, for all \(0< q < s\), which completes the verification. \(\square\)

Corollary 4.2

The optimal method of exercising the perpetual real double lookback call-put and put-call options on the maxima and minima with the values in (1) and (2), which are equivalent to the ones of (6) and (7), is as follows. After the outer options with the equivalent value functions from (21) and (22) are exercised at the first exit times \(\tau ^*_i\), for \(i = 1, 2\), from (62) with the boundaries \(b^*(s, q)\) and \(a^*(s, q)\) specified in Theorem 4.1 above, the inner options should be exercised at the first hitting times:

$$\begin{aligned} \zeta ^*_1 = \inf \big \{ t \ge \tau ^*_1 \; \big | \; X_t \le g^*(S_t) \big \} \quad \text {and} \quad \zeta ^*_2 = \inf \big \{ t \ge \tau ^*_2 \; \big | \; X_t \ge h^*(Q_t) \big \} \end{aligned}$$
(71)

with the boundaries \(g^*(s)\) and \(h^*(q)\) specified in Corollary 5.1 below, respectively.
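The two-stage exercise rule of (62) and (71) can be illustrated by a simple Monte Carlo sketch of the put-call case. Here the true boundaries are replaced by hypothetical constant ratios (lam standing in for the lower boundary \(a^*(s, q)\), which in general solves (59) and coincides with \(\lambda_* s\) only for \(s \le h^*(q)\), and nu for \(h^*(q) = \nu_* q\)); the parameter values are illustrative:

```python
import math, random

# Illustrative parameters and hypothetical boundary ratios lam < 1 < nu
mu, r, sigma = 0.02, 0.05, 0.3
lam, nu = 0.8, 1.5
dt, steps = 1e-3, 200_000
random.seed(1)

x = s = q = 1.0  # start on the diagonal, x = s = q
tau = zeta = None
for k in range(steps):
    # One Euler step of the geometric Brownian motion in (3)
    x *= math.exp((mu - 0.5 * sigma**2) * dt + sigma * math.sqrt(dt) * random.gauss(0, 1))
    s, q = max(s, x), min(q, x)
    if tau is None and x <= lam * s:
        tau = k * dt        # outer option exercised: X falls to the lower boundary
    elif tau is not None and x >= nu * q:
        zeta = k * dt       # inner option exercised: X rises to nu * Q
        break

assert tau is not None          # a 20% drawdown from the maximum occurs on this horizon
assert zeta is None or tau <= zeta   # exercise order tau <= zeta, as required in (2)
```

The sketch only demonstrates the ordering \(\tau \le \zeta\) of the exercise times along a simulated path; computing the option values themselves would require the actual boundaries from (57) and (59).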

Remark 4.3

Note that, in the cases in which one starts from scratch, that is, when \(x = s = q\) holds, the subsequent exercise of the outer and inner perpetual real lookback put and call options with the value functions in (21) and (22) may actually follow the subsequent exercise of the standard perpetual real lookback put and call options with the value functions in (72). More precisely, when the process X starts at some \(x = s = q\), by virtue of the facts that the inequalities \(0< \lambda _* < 1\) and \(\nu _* > 1\) hold for the unique solutions of the arithmetic equations in (82) and (84) below, the outer options should be exercised when the process X reaches an upper boundary \(b^*(S, Q) [\ge h^*(Q)]\) or a lower boundary \(a^*(S, Q) [\le g^*(S)]\), respectively. However, in the cases in which the process X starts at some \(x< g^*(s) < s\) or \(x> h^*(q) > q\), the outer perpetual real lookback call-put and put-call options on the maxima and minima should be exercised only at the times at which the underlying asset price process reaches the upper boundary \([h^*(Q)<] b^*(S, Q) [< g^*(S)]\) or the lower boundary \([h^*(Q)<] a^*(S, Q) [< g^*(S)]\), respectively, and then the appropriate inner options should be exercised at the same time.