A Note on Costs Minimization with Stochastic Target Constraints

We study the minimization of expected costs under a stochastic constraint at the terminal time. The first and main result says that for costs of power type, the value function is the minimal positive solution of a second-order semi-linear ordinary differential equation (ODE). Moreover, we construct the optimal control. In a second example we show that the case of exponential costs leads to a trivial optimal control.

Consider a complete probability space (Ω, F, P) together with a standard one-dimensional Brownian motion W_t, t ≥ 0, and the Brownian filtration F^W_t := σ{W_u : u ≤ t} completed by the null sets.
For any (T, x) ∈ (0, ∞) × R and a progressively measurable process u = {u_t}_{t=0}^T which satisfies the integrability condition ∫_0^T |u_t| dt < ∞ a.s., consider the controlled state X^{x,u}_T := x + ∫_0^T u_t dt. For any (T, x, c) ∈ (0, ∞) × R² let U(T, x, c) be the set of all progressively measurable processes u = {u_t}_{t=0}^T (with the above integrability condition) which satisfy X^{x,u}_T ≥ I_{W_T > c} a.s. As usual, we set I_Q = 1 if the event Q occurs and I_Q = 0 if not.
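The admissibility set can be sanity-checked numerically. The sketch below assumes linear controlled dynamics X^{x,u}_T = x + ∫_0^T u_t dt (a reconstruction consistent with the averaging argument in Remark 1.4) and hypothetical parameter values; it verifies that the constant control u_t ≡ (1 − x)/T is always admissible, while the zero control fails on the positive-probability event {W_T > c} whenever x < 1.

```python
import numpy as np

rng = np.random.default_rng(0)
T, x, c = 1.0, 0.3, 0.5                      # hypothetical parameters
W_T = rng.normal(0.0, np.sqrt(T), size=100_000)
target = (W_T > c).astype(float)             # the terminal target I_{W_T > c}

# Constant control u_t = (1 - x)/T drives X_T = x + T*(1 - x)/T = 1,
# so the constraint X_T >= I_{W_T > c} holds on every path.
X_T_const = x + T * (1.0 - x) / T
print((X_T_const >= target).all())

# The zero control leaves X_T = x < 1, which violates the constraint
# exactly on {W_T > c}; the fraction below estimates P(W_T <= c).
print((x >= target).mean())
```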
For a given p > 1 introduce the stochastic control problem

V(T, x, c) := inf_{u ∈ U(T,x,c)} E[∫_0^T |u_t|^p dt].    (1.1)
The following Proposition will be crucial for deriving the main results.
Proof. The statement is obvious for x = 1. Thus assume that x < 1. We use the scaling property of Brownian motion. Define the Brownian motion B_t := W_{tT}/√T, t ≥ 0, and let F^B_t := σ{B_u : u ≤ t} be the filtration generated by B, completed with the null sets. Clearly, F^B_t = F^W_{tT}, t ≥ 0. Let Ũ be the set of all stochastic processes ũ = {ũ_t}_{t=0}^1 which are nonnegative, progressively measurable with respect to F^B and satisfy the corresponding constraint for the scaled problem. We notice that there is a bijection U^+(T, x, c) ↔ Ũ which is given by the above scaling of time and space.

Let Φ(z) := (1/√(2π)) ∫_{−∞}^z e^{−y²/2} dy be the cumulative distribution function of the standard normal distribution. For any T > 0 and c ∈ R consider the martingale {M^{T,c}_t}_{t=0}^T given by

M^{T,c}_t := Φ((W_t − c)/√(T − t)) = P(W_T > c | F^W_t), t < T.    (1.2)

Define the function g : (0, 1) → R_+ via the value function of the scaled problem.

Now we are ready to state the main results, which will be proved in Section 2.

(I) The function g is the minimal positive solution of the ODE (1.4) with the boundary conditions (1.5). Moreover, the following minimality holds: if ĝ : (0, 1) → R_+ is another solution to (1.4) (so that ĝ(y_0) ≠ g(y_0) for some y_0) which satisfies the boundary conditions, then ĝ > g on (0, 1).

(II) The optimal control û is given explicitly in feedback form in terms of g and the martingale M^{T,c}; namely, for the optimal control the corresponding state dynamics satisfy the ODE associated with (1.4).

(III) Let T > 0 and c ∈ R. Then the pair (Y, Z) constructed from g and M^{T,c} is the minimal solution of the backward stochastic differential equation (BSDE) (1.7) with the singular terminal condition Y_T = ∞ I_{W_T > c}. This terminal condition means that lim_{t→T} Y_t = ∞ I_{W_T > c} a.s., where we use the convention ∞ · 0 := 0.

Remark 1.4. It is easy to see that the optimal control is unique. Indeed, assume by contradiction that u, ũ ∈ U(T, x, c) are two optimal controls with dt ⊗ P(u ≠ ũ) > 0. Then the process (u + ũ)/2 belongs to U(T, x, c), and from the strict convexity of the function z → |z|^p its cost is strictly smaller than the common cost of u and ũ, which is a contradiction.
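Writing M^{T,c}_t in the closed form Φ((W_t − c)/√(T − t)), i.e. the conditional probability P(W_T > c | F^W_t) — a reconstruction consistent with the quadratic-variation identity d⟨M⟩_t = 2h(M_t)/(1 − t) dt used in Section 2 — the martingale property forces E[M_t] = M_0 = Φ(−c/√T) for every t < T. A minimal Monte Carlo sketch with hypothetical parameter values:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
T, c, t = 1.0, 0.2, 0.4                      # hypothetical parameters, 0 < t < T
n = 200_000

# Sample W_t and evaluate the candidate martingale M_t = Phi((W_t - c)/sqrt(T - t)).
W_t = rng.normal(0.0, np.sqrt(t), size=n)
M_t = norm.cdf((W_t - c) / np.sqrt(T - t))

# Martingale property: E[M_t] should match M_0 = Phi(-c/sqrt(T)) = P(W_T > c).
print(M_t.mean(), norm.cdf(-c / np.sqrt(T)))
```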

Remark 1.5.
A natural question is whether there exists a unique positive, nonincreasing solution to the ODE (1.4) with the boundary conditions (1.5). Due to the fact that h takes the value 0 at the end points {0, 1}, the uniqueness seems far from obvious, and we leave it for future research.

Proof of the Main Results
We start with the following regularity result.
It remains to prove concavity. Fix a_1 < a < a_2. Let us show that g(a) ≥ ((a_2 − a) g(a_1) + (a − a_1) g(a_2))/(a_2 − a_1). Consider the martingale M := M^{1,c} given by (1.2) and observe that M_0 = a. Define the stopping time τ := inf{t : M_t ∉ (a_1, a_2)}. Clearly, τ < 1 a.s., and so from the equality E[M_τ] = M_0 we conclude that P(M_τ = a_2) = (a − a_1)/(a_2 − a_1). Next, let D = ∫_0^τ u_t dt. From the Hölder inequality, D ≤ τ^{(p−1)/p} (∫_0^τ |u_t|^p dt)^{1/p}. Thus we obtain the estimates (2.1)–(2.4). By combining (2.1)–(2.4), the fact that g ≤ 1 and the simple inequality z^p/y^{p−1} + w^p/(1 − y)^{p−1} ≥ (z + w)^p for z, w ≥ 0 and y ∈ (0, 1), we arrive at the required lower bound up to an arbitrarily small error ε. Since ε > 0 was arbitrary, this completes the proof.
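The Hölder step bounding D = ∫_0^τ u_t dt can be illustrated on a discretized path: the discrete bound Σ u_i Δt ≤ τ^{(p−1)/p} (Σ u_i^p Δt)^{1/p} is exactly the finite-sum Hölder inequality with conjugate exponents p and p/(p − 1). A minimal numerical sketch with an arbitrary sample path and parameters:

```python
import numpy as np

rng = np.random.default_rng(2)
p = 3.0
q = p / (p - 1.0)                 # conjugate exponent
tau = 0.7
n = 1_000
dt = tau / n
u = rng.exponential(scale=1.0, size=n)   # arbitrary nonnegative control path

# D = \int_0^tau u_t dt  vs  tau^{1/q} * (\int_0^tau u_t^p dt)^{1/p}
lhs = (u * dt).sum()
rhs = tau ** (1.0 / q) * ((u ** p * dt).sum()) ** (1.0 / p)
print(lhs <= rhs)
```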
The proof of the main results will be based on the theory developed in [8]. We start with preparations. For any (T, c) ∈ (0, ∞) × R consider the auxiliary stochastic control problem in which the infimum is taken over all progressively measurable processes u = {u_t}_{t=0}^T and the terminal value is ξ := ∞ I_{W_T > c}; as before, we use the convention ∞ · 0 := 0.
Using the same arguments as in Lemma 1.1, we obtain the corresponding scaling relation for the auxiliary problem. Thus, from Lemma 1.1 we conclude the analogous identity. This brings us to the following corollary.
Corollary 2.2. Let T > 0 and c ∈ R. There exists a progressively measurable process {Z_t}_{0 ≤ t < T} such that the pair (Y, Z) is the minimal supersolution of the BSDE (1.7).

Remark 2.3. A priori we do not know that g is continuously differentiable, and so we cannot apply the Itô formula and find Z. In the proof of Theorem 1.3 we will show that g satisfies the ODE (1.4), and then we will find Z.
Now, we are ready to prove Theorem 1.3.
First step: Proving that the minimal supersolution is a solution.
Fix T > 0 and c ∈ R. Let ξ := ∞ I_{W_T > c}. Let us show that the supersolution (Y, Z) from Corollary 2.2 is actually a solution. To that end, we need to establish the inequality lim sup_{t→T} Y_t ≤ ξ. We wish to apply Theorem 4 in [9]. There is a technical problem: the indicator function is not continuous, and so condition (4) in [9] does not hold. Still, this issue can be resolved by the following density argument. Define a sequence of functions φ^(n) : R → R ∪ {∞}, n ∈ N, which are continuous approximations of the singular terminal value from above. Observe that for any n, φ^(n) satisfies condition (4) in [9]. Hence, from Theorem 4 in [9] there exists a pair (Y^(n), Z^(n)) which satisfies the BSDE (1.7) with the terminal constraint Y^(n)_T = φ^(n)(W_T). Since φ^(n)(W_T) ≥ ξ, from the minimality property of (Y, Z) we obtain that Y_t ≤ Y^(n)_t for any n, and letting n → ∞ yields the required inequality.

Second step: Proving statement (I). Since g is concave, the process {g(M_t)}_{t=0}^1 admits a decomposition g(M_t) = g(M_0) + N_t − A_t, where N = {N_t}_{t=0}^1 is a martingale and A = {A_t}_{t=0}^1 is a continuous, nondecreasing process with A_0 = 0.
Recall the minimal supersolution from Corollary 2.2. From the product rule and (1.7) we obtain the dynamics of the corresponding product, and hence the desired identity. Next, observe that M_0 = a_2 and define the function f : [a_3, (1+a)/2] → R by a double integral of the form ∫∫ · dα dβ.
We notice that d⟨M⟩_t/dt = 2h(M_t)/(1 − t), and so from the Itô formula and (2.7) we obtain the corresponding identity for g(M_t) − f(M_t). This together with (2.8) yields that g(y) − f(y) is a linear function on the interval [M_0, (1+a)/2]. In particular, g inherits the regularity of f on this interval.
Let us argue the strict inequality. Indeed, assume by contradiction that there is y_0 ∈ (0, 1) for which ĝ(y_0) = g(y_0); then clearly y_0 is a minimum point of the function ĝ − g. Hence, ĝ′(y_0) = g′(y_0). Since h(y) is bounded away from zero when y is bounded away from the end points {0, 1}, from standard uniqueness for initial value problems we conclude that ĝ = g on the interval (0, 1). This is a contradiction, and the proof of (I) is completed.
Third step: Completion of the proof. In this step we complete the proof of statements (II)-(III) in Theorem 1.3. Since g is continuously differentiable (it satisfies (1.4)), from the Itô formula, Corollary 2.2 and the first step of the proof we obtain statement (III).
It remains to prove statement (II). Let (T, x, c) ∈ (0, ∞) × [0, 1] × R and let ξ := ∞ I_{W_T > c}. From Theorem 3 in [8] and Corollary 2.2 it follows that the optimal control for the auxiliary optimization problem is given in feedback form. From (2.5)-(2.6) we then obtain the optimal control for the optimization problem (1.1), as required.

The Exponential Case
Let λ > 0 and consider the optimization problem with exponential costs. Namely, we consider a stochastic target problem with the cost function z → e^{λ|z|} − 1 and the same stochastic target as in (1.1).
The following result says that for any (T, x, c) the optimal control targets 1 at a constant speed.
The statement is obvious for x ≥ 1. Hence, without loss of generality we assume that x < 1. The cost function is strictly convex, and so, by using the same arguments as in Remark 1.4, we obtain that the optimal control is unique. Thus, in order to prove the theorem it is sufficient to show that the value function satisfies the inequality (3.1). Applying the standard technique of Lagrange multipliers we obtain a dual representation. Observe that for a given α > 0 and a martingale M, the minimum of the above expression is attained by taking C_t = ln(αM_t/λ)/λ. Clearly, for a given z_1 ∈ R and z_2 > 0 we have max_{α>0} [αz_1 − z_2 α ln α] = z_2 e^{z_1/z_2 − 1}. We conclude the corresponding dual bound, and (3.1) follows from the following lemma. We conclude that in order to prove the statement, it is sufficient to find a strictly positive martingale which satisfies (3.2)-(3.3).
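The maximum identity used above, max_{α>0} [αz_1 − z_2 α ln α] = z_2 e^{z_1/z_2 − 1}, follows from the first-order condition ln α = z_1/z_2 − 1 and can be checked by a grid search (arbitrary test values z_1, z_2):

```python
import numpy as np

def objective(alpha, z1, z2):
    # The dual objective alpha*z1 - z2*alpha*ln(alpha), alpha > 0.
    return alpha * z1 - z2 * alpha * np.log(alpha)

z1, z2 = 0.7, 1.5                          # arbitrary test values, z2 > 0
alphas = np.linspace(1e-4, 10.0, 2_000_000)
grid_max = objective(alphas, z1, z2).max()
closed_form = z2 * np.exp(z1 / z2 - 1.0)   # claimed maximum value

print(grid_max, closed_form)
```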
To this end, consider a strictly positive martingale of the form M_t = exp(∫_0^t ζ_s dW_s − ½ ∫_0^t ζ_s² ds), where {ζ_t}_{t=0}^1 is a continuous deterministic function. There exists a probability measure Q such that dQ/dP|_{F_t} = M_t. Moreover, from the Girsanov theorem the process W̃_t := W_t − ∫_0^t ζ_s ds is a Brownian motion under Q.