Characterization of Fully Coupled FBSDE in Terms of Portfolio Optimization

We provide a verification and characterization result for optimal maximal sub-solutions of BSDEs in terms of fully coupled forward backward stochastic differential equations. We illustrate its application to utility optimization with random endowment under probability and discounting uncertainty. We show with explicit examples how to quantify the cost of incompleteness under utility indifference pricing, as well as how to find optimal solutions for recursive utilities.


Introduction
Our motivation is the study of the following classical portfolio optimization problem: on a probability space carrying a Brownian filtration, we consider an agent with a random endowment - or contingent claim - F delivered at time T. Starting with an initial wealth x, she additionally has the opportunity to invest with a strategy π̂ in a financial market with n stocks Ŝ = (S^1, . . . , S^n), resulting in the corresponding wealth process

X^π̂ = x + ∫ π̂ · dŜ/Ŝ,

where dŜ/Ŝ := (dS^1/S^1, . . . , dS^n/S^n). She intends to choose a strategy π̂* so as to optimize her utility, in the sense that

U(F + X^π̂*_T) ≥ U(F + X^π̂_T) for all admissible strategies π̂.
Hereby, F ↦ U(F) is a general utility functional - quasi-concave and increasing - mapping random variables to [−∞, ∞). For instance, U(Y) = u^{−1}(E[u(Y)]), where u : R → R is an increasing concave function, corresponding to the certainty equivalent of the classical expected utility à la von Neumann and Morgenstern [40] and Savage [37]. It may however be a more general concave and increasing operator given by non-linear expectations - solutions of concave backward stochastic differential equations - introduced by Peng [31]. In that setting, the utility U(F) is given by the value Y_0 at time 0 of the solution of the concave backward stochastic differential equation

Y_t = F − ∫_t^T g(Y_s, Z_s) ds − ∫_t^T Z_s · dW_s, 0 ≤ t ≤ T,

for a jointly convex Lipschitz generator g : R × R^d → R, where W is a d-dimensional Brownian motion. This functional is concave and increasing. Recently, Drapeau et al. [9] introduced the concept of minimal super-solutions of convex backward stochastic differential equations - in this paper, maximal sub-solutions of concave backward stochastic differential equations - to extend the existence domain of classical backward stochastic differential equations to generators of arbitrary growth. In this context, the utility U(F) is given by the value Y_0 of the maximal sub-solution of the concave backward stochastic differential equation

Y_s + ∫_s^t g_u(Y_u, Z_u) du + ∫_s^t Z_u · dW_u ≤ Y_t, 0 ≤ s ≤ t ≤ T, and Y_T ≤ F. (1.1)

This functional F ↦ U(F) is also concave and increasing and therefore a utility functional. Furthermore, according to Drapeau et al. [10], it admits a dual representation in which g* is the convex conjugate of the generator g, D^b = exp(−∫ b ds) is a discounting factor, and M^c := exp(−∫ c · dW − ½∫ |c|^2 ds) is a probability density. The interpretation of this utility functional is that it assesses probability uncertainty, as for monetary risk measures, see [16], as well as discounting uncertainty, as for sub-cash additive functionals, see [12]. Assuming 1 ≤ n ≤ d and taking the utility U defined as the value at 0 of the maximal sub-solution of (1.1), we want to find a strategy π̂* maximizing U(F + X^π̂_T).
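As a purely numerical illustration of the certainty equivalent U(Y) = u^{−1}(E[u(Y)]), the following minimal Monte Carlo sketch uses a Gaussian toy payoff and the exponential utility u(x) = −exp(−x); both choices are hypothetical and only serve to make the formula concrete.

```python
import numpy as np

def certainty_equivalent(samples, u, u_inv):
    """Certainty equivalent U(Y) = u^{-1}(E[u(Y)]) estimated by Monte Carlo."""
    return u_inv(np.mean(u(samples)))

rng = np.random.default_rng(0)
Y = rng.normal(loc=1.0, scale=0.5, size=200_000)  # toy payoff Y ~ N(1, 0.25)

# exponential utility u(x) = -exp(-x), with inverse u^{-1}(v) = -ln(-v)
u = lambda x: -np.exp(-x)
u_inv = lambda v: -np.log(-v)

ce = certainty_equivalent(Y, u, u_inv)
# for a Gaussian payoff and exponential utility, U(Y) = mean - variance/2 = 0.875
print(ce)
```

The estimate recovers mean minus half the variance up to Monte Carlo error, which is the well-known closed form for this utility.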
Given the corresponding maximal sub-solution (Y, Z) of (1.1) such that Y_0 = U(F + X^π̂_T), the variable change Ȳ := Y − X^π̂ and Z̄ := Z − π, where π = (π̂, 0), leads to an equivalent formulation in terms of a forward backward stochastic system, for some bounded market price of risk θ̂. Transferring the terminal dependence on the forward part to the generator allows us to state the main results of this paper, namely a verification and a characterization of an optimal strategy π̂* in terms of a fully coupled forward backward stochastic differential equation whose coefficients involve the gradient terms

∂_y g(X + Ȳ, Z̄ + π(X + Ȳ, Z̄, V)) and ∂_z g(X + Ȳ, Z̄ + π(X + Ȳ, Z̄, V)),

where
• W = (Ŵ, W̃) is a d-dimensional Brownian motion, whereby Ŵ and W̃ denote the first n and last d − n components, respectively;
• g is a convex generator;
• F is a bounded terminal condition;
• π(y, z, v) := (π̂(y, z, v), 0) is a point-wise solution to ∂ẑg(y, z + π(y, z, v)) = v + θ̂;
and the optimal strategy is given by π̂* = π̂(X + Ȳ, Z̄, V). Maximal sub-solutions of backward stochastic differential equations, introduced and studied by Drapeau et al. [9], Heyne et al. [19], can be understood as an extension of backward stochastic differential equations in which equality is dropped in favor of an inequality, allowing weaker conditions on the generator g. This makes it possible to obtain existence, uniqueness and comparison theorems without growth assumptions on the generator, as well as under weaker integrability conditions on the forward process and the terminal condition. Stressing the relation between maximal sub-solutions and solutions of backward stochastic differential equations, maximal sub-solutions can be characterized as maximal viscosity sub-solutions in the Markovian case, see [8]. They also turn out to be particularly adequate for optimization problems, in terms of convexity and duality among others, see [10, 20], and apply to a larger class of generators than backward stochastic differential equations do.
Literature Discussion Utility optimization problems in continuous time are popular topics in finance. Karatzas et al. [24] considered the optimization of the expected discounted utility of both consumption and terminal wealth in a complete market, where they obtained the optimal consumption and wealth processes explicitly. Using duality methods, Cvitanić et al. [6] characterized the problem of utility maximization from terminal wealth of an agent with a random endowment process in a semi-martingale model for incomplete markets. Backward stochastic differential equations, introduced in the seminal paper by Pardoux and Peng [30] in the Lipschitz case and by Kobylanski [26] in the quadratic one, have proved to be central in stating and solving problems in finance, see El Karoui et al. [13]. Duffie and Epstein [11] defined the concept of recursive utility by means of backward stochastic differential equations, generalized in Chen and Epstein [4] and Quenez and Lazrak [33]. The characterization of utility optimization in that context has been treated in El Karoui et al. [14] in terms of a forward backward system of stochastic differential equations. Using a martingale argument, Hu et al. [23] characterized utility maximization by means of quadratic backward stochastic differential equations for small traders in incomplete financial markets with closed constraints. Following this line with a general utility function, Horst et al. [22] characterized the optimal strategy via a fully coupled forward backward stochastic differential equation. With a similar characterization, Santacroce and Trivellato [36] considered the problem with a terminal random liability when the underlying asset price process is a continuous semi-martingale. Bordigoni et al. [1] studied a stochastic control problem arising in utility maximization under probability model uncertainty given by the relative entropy, see also Schied [39] and Matoussi et al. [29].
Backward stochastic differential equations can themselves be viewed as generalized utility operators - the so-called g-expectations introduced by Peng [31] - which are related to risk measures, see Gianin [17], Peng [32], Gianin [18]. As in the classical case, maximal sub-solutions of concave backward stochastic differential equations are nonlinear expectations as well. In this respect, Heyne et al. [21] consider utility optimization in that framework, providing existence of optimal strategies using duality methods as well as existence of gradients. However, they do not provide a characterization of the optimal solution, to which this work is dedicated.
Discussion of the results and outline of the paper The existence and uniqueness of maximal sub-solutions in [8, 9, 19] depend foremost on the integrability of the terminal condition F, on admissibility conditions on the local martingale part, and on the properties of the generator - positive, lower semi-continuous, convex in z and monotone in y, or jointly convex in (y, z). In the present context though, the generator can no longer be assumed positive, nor even uniformly bounded from below by a linear function. We therefore adapt the admissibility conditions to the optimization problem we are looking at. Accordingly, we provide existence and uniqueness of maximal sub-solutions under these new admissibility conditions in Section 2. We further present there the formulation of the utility maximization problem and the transformation leading to the forward backward system (1.2). With this result at hand, we address in Section 3 the characterization, in terms of optimization, of maximal sub-solutions of the forward backward stochastic differential equation. Our first main result, Theorem 3.1, provides a verification argument identifying solutions of the coupled forward backward stochastic differential equation with optimal strategies. The resulting system exhibits an auxiliary backward stochastic differential equation specifying the dynamics of the gradient. The second main result, Theorem 3.6, provides a characterization of optimal strategies in terms of solutions of a coupled forward backward stochastic differential equation; there again, an auxiliary backward stochastic differential equation is necessary in order to specify the gradient of the solution. These results extend those of Horst et al. [22], stated for utility maximization à la Savage [37]. We illustrate the results in Section 4 by considering utility optimization in a financial context with explicit solutions in given examples. These explicit solutions allow us to address, for instance, the cost of incompleteness in a financial market.
Finally, we address how the result can be applied when considering optimization for recursive utilities à la Kreps and Porteus [28] or, in the present continuous-time case, à la Duffie and Epstein [11]. The proof of existence and uniqueness of maximal sub-solutions, which uses the same techniques as [9], is postponed to Appendix A.

Notations
Let T > 0 be a fixed time horizon and (Ω, F, (F_t)_{t∈[0,T]}, P) be a filtered probability space, where the filtration (F_t) is generated by a d-dimensional Brownian motion W and fulfills the usual conditions. We further assume that F = F_T. Throughout, we split this d-dimensional Brownian motion into two parts W = (Ŵ, W̃), with Ŵ = (W^1, . . . , W^n) and W̃ = (W^{n+1}, . . . , W^d), where 1 ≤ n ≤ d. Every inequality between random variables is to be understood in the P-almost sure sense. Furthermore, as in the introduction, to keep the notational burden as minimal as possible, we do not write the indices t and ω for the integrands unless necessary, and we generically use the shorthand ∫ · for the process t ↦ ∫_0^t ·. We say that a càdlàg process X is integrable if X_t is integrable for every 0 ≤ t ≤ T. We use the notations
• for x and y in R^d, xy := (x^1 y^1, . . . , x^d y^d) and x/y := (x^1/y^1, . . . , x^d/y^d);
• for x ∈ R^m, y ∈ R^n and A ∈ R^{m×n}
• L^0 and L^p, the sets of F_T-measurable and of p-integrable random variables X, respectively, identified in the P-almost sure sense, 1 ≤ p ≤ ∞.
• S the set of càdlàg adapted processes.
• L, the set of R^d-valued predictable processes Z such that ∫ Z · dW is a local martingale.
• H, the set of local martingales ∫ Z · dW for Z ∈ L.
• L^p, the set of those Z in L such that E[(∫_0^T |Z_s|^2 ds)^{p/2}] < ∞.
• H^p, the set of martingales ∫ Z · dW for Z ∈ L^p.
• bmo, the set of those Z in L such that ∫ Z · dW is a bounded mean oscillation martingale. That is, ess sup_τ E[∫_τ^T |Z_s|^2 ds | F_τ] is bounded uniformly over all stopping times τ ≤ T.
• BMO, the set of those ∫ Z · dW such that Z is in bmo.
• D the set of those uniformly bounded b ∈ L.
• M^c, the stochastic exponential of c, that is, M^c = exp(−∫ c · dW − ½∫ |c|^2 dt).
• D^b, the stochastic discounting of b, that is, D^b = exp(−∫ b ds).
• For c in bmo, we denote by P^c the measure equivalent to P with density dP^c/dP = M^c_T, under which W^c := W + ∫ c dt is a Brownian motion.
• We generically use the notation x = (x̂, x̃) for the decomposition of a vector in R^d into its first n components and its last d − n ones. We use the same convention for the space L = (L̂, L̃), where Z = (Ẑ, Z̃) ∈ L, and likewise for H = (Ĥ, H̃), H^p = (Ĥ^p, H̃^p), bmo = (b̂mo, b̃mo) and BMO = (B̂MO, B̃MO).
In the case where n = d, every object carrying a ˜ disappears - or equivalently is set to 0 - and every object carrying a ˆ loses its hat.
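The discounting factor D^b and the stochastic exponential M^c above can be made concrete by simulation; the following minimal sketch uses constant hypothetical values for b and c and checks the defining martingale property E[M^c_T] = 1 for bounded c.

```python
import numpy as np

rng = np.random.default_rng(1)
T, n_steps, n_paths, d = 1.0, 50, 50_000, 2
dt = T / n_steps
c = np.array([0.3, -0.2])   # constant density process c (hypothetical values)
b = 0.05                    # constant discount rate b (hypothetical value)

dW = rng.normal(scale=np.sqrt(dt), size=(n_paths, n_steps, d))

# M^c_T = exp(-int c . dW - 1/2 int |c|^2 ds), the stochastic exponential of c
stoch_int = (dW * c).sum(axis=(1, 2))            # int_0^T c . dW on each path
M_T = np.exp(-stoch_int - 0.5 * c @ c * T)

# D^b_T = exp(-int_0^T b ds), here deterministic since b is constant
D_T = np.exp(-b * T)

print(M_T.mean())   # close to 1: M^c is a true martingale for bounded c
print(D_T)
```

For bounded c the stochastic exponential is a true martingale with unit expectation, which the simulation reproduces up to Monte Carlo error.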
For a convex function f : R^k → ]−∞, ∞], we denote by ∂f(x*) its sub-gradient at x*, that is, the set of those y such that f(x) ≥ f(x*) + y · (x − x*) for every x. For any y in ∂f(x*), it follows from classical convex analysis, see [34], that f(x*) + f*(y) = x* · y, where f* denotes the convex conjugate of f. If the sub-gradient is a singleton - as in this paper - it is a gradient, and we simplify the notation to ∂f(x*).

Maximal Sub-Solutions of FBSDEs and Utility
A function g : Ω × [0, T] × R × R^d → R is called a generator if it is jointly measurable and g(y, z) is progressively measurable for any (y, z) ∈ R × R^d. A generator is said to satisfy condition (Std) if
(Std) (y, z) ↦ g(y, z) is lower semi-continuous, convex with non-empty interior of its domain, and admits gradients everywhere on its domain (for every ω and t).
Remark 2.1. Note that if g satisfies the above assumptions then, as a normal integrand, for every (y_0, z_0) in the domain of g and for every t and ω, there exist progressively measurable processes b and c such that

g(y, z) ≥ g(y_0, z_0) + b(y − y_0) + c · (z − z_0) for every y and z,

see Rockafellar and Wets [35, Chapter 14, Theorem 14.46]. These processes b and c are the partial derivatives of g with respect to y and z, respectively. To prevent an overload of notation, we do not mention the dependence on ω and t, that is, g(y, z) := g_t(ω, y, z). Note also that we could work with non-empty sub-gradients, where by means of [35, Theorem 14.56] we could apply a measurable selection theorem, see [18, Corollary 1C], to select measurable gradients within the sub-gradients of g and work with them.
We further denote by P^g the set of pairs (b, c) of progressively measurable processes taking values in the sub-gradients of g. For any terminal condition F in L^0, we call a pair (Y, Z), where Y ∈ S and Z ∈ L, a sub-solution of the backward stochastic differential equation if

Y_s + ∫_s^t g_u(Y_u, Z_u) du + ∫_s^t Z_u · dW_u ≤ Y_t for every 0 ≤ s ≤ t ≤ T, and Y_T ≤ F. (2.1)

The processes Y and Z are called the value and control processes, respectively. Sub-solutions are not unique. Indeed, (Y, Z) is a sub-solution if and only if there exists an adapted càdlàg increasing process K with K_0 = 0 such that

dY_t = g_t(Y_t, Z_t) dt + Z_t · dW_t + dK_t, and Y_T ≤ F.

As mentioned in the introduction, existence and uniqueness of a maximal sub-solution depend foremost on the integrability of the positive part of F, on admissibility conditions on the local martingale part, and on the properties of the generator - positivity, lower semi-continuity, convexity in z and monotonicity in y, or joint convexity in (y, z). In this paper though, we remove the positivity condition on the generator, in view of the optimization problem we are looking at. In order to guarantee the existence and uniqueness of a maximal sub-solution, we need the following admissibility condition.
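To make the sub-solution formulation concrete, here is a small numerical sketch with the hypothetical choices g(y, z) = |z|^2/2 and F = W_T in one dimension: the exponential transform gives the maximal value Y_0 = −ln E[exp(−F)] = −T/2, and a forward Euler recursion along the closed-form control Z ≡ 1 recovers Y_T = F pathwise, i.e. the maximal sub-solution attains equality and the increasing process K vanishes.

```python
import numpy as np

rng = np.random.default_rng(2)
T, n_steps, n_paths = 1.0, 100, 50_000
dt = T / n_steps
dW = rng.normal(scale=np.sqrt(dt), size=(n_paths, n_steps))
W_T = dW.sum(axis=1)
F = W_T                                   # hypothetical terminal condition

# value via the exponential transform: Y_0 = -ln E[exp(-F)]  (= -T/2 here)
Y0 = -np.log(np.mean(np.exp(-F)))

# forward Euler with generator g(y, z) = z**2 / 2 and control Z = 1
Y = np.full(n_paths, -T / 2)              # start at the closed-form value Y_0
for k in range(n_steps):
    Y += 0.5 * dt + dW[:, k]              # dY = g dt + Z dW with g = 1/2, Z = 1

print(Y0)                                 # close to -0.5
print(np.max(np.abs(Y - F)))              # close to 0: equality holds, K = 0
```

A genuine sub-solution would run the same recursion with an extra increasing process subtracted at the terminal time, ending below F; the maximal one is the solution itself.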
Given a terminal condition F, we denote by A(F) the set of admissible sub-solutions (Y, Z) of (2.1). Our first result concerns the existence and uniqueness of a maximal sub-solution to (2.1).

Theorem 2.3. Let g be a generator satisfying (Std) and F a terminal condition. If A(F) is non-empty, then there exists a unique maximal sub-solution of (2.1).
The proof of the theorem relies on the same techniques as in [9] and is postponed to Appendix A.
As mentioned in the introduction, we present in a financial framework how maximal sub-solutions are related to the utility formulation problem. We consider a financial market consisting of one bond with interest rate 0 and an n-dimensional stock price Ŝ = (S^1, . . . , S^n) evolving according to

dŜ/Ŝ = μ̂ dt + σ̂ · dŴ, Ŝ_0 ∈ R^n_{++},

where dŜ/Ŝ := (dS^1/S^1, . . . , dS^n/S^n), μ̂ is an R^n-valued uniformly bounded drift process, and σ̂ is an n × n volatility matrix process. For simplicity, we assume that σ̂ is invertible and such that the market price of risk process θ̂ := σ̂^{−1} · μ̂ is uniformly bounded.
Given an n-dimensional trading strategy η̂, the corresponding wealth process with initial wealth x satisfies

X^η̂ = x + ∫ η̂ · dŜ/Ŝ = x + ∫ η̂σ̂ · dŴ^θ̂,

where Ŵ^θ̂ := Ŵ + ∫ θ̂ dt, which is a Brownian motion under P^θ with θ = (θ̂, 0). To remove the volatility factor, we generically set π̂ = η̂σ̂ and denote by X^π̂ the corresponding wealth process.
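The normalization π̂ = η̂σ̂ can be sanity-checked on a one-dimensional simulated path (all market parameter values below are hypothetical): the two parametrizations of the wealth increments, η̂ · dŜ/Ŝ and π̂ · (θ̂ dt + dŴ), coincide.

```python
import numpy as np

rng = np.random.default_rng(4)
T, n_steps = 1.0, 1000
dt = T / n_steps
mu, sigma, eta, x0 = 0.08, 0.2, 0.5, 1.0   # hypothetical market and strategy
theta = mu / sigma                          # market price of risk
pi = eta * sigma                            # the normalization pi = eta * sigma

dW = rng.normal(scale=np.sqrt(dt), size=n_steps)

# wealth via the stock returns:          dX = eta * (mu dt + sigma dW)
X1 = x0 + np.cumsum(eta * (mu * dt + sigma * dW))
# wealth via the market price of risk:   dX = pi * (theta dt + dW)
X2 = x0 + np.cumsum(pi * (theta * dt + dW))

print(np.max(np.abs(X1 - X2)))   # close to 0: both parametrizations agree
```

The change of variables simply absorbs the volatility into the strategy, so the two wealth paths are identical up to floating-point error.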
Proof. Since c is in bmo, it follows from the reverse Hölder inequality, see [25, Theorem 3]. Since F is bounded and ∫ π̂ · dŴ is a BMO martingale, we only need to show that ∫_0^T π̂ · θ̂ ds is in L^q for all q > 1. From π̂ in b̂mo, it follows that ∫ π̂ dŴ is in H^q for all q > 1. Since θ̂ is uniformly bounded, for any q > 1 it holds that

∫_0^T |π̂ · θ̂| ds ≤ C (∫_0^T |π̂|^2 ds)^{1/2}

for some constant C, and the claim then follows from Doob's inequality. □
Given therefore a terminal condition F in L^∞ and π̂ in b̂mo, according to Theorem 2.3 together with Lemma 2.4, if A(F + X^π̂_T) is non-empty, then there exists a unique maximal sub-solution to the forward backward stochastic differential equation. We denote by U(F + X^π̂_T) the value of this maximal sub-solution at time 0, and convene that U(F + X^π̂_T) = −∞ if A(F + X^π̂_T) is empty. It follows from the same argument as in [8-10] that U is a concave increasing functional and therefore a utility operator.
Remark 2.5. It is known, see [21, Example 2.1], that - under some adequate smoothness conditions - the certainty equivalent U(F) = u^{−1}(E[u(F)]) can be described as the value at 0 of the maximal sub-solution of the backward stochastic differential equation (2.1) with generator

g(y, z) = −u''(y)|z|^2/(2u'(y)),

which is a positive jointly convex generator in many of the classical cases. For instance, for u(x) = −exp(−x), g(y, z) = |z|^2/2, and for u(x) = x^r with r ∈ (0, 1), g(y, z) = (1 − r)|z|^2/(2y).
Before we proceed to the characterization of optimal strategies, let us point to a simple transformation that underlies the following section. For (Y, Z) a sub-solution in A(F + X^π̂_T), the variable change Ȳ := Y − X^π̂, Z̄ := Z − π, where π = (π̂, 0), leads to the following system of forward backward stochastic differential equations

X = x + ∫ π̂ · dŴ^θ̂,
Ȳ_s + ∫_s^t (g_u(X_u + Ȳ_u, Z̄_u + π_u) − π̂_u · θ̂_u) du + ∫_s^t Z̄_u · dW_u ≤ Ȳ_t, Ȳ_T ≤ F,

where θ = (θ̂, 0). In the following, we consistently use the notation Ȳ = Y − X and Z̄ = Z − π, where (Y, Z) is a sub-solution of the utility problem.
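The generator formula behind Remark 2.5 (stated explicitly in Remark 3.5 below) is g(y, z) = −u''(y)|z|^2/(2u'(y)); the two classical instances mentioned here can be checked symbolically, writing the exponential case as u(x) = −exp(−x):

```python
import sympy as sp

y, z, r = sp.symbols('y z r', positive=True)

def gen(u):
    """Certainty-equivalent generator g(y, z) = -u''(y) z^2 / (2 u'(y))."""
    return sp.simplify(-sp.diff(u, y, 2) * z**2 / (2 * sp.diff(u, y)))

g_exp = gen(-sp.exp(-y))   # exponential utility u(y) = -exp(-y)
g_pow = gen(y**r)          # power utility u(y) = y^r with 0 < r < 1

print(g_exp)                                           # z**2/2
print(sp.simplify(g_pow - (1 - r) * z**2 / (2 * y)))   # 0
```

Both generators are positive and jointly convex on y > 0, in line with the remark.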

Sufficient Characterization of the Coupled FBSDE System
We are interested in a utility maximization problem with random endowment F in L^∞ for the utility functional U. In other terms, we look for π̂* in b̂mo such that

U(F + X^π̂*_T) ≥ U(F + X^π̂_T) for all π̂ ∈ b̂mo. (3.1)

We call such a strategy π̂* an optimal strategy for problem (3.1). Throughout, we call any trading strategy π̂ in b̂mo an admissible strategy. We split this section into two parts, namely a verification result and a characterization result, in the spirit of [22], where this has been done in the context of classical expected utility optimization.

Verification
Our first main result is a verification theorem: the optimal solution is given by the solution of a fully coupled forward backward stochastic differential equation.

Theorem 3.1. Let η(y, z, v) := (η̂(y, z, v), 0) be such that

∂ẑg(y, z + η(y, z, v)) = v + θ̂. (3.2)

Suppose that the fully coupled forward backward system of stochastic differential equations admits a solution satisfying the conditions stated below. Then π̂* is an optimal strategy for problem (3.1).

Before addressing the proof of the theorem, let us show the following lemma concerning the auxiliary BSDE in (U, V) characterizing the gradient of the optimal solution.

Lemma 3.3. Let b ∈ D and c̃ ∈ b̃mo. The backward stochastic differential equation admits a unique solution.

Proof. According to Kobylanski [26], since ∫_0^T b ds is uniformly bounded, the backward stochastic differential equation admits a unique solution (Y, Z), where Y is uniformly bounded and Z is in L^2(P^{(θ,c̃)}). According to Briand and Elie [2, Proposition 2.1], Z is moreover in bmo(P^{(θ,c̃)}), hence also in bmo since (θ, c̃) is in bmo. The variable change U = Y + ∫ |c|^2/2 dt + ∫ c · dW and V = (Ẑ, Z̃ − c̃), which is in bmo, shows the first assertion. Defining now c = (V̂ + θ̂, c̃), which is in bmo, and taking the exponential on both sides yields (3.4). □
With this Lemma at hand, we are in position to address the proof of Theorem 3.1.
Since (b*, c*) is in P^g and π̂* is in b̂mo, we deduce the required integrability. As for the rest of the theorem, since (Y*, Z*) is in A(F + X^π̂*_T), we are left to show that for any π̂ in b̂mo and any (Y, Z) in A(F + X^π̂_T) it follows that Y*_0 ≥ Y_0; indeed, it would then follow that U(F + X^π̂*_T) = Y*_0 ≥ U(F + X^π̂_T). Let therefore π̂ be in b̂mo. Without loss of generality, we may assume that A(F + X^π̂_T) is non-empty. Let (Y, Z) be in A(F + X^π̂_T) and denote ∆Y := Y − Y*, ∆Z := Z − Z* and ∆π := π − π*. According to Remark 2.1, it follows that

g(Y, Z) ≥ g(Y*, Z*) + b* ∆Y + c* · ∆Z.

Thus, Y*_0 ≥ Y_0, which ends the proof.

Remark 3.4.
Note that the proof of the theorem shows in particular that the maximal sub-solution for the optimal utility is (Y*, Z*), which satisfies a "linear" backward stochastic differential equation. Recall that Ŵ^θ̂ = Ŵ + ∫ θ̂ dt. Naturally, the coefficients a*, b* and c* depend on π̂*, X*, Ȳ*, Z̄*; they are the gradients evaluated along the optimal solution.

Remark 3.5. The case of utility optimization for the certainty equivalent U(F) = u^{−1}(E[u(F)]), or its equivalent formulation in terms of expected utility E[u(F)], in a backward stochastic differential equation context has been the subject of several papers, in particular [22] and [21]. The optimal solutions provided in those papers correspond to the coupled forward backward stochastic differential equation system of Theorem 3.1. Indeed, as mentioned in Remark 2.5, the generator corresponds to g(y, z) = −u''(y)|z|^2/(2u'(y)), and the coupled system of forward backward stochastic differential equations in Theorem 3.1 specializes accordingly. It turns out that ∂_y g(x + y, z + η(x + y, z, v)) = 0 implies in that case that v = 0. Under these conditions, the forward backward stochastic differential equation coincides with the forward backward stochastic differential equation system in [22], noting that the auxiliary backward stochastic differential equation in (U, V) disappears after a transformation. For classical utility functions, such as exponential with random endowment, and power or logarithmic without endowment, the optimization problem can be solved by means of quadratic backward stochastic differential equations, see [23]. Their method relies on a "separation of variables" property shared by those classical utility functions. In the case of exponential utility, as seen in the first case study of Section 4 with β = 0, our forward backward stochastic differential equation system reduces to a simple backward stochastic differential equation system.
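For the exponential case with β = 0 alluded to above, the optimal strategy can be cross-checked by brute force in a toy setting (one stock, constant hypothetical coefficients): with u(x) = −exp(−x), a constant strategy π gives X^π_T = x_0 + π(θT + W_T) in the normalization used here, the expected utility equals −exp(−x_0 − πθT + π^2 T/2), and it is maximized at π* = θ, the Merton-type answer.

```python
import numpy as np

rng = np.random.default_rng(3)
T, theta, x0 = 1.0, 0.4, 1.0     # horizon, market price of risk, initial wealth
W_T = rng.normal(scale=np.sqrt(T), size=200_000)

def expected_utility(pi):
    """E[-exp(-X_T)] for the constant strategy pi, X_T = x0 + pi*(theta*T + W_T)."""
    return np.mean(-np.exp(-(x0 + pi * (theta * T + W_T))))

grid = np.arange(0.0, 1.0, 0.01)
pi_star = grid[np.argmax([expected_utility(p) for p in grid])]
print(pi_star)   # close to theta = 0.4
```

The common random numbers across the grid keep the Monte Carlo argmax stable around the analytic optimizer.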

Characterization
Our second main result is a characterization theorem of optimal solutions in terms of the fully coupled system of forward backward stochastic differential equations presented in Theorem 3.1.
Proof. Let π̂ be in b̂mo. By assumption, the function f is concave, admits a maximum at 0 and is differentiable at 0; in particular, f is real valued on a neighborhood of 0. For m in such a neighborhood, we denote by (Y^m, Z^m) the maximal sub-solution in A(F + X^{mπ̂+π̂*}_T). Since (b*, c*) is in P^g, it follows that ∫ (M^{b*c*} Z^m − M^{b*c*} Y^m c) · dW is a martingale. The same argument as in the proof of Theorem 3.1 applies for every m in a neighborhood of 0. In particular, E[M^{b*c*}_T ∫_0^T π̂ · dŴ^θ̂] is in the sub-gradient of f at 0, which equals 0 since f is concave, maximal at 0 and differentiable at 0. Since the resulting process is a strictly positive martingale in H^1, the martingale representation theorem yields a process H, for which, using the same argument as in the proof of Lemma 3.3, H is in bmo. Therefore, it holds that Ĥ = −θ̂, P ⊗ dt-almost surely, showing that (U, V) satisfies the auxiliary backward stochastic differential equation (3.5), which by means of Lemma 3.3 admits a unique solution. Hence

θ̂ + V̂ = ĉ* = ∂ẑg(X* + Y*, Z* + π*), P ⊗ dt-almost surely,

which by uniqueness of the point-wise solution η̂(y, z, v) implies that π̂* = η̂(X* + Y*, Z*, V), P ⊗ dt-almost surely.
Remark 3.7. Existence of an optimal strategy π̂* such that U(F + X^π̂*_T) ≥ U(F + X^π̂_T) for every π̂ in b̂mo is often shown using functional analysis and duality methods, see for instance [27, 38] for the case of expected utility. The present functionals, given by maximal sub-solutions of BSDEs, are, due to their dual representations [10], also adequate to guarantee existence of optimal strategies, as shown in [21]. As for the directional differentiability condition at the optimal solution π̂*, it is necessary in order to identify the optimal solution with its point-wise version. This condition is usually checked on a case-by-case basis, for instance for the certainty equivalent.

Financial Applications and Examples
In the following, we illustrate the characterization of Theorem 3.1 in different case studies. We present explicit solutions for the optimal strategy in the complete and incomplete cases for a modified exponential utility maximization, together with an application illustrating the cost of incompleteness, in terms of indifference, of facing an incomplete market rather than a complete one. We conclude by addressing recursive utility optimization, which bears some particularities in terms of the gradient conditions.

Illustration: Complete versus Incomplete Market
The running example we will use is inspired by the dual representation in [10]. In line with this dual representation in terms of discounting and probability uncertainty, we consider the simple example where
• β is a positive bounded predictable process;
• γ is a positive predictable process strictly bounded away from 0 by a constant.
Note that even though we consider a discounting factor β, there is no uncertainty about it in particular. This is an example of a sub-cash additive valuation instead of the classical cash additive one, see [12]. If β = 0 and γ is constant, then we have a classical exponential utility optimization problem. We therefore have

g(y, z) = βy + |z|^2/(2γ). (4.1)

To simplify the comparison between the complete and incomplete market, we assume a simplified market with d stocks following the dynamics

dS/S = μ dt + σ · dW,

where σ = Id_{d×d} is the identity. In other terms, the randomness driving stock i is the Brownian motion W^i. It follows that θ = μ, which is uniformly bounded. In the complete case the agent can invest in all the stocks, while in the incomplete case she is limited to the first n stocks.
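Reading the generator of this running example as g(y, z) = βy + |z|^2/(2γ) - an assumption on our part, consistent with ∂_y g = β and ∂_z g = z/γ as used in the surrounding computations - the point-wise optimality condition (3.2) can be solved symbolically, recovering the optimizer η(y, z, v) = γ(v + θ) − z of the text:

```python
import sympy as sp

y, z, v, theta, beta, gamma, eta = sp.symbols('y z v theta beta gamma eta')

# our reading of the generator (4.1) (an assumption, consistent with
# d/dy g = beta and d/dz g = z/gamma as used in the text):
g = beta * y + z**2 / (2 * gamma)

# first-order condition defining the point-wise optimizer:
#   d/dz g(y, z + eta) = v + theta
foc = sp.Eq(sp.diff(g, z).subs(z, z + eta), v + theta)
sol = sp.solve(foc, eta)[0]

print(sp.simplify(sol - (gamma * (v + theta) - z)))   # 0: sol = gamma*(v+theta) - z
```

Since the generator is quadratic in z, the first-order condition is linear in η, which is why the optimizer comes out in closed form.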
Complete Market: With the generator g given in Equation (4.1), it follows that η(y + x, z, v) = γ(v + θ) − z; in particular, z + η(y + x, z, v) = γ(v + θ). Therefore, in order to find an optimal solution to the optimization problem, since ∂_y g = β, which is in D, it is sufficient to solve the following coupled forward backward stochastic differential equation with solution (X*, Ȳ*, Z̄*, U, V) satisfying
• π* = γ(V + θ) − Z̄* is in bmo;
• (b*, c*) is in P^g, where b* = β and c* = V + θ;
• ∫ M^{bc}(Z̄* + π* − (X* + Ȳ*)c*) · dW is in H^1 for all (b, c) in P^g.
One can easily deduce that the last backward stochastic differential equation admits a unique solution with V in bmo, due to the assumption on β, see [23]. To provide an explicit solution, we further assume that M^θ_T is bounded.
Remark 4.1. This is in particular the case if (Y, θ) is the solution of a suitable quadratic backward stochastic differential equation whose terminal condition is a bounded Lipschitz function; then θ is bounded, see [5]. Conversely, since θ is bounded, hence in bmo, if M^θ_T is bounded, then (ln M^θ, θ) is the unique solution of the corresponding backward stochastic differential equation.
With this assumption, it follows that X_T is bounded, and we choose the constant C such that E^θ[X_T] = x. Thus, by the martingale representation theorem, there exists a predictable process Γ in bmo representing the forward process, and it follows that (X*, Ȳ*, Z̄*, U, V) is a solution of the forward backward stochastic differential equation. We are left to check that this solution satisfies the integrability conditions. First, π* = Γ is in bmo. Second, b* = β is bounded, hence b* ∈ D. Third, c* ∈ bmo, and thus (b*, c*) ∈ P^g. Finally, in order to show that ∫ M^{bc}(Z̄* + π* − (X* + Ȳ*)c*) · dW is in H^1 for all (b, c) in P^g, according to Remark A.1 we only need to check that for every (b, c) ∈ P^g, sup_{0≤t≤T} |M^{bc}_t (X*_t + Ȳ*_t)| is in L^1, which follows directly from a similar technique as in Lemma 2.4. Thus, π* = Γ = γ(V + θ) − Z* is an optimal solution to the optimization problem.

Remark 4.2.
In terms of utility optimization, since U(F + X^π_T) = Ȳ*_0 + x, the optimal utility is given explicitly by (4.2).

Remark 4.3. Instead of assuming that M^θ_T is bounded, we can still obtain an explicit solution if β is deterministic, similarly to the incomplete market case below, where we give the detailed method.
Incomplete Market: Still with the generator g given in Equation (4.1), but now in the incomplete case - that is, n < d and θ̂ = μ̂ - it follows that η̂(y, z, v) = γ(v + θ̂) − ẑ.
In particular, ẑ + η̂(y + x, z, v) = γ(v + θ̂). Here again ∂_y g = β, and since ∂_z̃ g = z̃/γ, in order to find an optimal solution to the optimization problem it is sufficient to solve the following coupled forward backward stochastic differential equation with solution (X*, Ȳ*, Z̄*, U, V) satisfying
• π̂* = γ(V̂ + θ̂) − Ẑ* is in b̂mo;
• (b*, c*) ∈ P^g, where b* = β and c* = (V̂ + θ̂, Z̃*/γ).
In order to provide an explicit solution as in the complete market, we assume here that β is deterministic. First, if we assume a priori that c̃* = Z̃*/γ is in b̃mo, then, since β is deterministic, the last backward stochastic differential equation admits a unique solution with V = (0, −c̃) in bmo. Indeed, the corresponding quadratic BSDE admits a unique solution with Λ in bmo since Ῡ_T is bounded, see [23]. Therefore, the transformed value process satisfies a quadratic BSDE of the same type, and it follows that the system is solved. The fact that the conditions of Theorem 3.1 are fulfilled follows from the same argument as in the complete market case.
Remark 4.4. Again, in terms of utility optimization, we obtain an explicit expression for the optimal utility, see (4.3).

The Cost of Incompleteness The computation of explicit optimal portfolio strategies allows us to address further classical financial problems, such as utility indifference pricing. Given a contingent claim F, we look for the initial wealth x* such that the optimal utility with market access equals U(F), where π̂* denotes the corresponding optimal strategy. In other terms, x* represents, in the sense of indifference pricing, the amount one is willing to pay to reach the same utility by having access to a financial market. Since our functional is only upper semi-continuous, and in order to distinguish between complete and incomplete markets, we proceed as follows. We set

x* := inf{ x ∈ R : U(F + x + ∫ π · dŴ^θ) > U(F) for some π ∈ bmo },
y* := inf{ x ∈ R : U(F + x + ∫ π̂ · dŴ^θ̂) > U(F) for some π̂ ∈ b̂mo },

which represent the utility indifference amounts of wealth for F in the complete and incomplete case, respectively. Intuitively, the amount of wealth necessary to reach the same utility level is higher in the incomplete case, that is, x* ≤ y*. This is indeed the case since b̂mo is a subset of bmo.
In the case of the previous example, an explicit solution is at hand, and we obtain the following explicit cost of having restricted access to the financial market: when β is deterministic, Equations (4.2) and (4.3) yield explicit expressions for x* and y*, from which the difference y* − x* follows in closed form.

Inter-Temporal Resolution of Uncertainty
We conclude with a classical utility functional having an interesting particularity in terms of gradient characterization. To address the inter-temporal resolution of uncertainty, Kreps and Porteus [28] introduced a new class of inter-temporal utilities that weight immediate consumption against later consumption and random payoffs. This idea has been extended in particular by Epstein and Zin [15] in the discrete-time case and later by Duffie and Epstein [11] in continuous time in terms of backward stochastic differential equations. Given a cumulative consumption stream c, a positive, increasing and right-continuous function, a commonly used inter-temporal generator of a recursive utility involves constants ρ, α ∈ (0, 1) and β ≥ 0. We refer to [11] for the interpretation, properties and derivation of this generator and the corresponding constants. Note that this generator is concave in y if ρ ≤ α ≤ 1, an assumption we will keep. In the classical setting, the generator is represented in terms of utility, with a positive sign in the backward stochastic differential equation. In our context in terms of costs, with 0 < ρ ≤ α ≤ 1 and β ≥ 0, we define a generator which is convex in y and where γ = c^ρ/α^{ρ/α}. In terms of costs, given a deterministic, right-continuous, increasing consumption stream c, the agent weighs infinitesimally the opportunity to consume today, weighted with the parameter ρ, together with the certainty equivalent of the remaining future consumption to the power ρ/α, against the cost, in terms of certainty equivalent, of waiting and not consuming. The recursive utility U(F) with terminal payoff F is then given as the corresponding maximal sub-solution. In this context, given a random payoff F, initial wealth x and consumption stream c, the agent optimizes her recursive utility U(F + X^π̂_T) over investment strategies π̂ against her consumption choice c. For the sake of simplicity, we consider the case of a complete market.
The particularity of recursive utilities is that the generator usually does not depend on z. It follows that the condition ∂_z g = 0 = v + θ enforces a condition in terms of the auxiliary backward stochastic differential equation. Since

∂_y g(X + Ȳ) = (βα/ρ)(1 − γ(1 − ρ/α)(X + Ȳ)^{−ρ/α}),

we can make the Ansatz X + Ȳ = Φ, where t ↦ Φ(t) is a deterministic function. It then follows that if Φ is a solution of the resulting ordinary differential equation, the system admits an optimal solution.

A. Existence and uniqueness of maximal sub-solutions
Proof (of Theorem 2.3). Throughout this proof, we use the notation M^{bc}, which is given by M^{bc} := D^b M^c. We prove the theorem in several steps.
Step 1: For any (Y, Z) in A(F) and (b, c) in P^g, defining Y̌ = M^{bc} Y + ∫ M^{bc} g*(b, c) dt and Ž = M^{bc}(Z − Y c), it follows that sup_{0≤t≤T} |Y̌_t| ∈ L^1. Indeed, using Itô's formula, it follows that (Y̌, Ž)