Supermartingale Decomposition Theorem under G-expectation

The objective of this paper is to establish the decomposition theorem for supermartingales under the $G$-framework. We first introduce a $g$-nonlinear expectation via a class of $G$-BSDEs and the associated supermartingales. We show that such supermartingales admit a decomposition similar to the classical case. The main ideas are to apply the uniform continuity property of $S_G^\beta(0,T)$, the representation of the solution to the $G$-BSDE, and an approximation method via penalization.


Introduction
The classical Doob-Meyer decomposition theorem tells us that a large class of submartingales can be uniquely represented as the sum of a martingale and a predictable increasing process. This is one of the fundamental results in the theory of stochastic analysis. The theorem was first proved in [9] for the discrete-time case; [16,17] then proved the continuous-time case. It is important for the optimal stopping problem used to price American options (see [1], [14]), and it can be applied to the problem of hedging contingent claims by portfolios constrained to take values in a given closed convex set (see [6]). A general case of the Doob-Meyer decomposition theorem was introduced in [20], where the supermartingale $Y_\cdot$ is defined through a nonlinear operator; it was proved that the nonlinear version of the Doob-Meyer decomposition theorem also holds.
The objective of this paper is to solve the decomposition problem of Doob-Meyer's type for a nonlinear supermartingale defined on a sublinear expectation space, namely the $G$-expectation space. In order to understand the motivation of this objective, let us recall its special linear case: the Wiener probability space $(\Omega, \mathcal{F}, (\mathcal{F}_t)_{t\ge 0}, P)$, in which the canonical process $B_t(\omega) = \omega(t)$, for $\omega \in \Omega = C_0([0,\infty))$, is a $d$-dimensional standard Brownian motion. Given a function $g = g(s,\omega,y,z): [0,\infty)\times\Omega\times\mathbb{R}\times\mathbb{R}^d \to \mathbb{R}$ such that $g(\cdot,y,z)$ satisfies the usual Lipschitz conditions in the framework of BSDEs (see [18]), for each $T \in [0,\infty)$ the following BSDE has a unique solution $(y_\cdot, z_\cdot)$ on $[0,T]$:
$$y_t = \xi + \int_t^T g(s, y_s, z_s)\,ds + (A_T - A_t) - \int_t^T z_s\,dB_s,$$
where $\xi$ is a given random variable in $L^2(\Omega,\mathcal{F}_T,P)$ and $A_\cdot$ is a given continuous and increasing process with $A_0 = 0$ and $A_t \in L^2(\Omega,\mathcal{F}_t,P)$ for each $t \in (0,T]$. We call $y_\cdot$ a $g$-supersolution; if $A_\cdot \equiv 0$, then $y_\cdot$ is called a $g$-solution. In the latter case, since for each $t \le T$ the $\mathcal{F}_t$-measurable random variable $y_t$ is uniquely determined by the terminal condition $y_T = \xi \in L^2(\Omega,\mathcal{F}_T,P)$, we can define a backward semigroup [19,21] by $\mathcal{E}^g_{t,T}[\xi] := y_t$. This semigroup gives us a generalized notion of nonlinear expectation with the corresponding $\mathcal{F}_t$-conditional expectation, called $g$-expectation [19]. By applying the comparison theorem of BSDEs, we know that any $g$-supersolution $Y_\cdot$ is also a $g$-supermartingale (i.e., $\mathcal{E}^g_{s,t}[Y_t] \le Y_s$ for each $s \le t$). But the proof of the converse claim, namely that a $g$-supermartingale is a $g$-supersolution, is not at all trivial (we refer to [20] for a detailed proof). In fact this is a generalization of the classical Doob-Meyer decomposition to the case of nonlinear expectations, the linear situation corresponding to the case $g \equiv 0$.
Moreover, this nonlinear Doob-Meyer decomposition theorem plays a key role in obtaining the following representation theorem of nonlinear expectations: for a given arbitrary $\mathcal{F}_t$-conditional nonlinear expectation $(\mathcal{E}_{s,t}[\xi])_{0\le s\le t<\infty}$ with certain regularity, there exists a unique function $g = g(\cdot,y,z)$ satisfying the usual conditions of BSDEs such that $\mathcal{E}_{t,T}[\xi] = \mathcal{E}^g_{t,T}[\xi]$ for all $0 \le t \le T < \infty$ and $\xi \in L^2(\Omega,\mathcal{F}_T,P)$.
We refer to [4], [21], [23] for the proof of this very deep result, and to [7], where a wide class of time-consistent risk measures is identified with $g$-expectations.
It is known that volatility model uncertainty (VMU) essentially involves a non-dominated family of probability measures $\mathcal{P}$ on $(\Omega,\mathcal{F})$. This is a main reason why many risk measures and pricing operators cannot be well defined within the framework of a single probability space such as the Wiener space $(\Omega,\mathcal{F}_T,P)$. [22] introduced the framework of the (fully nonlinear) time-consistent $G$-expectation space $(\Omega, L^1_G(\Omega), \hat{\mathbb{E}})$, such that all probability measures in $\mathcal{P}$ are dominated by this sublinear expectation and the canonical process $B_\cdot(\omega) = \omega(\cdot)$ becomes a nonlinear Brownian motion, called $G$-Brownian motion. Many random variables that are negligible under one probability measure $P \in \mathcal{P}$ but not under other measures in $\mathcal{P}$ can be clearly distinguished in this new framework. The corresponding theory of stochastic integration and stochastic calculus of Itô's type has been established in [22,25]. In particular, the existence and uniqueness of BSDEs driven by $G$-Brownian motion ($G$-BSDEs) have been established in [10]. Roughly speaking (see the next section for details), a $G$-BSDE reads
$$y_t = \xi + \int_t^T g(s, y_s, z_s)\,ds - \int_t^T z_s\,dB_s - (K_T - K_t),$$
where $g(\cdot,y,z)$ and $\xi$ satisfy conditions very similar to the classical case. The solution of this $G$-BSDE consists of a triplet of adapted processes $(y_\cdot, z_\cdot, K_\cdot)$, where $K_\cdot$ is a decreasing $G$-martingale with $K_0 = 0$. We then call $y_\cdot$ a $g$-solution under $\hat{\mathbb{E}}$. From the existence and uniqueness of the $G$-BSDE, we can also define $\hat{\mathbb{E}}^g_{t,T}[\xi] := y_t$, which forms a time-consistent nonlinear expectation. If $K_\cdot$ is merely a decreasing process, then we call $y_\cdot$ a $g$-supersolution under $\hat{\mathbb{E}}$.
By the comparison theorem of $G$-BSDEs obtained in [11], one can prove that a $g$-supersolution $Y_\cdot$ under $\hat{\mathbb{E}}$ is also an $\hat{\mathbb{E}}^g$-supermartingale, i.e., $\hat{\mathbb{E}}^g_{s,t}[Y_t] \le Y_s$ for each $s \le t$. The objective of this paper is to prove the converse: a continuous $\hat{\mathbb{E}}^g$-supermartingale $Y_\cdot$ is also a $g$-supersolution under $\hat{\mathbb{E}}$. Namely, $Y_\cdot$ can be written as
$$Y_t = Y_0 - \int_0^t g(s, Y_s, Z_s)\,ds + \int_0^t Z_s\,dB_s - A_t,$$
where $A$ is a continuous increasing process. A special case of this result is $g \equiv 0$; in this case $Y_\cdot$ is a $G$-supermartingale and can be decomposed as
$$Y_t = Y_0 + \int_0^t Z_s\,dB_s - A_t,$$
where $A$ is an increasing process. This is still a new and non-trivial result.
The proof of this decomposition theorem involves a penalization procedure: for $n = 1, 2, \cdots$, we consider the penalized $G$-BSDE
$$y^n_t = Y_T + \int_t^T g(s, y^n_s, z^n_s)\,ds + n\int_t^T (Y_s - y^n_s)\,ds - \int_t^T z^n_s\,dB_s - (K^n_T - K^n_t),$$
where $L^n_t = n\int_0^t (Y_s - y^n_s)\,ds$ and $K^n_\cdot$ is a decreasing $G$-martingale. In order to prove that $y^n \uparrow Y$, it is necessary to show that $y^n \le Y$. A main difficulty is that the corresponding Doob optional sampling theorem is still an open problem in this framework. We overcome this difficulty by proving that $y^n \le Y$ holds almost surely under each probability measure dominated by $\hat{\mathbb{E}}$. We also need to introduce some new methods, see Lemma 3.7 and Lemma 3.8, to prove the uniform convergence of $y^n$. Generally speaking, the well-known Fatou lemma cannot be directly and automatically used in this sublinear expectation framework. Besides, a bounded subset of $M^\beta_G(0,T)$ need not be weakly compact. Many proofs become more delicate and challenging.
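To see the penalization mechanism in the simplest possible situation, consider a purely deterministic toy example with $g \equiv 0$ under the classical (linear) expectation: for a continuous decreasing function $Y$, the penalized equation degenerates to the backward ODE $y^n_t = Y_T + n\int_t^T (Y_s - y^n_s)\,ds$, and one can watch $y^n \uparrow Y$ while $L^n_t = n\int_0^t (Y_s - y^n_s)\,ds$ approximates the increasing part $A_t = Y_0 - Y_t$ of the (trivial) decomposition. The following sketch is purely illustrative: the function `Y`, the backward-Euler discretization and all parameter choices are ours, not from the paper.

```python
import numpy as np

def penalized(Y, T, n, m=20000):
    # Backward Euler for the penalized ODE  y' = -n*(Y(t) - y),  y(T) = Y(T),
    # stepping from t = T back to t = 0 on a uniform grid of m intervals.
    t = np.linspace(0.0, T, m + 1)
    dt = T / m
    y = np.empty(m + 1)
    y[-1] = Y(t[-1])
    for i in range(m - 1, -1, -1):
        # implicit step: y_i = y_{i+1} + n*(Y(t_i) - y_i)*dt
        y[i] = (y[i + 1] + n * Y(t[i]) * dt) / (1.0 + n * dt)
    # L^n_t = n * integral_0^t (Y_s - y^n_s) ds, left-endpoint rule
    L = np.concatenate(([0.0], np.cumsum(n * (Y(t[:-1]) - y[:-1]) * dt)))
    return t, y, L

Y = lambda s: 1.0 - s**2          # a decreasing "supermartingale" path on [0,1]
t, y, L = penalized(Y, 1.0, n=200)
# y^n stays below Y and approaches it from below as n grows, while
# L^n_T approximates the total increasing part A_T = Y(0) - Y(1).
```

For $n = 200$ the gap $\max_t (Y_t - y^n_t)$ is of order $\sup_t|Y'_t|/n$, consistent with the rate at which the penalization term forces $y^n$ up to $Y$.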
We believe that our new decomposition theorem of Doob-Meyer's type under the $G$-framework will play a key role in understanding and solving many important problems. It is a key step towards a general representation theorem of dynamically consistent nonlinear expectations, as well as of dynamic risk measures and pricing operators.
The paper is organized as follows. In Section 2, we set up some notations and results as preliminaries for the later proofs. Section 3 is devoted to the study of the so-called $\hat{\mathbb{E}}^g$-supermartingales; the decomposition theorem is established there with detailed proofs. In Section 4, we present the relationship between $\hat{\mathbb{E}}^g$-supermartingales and fully nonlinear parabolic PDEs.

G-expectation and G-Itô's calculus
The main purpose of this section is to recall some basic notions and results of $G$-expectation, which are needed in the sequel. The reader may refer to [10], [11], [24], [25] for more details.
Definition 2.1 Let $\Omega$ be a given set and let $\mathcal{H}$ be a linear space of real-valued functions on $\Omega$ containing the constants. A sublinear expectation $\hat{\mathbb{E}}$ is a functional $\hat{\mathbb{E}}: \mathcal{H} \to \mathbb{R}$ satisfying monotonicity, constant preserving, sub-additivity and positive homogeneity. The triple $(\Omega, \mathcal{H}, \hat{\mathbb{E}})$ is called a sublinear expectation space. $X \in \mathcal{H}$ is called a random variable in $(\Omega, \mathcal{H}, \hat{\mathbb{E}})$. We often call $Y = (Y_1, \ldots, Y_d)$, $Y_i \in \mathcal{H}$, a $d$-dimensional random vector in $(\Omega, \mathcal{H}, \hat{\mathbb{E}})$.
Definition 2.2 Let $X_1$ and $X_2$ be two $n$-dimensional random vectors defined respectively on sublinear expectation spaces $(\Omega_1, \mathcal{H}_1, \hat{\mathbb{E}}_1)$ and $(\Omega_2, \mathcal{H}_2, \hat{\mathbb{E}}_2)$. They are called identically distributed, denoted by $X_1 \overset{d}{=} X_2$, if $\hat{\mathbb{E}}_1[\varphi(X_1)] = \hat{\mathbb{E}}_2[\varphi(X_2)]$ for every $\varphi \in C_{l.Lip}(\mathbb{R}^n)$, the space of functions satisfying $|\varphi(x) - \varphi(y)| \le C(1 + |x|^k + |y|^k)|x - y|$ for all $x, y \in \mathbb{R}^n$, where the constant $C$ depends only on $\varphi$.
A random vector $X$ in a sublinear expectation space $(\Omega, \mathcal{H}, \hat{\mathbb{E}})$ is called $G$-normally distributed if for each $a, b \ge 0$,
$$aX + b\bar{X} \overset{d}{=} \sqrt{a^2 + b^2}\,X,$$
where $\bar{X}$ is an independent copy of $X$, i.e., $\bar{X} \overset{d}{=} X$ and $\bar{X} \perp X$. Here the letter $G$ denotes the function $G(A) := \frac{1}{2}\hat{\mathbb{E}}[\langle AX, X\rangle]$. It is proved in [24] that $u(t,x) := \hat{\mathbb{E}}[\varphi(x + \sqrt{t}\,X)]$ is the solution of the following fully nonlinear parabolic equation ($G$-heat equation):
$$\partial_t u - G(D^2_x u) = 0, \qquad u(0,x) = \varphi(x).$$
In the case $d = 1$, the function $G: \mathbb{R} \to \mathbb{R}$ is a given monotonic and sublinear function of the form
$$G(a) = \frac{1}{2}(\bar{\sigma}^2 a^+ - \underline{\sigma}^2 a^-). \quad (1)$$
In this paper we only consider the non-degenerate $G$-normal distribution, i.e., $\underline{\sigma} > 0$ in the 1-dimensional case.
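In the 1-dimensional case the $G$-heat equation above is a simple scalar PDE, so $\hat{\mathbb{E}}[\varphi(B_T)]$ can be computed numerically. The sketch below is our own illustration (the function name, grid parameters and payoff choices are not from the paper): it solves $\partial_t u = G(\partial^2_{xx} u)$ with $G(a) = \frac{1}{2}(\bar{\sigma}^2 a^+ - \underline{\sigma}^2 a^-)$ by an explicit finite-difference scheme.

```python
import numpy as np

def g_heat_expectation(phi, T, sig_lo, sig_hi, x_max=5.0, nx=401, nt=4000):
    """Estimate E_hat[phi(B_T)] for a 1-d G-Brownian motion by solving the
    G-heat equation  du/dt = G(u_xx),  G(a) = (sig_hi^2 a^+ - sig_lo^2 a^-)/2,
    with u(0, .) = phi, then reading off u(T, 0)."""
    x = np.linspace(-x_max, x_max, nx)
    dx = x[1] - x[0]
    dt = T / nt
    assert sig_hi**2 * dt / dx**2 <= 1.0, "CFL stability condition violated"
    u = phi(x).astype(float)
    for _ in range(nt):
        uxx = np.zeros_like(u)                      # boundary values stay frozen
        uxx[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
        u = u + dt * 0.5 * (sig_hi**2 * np.maximum(uxx, 0.0)
                            - sig_lo**2 * np.maximum(-uxx, 0.0))
    return float(np.interp(0.0, x, u))
```

For a convex $\varphi$ the equation reduces to the classical heat equation with volatility $\bar{\sigma}$ (so $\hat{\mathbb{E}}[B_T^2] = \bar{\sigma}^2 T$), and for a concave $\varphi$ to the one with $\underline{\sigma}$ (so $\hat{\mathbb{E}}[-B_T^2] = -\underline{\sigma}^2 T$); in particular $\hat{\mathbb{E}}[B_T^2] + \hat{\mathbb{E}}[-B_T^2] > 0$, exhibiting the sublinearity of $\hat{\mathbb{E}}$.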
We present the notion of G-Brownian motion in a sublinear expectation space. For notational simplification, we only consider the case of 1-dimensional G-Brownian motion. But the methods of this paper can be directly applied to d-dimensional situations.
Let $\Omega = C_0([0,\infty); \mathbb{R})$ be the space of real-valued continuous functions on $[0,\infty)$ with $\omega_0 = 0$, endowed with the distance
$$\rho(\omega^1, \omega^2) := \sum_{i=1}^{\infty} 2^{-i}\Big[\big(\max_{t\in[0,i]}|\omega^1_t - \omega^2_t|\big)\wedge 1\Big].$$
We denote by $\mathcal{B}(\Omega)$ the collection of all Borel-measurable subsets of $\Omega$.
Let $G: \mathbb{R} \to \mathbb{R}$ be a given monotonic and sublinear function of the form (1). The $G$-expectation is a sublinear expectation defined on the space $L_{ip}(\Omega)$ of random variables of the form $\varphi(B_{t_1}, B_{t_2} - B_{t_1}, \ldots, B_{t_n} - B_{t_{n-1}})$, $\varphi \in C_{l.Lip}(\mathbb{R}^n)$, in the following way:
$$\hat{\mathbb{E}}[\varphi(B_{t_1}, B_{t_2} - B_{t_1}, \ldots, B_{t_n} - B_{t_{n-1}})] := \tilde{\mathbb{E}}[\varphi(\sqrt{t_1}\,\xi_1, \sqrt{t_2 - t_1}\,\xi_2, \ldots, \sqrt{t_n - t_{n-1}}\,\xi_n)],$$
where $\xi_1, \cdots, \xi_n$ are independent, identically distributed 1-dimensional $G$-normally distributed random variables in a sublinear expectation space $(\tilde{\Omega}, \tilde{\mathcal{H}}, \tilde{\mathbb{E}})$. For $p \ge 1$ we set $\|\xi\|_{L^p_G} := (\hat{\mathbb{E}}[|\xi|^p])^{1/p}$. The $G$-expectation is a continuous mapping on $L_{ip}(\Omega_T)$ w.r.t. the norm $\|\cdot\|_{L^p_G}$; therefore it can be extended continuously to the completion $L^p_G(\Omega_T)$ of $L_{ip}(\Omega_T)$ under this norm. Denis et al. [8] proved that the completion of $C_b(\Omega_T)$ (the set of bounded continuous functions on $\Omega_T$) under the norm $\|\cdot\|_{L^p_G}$ coincides with $L^p_G(\Omega_T)$. Let us denote by $\mathcal{M}_1(\Omega_T)$ the set of all probability measures on $(\Omega_T, \mathcal{B}(\Omega_T))$.
Theorem 2.6 ([8, 12]) There exists a tight set $\mathcal{P} \subset \mathcal{M}_1(\Omega_T)$ such that
$$\hat{\mathbb{E}}[\xi] = \max_{P \in \mathcal{P}} E_P[\xi] \quad \text{for all } \xi \in L^1_G(\Omega_T).$$
$\mathcal{P}$ is called a set that represents $\hat{\mathbb{E}}$.
Let $\mathcal{P}$ be a tight set that represents $\hat{\mathbb{E}}$. For this $\mathcal{P}$, we define the capacity
$$c(A) := \sup_{P \in \mathcal{P}} P(A), \quad A \in \mathcal{B}(\Omega_T).$$
A set $A \subset \Omega_T$ is said to be polar if $c(A) = 0$. A property holds "quasi-surely" (q.s. for short) if it holds outside a polar set. In the following, we do not distinguish two random variables $X$ and $Y$ if $X = Y$ q.s.

Remark 2.7
Let $(\Omega, \mathcal{F}, P_0)$ be a probability space and $(W_t)_{t\ge 0}$ a 1-dimensional Brownian motion under $P_0$. Let $\mathbb{F} = \{\mathcal{F}_t\}$ be the augmented filtration generated by $W$. [8] proved that $\hat{\mathbb{E}}$ is represented by the family $\mathcal{P}_M$ of laws of the processes $\int_0^\cdot \theta_s\,dW_s$, with $\theta$ ranging over the $\mathbb{F}$-adapted processes taking values in $[\underline{\sigma}, \bar{\sigma}]$. We shall give an estimate between the two norms $\|\cdot\|_{L^p_G}$ and $\|\cdot\|_{p,\mathcal{E}}$.
We consider the following type of $G$-BSDE:
$$Y_t = \xi + \int_t^T g(s, Y_s, Z_s)\,ds + \int_t^T f(s, Y_s, Z_s)\,d\langle B\rangle_s - \int_t^T Z_s\,dB_s - (K_T - K_t),$$
where $g$ and $f$ are given functions satisfying the following properties:
(H1) There exists some $\beta > 1$ such that for any $y, z$, $g(\cdot,\cdot,y,z), f(\cdot,\cdot,y,z) \in M^\beta_G(0,T)$;
(H2) There exists some $L > 0$ such that
$$|g(t,y,z) - g(t,y',z')| + |f(t,y,z) - f(t,y',z')| \le L(|y - y'| + |z - z'|).$$
For simplicity, we denote by $\mathfrak{S}^\alpha_G(0,T)$ the collection of triplets of processes $(Y, Z, K)$ such that $Y \in S^\alpha_G(0,T)$, $Z \in H^\alpha_G(0,T)$, and $K$ is a decreasing $G$-martingale with $K_0 = 0$ and $K_T \in L^\alpha_G(\Omega_T)$.
We also have the comparison theorem for $G$-BSDEs.

Some results of classical penalized BSDEs
In this subsection, we introduce some notions and results following Peng [20]. The probability space and filtration are as given in Remark 2.7. For a given stopping time $\tau$, we now consider the following classical BSDE:
$$y_t = \xi + \int_{t\wedge\tau}^{\tau} g(s, y_s, z_s)\,ds + (A_\tau - A_{t\wedge\tau}) - \int_{t\wedge\tau}^{\tau} z_s\,dW_s, \quad (3)$$
where $\xi \in L^2(\Omega, \mathcal{F}_\tau)$ and $g$ satisfies the following conditions:
(A1) $g(\cdot, y, z) \in L^2_{\mathbb{F}}(0, T; \mathbb{R})$ for each $(y, z) \in \mathbb{R}^2$;
(A2) There exists a constant $L > 0$ such that
$$|g(t, y, z) - g(t, y', z')| \le L(|y - y'| + |z - z'|).$$
Here $A$ is a given RCLL increasing process with $A_0 = 0$ and $E[A^2_\tau] < \infty$. We call $(y_t)$ the $g$-supersolution on $[0,\tau]$ if $(y, z)$ solves (3). In particular, when $A \equiv 0$, $(y_t)$ is called a $g$-solution on $[0,\tau]$.
It is obvious that a $g$-supermartingale in the strong sense is also a $g$-supermartingale in the weak sense. [3] proved that, under assumptions similar to the classical case, a $g$-supermartingale in the weak sense coincides with a $g$-supermartingale in the strong sense. This result is a generalization of the classical optional stopping theorem. If $(Y_t)$ is a $g$-supersolution on $[0,T]$, it follows from the comparison theorem that $(Y_t)$ is a $g$-supermartingale. In fact, [20] proved that the converse, i.e., the nonlinear version of the Doob-Meyer decomposition theorem, also holds. The method of proof is the penalization approach, whose first step is the following lemma.

Nonlinear expectations generated by G-BSDEs and the associated supermartingales
For simplicity, we only consider the following $G$-BSDE driven by a 1-dimensional $G$-Brownian motion (the results still hold in the multi-dimensional case):
$$Y_t = \xi + \int_t^T g(s, Y_s, Z_s)\,ds - \int_t^T Z_s\,dB_s - (K_T - K_t), \quad (4)$$
where $g$ satisfies the following conditions:
(H1') There exists some $\beta > 2$ such that for any $y, z$, $g(\cdot,\cdot,y,z) \in M^\beta_G(0,T)$;
(H2) There exists some $L > 0$ such that
$$|g(t,y,z) - g(t,y',z')| \le L(|y - y'| + |z - z'|).$$
For each $\xi \in L^\beta_G(\Omega_T)$ with $\beta > 2$, we define $\hat{\mathbb{E}}^g_{t,T}[\xi] := Y^{T,\xi}_t$, where $(Y^{T,\xi}, Z^{T,\xi}, K^{T,\xi})$ is the solution of (4). We call $Y_\cdot$ a $g$-supersolution under $\hat{\mathbb{E}}$ if the decreasing $G$-martingale $K$ in (4) is replaced by a continuous decreasing process $A$ with $A_0 = 0$ and $A_T \in L^2_G(\Omega_T)$. It follows from the comparison theorem of $G$-BSDEs that a $g$-supersolution under $\hat{\mathbb{E}}$ is also an $\hat{\mathbb{E}}^g$-supermartingale.
(iii) If there is an additional generator $f$ corresponding to a $d\langle B\rangle$ term in (4), we can define the operator $\hat{\mathbb{E}}^{g,f}_{t,T}[\cdot]$ and the associated $\hat{\mathbb{E}}^{g,f}$-supermartingales in the same way.
The following theorem, which is the main result of this paper, tells us that an $\hat{\mathbb{E}}^g$-supermartingale is also a $g$-supersolution under $\hat{\mathbb{E}}$. It generalizes the well-known decomposition theorem of Doob-Meyer's type to the framework of a fully nonlinear expectation, the $G$-expectation.
Suppose that $g$ satisfies (H1') and (H2) and that $(Y_t) \in S^\beta_G(0,T)$ is a continuous $\hat{\mathbb{E}}^g$-supermartingale. Then $(Y_t)$ has the following decomposition:
$$Y_t = Y_0 - \int_0^t g(s, Y_s, Z_s)\,ds + \int_0^t Z_s\,dB_s - A_t, \quad \text{q.s.},$$
where $\{Z_t\} \in M^2_G(0,T)$ and $\{A_t\}$ is a continuous nondecreasing process with $A_0 = 0$ and $A_T \in L^2_G(\Omega_T)$. Furthermore, the above decomposition is unique.
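The uniqueness part follows a standard argument, which we sketch here for orientation (this is our summary, not the paper's verbatim proof): if $(Z, A)$ and $(Z', A')$ both realize the decomposition, subtracting the two expressions of $Y$ gives

```latex
\int_0^t (Z_s - Z'_s)\,dB_s
  = \int_0^t \big( g(s, Y_s, Z_s) - g(s, Y_s, Z'_s) \big)\,ds + A_t - A'_t .
```

The left-hand side is a $G$-Itô integral whose quadratic variation is $\int_0^t |Z_s - Z'_s|^2\,d\langle B\rangle_s$, while the right-hand side is of finite variation and hence has zero quadratic variation. Since $d\langle B\rangle_s \ge \underline{\sigma}^2\,ds$ q.s. and $\underline{\sigma} > 0$, this forces $Z = Z'$ in $M^2_G(0,T)$, and then $A = A'$ follows.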
We divide the proof into a sequence of lemmas. For $P \in \mathcal{P}_M$, an $\mathbb{F}$-stopping time $\tau$, and an $\mathcal{F}_\tau$-measurable random variable $\eta \in L^2(P)$, let $(Y^P, Z^P)$ denote the solution of the following standard BSDE under $P$:
$$Y^P_t = \eta + \int_t^\tau g(s, Y^P_s, Z^P_s)\,ds - \int_t^\tau Z^P_s\,dB_s.$$
We recall from [29] that every $P \in \mathcal{P}_M$ satisfies the martingale representation property; hence the above equation admits a unique adapted solution $(Y^P, Z^P)$. We define $\mathcal{E}^{g,P}_{t,\tau}[\eta] := Y^P_t$. For $P \in \mathcal{P}_M$ and $t \in [0,T]$, set $\mathcal{P}(t,P) := \{Q \in \mathcal{P}_M : Q|_{\mathcal{F}_t} = P|_{\mathcal{F}_t}\}$. The following lemma provides a representation for the solution $Y^{T,\xi}$ of equation (4):
$$\hat{\mathbb{E}}^g_{t,T}[\xi] = \operatorname*{ess\,sup}^P_{Q \in \mathcal{P}(t,P)} \mathcal{E}^{g,Q}_{t,T}[\xi], \quad P\text{-a.s., for each } P \in \mathcal{P}_M.$$
For the reader's convenience, we give a brief proof here. Proof. By the comparison theorem of classical BSDEs, for $Q \in \mathcal{P}(t,P)$ we have $\mathcal{E}^{g,Q}_{t,T}[\xi] \le \hat{\mathbb{E}}^g_{t,T}[\xi]$, $Q$-a.s.; consequently $\mathcal{E}^{g,Q}_{t,T}[\xi] \le \hat{\mathbb{E}}^g_{t,T}[\xi]$, $P$-a.s. Besides, by Theorem 16 in [13] (see also Proposition 3.4 in [28]) and noting that $(K^{T,\xi}_t)$ is a decreasing $G$-martingale, we obtain the reverse inequality along $\overline{\mathcal{P}}(t,P)$, where $\overline{\mathcal{P}}(t,P)$ is the closure of $\mathcal{P}(t,P)$ with respect to the weak topology. Then there exists $Q \in \overline{\mathcal{P}}(t,P)$ such that $E_Q[K^{T,\xi}_T - K^{T,\xi}_t] = 0$. Choose $\{Q_n\} \subset \mathcal{P}(t,P)$ such that $Q_n \to Q$ weakly; then by Lemma 29 in [8] we obtain the corresponding convergence, where $0 < \alpha < 1 - \frac{1}{\beta}$. By Proposition 3.2 in [2], we derive the required estimate, and consequently the above inequality holds $P$-a.s. This yields the representation. The proof is complete.
Proof. Suppose the lemma were false. Then we could find some $t \in [0,T]$ and $P^* \in \mathcal{P}_M$ such that $P^*(y^n_t > Y_t) > 0$. Applying Lemma 3.4 and the definition of $\hat{\mathbb{E}}^g$-supermartingales, for any $P \in \mathcal{P}_M$ and $s \le t$ we have
$$\mathcal{E}^{g,P}_{s,t}[Y_t] \le \operatorname*{ess\,sup}^P_{P' \in \mathcal{P}(s,P)} \mathcal{E}^{g,P'}_{s,t}[Y_t] = \hat{\mathbb{E}}^g_{s,t}[Y_t] \le Y_s, \quad P\text{-a.s.}$$
This shows that, under each measure $P \in \mathcal{P}_M$, $(Y_t)$ can be seen as a $g_B$-supermartingale in the weak sense (see Remark 2.17). Since $(Y_t) \in S^\beta_G(0,T)$ is continuous, it is a $g_B$-supermartingale in the strong sense. For any $Q \in \mathcal{P}(t, P^*)$, let $(\bar{Y}^Q, \bar{Z}^Q)$ denote the solution of the following standard BSDE under $Q$:
$$\bar{Y}^Q_t = Y_T + \int_t^T g_n(s, \bar{Y}^Q_s, \bar{Z}^Q_s)\,ds - \int_t^T \bar{Z}^Q_s\,dB_s,$$
where $g_n(s, y, z) = g(s, y, z) + n(Y_s - y)$. Since $(Y_t)$ is a $g_B$-supermartingale and $g$ satisfies the assumptions in Lemma 2.16, it is easy to check that $Y_t \ge \mathcal{E}^{g_n,Q}_{t,T}[Y_T]\,(= \bar{Y}^Q_t)$, $Q$-a.s. By the definition of $\mathcal{P}(t, P^*)$, we obtain $Y_t \ge \mathcal{E}^{g_n,Q}_{t,T}[Y_T]$, $P^*$-a.s. Again by Lemma 3.4, we have $\operatorname*{ess\,sup}^{P^*}_{Q \in \mathcal{P}(t,P^*)} \mathcal{E}^{g_n,Q}_{t,T}[Y_T] = y^n_t$, $P^*$-a.s. This leads to a contradiction.
It follows from the comparison theorem that $y^n_t \le y^{n+1}_t$. Thus for all $n = 1, 2, \cdots$, $|y^n_t|$ is dominated by $|y^1_t| \vee |Y_t|$. Then we can find a constant $C$ independent of $n$ such that, for $1 < \alpha < \beta$ and all $n = 1, 2, \cdots$,
$$\hat{\mathbb{E}}\Big[\sup_{t\in[0,T]} |y^n_t|^\alpha\Big] \le C.$$
Now let $L^n_t = n\int_0^t (Y_s - y^n_s)\,ds$; then $(L^n_t)_{t\in[0,T]}$ is an increasing process. We can rewrite $G$-BSDE (6) as
$$y^n_t = Y_T + \int_t^T g(s, y^n_s, z^n_s)\,ds + (L^n_T - L^n_t) - \int_t^T z^n_s\,dB_s - (K^n_T - K^n_t).$$
Lemma 3.6 There exists a constant $C$ independent of $n$ such that, for $1 < \alpha < \beta$,
$$\hat{\mathbb{E}}[|L^n_T|^\alpha] + \hat{\mathbb{E}}[|K^n_T|^\alpha] \le C.$$
Proof. By an analysis similar to Proposition 3.5 in [10], we have an a priori estimate in which the constant $C_\alpha$ depends on $\alpha, T, G$ and $L$. Thus we conclude that there exists a constant $C$ independent of $n$ such that, for $1 < \alpha < \beta$, the $\alpha$-th moment of $L^n_T - K^n_T$ is bounded by $C$. Since $L^n_T$ and $-K^n_T$ are nonnegative, we get the conclusion of the lemma. For $1 < \alpha < \beta$, we obtain the following inequality by Hölder's inequality under $\hat{\mathbb{E}}$:
$$\hat{\mathbb{E}}\Big[\int_0^T |Y_s - y^n_s|^\alpha\,ds\Big] \le \Big(\hat{\mathbb{E}}\Big[\sup_{t\in[0,T]}|Y_t - y^n_t|^{(\alpha-1)p}\Big]\Big)^{1/p}\Big(\hat{\mathbb{E}}\Big[\Big(\int_0^T |Y_s - y^n_s|\,ds\Big)^{q}\Big]\Big)^{1/q},$$
where $p, q > 1$ satisfy $\frac{1}{p} + \frac{1}{q} = 1$, $(\alpha - 1)p < \beta$ and $q < \beta$. By estimate (7) and Lemma 3.6, there exists a constant $C$ independent of $n$ such that
$$\hat{\mathbb{E}}\Big[\int_0^T |Y_s - y^n_s|^\alpha\,ds\Big] \le \frac{C}{n} \to 0, \quad \text{as } n \to \infty.$$
This implies that $y^n$ converges to $Y$ in $M^\alpha_G(0,T)$. In fact, this convergence holds in $S^\alpha_G(0,T)$. In order to prove this conclusion, we need the following uniform continuity property of any $Y \in S^p_G(0,T)$ with $p > 1$.
Proof. By applying $G$-Itô's formula to $e^{-nt} y^n_t$, we get
$$e^{-nt} y^n_t = \hat{\mathbb{E}}_t\Big[e^{-nT} Y_T + \int_t^T n e^{-ns} Y_s\,ds + \int_t^T e^{-ns} g(s, y^n_s, z^n_s)\,ds\Big].$$
Then we obtain
$$y^n_t = \hat{\mathbb{E}}_t\Big[e^{-n(T-t)} Y_T + \int_t^T n e^{n(t-s)} Y_s\,ds + \int_t^T e^{n(t-s)} g(s, y^n_s, z^n_s)\,ds\Big],$$
and
$$\hat{\mathbb{E}}\Big[\Big|\int_t^T e^{n(t-s)} g(s, y^n_s, z^n_s)\,ds\Big|^\alpha\Big] \to 0, \quad \text{as } n \to \infty.$$
For $\varepsilon > 0$, it is simple to show the corresponding estimate, and for $T > \delta > 0$, from the above inequality we obtain the desired result.

$\hat{\mathbb{E}}^g$-supermartingales and related PDEs
In this section, we present the relationship between $\hat{\mathbb{E}}^g$-supermartingales and fully nonlinear parabolic PDEs. For this purpose, we put the $\hat{\mathbb{E}}^g$-supermartingales in a Markovian framework. We make the following assumptions throughout this section. Let $b, h, \sigma: [0,T]\times\mathbb{R} \to \mathbb{R}$ and $g: [0,T]\times\mathbb{R}^3 \to \mathbb{R}$ be deterministic functions satisfying the following conditions:
(H4.1) $b, h, \sigma, g$ are continuous in $t$;
(H4.2) There exists a constant $L > 0$ such that
$$|\phi(t,x) - \phi(t,x')| \le L|x - x'| \ \text{for } \phi = b, h, \sigma, \qquad |g(t,x,y,z) - g(t,x',y',z')| \le L(|x-x'| + |y-y'| + |z-z'|).$$
For each $t \in [0,T]$ and $\xi \in L^2_G(\Omega_t)$, we consider the following type of SDE driven by a 1-dimensional $G$-Brownian motion:
$$X_s = \xi + \int_t^s b(r, X_r)\,dr + \int_t^s h(r, X_r)\,d\langle B\rangle_r + \int_t^s \sigma(r, X_r)\,dB_r, \quad s \in [t,T].$$
We have the following estimates, which can be found in Chapter V of [25].
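Under any fixed scenario of the volatility uncertainty, i.e. under a measure $P_\theta \in \mathcal{P}_M$ with a constant $\theta \in [\underline{\sigma}, \bar{\sigma}]$, one has $dB_s = \theta\,dW_s$ and $d\langle B\rangle_s = \theta^2\,ds$, so the $G$-SDE above reduces to a classical SDE that can be simulated by an Euler scheme; maximizing a payoff over a grid of constant $\theta$ then gives a crude lower bound for $\hat{\mathbb{E}}[\varphi(X_T)]$. The sketch below uses our own discretization, and the coefficient choices are hypothetical:

```python
import numpy as np

def euler_scenario(b, h, sigma, x0, T, theta, n_steps=1000, n_paths=10000, seed=0):
    # One volatility scenario: dB_s -> theta dW_s, d<B>_s -> theta^2 ds, so
    #   dX = b(t,X) dt + h(t,X) theta^2 dt + sigma(t,X) theta dW.
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    X = np.full(n_paths, float(x0))
    for i in range(n_steps):
        t = i * dt
        dW = rng.normal(0.0, np.sqrt(dt), n_paths)
        X = X + (b(t, X) + h(t, X) * theta**2) * dt + sigma(t, X) * theta * dW
    return X

# Hypothetical coefficients: geometric-Brownian-type dynamics with no drift.
XT = euler_scenario(b=lambda t, x: 0.0, h=lambda t, x: 0.0,
                    sigma=lambda t, x: x, x0=1.0, T=1.0, theta=0.5)
# Maximizing E_{P_theta}[phi(X_T)] over a grid of constant theta in
# [sig_lo, sig_hi] gives a lower bound for E_hat[phi(X_T)].
```

Only the supremum over all adapted $[\underline{\sigma}, \bar{\sigma}]$-valued processes $\theta$, not just constants, recovers the full sublinear expectation.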
u ∈ C((0, T ) × R) is said to be a viscosity solution of (17) if it is both a viscosity supersolution and a viscosity subsolution.
The following result can be considered as an "inverse" comparison theorem for viscosity solutions of PDEs. Let $V \in C([0,T]\times\mathbb{R})$ satisfy $|V(t,x)| \le C(1 + |x|^k)$, where $k$ is a positive integer. Assume that for each $0 \le t \le t_1 \le T$ and $x \in \mathbb{R}$,
$$V(t,x) \ge u^{t_1, V(t_1,\cdot)}(t,x),$$
where $u^{t_1, V(t_1,\cdot)}$ denotes the viscosity solution of PDE (17) on $(0,t_1)\times\mathbb{R}$ with Cauchy condition $u^{t_1,V(t_1,\cdot)}(t_1,x) = V(t_1,x)$. Then $V$ is a viscosity supersolution of PDE (17). By (18), $\{Y^{t,x}_s\}_{s\in[t,T]}$ is an $\hat{\mathbb{E}}^g_{t,x}$-supermartingale. It follows from Theorem 4.4 that $V$ is a viscosity supersolution of PDE (17).
Conclusion
We obtain the decomposition theorem of Doob-Meyer's type for $\hat{\mathbb{E}}^g$-supermartingales, which generalizes the results of Peng [20]. Our theorem provides a first step towards the representation theorem for dynamically consistent nonlinear expectations. In contrast to the classical case, the decomposition theorem for $\hat{\mathbb{E}}^g$-submartingales remains open.