Uniqueness of the representation for $G$-martingales with finite variation

Our purpose is to prove the uniqueness of the representation for $G$-martingales with finite variation.


Introduction
In [P07b], processes of the form $\int_0^t \eta_s\,d\langle B\rangle_s - \int_0^t 2G(\eta_s)\,ds$, $\eta \in M^1_G(0,T)$, are proved to be $G$-martingales. However, the uniqueness of this representation remains unresolved. In order to prove the uniqueness, we must find ways to distinguish the two classes of processes of the form $\int_0^t \eta_s\,d\langle B\rangle_s$ and $\int_0^t \zeta_s\,ds$, $\eta, \zeta \in M^1_G(0,T)$. For a process $\{K_t\}$ with finite variation, motivated by [Song10], we introduce a quantity built from auxiliary functions $\delta_n(s)$, $n \in \mathbb{N}$, defined along a refining sequence of partitions of $[0,T]$. As an application, we prove the uniqueness of the representation for $G$-martingales with finite variation.
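For concreteness, here is the classical one-dimensional instance of such a process (a standard example we record here, taking $\eta \equiv 1$; it is not taken verbatim from this article):

```latex
% d = 1, \eta \equiv 1: then 2G(1) = \overline{\sigma}^2 := \hat{E}[B_1^2],
% and since \langle B\rangle_t \le \overline{\sigma}^2 t quasi-surely,
\begin{equation*}
  K_t \;=\; \int_0^t 1\, d\langle B\rangle_s - \int_0^t 2G(1)\, ds
      \;=\; \langle B\rangle_t - \overline{\sigma}^2 t \;\le\; 0
\end{equation*}
% is a non-positive G-martingale with finite variation.
```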
This article is organized as follows: in Section 2, we recall some basic notions and results of $G$-expectation and the related space of random variables; in Section 3, we present the main results and some corollaries; in Section 4, we give the proofs of the main results.

Preliminaries
We recall some basic notions and results of $G$-expectation and the related space of random variables. Further details can be found in [P07a, P07b, P08, P10].
Definition 2.1 Let $\Omega$ be a given set and let $\mathcal{H}$ be a linear space of real-valued functions defined on $\Omega$ with $c \in \mathcal{H}$ for all constants $c$. $\mathcal{H}$ is considered as the space of random variables. A sublinear expectation $\hat{E}$ on $\mathcal{H}$ is a functional $\hat{E} : \mathcal{H} \to \mathbb{R}$ satisfying the following properties: for all $X, Y \in \mathcal{H}$, we have
(a) Monotonicity: if $X \ge Y$, then $\hat{E}(X) \ge \hat{E}(Y)$;
(b) Constant preserving: $\hat{E}(c) = c$, for all $c \in \mathbb{R}$;
(c) Sub-additivity: $\hat{E}(X + Y) \le \hat{E}(X) + \hat{E}(Y)$;
(d) Positive homogeneity: $\hat{E}(\lambda X) = \lambda \hat{E}(X)$, $\lambda \ge 0$.
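As a toy numerical illustration (ours, not from the article), a sublinear expectation can be realized as the supremum of ordinary expectations over a family of probability measures; the sketch below checks properties (b), (c), (d) on a finite sample space, with an ad hoc two-measure "representing" family.

```python
import numpy as np

def sublinear_E(X, weight_family):
    """Toy sublinear expectation: sup_P E_P[X] over a finite family
    of probability vectors on a finite sample space."""
    return max(float(np.dot(p, X)) for p in weight_family)

# Two candidate measures on a 3-point sample space (illustrative family).
family = [np.array([0.5, 0.5, 0.0]), np.array([0.2, 0.3, 0.5])]

X = np.array([1.0, -2.0, 3.0])
Y = np.array([0.5, 4.0, -1.0])

# (c) Sub-additivity: E^(X+Y) <= E^(X) + E^(Y)
assert sublinear_E(X + Y, family) <= sublinear_E(X, family) + sublinear_E(Y, family)
# (d) Positive homogeneity: E^(lam*X) = lam * E^(X), lam >= 0
lam = 2.5
assert abs(sublinear_E(lam * X, family) - lam * sublinear_E(X, family)) < 1e-12
# (b) Constant preserving: E^(c) = c, since each weight vector sums to 1
assert abs(sublinear_E(np.full(3, 7.0), family) - 7.0) < 1e-12
```

Note that monotonicity (a) also holds, since each $E_P$ is monotone and the supremum preserves pointwise inequalities.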
Definition 2.3 In a sublinear expectation space $(\Omega, \mathcal{H}, \hat{E})$, a random vector $Y = (Y_1, \cdots, Y_n)$, $Y_i \in \mathcal{H}$, is said to be independent of another random vector $X = (X_1, \cdots, X_m)$, $X_i \in \mathcal{H}$, if for each test function $\varphi$ we have $\hat{E}[\varphi(X, Y)] = \hat{E}\big[\hat{E}[\varphi(x, Y)]_{x=X}\big]$.

A $d$-dimensional random vector $X$ is said to be $G$-normally distributed if for each $a, b \ge 0$ we have $aX + b\bar{X} \sim \sqrt{a^2 + b^2}\,X$, where $\bar{X}$ is an independent copy of $X$. Here the letter $G$ denotes the function
$$G(A) := \frac{1}{2}\hat{E}[(AX, X)], \qquad A \in \mathbb{S}_d,$$
where $\mathbb{S}_d$ denotes the collection of $d \times d$ symmetric matrices.
The function $G(\cdot) : \mathbb{S}_d \to \mathbb{R}$ is a monotonic, sublinear mapping on $\mathbb{S}_d$, and $G(A) = \frac{1}{2}\hat{E}[(AX, X)] \le \frac{1}{2}|A|\hat{E}[|X|^2] =: \frac{1}{2}|A|\bar{\sigma}^2$ implies that there exists a bounded, convex and closed subset $\Gamma \subset \mathbb{S}_d^+$ such that
$$G(A) = \frac{1}{2} \sup_{\gamma \in \Gamma} \operatorname{tr}[\gamma A].$$
For $t \in [0, T]$ and $\xi = \varphi(B_{t_1}, \ldots, B_{t_n}) \in \mathcal{H}_T^0$, the conditional expectation is defined by (there is no loss of generality, we assume $t = t_i$)
$$\hat{E}_{t_i}[\varphi(B_{t_1}, \ldots, B_{t_n})] = \tilde{\varphi}(B_{t_1}, \ldots, B_{t_i}),$$
where $\tilde{\varphi}(x_1, \ldots, x_i) = \hat{E}\big[\varphi(x_1, \ldots, x_i,\, x_i + B_{t_{i+1}} - B_{t_i}, \ldots, x_i + B_{t_n} - B_{t_i})\big]$. The mapping $\hat{E}_{t_i}[\cdot]$ is continuous on $\mathcal{H}_T^0$ with respect to the norm $\|\cdot\|_{1,G}$ and therefore can be extended continuously to the completion $L_G^1(\Omega_T)$. Let $M_G^0(0, T)$ be the collection of processes of the form
$$\eta_t(\omega) = \sum_{j=0}^{N-1} \xi_j(\omega)\, 1_{[t_j, t_{j+1})}(t)$$
for a given partition $\{t_0, \ldots, t_N\}$ of $[0, T]$, with $\xi_j \in \mathcal{H}_{t_j}^0$; for $p \ge 1$, $M_G^p(0, T)$ denotes the completion of $M_G^0(0, T)$ under the norm $\|\eta\|_{M_G^p} = \big(\hat{E}[\int_0^T |\eta_s|^p\,ds]\big)^{1/p}$. Finally, a weakly compact family $\mathcal{P}$ of probability measures such that $\hat{E}[X] = \sup_{P \in \mathcal{P}} E_P[X]$ for all $X \in L_G^1(\Omega_T)$ is called a set that represents $\hat{E}$.
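In dimension $d = 1$, the set $\Gamma$ reduces to an interval and $G$ has a familiar explicit form (a standard computation, recorded here for concreteness):

```latex
% For d = 1, \Gamma = [\underline{\sigma}^2, \overline{\sigma}^2] with
% \overline{\sigma}^2 = \hat{E}[B_1^2] and \underline{\sigma}^2 = -\hat{E}[-B_1^2], so
\begin{equation*}
  G(a) \;=\; \frac{1}{2}\sup_{\underline{\sigma}^2 \le \gamma \le \overline{\sigma}^2} \gamma a
       \;=\; \frac{1}{2}\bigl(\overline{\sigma}^2 a^+ - \underline{\sigma}^2 a^-\bigr),
  \qquad a \in \mathbb{R}.
\end{equation*}
```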

Main results
In the sequel, we only consider the one-dimensional $G$-expectation space $(\Omega_T, L_G^1(\Omega_T), \hat{E})$ with $\hat{E}[B_1^2] = \bar{\sigma}^2$ and $-\hat{E}[-B_1^2] = \underline{\sigma}^2$. We first establish some preparatory estimates, involving quantities $a_i(n)$ obtained as maxima of conditional expectations along the partition points, and pass to the limit as $n$ goes to infinity. We then state the main result of this article, Theorem 3.3, whose proof is postponed to Section 4.

Remark 3.5 Let $(\Omega, \mathcal{F}, \mathbb{F}, P)$ be a filtered probability space. We recall that for any progressively measurable process $\eta$ such that $E[\int_0^T |\eta_s|\,ds] < \infty$, if $\int_0^t \eta_s\,ds$ is an $\mathbb{F}$-martingale, then $\eta = 0$, $dt \times dP$-a.e.

The following corollary is about the uniqueness of the representation for $G$-martingales with finite variation.

Proof of Theorem 3.3
In order to prove Theorem 3.3, we first introduce two lemmas.
Let $\Omega_T = C_b([0, T]; \mathbb{R})$ be endowed with the supremum norm and let $\sigma : [0, T] \times \Omega_T \to \mathbb{R}$ be a measurable mapping satisfying: i) $\sigma$ is bounded; ii) there exists $C > 0$ such that $|\sigma(s, x) - \sigma(s, y)| \le C\|x - y\|$ for any $s \in [0, T]$ and $x, y \in \Omega_T$. Then the following lemma is straightforward.
Lemma 4.1 Let $(\Omega, \mathcal{F}, \mathbb{F}, P)$ be a filtered probability space and let $M$ be a continuous $\mathbb{F}$-martingale with $\langle M\rangle_t - \langle M\rangle_s \le C(t - s)$ for some $C > 0$ and any $0 \le s < t \le T$. Let $\mathbb{F}^M$ be the augmented filtration generated by $M$. Then for any $Y_0 \in \mathcal{F}_0^M$, there exists a unique $\mathbb{F}^M$-adapted continuous process $Y$ such that $Y_t = Y_0 + \int_0^t \sigma(s, Y)\,dM_s$.

Let $(\Omega, \mathcal{F}, P)$ be a probability space and let $\{W_t\}$ be a standard 1-dimensional Brownian motion on $(\Omega, \mathcal{F}, P)$. Let $\mathbb{F}^W$ be the augmented filtration generated by $W$.
Denote by $\mathcal{A}_0([c, C])$, for some $0 < c \le C < \infty$, the collection of $\mathbb{F}^W$-adapted step processes taking values in $[c, C]$. For $h \in \mathcal{A}_0([c, C])$, $\sigma(s, x) := h_s(x)^{-1}$ is a bounded Lipschitz function. Let $X_t := \int_0^t h_s\,dW_s$. Since $W_t = \int_0^t \sigma(s, W)\,dX_s$, we conclude, by Lemma 4.1, that $W$ is $\mathbb{F}^X$-adapted.
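The inversion $W_t = \int_0^t h_s^{-1}\,dX_s$ can be illustrated numerically on a discrete grid (a sketch of ours with an ad hoc step process $h$ taking values in $[c, C] = [0.5, 2.0]$; all names are assumptions for the illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
T, n = 1.0, 1000
dt = T / n
dW = rng.normal(0.0, np.sqrt(dt), n)               # Brownian increments
W = np.concatenate(([0.0], np.cumsum(dW)))         # W on the grid

# Step process h, piecewise constant on two subintervals; the level on the
# second half depends on W at T/2, so h is adapted to the filtration of W.
h = np.where(np.arange(n) < n // 2, 0.5, 2.0 if W[n // 2] > 0 else 1.0)

dX = h * dW                                        # Euler scheme for X = \int h dW
W_rec = np.concatenate(([0.0], np.cumsum(dX / h))) # \int h^{-1} dX recovers W

assert np.max(np.abs(W_rec - W)) < 1e-12           # exact recovery on the grid
```

On the grid the recovery is exact because $dX/h = dW$ increment by increment; the point of Lemma 4.1 is the measurability statement, namely that $W$ can be recovered from the path of $X$ alone.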
For a process $\{X_t\}$, we consider the vector of its values at the partition points. Without loss of generality, we assume that there exists $K_{ji} \in \mathbb{N}$ such that $n_j = K_{ji} n_i$ for $m - 1 \ge i > j \ge 0$.