Comparison and converse comparison theorems for backward stochastic differential equations with Markov chain noise

Comparison and converse comparison theorems are an important part of the theory of backward stochastic differential equations. In this paper, we obtain comparison results for one-dimensional backward stochastic differential equations with Markov chain noise, extending and generalizing previous work under natural and simplified hypotheses. We then establish a converse comparison theorem for the same type of equation, after defining a type of nonlinear expectation, the $f$-expectation, and deriving its properties.


Introduction
In 1990 Pardoux and Peng [21] considered general backward stochastic differential equations (BSDEs for short) of the following form:
$$Y_t = \xi + \int_t^T g(s, Y_s, Z_s)\,ds - \int_t^T Z_s\,dB_s, \qquad 0 \le t \le T.$$
Here $B$ is a Brownian motion and $g$ is the driver (or drift) of the BSDE. Since then, comparison theorems for BSDEs have attracted extensive attention. El Karoui, Peng and Quenez [13], Cao and Yan [4] and Lin [19] derived comparison theorems for BSDEs with Lipschitz continuous coefficients. Liu and Ren [20] proved a comparison theorem for BSDEs with linear growth and continuous coefficients. Situ [26] obtained a comparison theorem for BSDEs with jumps. Zhang [29] deduced a comparison theorem for BSDEs with two reflecting barriers. Hu and Peng [16] established a comparison theorem for multidimensional BSDEs. Comparison theorems for BSDEs have received much attention because of their importance in applications. For example, the penalization method for reflected BSDEs is based on a comparison theorem (see [10], [12], [18] and [24]). Moreover, research on the properties of g-expectations (see Peng [23]) and the proof of a monotonic limit theorem for BSDEs (see Peng [22]) both depend on comparison theorems. BSDEs with jumps have also been studied by many authors; among others, we mention [1] and [25]. Crépey and Matoussi [9] considered BSDEs with jumps in a more general framework, where a Brownian motion is incorporated in the model and a general random measure is used to model the jumps, which in [1] is a Poisson random measure.
It is natural to ask whether the converse of the above results holds. That is, if we can compare the solutions of two BSDEs with the same terminal condition, can we compare their drivers? Coquet, Hu, Mémin and Peng [8], Briand, Coquet, Mémin and Peng [2], and Jiang [17] derived converse comparison theorems for BSDEs without jumps. De Schemaekere [11] derived a converse comparison theorem for a model with jumps.
In 2012, van der Hoek and Elliott [27] introduced a market model where uncertainties are modeled by a finite state Markov chain, instead of the Brownian motion or related jump diffusions often used when pricing financial derivatives. The Markov chain has a semimartingale representation involving a vector martingale $M = \{M_t \in \mathbb{R}^N,\ t \ge 0\}$. BSDEs in this framework were introduced by Cohen and Elliott [5] as
$$Y_t = \xi + \int_t^T f(s, Y_s, Z_s)\,ds - \int_t^T Z_s'\,dM_s, \qquad 0 \le t \le T.$$
Cohen and Elliott [6] and [7] gave some comparison results for multidimensional BSDEs in the Markov chain model under conditions involving not only the two drivers but also the two solutions. For two one-dimensional BSDEs driven by the Markov chain, we extend the comparison result to a situation involving conditions on the two drivers only. Consequently, our comparison results are easier to use in the one-dimensional case. Moreover, our result in the Markov chain framework requires fewer conditions on the drivers than those in Crépey and Matoussi [9], which are suitable for more general dynamics.
Cohen and Elliott [7] also introduced a nonlinear expectation, the $f$-expectation, based on the comparison results in the same paper. Using our comparison results, we give the $f$-expectation a new definition for one-dimensional BSDEs with Markov chain noise and show that it has properties similar to those in [7]. We then provide a converse comparison result for the same model with the use of the $f$-expectation. The paper is organized as follows. In Section 2, we introduce the model and give some preliminary results. Section 3 presents our comparison result for one-dimensional BSDEs with Markov chain noise. We introduce the $f$-expectation and give its properties in Section 4. The last section establishes a converse comparison theorem.

The Model and Some Preliminary Results
Consider a finite state Markov chain. Following van der Hoek and Elliott [27] and [28], we assume the finite state Markov chain $X = \{X_t,\ t \ge 0\}$ is defined on the probability space $(\Omega, \mathcal{F}, P)$ and that the state space of $X$ is identified with the set of unit vectors $\{e_1, e_2, \cdots, e_N\}$ in $\mathbb{R}^N$, where $e_i = (0, \cdots, 1, \cdots, 0)'$ with 1 in the $i$-th position. Then the Markov chain has the semimartingale representation
$$X_t = X_0 + \int_0^t A_s X_s\,ds + M_t. \quad (1)$$
Here, $A = \{A_t,\ t \ge 0\}$ is the rate matrix of the chain $X$ and $M$ is a vector martingale (see Elliott, Aggoun and Moore [15]). We assume the elements $A_{ij}(t)$ of $A = \{A_t,\ t \ge 0\}$ are bounded; then the martingale $M$ is square integrable. Take $\mathcal{F}_t = \sigma\{X_s;\ 0 \le s \le t\}$ to be the $\sigma$-algebra generated by the Markov process $X = \{X_t\}$ and $\{\mathcal{F}_t\}$ to be its filtration. Since $X$ is right continuous with left limits (written RCLL), the filtration $\{\mathcal{F}_t\}$ is also right-continuous. The following result is given in Elliott [14] as Lemma 2.21. The product rule for semimartingales stated next can also be found in [14].
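To make this concrete, the following is a minimal numerical sketch (our own illustration, not from the paper), assuming the standard representation $X_t = X_0 + \int_0^t A_sX_s\,ds + M_t$ with a constant, hypothetical rate matrix $A$: simulating many paths of a three-state chain, the sample average of $M_T = X_T - X_0 - \int_0^T A X_s\,ds$ should be close to zero, as the martingale property requires.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 3
# Hypothetical rate matrix (columns sum to zero): A[j, i] is the jump
# intensity from state e_i to state e_j.
A = np.array([[-1.0, 0.5, 0.2],
              [ 0.6, -1.0, 0.8],
              [ 0.4, 0.5, -1.0]])

def simulate_M(T=1.0, dt=2e-3):
    """Euler simulation of one path of the chain; returns M_T."""
    x = np.eye(N)[0]                     # start in state e_1
    x0 = x.copy()
    drift = np.zeros(N)
    for _ in range(int(T / dt)):
        i = int(np.argmax(x))
        drift += A @ x * dt              # accumulate int_0^t A_s X_s ds
        for j in range(N):               # at most one jump per small step
            if j != i and rng.random() < A[j, i] * dt:
                x = np.eye(N)[j]
                break
    return x - x0 - drift                # M_T = X_T - X_0 - int_0^T A X_s ds

paths = np.array([simulate_M() for _ in range(1000)])
print(np.abs(paths.mean(axis=0)).max())  # small: sample mean of M_T is ~ 0
```

Note that the components of each $M_T$ sum to zero exactly, since the components of $X$ sum to one and the columns of $A$ sum to zero.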

Lemma 2.2 (Product Rule for Semimartingales). Let $Y$ and $Z$ be two scalar RCLL semimartingales with no continuous martingale part. Then
$$Y_tZ_t = Y_0Z_0 + \int_0^t Y_{s-}\,dZ_s + \int_0^t Z_{s-}\,dY_s + \sum_{0<s\le t}\Delta Z_s\Delta Y_s. \quad (2)$$
Here, $\sum_{0<s\le t}\Delta Z_s\Delta Y_s$ is the optional covariation of $Y_t$ and $Z_t$ and is also written $[Y, Z]_t$. Here, $\langle X, X\rangle$ is the unique predictable $N\times N$ matrix process such that $[X, X] - \langle X, X\rangle$ is a matrix-valued martingale. Applying the product rule to $X$ with itself, and noting that $X_tX_t' = \mathrm{diag}(X_t)$, we write
$$\mathrm{diag}(X_t) = X_0X_0' + \int_0^t X_{s-}\,dX_s' + \int_0^t (dX_s)\,X_{s-}' + [X, X]_t. \quad (3)$$
However,
$$\mathrm{diag}(X_t) = \mathrm{diag}(X_0) + \int_0^t \mathrm{diag}(A_sX_{s-})\,ds + \int_0^t \mathrm{diag}(dM_s). \quad (4)$$
Equating the predictable terms in (3) and (4), we have
$$d\langle X, X\rangle_t = \big(\mathrm{diag}(A_tX_{t-}) - \mathrm{diag}(X_{t-})A_t' - A_t\,\mathrm{diag}(X_{t-})\big)\,dt. \quad (5)$$
For $n \in \mathbb{N}$, denote for $\varphi \in \mathbb{R}^n$ the Euclidean norm $|\varphi|_n = \sqrt{\varphi'\varphi}$ and for $\psi \in \mathbb{R}^{n\times n}$ the matrix norm $\|\psi\|_{n\times n} = \sqrt{\mathrm{Tr}(\psi'\psi)}$. Let $\Psi$ be the matrix
$$\Psi_t = \mathrm{diag}(A_tX_{t-}) - \mathrm{diag}(X_{t-})A_t' - A_t\,\mathrm{diag}(X_{t-}). \quad (6)$$
Then $d\langle X, X\rangle_t = \Psi_t\,dt$. For any $t > 0$, Cohen and Elliott [5, 7] define the semi-norm $\|\cdot\|_{X_t}$, for $C, D \in \mathbb{R}^{N\times K}$, as
$$\langle C, D\rangle_{X_t} = \mathrm{Tr}(C'\Psi_tD), \qquad \|C\|_{X_t}^2 = \langle C, C\rangle_{X_t}.$$
We only consider the case $C \in \mathbb{R}^N$; hence we introduce the semi-norm $\|\cdot\|_{X_t}$ as
$$\|C\|_{X_t}^2 = C'\Psi_tC.$$
It follows from equation (5) that
$$E\Big[\Big(\int_0^t Z_s'\,dM_s\Big)^2\Big] = E\Big[\int_0^t \|Z_s\|_{X_{s-}}^2\,ds\Big]. \quad (7)$$
Consider a one-dimensional BSDE with Markov chain noise as follows:
$$Y_t = \xi + \int_t^T f(s, Y_s, Z_s)\,ds - \int_t^T Z_s'\,dM_s. \quad (8)$$
Here the terminal condition $\xi$ and the coefficient $f$ are known. For $t > 0$, denote by $L^2(\mathcal{F}_t)$ the set of square-integrable $\mathcal{F}_t$-measurable random variables. Lemma 2.3 (Theorem 6.2 in Cohen and Elliott [5]) gives the existence and uniqueness result for solutions of BSDEs driven by Markov chains.

Lemma 2.3. Assume $\xi \in L^2(\mathcal{F}_T)$ and the predictable function $f: \Omega\times[0,T]\times\mathbb{R}\times\mathbb{R}^N \to \mathbb{R}$ satisfies a Lipschitz condition, in the sense that there exist two constants $l_1, l_2 > 0$ such that for each $y_1, y_2 \in \mathbb{R}$ and $z_1, z_2 \in \mathbb{R}^N$,
$$|f(t, y_1, z_1) - f(t, y_2, z_2)| \le l_1|y_1 - y_2| + l_2\|z_1 - z_2\|_{X_{t-}}. \quad (9)$$
We also assume $f$ satisfies
$$E\Big[\int_0^T |f(t, 0, 0)|^2\,dt\Big] < \infty. \quad (10)$$
Then there exists a solution $(Y, Z)$ to the BSDE (8). Moreover, (1) $Y$ is adapted and RCLL with $E[\sup_{0\le t\le T}|Y_t|^2] < \infty$; (2) $Z$ is predictable with $E[\int_0^T \|Z_s\|_{X_s}^2\,ds] < \infty$; (3) this solution is unique up to indistinguishability for $Y$ and equality $d\langle M, M\rangle_t\times P$-a.s. for $Z$.
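As a quick sanity check on $\Psi$ and the semi-norm, the following sketch uses a hypothetical rate matrix and assumes the Cohen–Elliott form $\Psi_t = \mathrm{diag}(A_tX_{t-}) - \mathrm{diag}(X_{t-})A_t' - A_t\,\mathrm{diag}(X_{t-})$. For the chain in state $e_i$, one can check that $C'\Psi C = \sum_{j\ne i} A_{ji}(C_j - C_i)^2 \ge 0$, and that the quadratic form vanishes on constant vectors, so $\|\cdot\|_{X_t}$ is only a semi-norm.

```python
import numpy as np

# Hypothetical rate matrix (columns sum to zero).
A = np.array([[-1.0, 0.5, 0.2],
              [ 0.6, -1.0, 0.8],
              [ 0.4, 0.5, -1.0]])
N = A.shape[0]

def psi(x):
    """Psi for the chain in (unit-vector) state x, assumed Cohen-Elliott form."""
    return np.diag(A @ x) - np.diag(x) @ A.T - A @ np.diag(x)

i = 0
P = psi(np.eye(N)[i])
assert np.allclose(P, P.T)                     # Psi is symmetric

C = np.array([1.0, -2.0, 3.0])
quad = C @ P @ C                               # = ||C||_{X_t}^2
explicit = sum(A[j, i] * (C[j] - C[i])**2 for j in range(N) if j != i)
print(quad, explicit)                          # the two expressions agree

ones = np.ones(N)
print(ones @ P @ ones)                         # = 0: semi-norm, not a norm
```

The explicit sum makes the non-negativity of $\|C\|_{X_t}^2$ transparent: it is a weighted sum of squared differences $C_j - C_i$ over states reachable from $e_i$.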
The following lemma extends Lemma 2.3 to a stopping-time horizon (see Cohen and Elliott [7]).
Lemma 2.4. Suppose $\tau > 0$ is a stopping time such that there exists a real value $T$ with $P(\tau > T) = 0$, $\xi \in L^2(\mathcal{F}_\tau)$, and $f$ satisfies (9) and (10) with integration from 0 to $\tau$. Then the BSDE
$$Y_t = \xi + \int_t^\tau f(s, Y_s, Z_s)\,ds - \int_t^\tau Z_s'\,dM_s$$
has a unique solution satisfying (1), (2) and (3).

Recall the matrix $\Psi$ given by (6). We adapt Lemma 3.5 in Cohen and Elliott [7] to our one-dimensional framework as follows:

Lemma 2.6. For any driver $f$ satisfying (9) and (10), and for any $Y$ and $Z$,
$$\|Z_t - \Psi_t\Psi_t^\dagger Z_t\|_{X_t} = 0, \qquad f(t, Y_t, Z_t) = f(t, Y_t, \Psi_t\Psi_t^\dagger Z_t), \qquad \int_0^T Z_s'\,dM_s = \int_0^T (\Psi_s\Psi_s^\dagger Z_s)'\,dM_s.$$
Therefore, without any loss of generality, we assume $Z = \Psi\Psi^\dagger Z$.
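The reduction $Z = \Psi\Psi^\dagger Z$ can be illustrated numerically (a sketch with a hypothetical rate matrix, assuming the Cohen–Elliott form of $\Psi$): since $\Psi$ is symmetric, $\Psi\Psi^\dagger$ is the orthogonal projection onto $\mathrm{range}(\Psi)$, so the residual $Z - \Psi\Psi^\dagger Z$ lies in $\ker(\Psi)$ and has zero $\|\cdot\|_{X_t}$ semi-norm.

```python
import numpy as np

# Hypothetical rate matrix (columns sum to zero).
A = np.array([[-1.0, 0.5, 0.2],
              [ 0.6, -1.0, 0.8],
              [ 0.4, 0.5, -1.0]])
N = A.shape[0]
x = np.eye(N)[0]                                  # chain in state e_1
Psi = np.diag(A @ x) - np.diag(x) @ A.T - A @ np.diag(x)

proj = Psi @ np.linalg.pinv(Psi)                  # Psi Psi^dagger
Z = np.array([2.0, -1.0, 0.5])
resid = Z - proj @ Z                              # component in ker(Psi)

print(np.allclose(proj @ proj, proj))             # True: a projection
print(np.isclose(resid @ Psi @ resid, 0.0))       # True: zero semi-norm
```

So replacing $Z$ by $\Psi\Psi^\dagger Z$ changes neither the semi-norm $\|Z\|_{X_t}$ nor, by (9), the value of the driver.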

A comparison theorem for one-dimensional BSDEs with Markov chain noise

Assumption 3.1. Assume the Lipschitz constant $l_2$ of the driver $f$ given in (9) satisfies a bound expressed in terms of $\Psi$, given in (6), and of $m > 0$, the bound of $\|A_t\|_{N\times N}$ for any $t \in [0, T]$.

Assumption 3.2. Assume the Lipschitz constant $l_2$ of the driver $f$ given in (9) satisfies a bound expressed in terms of $\Psi$, given in (6), and of $m > 0$, the bound of $\|A_t\|_{N\times N}$ for any $t \in [0, T]$.

Lemma 3.4 (Duality). For $t \in [0, T]$, consider the one-dimensional (adjoint) SDE for $U$ with $U_t = 1$.
Then the solution of the one-dimensional linear BSDE (12) satisfies
$$Y_t = E\Big[\xi U_T + \int_t^T f_sU_s\,ds \,\Big|\, \mathcal{F}_t\Big].$$
Proof. Applying Ito's formula to $U_sY_s$, $t \le s \le T$, using Lemma 2.6, integrating from $t$ to $T$ and taking the conditional expectation given $\mathcal{F}_t$, we deduce, for each fixed $t \in [0, T]$, that $Y_t = E[\xi U_T + \int_t^T f_sU_s\,ds \mid \mathcal{F}_t]$. Since $Y_\cdot$ and $E[\xi U_T + \int_\cdot^T f_sU_s\,ds \mid \mathcal{F}_\cdot]$ are both RCLL, by Lemma 2.1 the result holds.
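The computation behind the duality lemma can be sketched as follows. This is our reconstruction: we assume the linear BSDE (12) has a driver of the form $f(s, y, z) = \alpha_s y + \langle \beta_s, z\rangle + f_s$ and that $U$ solves the corresponding adjoint SDE with $U_t = 1$; the coefficient names $\alpha$, $\beta$ are hypothetical.

```latex
% Sketch of the duality argument (reconstruction; the coefficients
% \alpha, \beta and the precise form of (12) are assumptions).
\begin{align*}
d(U_sY_s) &= U_{s-}\,dY_s + Y_{s-}\,dU_s + d[U, Y]_s \\
          &= -U_sf_s\,ds + (\text{local martingale increments}),
\end{align*}
% since the drift terms in \alpha and \beta cancel between dY and dU.
% Hence s \mapsto U_sY_s + \int_t^s U_rf_r\,dr is a martingale on [t, T].
% Taking conditional expectations given \mathcal{F}_t and using U_t = 1:
\[
  Y_t = U_tY_t
      = E\Big[\,U_TY_T + \int_t^T U_rf_r\,dr \,\Big|\, \mathcal{F}_t\Big]
      = E\Big[\,\xi U_T + \int_t^T f_rU_r\,dr \,\Big|\, \mathcal{F}_t\Big].
\]
```

The cancellation of the $\alpha$ and $\beta$ terms is exactly what the adjoint SDE for $U$ is designed to achieve.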
Lemma 3.5. For any $C \in \mathbb{R}^N$,
$$\|C\|_{X_t}^2 \le 3m\,|C|_N^2,$$
where $m > 0$ is the bound of $\|A_t\|_{N\times N}$ for any $t \in [0, T]$.

The $f$-expectation
Now we introduce the nonlinear expectation called the $f$-expectation. The $f$-expectation, for a fixed driver $f$, is an interpretation of the solution to a BSDE as a type of nonlinear expectation. Here, we give the one-dimensional case of the definitions and properties in Cohen and Elliott [7], based on our comparison theorems.

Assumption 4.1. (II) For all $(y, z) \in \mathbb{R}\times\mathbb{R}^N$, $t \mapsto f(t, y, z)$ is continuous.
In this section, we suppose the driver $f$ satisfies Assumption 3.2 and Assumption 4.1. Before introducing the $f$-expectation, we give the following definition:

Definition 4.4. For $\xi \in L^2(\mathcal{F}_T)$ and a driver $f$, define
$$\mathcal{E}_f[\xi \mid \mathcal{F}_t] := Y_t, \qquad 0 \le t \le T,$$
where $(Y, Z)$ is the unique solution of the BSDE (8) with terminal condition $\xi$.

The following properties follow directly from Definition 4.4, Proposition 4.3 and Lemma 2.4.

Proposition 4.5. Let $s, t \le T$ be two stopping times.
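As an illustration of Definition 4.4 (a numerical sketch with hypothetical data, not from the paper), the Markov property gives $Y_t = y(t, X_t)$ for a vector-valued function $y$, so $\mathcal{E}_f[\xi \mid \mathcal{F}_t]$ can be computed by stepping backward from $\xi$; for simplicity the driver below depends only on $y$. With $f \equiv 0$ the $f$-expectation reduces to the classical conditional expectation, and the constant driver $f \equiv 1$ shifts every value by $T - t$.

```python
import numpy as np

# Hypothetical rate matrix (columns sum to zero) and terminal payoff xi = h(X_T).
A = np.array([[-1.0, 0.5, 0.2],
              [ 0.6, -1.0, 0.8],
              [ 0.4, 0.5, -1.0]])
N, T, n = 3, 1.0, 1000
dt = T / n
xi = np.array([1.0, 0.0, 2.0])              # payoff per state e_1, e_2, e_3

def f_expectation(f):
    """Backward Euler for Y_t = y(t, X_t); returns y(0, .) over the states."""
    y = xi.copy()
    step = np.eye(N) + dt * A               # first-order transition matrix
    for _ in range(n):
        y = step.T @ y + dt * f(y)          # conditional expectation + driver
    return y

y_lin = f_expectation(lambda y: 0.0 * y)    # f = 0: classical expectation
y_drv = f_expectation(lambda y: np.ones(N)) # f = 1: adds T to every state
print(y_drv - y_lin)                        # ≈ [1. 1. 1.]
```

The shift by exactly $T = 1$ reflects the constancy-preserving behaviour of the $f$-expectation for a driver independent of $(y, z)$; for a genuinely nonlinear driver the operator $\mathcal{E}_f$ is no longer additive.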

A Converse Comparison Theorem for One-Dimensional BSDEs with Markov Chain Noise
Our converse comparison theorem uses the theory of the $f$-expectation developed in the previous section. For $i = 1, 2$, consider the BSDEs with the same terminal condition $\xi$:
$$Y_t^i = \xi + \int_t^T f_i(s, Y_s^i, Z_s^i)\,ds - \int_t^T (Z_s^i)'\,dM_s, \qquad 0 \le t \le T.$$
ii) $P\big(f_1(t, y, z) \le f_2(t, y, z),\ \text{for any } (t, y, z) \in [0, T]\times\mathbb{R}\times\mathbb{R}^N\big) = 1$.