Partial Information Stochastic Differential Games for Backward Stochastic Systems Driven By L\'{e}vy Processes

In this paper, we consider a partial information two-person zero-sum stochastic differential game problem in which the system is governed by a backward stochastic differential equation driven by the Teugels martingales associated with a L\'{e}vy process and an independent Brownian motion. One sufficient condition (a verification theorem) and one necessary condition for the existence of optimal controls are proved. To illustrate the general results, a linear quadratic stochastic differential game problem is discussed.


Introduction
Consider a partial information two-person zero-sum stochastic differential game problem, where the system is governed by the following nonlinear backward stochastic differential equation (BSDE): for any $t \in [0, T]$,
\[
y(t) = \xi + \int_t^T f\bigl(s, y(s), q(s), z(s), u_1(s), u_2(s)\bigr)\,\mathrm{d}s - \int_t^T q(s)\,\mathrm{d}W(s) - \sum_{i=1}^{\infty}\int_t^T z^{(i)}(s)\,\mathrm{d}H_i(s), \tag{1}
\]
with the cost functional
\[
J(u(\cdot)) = \mathbb{E}\biggl[\phi(y(0)) + \int_0^T l\bigl(t, y(t), q(t), z(t), u_1(t), u_2(t)\bigr)\,\mathrm{d}t\biggr], \tag{2}
\]
where $\{W(t), 0 \le t \le T\}$ is a standard $d$-dimensional Brownian motion and $H(t) = \{H_i(t)\}_{i=1}^{\infty}$, $0 \le t \le T$, are the Teugels martingales associated with a L\'{e}vy process $\{L(t), 0 \le t \le T\}$ (see Section 2 for more details). The filtration generated by the underlying Brownian motion $W$ and the L\'{e}vy process $\{L(t), 0 \le t \le T\}$ is denoted by $\{\mathcal{F}_t\}_{0 \le t \le T}$. The meaning of the variables is given in Assumptions 1 and 2.
In the above, the processes $u_1(\cdot)$ and $u_2(\cdot)$ are open-loop control processes, which represent the controls of the two players. Let $U_1 \subset \mathbb{R}^{k_1}$ and $U_2 \subset \mathbb{R}^{k_2}$ be two given nonempty convex sets. In many situations the full information $\mathcal{F}_t$ is inaccessible to the players, who can only observe partial information. For this reason, an admissible control process $u_i(\cdot)$ for player $i$ is defined as a $\mathcal{G}_t$-predictable process with values in $U_i$. Here, $\mathcal{G}_t \subseteq \mathcal{F}_t$ for all $t \in [0, T]$ is a given subfiltration representing the information available to the controller at time $t$. For example, we could choose $\mathcal{G}_t = \mathcal{F}_{(t-\delta)^+}$, $t \in [0, T]$, where $\delta > 0$ is a fixed delay of information.
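As a numerical aside (an illustrative sketch, not part of the original analysis), the delayed subfiltration $\mathcal{G}_t = \mathcal{F}_{(t-\delta)^+}$ can be probed by Monte Carlo: for a Brownian motion, the increment $W(t) - W((t-\delta)^+)$ is independent of $\mathcal{G}_t$ and has mean zero, so the delayed value $W((t-\delta)^+)$ is the best $\mathcal{G}_t$-measurable predictor of $W(t)$, with residual variance $\delta$.

```python
import numpy as np

rng = np.random.default_rng(42)
n_paths, n_steps, T, delta = 50000, 100, 1.0, 0.2
dt = T / n_steps

# Simulate standard Brownian paths on [0, T] with W(0) = 0.
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
W = np.hstack([np.zeros((n_paths, 1)), np.cumsum(dW, axis=1)])

t_idx = n_steps                  # evaluate at t = T
d_idx = int((T - delta) / dt)    # grid index of (t - delta)^+

# W(t) - W((t - delta)^+) is independent of the delayed information,
# so its sample mean is near 0 and its sample variance is near delta.
residual = W[:, t_idx] - W[:, d_idx]
print(residual.mean(), residual.var())
```

This only illustrates the information structure; the delay parameter and grid size here are arbitrary choices for the sketch.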
Roughly speaking, in the zero-sum differential game, Player I seeks a control $u_1(\cdot)$ to minimize (2), while Player II seeks a control $u_2(\cdot)$ to maximize (2). Let $(u_1^*(\cdot), u_2^*(\cdot))$ be an optimal open-loop control pair satisfying
\[
J\bigl(u_1^*(\cdot), u_2(\cdot)\bigr) \le J\bigl(u_1^*(\cdot), u_2^*(\cdot)\bigr) \le J\bigl(u_1(\cdot), u_2^*(\cdot)\bigr)
\]
for all admissible open-loop controls $(u_1(\cdot), u_2(\cdot)) \in \mathcal{A}_1 \times \mathcal{A}_2$. We denote this partial information stochastic differential game by Problem (P) and refer to $(u_1^*(\cdot), u_2^*(\cdot))$ as an open-loop saddle point of Problem (P). The corresponding strong solution $(y^*(\cdot), q^*(\cdot), z^*(\cdot))$ of (1) is called the saddle state process. Then, $(u_1^*(\cdot), u_2^*(\cdot); y^*(\cdot), q^*(\cdot), z^*(\cdot))$ is called a saddle quintuplet. Game theory has been an active area of research and a useful tool in many applications, particularly in biology and economics. For partial information two-person zero-sum stochastic differential games, the objective is to find a saddle point in a setting where the controller has less information than the complete information filtration $\{\mathcal{F}_t\}_{t \ge 0}$. Recently, An and Øksendal [1] established a maximum principle for stochastic differential games of forward systems with Poisson jumps under the type of partial information considered in our paper. Moreover, we refer to [2, 3] and the references therein for more related results on partial information stochastic differential games.
In 2000, Nualart and Schoutens [4] obtained a martingale representation theorem for a class of L\'{e}vy processes through Teugels martingales, where the Teugels martingales are a family of pairwise strongly orthonormal martingales associated with L\'{e}vy processes. Later, Nualart and Schoutens [5] proved the existence and uniqueness theory of BSDEs driven by Teugels martingales. These results were further extended to one-dimensional BSDEs driven by Teugels martingales and an independent multidimensional Brownian motion by Bahlali et al. [6].
Since the theory of BSDEs driven by Teugels martingales and an independent Brownian motion was established, it is natural to apply it to stochastic optimal control problems. By now, the full information stochastic optimal control problem related to Teugels martingales has been treated in many works. For example, the stochastic linear quadratic problem with L\'{e}vy processes was studied by Mitsui and Tabata [7]. Motivated by [7], Meng and Tang [8] studied the general full information stochastic optimal control problem for forward stochastic systems driven by Teugels martingales and an independent multidimensional Brownian motion and proved the corresponding stochastic maximum principle. Furthermore, Tang and Zhang [9] extended [8] to backward stochastic systems and obtained the corresponding stochastic maximum principle. For the partial information case, in 2012, Bahlali et al. [10] studied the stochastic control problem for forward systems and obtained the corresponding stochastic maximum principle. In the meantime, Meng et al. [11] extended [9] to the partial information stochastic optimal control problem of backward stochastic systems and obtained the corresponding optimality conditions. For recent results on stochastic differential control problems or games, the reader is referred to [12][13][14][15][16][17] and the references therein.
However, to the best of our knowledge, there is little discussion of partial information stochastic differential games for systems driven by Teugels martingales and an independent Brownian motion, which motivates this paper. The main purpose of this paper is to establish partial information necessary and sufficient conditions for optimality for Problem (P) by using the results in [9]. The results obtained here can be considered as a generalization of the stochastic optimal control problem to the two-person zero-sum case. As an application, a two-person zero-sum stochastic differential game of linear backward stochastic differential equations with a quadratic cost criterion under partial information is discussed, and the optimal control is characterized explicitly by the adjoint processes. The rest of this paper is organized as follows. We introduce useful notations and give the needed assumptions in Section 2. Section 3 is devoted to the sufficient condition for the existence of optimal controls. In Section 4, we establish the necessary condition of optimality. In Section 5, a linear quadratic stochastic differential game problem is solved by applying the theoretical results.

Preliminaries and Assumptions
These settings imply that the random variables $L(t)$ have moments of all orders. We denote by $\{H_i(t)\}_{i=1}^{\infty}$ the Teugels martingales associated with the L\'{e}vy process $\{L(t), 0 \le t \le T\}$. The Teugels martingales $\{H_i(t)\}_{i=1}^{\infty}$ are pairwise strongly orthogonal, and their predictable quadratic variation processes are given by $\langle H_i(t), H_j(t)\rangle = \delta_{ij}\, t$. For more details on Teugels martingales, we invite the reader to consult Nualart and Schoutens [4, 5]. Denote by $\mathcal{P}$ the predictable sub-$\sigma$-field of $\mathcal{B}([0, T]) \times \mathcal{F}$; then, we introduce the following notation used throughout this paper.
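As a concrete numerical illustration (an aside assuming a simple special case, not taken from the original text): when the L\'{e}vy process is a Poisson process $N(t)$ with intensity $\lambda$, the first power-jump martingale is the compensated process $Y^{(1)}(t) = N(t) - \lambda t$, from which the Teugels martingales are obtained by orthonormalization. A minimal Monte Carlo check of its zero-mean property and of $\operatorname{Var}[Y^{(1)}(T)] = \lambda T$:

```python
import numpy as np

rng = np.random.default_rng(0)
lam, T, n_paths = 2.0, 1.0, 20000

# N(T) ~ Poisson(lam * T); the first power-jump martingale at time T
# is the compensated Poisson variable Y1 = N(T) - lam * T.
N_T = rng.poisson(lam * T, size=n_paths)
Y1 = N_T - lam * T

# Martingale (zero-mean) property at T, and Var[Y1] = lam * T.
print(Y1.mean(), Y1.var())
```

The higher-order Teugels martingales involve compensated power-jump processes $\sum_{s \le t} (\Delta L(s))^i$ and are omitted from this sketch.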
So, Problem (P) is well defined.

A Partial Information Sufficient Maximum Principle
In this section, we study the sufficient maximum principle for Problem (P). In our setting, the Hamiltonian function $H$ and the adjoint equation, which fit into system (1) and (2) corresponding to a given admissible quintuplet $((u_1(\cdot), u_2(\cdot)); y(\cdot), q(\cdot), z(\cdot))$, are specified as follows: the adjoint equation is a forward stochastic differential equation driven by the multidimensional Brownian motion $W$ and the Teugels martingales $\{H_i\}_{i=1}^{\infty}$. Under Assumptions 1 and 2, the forward stochastic differential equation (18) has a unique solution $k(\cdot) \in \mathcal{G}^2_{\mathcal{F}}(0, T; \mathbb{R}^n)$ by Lemma 2.1 in [9]. We now come to a verification theorem for Problem (P).
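For orientation, following the conventions of [9] (the exact signs and argument ordering depend on the convention adopted there, so the display below is a sketch rather than a verbatim restatement), the Hamiltonian associated with (1) and (2) is typically of the form

```latex
H(t, y, q, z, u_1, u_2, k)
  = \langle k, f(t, y, q, z, u_1, u_2)\rangle + l(t, y, q, z, u_1, u_2),

% and the adjoint process k solves a forward SDE driven by W and the
% Teugels martingales, with derivatives of H evaluated along the given
% admissible quintuplet:
\begin{cases}
\mathrm{d}k(t) = H_y(t)\,\mathrm{d}t + H_q(t)\,\mathrm{d}W(t)
  + \sum_{i=1}^{\infty} H_{z^{(i)}}(t)\,\mathrm{d}H_i(t), & t \in [0, T],\\[2pt]
k(0) = \phi_y(y(0)). &
\end{cases}
```

The forward (initial-value) character of the adjoint equation is what makes it fit a backward controlled system.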

(iii) If both cases (i) and (ii) hold (which implies, in particular, that ϕ(y) is an affine function), then
$(u_1^*(\cdot), u_2^*(\cdot))$ is an open-loop saddle point and
\[
J\bigl(u_1^*(\cdot), u_2^*(\cdot)\bigr) = \sup_{u_2(\cdot) \in \mathcal{A}_2}\, \inf_{u_1(\cdot) \in \mathcal{A}_1} J\bigl(u_1(\cdot), u_2(\cdot)\bigr) = \inf_{u_1(\cdot) \in \mathcal{A}_1}\, \sup_{u_2(\cdot) \in \mathcal{A}_2} J\bigl(u_1(\cdot), u_2(\cdot)\bigr). \quad (26)
\]
Proof. (i) In the following, we consider a stochastic optimal control problem over $\mathcal{A}_1$ with $u_2(\cdot) = u_2^*(\cdot)$ fixed, where the system is (1) with the cost functional
\[
J\bigl(u_1(\cdot), u_2^*(\cdot)\bigr) = \mathbb{E}\biggl[\phi(y(0)) + \int_0^T l\bigl(t, y(t), q(t), z(t), u_1(t), u_2^*(t)\bigr)\,\mathrm{d}t\biggr].
\]
Our optimal control problem is to minimize $J(u_1(\cdot), u_2^*(\cdot))$ over $u_1(\cdot) \in \mathcal{A}_1$, i.e., to find $u_1^*(\cdot) \in \mathcal{A}_1$ such that
\[
J\bigl(u_1^*(\cdot), u_2^*(\cdot)\bigr) = \inf_{u_1(\cdot) \in \mathcal{A}_1} J\bigl(u_1(\cdot), u_2^*(\cdot)\bigr).
\]
Then, for this case, it is easy to check that the Hamiltonian is $H(t, y, q, z, u_1, u_2^*(t), k)$, and for the admissible control $u_1(\cdot) \in \mathcal{A}_1$ the corresponding state process and adjoint process are still $(y(t), q(t), z(t))$ and $k(t)$, respectively, and the optimality condition holds. Thus, from the partial information sufficient maximum principle for optimal control (see Theorem 1 in [9]), we conclude that $u_1^*(\cdot)$ is the optimal control of this optimal control problem, i.e., $J(u_1^*(\cdot), u_2^*(\cdot)) \le J(u_1(\cdot), u_2^*(\cdot))$ for all $u_1(\cdot) \in \mathcal{A}_1$.

The proof of (i) is complete. (ii) This statement can be proved in a similar way as (i). (iii) If both (i) and (ii) hold, then
\[
J\bigl(u_1^*(\cdot), u_2(\cdot)\bigr) \le J\bigl(u_1^*(\cdot), u_2^*(\cdot)\bigr) \le J\bigl(u_1(\cdot), u_2^*(\cdot)\bigr)
\]
for any $(u_1(\cdot), u_2(\cdot)) \in \mathcal{A}_1 \times \mathcal{A}_2$, i.e., $(u_1^*(\cdot), u_2^*(\cdot))$ is an open-loop saddle point of Problem (P), and the minimax relation (26) follows.

A Linear Quadratic Stochastic Differential Game Problem

In this section, we consider a two-person zero-sum stochastic differential game with a quadratic cost criterion, where the state process $(y(\cdot), q(\cdot), z(\cdot))$ is the solution to the controlled linear backward stochastic system below. This problem is denoted by Problem (LQ). To study this problem, we need the following assumptions on the coefficients. There is no further constraint imposed on the control processes; the set of all admissible control processes is $\mathcal{A}_1 \times \mathcal{A}_2$. In what follows, we utilize the stochastic maximum principle to study the dual representation of the game Problem (LQ).
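A representative shape of such a linear quadratic setup (the coefficient names $A, B, C_i, D_1, D_2, M, N_1, N_2$ below are illustrative placeholders for this sketch, not the notation of the original formulation) is a controlled linear backward system with a quadratic cost:

```latex
\begin{cases}
-\mathrm{d}y(t) = \bigl[A(t)y(t) + B(t)q(t)
  + \textstyle\sum_{i=1}^{\infty} C_i(t)\, z^{(i)}(t)
  + D_1(t)u_1(t) + D_2(t)u_2(t)\bigr]\mathrm{d}t \\
\qquad\qquad\; - q(t)\,\mathrm{d}W(t)
  - \textstyle\sum_{i=1}^{\infty} z^{(i)}(t)\,\mathrm{d}H_i(t),
  \qquad t \in [0, T],\\
y(T) = \xi,
\end{cases}

% A quadratic cost penalizing Player I's control and rewarding Player II's:
J(u_1(\cdot), u_2(\cdot)) = \tfrac{1}{2}\,\mathbb{E}\Bigl[\langle M y(0), y(0)\rangle
  + \int_0^T \bigl(\langle N_1(t)u_1(t), u_1(t)\rangle
  - \langle N_2(t)u_2(t), u_2(t)\rangle\bigr)\,\mathrm{d}t\Bigr].
```

In such a setting the saddle controls are typically characterized pointwise through the adjoint process $k(\cdot)$ via the first-order conditions of the Hamiltonian in $u_1$ and $u_2$.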

Data Availability
No data were used to support this study.

Conflicts of Interest
The authors declare that they have no conflicts of interest.