Stochastic Differential Equations in Infinite Dimensional Hilbert Space and Their Optimal Control Problem with Lévy Processes

This paper is concerned with a class of stochastic differential equations in an infinite dimensional Hilbert space with random coefficients driven by Teugels martingales, which are rather general processes, and with the associated optimal control problem. Here Teugels martingales are a family of pairwise strongly orthonormal martingales associated with Lévy processes (see Nualart and Schoutens [16]). There are three major ingredients. The first is to prove the existence and uniqueness of solutions by a continuous dependence theorem combined with the parameter extension method. The second is to establish the stochastic maximum principle and verification theorem for our optimal control problem by the classical convex variation method and a duality technique. The third is to present an example of a Cauchy problem for a controlled stochastic partial differential equation driven by Teugels martingales to which our theoretical results apply.


Introduction
This paper is concerned with the following stochastic differential equation in an infinite dimensional Hilbert space:

dX(t) = [A(t)X(t) + b(t, X(t))]dt + [B(t)X(t) + g(t, X(t))]dW(t) + Σ_{i=1}^∞ σ_i(t, X(t−))dH_i(t),  X(0) = x,    (1.1)

where {H_i(t)}_{i=1}^∞ are the Teugels martingales introduced below. Here we denote by L(V, V*) the space of bounded linear operators from V into V*, and by L(V, H) the space of bounded linear operators from V into H. An adapted solution of (1.1) is a V-valued, {F_t}_{0≤t≤T}-adapted process X(·) which satisfies (1.1) in an appropriate sense. A model such as (1.1) covers a large class of stochastic partial differential equations, for instance the nonlinear filtering equation and other stochastic parabolic PDEs (cf. [5]), but it is by no means the most general one: partial differential equations, like ordinary differential equations, are too diverse to be covered by a single model.
In 2000, Nualart and Schoutens [16] obtained a martingale representation theorem for a class of Lévy processes through Teugels martingales, which are a family of pairwise strongly orthonormal martingales associated with Lévy processes. Later, they proved in [17] the existence and uniqueness of solutions to BSDEs driven by Teugels martingales. These results were further extended to one-dimensional BSDEs driven by Teugels martingales and an independent multi-dimensional Brownian motion by Bahlali et al. [2]. One can refer to [18, 19, 24, 25] for more results on this kind of BSDE.
In the meantime, stochastic optimal control problems related to Teugels martingales have been studied. In 2008, a stochastic linear-quadratic problem with Lévy processes was considered by Mitsui and Tabata [15], who established the closeness property of a multi-dimensional backward stochastic Riccati differential equation (BSRDE) with Teugels martingales and proved the existence and uniqueness of the solution to a one-dimensional BSRDE of this kind; moreover, their paper demonstrated an application of BSDEs to a financial problem with full and partial observations. Motivated by [15], Meng and Tang [14] studied the general stochastic optimal control problem for forward stochastic systems driven by Teugels martingales and an independent multi-dimensional Brownian motion, for which necessary and sufficient optimality conditions in the form of a stochastic maximum principle with a convex control domain were obtained. In 2012, Tang and Zhang [31] studied the optimal control problem of backward stochastic systems driven by Teugels martingales and an independent multi-dimensional Brownian motion and obtained the corresponding stochastic maximum principle.
Due to its rich analytical content and wide applications in various sciences such as physics, mechanical engineering, control theory and economics, the theory of SPDEs driven by Wiener processes or Gaussian random processes has been investigated extensively and has achieved fruitful results on the existence, uniqueness, stability, invariant measures and other quantitative and qualitative properties of solutions. There is a large body of literature on this topic, for example [5], [6], [26] and the references therein. On the other hand, non-Gaussian random processes play an increasing role in modeling stochastic dynamical systems. Typical examples of non-Gaussian stochastic processes are Lévy processes and processes arising from Poisson random measures. In neurophysiology, the driving noise of the cable equation is basically impulsive, e.g., of Poisson type (see [33]); Woyczyński describes in [34] a number of phenomena from fluid mechanics, solid state physics, polymer chemistry, economic science, etc., for which non-Gaussian Lévy processes can be used as mathematical models of the related stochastic behavior. Thus, from the point of view of applications, the restriction to Wiener processes or Gaussian noise is unsatisfactory; to handle such cases one can replace the Wiener process or Gaussian noise by a Poisson random measure. Most recently, thanks to comprehensive practical applications, much attention has been paid to SPDEs driven by jump processes (cf., for example, [36] and the references therein). It is worth mentioning that Röckner and Zhang [26] established the existence and uniqueness theory for solutions of stochastic evolution equations of type (1.1) by successive approximations, in the case where the operator B is absent.
One purpose of this paper is to establish the continuous dependence of the solution on the coefficients, as well as the existence and uniqueness of solutions to the stochastic evolution equation (1.1). It is well known that there are two different approaches to analyzing SPDEs: the semigroup (or mild solution) approach (cf. [6]) and the variational approach (cf. [22]). For (1.1), since the coefficients are allowed to be random, we need to use the variational approach in the weak solution framework (in the PDE sense) of the Gelfand triple and cannot use the mild solution approach. In fact, when the coefficients are deterministic, one usually studies the stochastic evolution equation in the mild solution framework. However, due to the randomness of the coefficients, it seems very difficult or even impossible to tackle the problem in the mild solution sense: if we define the mild solution as usual, the adaptedness of the integrand in the stochastic integral may fail because of the randomness of the operator A. The advantage of the variational approach is that a version of Itô's formula is available in the context of the Gelfand triple of Hilbert spaces (see [10] for details). Such a formula will play an important role in proving the main results throughout this paper.
Another purpose of this paper is to establish the maximum principle and verification theorem for the optimal control problem in which the state process is driven by the controlled stochastic evolution equation (1.1). A classical approach to optimal control problems is to derive necessary conditions satisfied by an optimal control, such as Pontryagin's maximum principle. Since the 1970s, the maximum principle has been extensively studied for stochastic control systems. In the finite-dimensional case it was solved by Peng [21] in a general setting where the control is allowed to take values in a nonconvex set and to enter the diffusion, while in the infinite-dimensional case the existing literature, e.g., [3], [11], [12], [37], required at least one of the following three assumptions: (1) the control domain is convex; (2) the diffusion does not depend on the control; (3) the state equation and cost functional are both linear in the state variable. The general maximum principle for infinite-dimensional stochastic control systems, the counterpart of Peng's result, thus remained open for a long time. Recently, Du and Meng attempted to fill this gap in [7], where they developed a new procedure to perform the second-order duality analysis, establishing the corresponding maximum principle by virtue of the Lebesgue differentiation theorem and an approximation argument. Other important works on the general stochastic maximum principle in infinite dimensions are [9] and [13], besides [7]. As the above references show, works on optimal control problems for infinite-dimensional stochastic evolution equations or stochastic partial differential equations are mainly concerned with systems driven only by Wiener processes. In contrast, there have been few results on the optimal control of stochastic partial differential equations driven by jump processes.
In 2005, Øksendal, Proske and Zhang [20] studied the optimal control problem of quasilinear semielliptic SPDEs driven by Poisson random measures and gave sufficient maximum principle results, but not necessary ones. In 2017, Tang and Meng [30] studied the optimal control problem of more general stochastic evolution equations driven by Poisson random measures with random coefficients and gave necessary and sufficient maximum principle results. In this paper, for the controlled stochastic evolution equation (1.1), we suppose that the control domain is convex. We adopt the convex variation method and first-order adjoint duality analysis to prove a necessary maximum principle, where the continuous dependence theorem (see Theorem 3.2) plays a key role in establishing the variational inequality for the cost functional (see Lemma 6.2). Under convexity assumptions on the Hamiltonian and the terminal cost, we provide a sufficient maximum principle for this optimal control problem, the so-called verification theorem. It is worth mentioning that if the admissible control set is non-convex and the diffusion terms of the state equation are independent of the control variable, we can use the first-order spike variation method to obtain the maximum principle in global form by establishing some subtle L²-estimates for the state equation; the details will be given in our forthcoming paper. In the general setting, however, it seems very difficult or even impossible to obtain the corresponding maximum principle, because it seems impossible to establish L^p (p > 2) estimates for the state process as in [7], which play a key role in the second-order variation analysis. Finally, to illustrate our results, we apply the stochastic maximum principles to solve an optimal control problem for a Cauchy problem associated with a controlled linear stochastic partial differential equation.
The rest of this paper is structured as follows. In Section 2, we provide the basic notation and recall the Itô formula for Teugels martingales in Hilbert space used frequently in this paper. Section 3 establishes the continuous dependence and the existence and uniqueness of solutions to the stochastic evolution equation (1.1). Section 4 formulates the optimal control problem and specifies the hypotheses. In Section 5, the adjoint equation is introduced, which turns out to be a backward stochastic evolution equation driven by Teugels martingales. In Section 6, we establish the stochastic maximum principle by the classical convex variation method. In Section 7, the verification theorem for optimal controls is obtained by a duality technique. In Section 8, we present an application of our results. The final section concludes the paper.

Notations and Itô Formula for Teugels Martingales in Hilbert Space
Let (Ω, F, {F_t}_{0≤t≤T}, P) be a filtered complete probability space on which a one-dimensional Lévy process {Z(t), 0 ≤ t ≤ T} and a one-dimensional standard Brownian motion {W(t), 0 ≤ t ≤ T} are defined, with {F_t}_{0≤t≤T} being the natural filtration completed by the totality N of all null sets of F_T. For the Lévy process {Z(t), 0 ≤ t ≤ T}, we assume that its characteristic function is given by the Lévy–Khintchine formula

E[e^{iθZ(t)}] = exp{t(iaθ − σ²θ²/2 + ∫_{R¹}(e^{iθx} − 1 − iθx1_{|x|<1})ν(dx))}.

Here σ > 0, a ∈ R¹ and ν is a measure on R¹ satisfying (i) there exist δ > 0 and λ > 0 such that ∫_{(−δ,δ)^c} e^{λ|x|}ν(dx) < ∞. In view of this condition, it is easy to check that the random variables Z(t) have moments of all orders. Denote by P the predictable sub-σ-field of B([0, T]) × F. We now introduce the following notation, used throughout this paper.
• X: a Hilbert space with norm · X .
We denote by {H_i(t), 0 ≤ t ≤ T}_{i=1}^∞ the Teugels martingales associated with the Lévy process {Z(t), 0 ≤ t ≤ T}: for all i ≥ 1,

H_i(t) = Σ_{j=1}^i c_{ij}(Z^{(j)}(t) − E[Z^{(j)}(t)]),

where the Z^{(i)}(t) are the so-called power-jump processes, with Z^{(1)}(t) = Z(t) and Z^{(i)}(t) = Σ_{0<s≤t}(ΔZ(s))^i for i ≥ 2, and the coefficients c_{ij} correspond to the orthonormalization of the polynomials 1, x, x², · · · with respect to the measure μ(dx) = x²ν(dx) + σ²δ_0(dx). The Teugels martingales {H_i(t)}_{i=1}^∞ are pairwise strongly orthogonal and their predictable quadratic variation processes are given by ⟨H_i(t), H_j(t)⟩ = δ_{ij}t. For more details on Teugels martingales, we invite the reader to consult Nualart and Schoutens [16]. Let V and H be two separable (real) Hilbert spaces such that V is densely embedded in H. We identify H with its dual space by the Riesz mapping. Then we can take H as a pivot space and obtain a Gelfand triple V ⊂ H = H* ⊂ V*, where H* and V* denote the dual spaces of H and V, respectively. Denote by ||·||_V, ||·||_H and ||·||_{V*} the norms of V, H and V*, respectively, by (·,·)_H the inner product in H, and by ⟨·,·⟩ the duality product between V and V*. Moreover, we write L(V, V*) for the space of bounded linear operators from V into V*. Throughout this paper, we let C and K be two generic positive constants, which may differ from line to line. We now present an Itô formula in Hilbert space which will be frequently used in this paper.
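As a concrete illustration of the orthonormalization defining the coefficients c_{ij}, the following sketch computes them numerically for a toy Lévy triplet; the jump sizes, jump rates, the value of σ and the Cholesky-based procedure are illustrative assumptions, not part of the paper's setup.

```python
import numpy as np

# Toy Lévy triplet (illustrative assumption): Brownian part sigma and a jump
# measure nu with three atoms, nu = sum_a rate_a * delta_{x_a}.
sigma = 0.5
jump_sizes = np.array([1.0, -0.5, 2.0])
jump_rates = np.array([1.0, 1.0, 1.0])
N = 4  # number of orthonormal polynomials (hence Teugels martingales) computed

# The polynomials 1, x, x^2, ... are orthonormalized in L^2(mu) with
# mu(dx) = x^2 nu(dx) + sigma^2 delta_0(dx); here mu is atomic, with weights:
atoms = np.concatenate(([0.0], jump_sizes))
weights = np.concatenate(([sigma**2], jump_sizes**2 * jump_rates))

# Gram matrix of the monomials in L^2(mu): G[i, j] = integral of x^{i+j} d(mu).
G = np.array([[np.sum(weights * atoms**(i + j)) for j in range(N)]
              for i in range(N)])

# Cholesky G = L L^T; the rows of C = L^{-1} give the coefficients c_{ij}
# of the orthonormal polynomials q_i(x) = sum_j C[i, j] x^j.
C = np.linalg.inv(np.linalg.cholesky(G))

# Orthonormality check: C G C^T should be the identity matrix.
assert np.allclose(C @ G @ C.T, np.eye(N))
```

Note that the orthonormal polynomial of degree i − 1 supplies the coefficients of H_i; in the paper's notation the indices start at 1.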
Then Y is an H-valued, strongly càdlàg, F_t-adapted process such that the following Itô formula holds: (2.1)
Proof. The proof follows that of Theorem 1 in Gyöngy and Krylov [13].
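For orientation, for a process of the form dY(t) = Φ(t)dt + Ψ(t)dW(t) + Σ_{i=1}^∞ ζ_i(t)dH_i(t), with Φ taking values in V* and Ψ, ζ_i in H, the formula (2.1) is of the following standard shape (a sketch only; Φ, Ψ, ζ_i are placeholder names, and the precise integrability hypotheses are those of the lemma):

```latex
\|Y(t)\|_H^2 = \|Y(0)\|_H^2
  + \int_0^t \left( 2\langle \Phi(s), Y(s)\rangle + \|\Psi(s)\|_H^2 \right) ds
  + 2\int_0^t \left( \Psi(s), Y(s) \right)_H dW(s)
  + 2\sum_{i=1}^{\infty}\int_0^t \left( \zeta_i(s), Y(s-) \right)_H dH_i(s)
  + \sum_{i,j=1}^{\infty}\int_0^t \left( \zeta_i(s), \zeta_j(s) \right)_H d[H_i, H_j](s).
```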

Stochastic Differential Equations in Infinite Dimensional Hilbert Space driven by Teugels Martingales
In this section, we present some preliminary results on the following stochastic evolution equation (SEE for short) in an infinite dimensional Hilbert space, driven by the Brownian motion W and the Teugels martingales {H_i(t)}_{i=1}^∞:

dX(t) = [A(t)X(t) + b(t, X(t))]dt + [B(t)X(t) + g(t, X(t))]dW(t) + Σ_{i=1}^∞ σ_i(t, X(t−))dH_i(t),  X(0) = x ∈ H,    (3.1)

where A, B, b, g and σ = (σ_1, σ_2, · · ·) are given random mappings which satisfy the following standard assumptions.
i.e., ⟨A(·)x, y⟩ and (B(·)x, y)_H are both predictable processes for every x, y ∈ V, and satisfy the boundedness and coercivity conditions: there exist constants C, α > 0 and λ such that, for any x ∈ V and each (t, ω),

||A(t)x||_{V*} ≤ C||x||_V  and  2⟨A(t)x, x⟩ + ||B(t)x||²_H ≤ −α||x||²_V + λ||x||²_H.

A solution is required to satisfy (3.5), or alternatively, X(·) satisfies the corresponding Itô equation in V*. Furthermore, suppose that X̄(·) is a solution to the SEE (3.1) with initial value X̄(0) = x̄ ∈ H and coefficients (A, B, b̄, ḡ, σ̄) satisfying Assumptions 3.1–3.2; then the estimates (3.7)–(3.8) below hold.

Proof. The estimate (3.7) follows directly from the estimate (3.8) by taking the initial value X̄(0) = 0 and the coefficients (A, B, b̄, ḡ, σ̄) = (A, B, 0, 0, 0), which imply that X̄(·) ≡ 0. Therefore, it suffices to prove the estimate (3.8). For simplicity, in the following discussion we use the shorthand notation X̂(·) := X(·) − X̄(·) and, for φ = b, g, σ, the corresponding coefficient differences, where for φ = σ the terms X(t) and X̄(t) are replaced by X(t−) and X̄(t−), respectively. Applying the Itô formula of Lemma 2.1 to ||X̂(t)||²_H and using Assumptions 3.1–3.2 together with the elementary inequalities |a + b|² ≤ 2a² + 2b² and 2ab ≤ a² + b² for all a, b > 0, we obtain the required pathwise inequality. Taking expectations on both sides of this inequality and applying Grönwall's inequality, we obtain (3.8). The proof is complete.
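Schematically, writing X̂ := X − X̄ for the difference of the two solutions and b̂, ĝ, σ̂_i for the corresponding coefficient differences, the display obtained before taking expectations leads, via the coercivity and Lipschitz assumptions, to an inequality of the following shape (a sketch with generic constants, not the paper's exact display):

```latex
\mathbb{E}\,\|\hat X(t)\|_H^2 + \alpha\,\mathbb{E}\int_0^t \|\hat X(s)\|_V^2\,ds
  \le \|x-\bar x\|_H^2
  + C\,\mathbb{E}\int_0^t \|\hat X(s)\|_H^2\,ds
  + C\,\mathbb{E}\int_0^T \left( \|\hat b(s)\|_H^2 + \|\hat g(s)\|_H^2
      + \sum_{i=1}^{\infty} \|\hat\sigma_i(s)\|_H^2 \right) ds,
```

after which Grönwall's inequality applied to t ↦ E||X̂(t)||²_H yields the estimate (3.8).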
In the following, we give the existence and uniqueness result for the solution of the SEE (3.1) in the simple case where the coefficients (b, g, σ) are independent of the variable x.
Lemma 3.3. Suppose that the operators A and B satisfy Assumption 3.1. Then there exists a unique solution X(·) ∈ M²_F(0, T; V) to the following SEE: (3.14)

Proof. The proof can be carried out by Galerkin approximations in the same way as the proof of Theorem 3.2 in [4], with minor changes. First of all, we fix an orthonormal basis {e_i | i = 1, 2, 3, . . .} of the space H, with each e_i ∈ V and with linear span dense in V. For any n, consider the following finite-dimensional stochastic differential equation in R^n:
Under Assumptions 3.1 and 3.2, by the existence and uniqueness theory for finite-dimensional SDEs driven by Teugels martingales, the above equation admits a unique strong solution x^n(·) ∈ M²_F(0, T; R^n), where x^n(·) = (x^n_1(·), · · · , x^n_n(·)). We can then define an approximate solution X^n(·) to (3.14) as follows. Applying the Itô formula to ||X^n(t)||²_H and arguing as in the proof of the estimate (3.7), under Assumptions 3.1 and 3.2 and using Grönwall's inequality, we easily obtain a uniform estimate. This estimate implies that there is a subsequence {n'} of {n} and a process X(·) ∈ M²_F(0, T; V) along which the approximations converge weakly. Let Π be an arbitrary bounded random variable on (Ω, F) and ψ be an arbitrary bounded measurable function on [0, T]. From the equality (3.16), for n' ∈ N* and basis elements e_i with i ≤ n', we can pass to the limit as n' → ∞ on both sides of the resulting equation. First, we use the weak convergence property; next, in view of (3.3) and (3.17), the relevant integrands are bounded by a constant C independent of n', so by Fubini's theorem and Lebesgue's dominated convergence theorem we obtain (3.23). Moreover, from (3.17), again by Fubini's theorem and Lebesgue's dominated convergence theorem, we conclude that the limit X(·) satisfies the weak formulation. Therefore, by Definition 3.1, X(·) is a solution to the SEE (3.14), and the existence is proved. The uniqueness follows directly from the a priori estimate (3.8). The proof is complete.
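The Galerkin scheme can be illustrated on a toy example: a one-dimensional stochastic heat equation on (0, π) with a single Brownian noise and no jump part, discretized in the Laplacian eigenbasis. All constants, the choice of basis and the Euler–Maruyama time stepping below are illustrative assumptions, not the paper's setting.

```python
import numpy as np

rng = np.random.default_rng(0)

# Galerkin sketch for dX = A X dt + g dW with A the Dirichlet Laplacian on
# (0, pi), projected onto the first n eigenmodes e_i(z) = sqrt(2/pi) sin(i z).
n, T, M = 8, 1.0, 1000            # number of modes, horizon, time steps
dt = T / M
lam = -np.arange(1, n + 1) ** 2   # Laplacian eigenvalues (drift per mode)
x = np.ones(n)                    # initial Fourier coefficients of X(0)
g = 0.1 / np.arange(1, n + 1)     # noise intensity per mode (square-summable)

for _ in range(M):
    dW = rng.normal(0.0, np.sqrt(dt))   # one Brownian increment, shared modes
    x = x + lam * x * dt + g * dW       # explicit Euler-Maruyama per mode

# The H-norm of X_n(T) is the l2 norm of its Fourier coefficients.
h_norm = np.linalg.norm(x)
print(h_norm)
```

The step size satisfies dt · |λ_n| < 1, so the explicit scheme is stable here; the strongly damped higher modes decay quickly, mimicking the uniform-in-n estimate used in the proof.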
Then, for any x_i(·) ∈ M²_F(0, T; V), i = 1, 2, from the Lipschitz continuity of b, g, σ and the a priori estimate (3.8), it follows that Γ satisfies a Lipschitz bound with a positive constant K independent of ρ. If |ρ − ρ₀| < 1/(2√K), the mapping Γ is strictly contractive in M²_F(0, T; V). Hence the SEE (3.14) with the coefficients (A, B, ρb + b₀, ρg + g₀, ρσ + σ₀) admits a unique solution X(·) ∈ M²_F(0, T; V). By Lemma 3.3, the existence and uniqueness of the solution to the SEE (3.14) holds for ρ = 0. Then, starting from ρ = 0, one can reach ρ = 1 in finitely many steps, which finishes the proof of the solvability of the SEE (3.1). This completes the proof.
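The contraction step behind the parameter extension can be mimicked in a simple deterministic setting: iterate a map Γ for the integral equation x(t) = x₀ + ∫₀ᵗ ρ b(x(s)) ds on a time grid. Here b = tanh and all constants are illustrative stand-ins for the abstract Lipschitz coefficients, not the paper's equation.

```python
import numpy as np

# Deterministic toy version of the contraction mapping Gamma: solve
# x(t) = x0 + \int_0^t rho * b(x(s)) ds by Picard iteration on a grid.
rho, x0, T, M = 0.8, 1.0, 1.0, 2000
t = np.linspace(0.0, T, M + 1)
dt = T / M

def gamma(x):
    """One application of the fixed-point map: integrate rho * tanh(x)."""
    f = rho * np.tanh(x)
    # Trapezoidal cumulative integral, prepended with 0 at t = 0.
    integral = np.concatenate(([0.0], np.cumsum(0.5 * (f[1:] + f[:-1]) * dt)))
    return x0 + integral

x = np.full(M + 1, x0)     # starting guess, as in the rho = 0 case
for _ in range(200):       # Lipschitz constant rho*T = 0.8 < 1: contraction
    x = gamma(x)

# At the (numerical) fixed point, x solves the integral equation on the grid.
assert np.max(np.abs(x - gamma(x))) < 1e-8
```

Successive iterates contract by a factor of about ρT = 0.8 in the sup norm, so 200 iterations drive the residual down to floating-point level, mirroring how Γ's strict contractivity yields the fixed point for each step in ρ.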

Formulation of Optimal Control Problem
Let U be a real Hilbert space standing for the control space, and let U be a nonempty closed convex subset of U. An admissible control process u(·) := {u(t), 0 ≤ t ≤ T} is defined as follows. In the Gelfand triple (V, H, V*), for any admissible control u(·) ∈ A, we consider the following controlled SEE driven by Teugels martingales: (4.1) For any admissible control u(·), the solution of the system (4.1), denoted by X^u(·), or by X(·) if its dependence on u(·) is clear from the context, is called the state process corresponding to the control process u(·), and (u(·); X(·)) is called an admissible pair. The following result gives the well-posedness of the state equation as well as some useful estimates.
and |J(u(·))| < ∞. Furthermore, let X v (·) be the state process corresponding to another admissible control v(·), then The proof is complete.
Therefore, by Lemma 4.1, the cost functional (4.2) is well-defined. Our optimal control problem can be stated as follows. (4.7) An admissible control ū(·) satisfying (4.7) is called an optimal control process of Problem 4.2. Correspondingly, the state process X̄(·) associated with ū(·) is called an optimal state process, and (ū(·); X̄(·)) is called an optimal pair of Problem 4.2.

Regularity Result for the Adjoint Equation
For any admissible pair (ū(·); X̄(·)), the corresponding adjoint process is defined as a triple (p̄(·), q̄(·), r̄(·)) of stochastic processes which solves the following backward stochastic evolution equation (BSEE for short) driven by Teugels martingales, called the adjoint equation, whose drift contains the terms b*_x(t, X̄(t), ū(t))p̄(t) + B*(t)q̄(t) + g*_x(t, X̄(t), ū(t))q̄(t). Here A* denotes the adjoint operator of the operator A; similarly, we define the corresponding adjoint operators for the other coefficients. Under Assumption 4.1, we have the following basic result for the adjoint process.
Lemma 5.1. Let Assumption 4.1 be satisfied. Then for any admissible pair (ū(·); X̄(·)), there exists a unique adjoint process (p̄(·), q̄(·), r̄(·)) ∈ M². Moreover, the following estimate holds: Proof. By the properties of adjoint operators, the adjoint operators A* of A and B* of B also satisfy condition (i) in Assumption 3.1. Therefore, as in Theorem 3.1, the existence and uniqueness of the solution can be proved by Galerkin approximations and the parameter extension method.

Variation of the State Process and Cost Functional
Let (ū(·); X̄(·)) be an optimal pair of Problem 4.2. Define a convex perturbation of ū(·) as follows:

u^ε(·) := ū(·) + ε(v(·) − ū(·)),  ε ∈ [0, 1],

where v(·) is an arbitrary admissible control. Since the control domain U is convex, u^ε(·) is also an element of A. We denote by X^ε(·) the state process corresponding to the control u^ε(·). Now we introduce the first-order variational equation, with initial condition Y(0) = 0.
Proof. The claim follows from the estimate (4.5).

Main Results
Now we are in a position to state and prove the maximum principle for Problem 4.2.

Verification Theorem
In the following, we give a sufficient condition for the optimality of an admissible control of Problem 4.2, the so-called verification theorem.

Application
In this section, we apply our theoretical results to solve a specific example, i.e., an optimal control problem for a controlled Cauchy problem driven by Teugels martingales. First of all, let us recall some preliminaries on Sobolev spaces. For m = 0, 1, we define the space H^m := {φ : ∂^α_z φ ∈ L²(R^d) for any α := (α₁, · · · , α_d) with |α| := |α₁| + · · · + |α_d| ≤ m} with the usual Sobolev norm. We denote by H^{−1} the dual space of H¹. We set V = H¹, H = H⁰, V* = H^{−1}; then (V, H, V*) is a Gelfand triple. We choose the control domain U = U = H. The admissible control set A is defined as M²_F(0, T; U). For any admissible control u(·, ·) ∈ M²_F(0, T; U), we consider a controlled Cauchy problem, where the system is given by a stochastic partial differential equation driven by the Brownian motion W and the Teugels martingales in the following divergence form: (8.1) Here the coefficients a, b, c, η, ρ, Γ are given random mappings on [0, T] × Ω × R^d satisfying suitable measurability conditions, and we use the Einstein summation convention, i.e., repeated indices are summed over. For any admissible control u(·, ·) ∈ A, the following definition gives the generalized weak solution to (8.1).
Definition 8.1. An R-valued, P × B(R^d)-measurable process y(·, ·) is called a solution to (8.1) if y(·, ·) ∈ M²_F(0, T; H) and, for every φ ∈ H and a.e. (t, ω) ∈ [0, T] × Ω, the weak formulation of (8.1) holds. For any admissible control process u(·, ·) and the solution y(·, ·) of the corresponding state equation (8.1), the objective of the control problem is to minimize the following cost functional. To make the control problem well-defined, we make the following assumptions on the coefficients a, b, c, η, ρ, Γ, for some fixed constants K ∈ (1, ∞) and κ ∈ (0, 1): the functions a, b, c, η and ρ are P × B(R^d)-measurable with values in the set of real symmetric d × d matrices, R^d, R, R^d and R, respectively, and are bounded by K; the function Γ is P × B(E) × B(R^d)-measurable with values in l²(R) and is bounded by K; and ξ ∈ L²(R^d).
Since U = U, there is no constraint on the control, and therefore the minimum condition (6.13) becomes

H_u(t, X̄(t−), ū(t), p(t−), q(t), r(t)) = 0, (8.9)

which implies that

2ū(t) + p(t−) + q(t) + r(t) = 0, (8.10)

a.e. t ∈ [0, T], P-a.s. Thus the optimal control ū(·) is given by

ū(t) = −(1/2)(p(t−) + q(t) + r(t)).

Remark 8.1. The above example can be regarded as a special case of the infinite-dimensional linear-quadratic control problem driven by Teugels martingales, which can also be applied to more practical problems such as partial observation optimal control driven by Teugels martingales and the optimal harvesting problem associated with Lévy processes. We will give detailed investigations of these applications in a future publication.
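As a toy numerical check of the first-order condition (8.9)–(8.10): for the u-dependent part of a quadratic Hamiltonian of the assumed form u² + u(p + q + r), the unconstrained minimizer is −(p + q + r)/2, matching (8.10). The numerical values below are arbitrary illustrations.

```python
import numpy as np

# Illustrative adjoint values at a fixed time (arbitrary numbers).
p, q, r = 0.3, -1.1, 0.4

# u-dependent part of the (assumed quadratic) Hamiltonian: u^2 + u (p + q + r).
u_grid = np.linspace(-5.0, 5.0, 100001)
values = u_grid**2 + u_grid * (p + q + r)

# The grid minimizer should agree with the closed form u* = -(p + q + r) / 2
# from the stationarity condition 2 u + p + q + r = 0.
u_star = u_grid[np.argmin(values)]
assert abs(u_star - (-(p + q + r) / 2)) < 1e-3
```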

Conclusion
In this paper, we have developed an infinite-dimensional optimal control problem for a stochastic evolution system driven by Teugels martingales. We have considered the case where the control variable enters the diffusion term of the state equation and the control domain is convex. We first provided the existence, uniqueness and continuous dependence theorems for solutions to the SEE driven by Teugels martingales. Then we established necessary and sufficient conditions for optimal controls in the form of maximum principles by a convex variational technique. As an application, we considered an optimal control problem for a Cauchy problem associated with a controlled stochastic partial differential equation and obtained the dual characterization of the optimal control in terms of the solution to the corresponding stochastic Hamiltonian system. Further investigations will be carried out on the optimal control problem under a non-convex control domain assumption and on more practical applications in our future publications.