Transposition Method for Backward Stochastic Evolution Equations Revisited, and Its Application

The main purpose of this paper is to improve our transposition method to solve both vector-valued and operator-valued backward stochastic evolution equations with a general filtration. As an application, we obtain a general Pontryagin-type maximum principle for optimal controls of stochastic evolution equations in infinite dimensions. In particular, we drop the technical assumption that appeared in [Q. Lü and X. Zhang, Springer Briefs in Mathematics, Springer, New York, 2014, Theorem 9.1].

Similarly, we will use the notations L(H)^d, L_2(H)^d and so on, where L_2(H) stands for the (Hilbert) space of all Hilbert–Schmidt operators on H.
Let A be an unbounded linear operator (with domain D(A) ⊂ H) which generates a C_0-semigroup {S(t)}_{t≥0} on H. Denote by A^* the dual operator of A. It is well known that D(A) is a Hilbert space with the usual graph norm, and that A^* generates a C_0-semigroup {S^*(t)}_{t≥0}, the dual C_0-semigroup of {S(t)}_{t≥0}.

(1.2)
Here neither the usual natural filtration condition nor the quasi-left continuity is assumed for the filtration F, and the unbounded operator A is only assumed to generate a general C 0 -semigroup. Hence, we cannot apply the existing results on infinite dimensional BSEEs (e.g. [1,6,12,13]) to obtain the well-posedness of the equation (1.1).

(1.4)
If H = R^m for some m ∈ N, then (1.3) is an m × m matrix-valued backward stochastic differential equation (BSDE for short), and hence one can easily obtain its well-posedness in this special case.
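In the case H = R^m, writing P(t) = (p_{ij}(t)) and Q(t) = (q_{ij}(t)) entrywise reduces the matrix-valued equation to a system of m^2 scalar BSDEs. The following is only a schematic sketch of this reduction: the generator F_{ij} below stands for the (i,j)-th entry of the coefficient of (1.3), whose exact form is fixed in the paper.

```latex
% Entrywise form of the m x m matrix-valued BSDE (schematic):
dp_{ij}(t) = -F_{ij}\bigl(t, P(t), Q(t)\bigr)\,dt + q_{ij}(t)\,dw(t),
\qquad p_{ij}(T) = (P_T)_{ij}, \quad 1 \le i, j \le m .
```

The classical finite dimensional well-posedness theory for BSDEs then applies entry by entry.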
On the other hand, if dim H = ∞, F ∈ L^1_F(τ, T; L^p(Ω; L_2(H))) and P_T ∈ L^p_{F_T}(Ω; L_2(H)), then (1.3) is a special case of (1.1) (because L_2(H) is a Hilbert space), and therefore in this case the well-posedness of (1.3) follows from that of (1.1). However, the situation is completely different when dim H = ∞ if one does not impose further assumptions on F and P_T. Indeed, in the infinite dimensional setting, although L(H) is still a Banach space, it is neither reflexive nor separable even if H itself is separable. Because of this, L(H) is NOT a UMD space (let alone a Hilbert space), and consequently it is even a quite difficult problem to define the stochastic integral "∫_τ^T Q dw(t)" (appearing in (1.3)) for an L(H)-valued process Q. We refer to [3,11] for previous studies on the well-posedness of (1.3), which avoid defining "∫_τ^T Q dw(t)" in one way or another. As in the finite dimensional case ([14]), both (1.1) and (1.3) play crucial roles in establishing the Pontryagin-type maximum principle for optimal controls of general infinite dimensional nonlinear stochastic systems with control-dependent diffusion terms and possibly nonconvex control regions ([3,4,11,15]).
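The non-separability of L(H) mentioned above can be seen by a standard argument, sketched here for a separable H with orthonormal basis {e_k}_{k∈N}:

```latex
% For each subset S of \mathbb{N}, define the diagonal projection
D_S\, e_k = \mathbf{1}_{\{k \in S\}}\, e_k .
% If S \ne S', pick k \in S \,\triangle\, S'; then
\|D_S - D_{S'}\|_{L(H)} \;\ge\; \|(D_S - D_{S'})\,e_k\|_H \;=\; 1 .
```

Thus {D_S : S ⊂ N} is an uncountable family of bounded operators that are pairwise at distance at least 1 in L(H), so L(H) cannot be separable.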
The main purpose of this paper is to improve our transposition method, developed in [11], to solve the equations (1.1) and (1.3). In particular, we shall give well-posedness/regularity results for solutions to these two equations in a form that can be conveniently used in the above-mentioned Pontryagin-type maximum principle. In the finite dimensional stochastic setting, the transposition method (for solving BSDEs) was introduced in our paper [10], but a rudiment of this method can be found in [16, pp. 353–354].
We remark that our method is also motivated by the classical transposition method for solving non-homogeneous boundary value problems for deterministic partial differential equations (see [8] for a systematic introduction to this topic), and especially by the boundary controllability problem for hyperbolic equations ([7]).
For the readers' convenience, let us recall below the main idea of the classical transposition method applied to the following deterministic wave equation with non-homogeneous Dirichlet boundary conditions: where G is a nonempty open bounded domain in R^d with a C^2 boundary Γ, (y_0, y_1) ∈ L^2(G) × H^{-1}(G) and u ∈ L^2(Σ) are given, and y is the unknown.
In order to give a reasonable definition of the solution to (1.5) by the transposition method, we first consider the case when y is sufficiently smooth. Assume that g ∈ C^∞_0(0, T; H^1_0(G)), y_1 ∈ L^2(G), and that y ∈ H^2(Q) satisfies (1.5). Then, multiplying the first equation in (1.5) by ζ, integrating over Q, and using integration by parts, we arrive at (1.7). Note that (1.7) still makes sense even if the regularity of y is relaxed to y ∈ C([0, T]; L^2(G)). This leads to the following notion: y is called a transposition solution to (1.5) if y(0) = y_0, y_t(0) = y_1, and for any f ∈ L^1(0, T; L^2(G)) and g ∈ L^1(0, T; H^1_0(G)) the corresponding integral identity holds, where ζ is the unique solution to (1.6).
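Schematically, the duality computation behind this definition runs as follows for the model setting in which y solves the homogeneous wave equation with boundary datum u and ζ solves a backward wave equation with source f and vanishing terminal and boundary data; the paper's exact systems (1.5)–(1.6) may carry additional terms and different sign conventions.

```latex
% Multiply (\zeta_{tt}-\Delta\zeta = f) by y and (y_{tt}-\Delta y = 0) by \zeta,
% subtract, integrate over Q, and use \zeta(T)=\zeta_t(T)=0, \zeta|_\Sigma = 0, y|_\Sigma = u:
\int_Q y\,f\,dx\,dt
  \;=\; \bigl\langle y_1,\,\zeta(0)\bigr\rangle_{H^{-1}(G),\,H_0^1(G)}
  \;-\; \int_G y_0\,\zeta_t(0)\,dx
  \;-\; \int_\Sigma u\,\frac{\partial \zeta}{\partial \nu}\,d\Sigma .
```

Every term on the right-hand side makes sense for y ∈ C([0, T]; L^2(G)), which is exactly why such an identity can serve as a definition of the solution.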
One can show the well-posedness of (1.5) in the sense of Definition 1.1 by means of the transposition method ([7]). Clearly, the point of this method is to interpret the solution to a forward wave equation with non-homogeneous Dirichlet boundary conditions in terms of another, backward wave equation with non-homogeneous source terms. Of course, in the deterministic setting, since the wave equation is time-reversible, there is no essential difference between the forward problem and the backward one. Nevertheless, this suggests interpreting BSDEs/BSEEs in terms of forward stochastic differential/evolution equations, as we have done in [10,11]. Clearly, the transposition method is a variant of the standard duality method, and in some sense it provides a way to see something that is not easy to detect directly.
The rest of this paper is organized as follows. Section 2 is addressed to the well-posedness of the equation (1.1). Sections 3 and 4 are devoted to the well-posedness of the equation (1.3) and a regularity property for its solutions, respectively. Finally, in Section 5, we show a stochastic Pontryagin-type maximum principle for controlled stochastic evolution equations in infinite dimensions.

Well-posedness of the vector-valued BSEEs
In this section, we discuss the well-posedness of the equation (1.1) in the transposition sense.
Consider the following (forward) stochastic evolution equation: T; L^q(Ω; H^d)) and η ∈ L^q_{F_t}(Ω; H). Let us recall that z(·) ∈ C_F([t, T]; L^q(Ω; H)) is a (mild) solution to the equation (2.1) if it satisfies the corresponding variation-of-constants formula. We now introduce the following notion.
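The mild (variation-of-constants) form alluded to above should read as follows, up to notation; here v_1 and v_2 denote the drift and diffusion free terms of (2.1), names that are our assumption rather than the paper's.

```latex
% Mild solution of dz = (Az + v_1)\,ds + v_2\,dw(s), \ z(t) = \eta (schematic):
z(s) \;=\; S(s-t)\,\eta
  \;+\; \int_t^s S(s-\sigma)\,v_1(\sigma)\,d\sigma
  \;+\; \int_t^s S(s-\sigma)\,v_2(\sigma)\,dw(\sigma),
\qquad s \in [t, T] .
```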
F_t(Ω; H) and the corresponding solution z ∈ C_F([t, T]; L^q(Ω; H)) to (2.1), it holds that

In what follows, we will use C to denote a generic positive constant, which may vary from line to line. We have the following result for the well-posedness of the equation (1.1).

Lemma 2.1 Fix t_1 and t_2 satisfying 0 ≤ t_2 < t_1 ≤ T. Assume that Y is a reflexive Banach space. Then, for any r, s ∈ [1, ∞), it holds that

Proof of Theorem 2.1: It suffices to consider a particular case of (1.1), i.e. the case where f(·, ·, ·) is independent of its second and third arguments. More precisely, we consider the following equation: where y_T ∈ L^p_{F_T}(Ω; H) and f(·) ∈ L^1_F(τ, T; L^p(Ω; H)). The general case follows from the well-posedness for (2.4) and a standard fixed point argument.
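For orientation, in [10,11] the transposition solution to a BSEE of the type (1.1) is a pair (y(·), Y(·)) characterized by testing against solutions z of the forward equation (2.1); the defining identity has roughly the following shape (notation borrowed from [11] and to be matched against the paper's actual display):

```latex
\mathbb{E}\,\langle z(T),\, y_T\rangle_H
 \;-\; \mathbb{E}\int_t^T \bigl\langle z(s),\, f\bigl(s, y(s), Y(s)\bigr)\bigr\rangle_H\,ds
 \;=\; \mathbb{E}\,\langle \eta,\, y(t)\rangle_H
 \;+\; \mathbb{E}\int_t^T \langle v_1(s),\, y(s)\rangle_H\,ds
 \;+\; \mathbb{E}\int_t^T \langle v_2(s),\, Y(s)\rangle_{H^d}\,ds ,
```

required for all t ∈ [τ, T] and all admissible data (η, v_1, v_2) of (2.1). Note that no stochastic integral of Y against w appears in this identity, which is precisely what allows one to bypass the difficulties with vector- and operator-valued stochastic integration.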
We divide the proof into several steps. Since the proof is very similar to that of [11, Theorem 3.1], we give below only a sketch.

Well-posedness of the operator-valued BSEEs
In this section, we consider the well-posedness of (1.3).
In order to define the transposition solution to (1.3), for any t ∈ [τ, T], we introduce the following two (forward) stochastic evolution equations: and where r ≥ 2. The transposition solution to the equation (1.3) is defined as follows: where x_1(·) and x_2(·) solve (3.1) and (3.2), respectively.
Further, we have the following well-posedness result for (1.3) in a special case.
Proof: The proof is very similar to that of [11, Theorem 4.2], and hence we only give a sketch below.
First, we define a family of operators {T(t)}_{t≥0} on L_2(H) as follows: We then consider the following L_2(H)-valued BSEE: where Noting that (P(·), Q(·)) solves this equation, and by (3.10) and some direct computation, one can show that (P(·), Q(·)) satisfies (3.3), and therefore it is a transposition solution to the equation (1.3) (in the sense of Definition 3.1). The uniqueness of (P(·), Q(·)) follows from Theorem 3.1.
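In [11] the auxiliary family on L_2(H) is built by conjugating with the semigroup; the elided definition is presumably of the following form, to be checked against the paper's display:

```latex
T(t)F \;=\; S(t)\,F\,S(t)^{*}, \qquad F \in L_2(H), \ t \ge 0 .
```

One checks that {T(t)}_{t≥0} is a C_0-semigroup on L_2(H) whose generator acts, formally, as F ↦ AF + FA^*, which is what turns the operator-valued equation (1.3) into an L_2(H)-valued BSEE amenable to the theory developed for (1.1).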
We have the following well-posedness result for the equation (1.3). Proof: The proof of this theorem is lengthy and technical, and it is very similar to that of [11, Theorem 6.1]. Hence, we only give a sketch here.

A regularity property for relaxed transposition solutions to the operator-valued BSEEs
In this section, we derive a regularity property for relaxed transposition solutions to the equation (1.3). This property will play a key role in the proof of our general Pontryagin-type stochastic maximum principle, presented in Section 5. To simplify the notation, we assume that d = 1 in this section. We need some preliminaries. First of all, as an immediate consequence of the well-posedness result for (3.2), it is easy to prove the following result. Next, we recall the following known result.
There exist some works addressing the Pontryagin-type maximum principle for optimal controls of infinite dimensional stochastic evolution equations (e.g. [1,2,5,15,17] and the references therein). However, most of the previous works in this respect addressed only the case where either the diffusion term does NOT depend on the control variable (i.e., the map b(t, x, u) in (5.1) is independent of u) or the control region U is convex. Recently, this restriction was relaxed in [3,4,11]. In both [3] and [4], the filtration F is assumed to be the natural one (generated by the Brownian motion {w(t)}_{t∈[0,T]} and augmented by all of the P-null sets). Also, in [3] the authors assume that A is a strictly monotone operator, while in [4] the authors assume that H = L^2(D, D, µ) (for a measure space (D, D, µ) with finite measure µ), that the restriction of {S(t)}_{t≥0} to the space L^4(D, D, µ) is a strongly continuous analytic semigroup, and that the domain of its infinitesimal generator is compactly embedded in L^4(D, D, µ). On the other hand, in [11, Theorem 9.1], a technical assumption b_x(·, x̄(·), ū(·)) ∈ L^4_F(0, T; L^∞(Ω; L(D(A)))) is imposed. The purpose of this section is to establish a Pontryagin-type maximum principle without any of the above-mentioned assumptions.
Define a function H : [0, T ] × H × U × H × H → R as follows: We have the following result.
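The elided definition of H is presumably the standard Hamiltonian for a controlled stochastic evolution equation with drift a and diffusion b; the drift symbol a is our assumption here, as the paper fixes its own notation in (5.1).

```latex
\mathbb{H}(t, x, u, k_1, k_2)
 \;=\; \bigl\langle k_1,\, a(t, x, u)\bigr\rangle_H
 \;+\; \bigl\langle k_2,\, b(t, x, u)\bigr\rangle_H .
```

In maximum principles of this type, the necessary condition for an optimal pair is typically a pointwise (in (t, ω)) maximum condition on H, corrected by a second order term involving the operator-valued process P solving (1.3).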