Solvability and Optimal Controls of Non-instantaneous Impulsive Stochastic Neutral Integro-differential Equation Driven by Fractional Brownian Motion

In this manuscript, a new class of non-instantaneous impulsive stochastic neutral integro-differential equations driven by fractional Brownian motion (fBm, for short) with state-dependent delay, together with the associated stochastic optimal control problem, is studied. We utilize the theory of resolvent operators and a fixed point technique to establish the solvability of the stochastic system. Then, the existence of optimal controls is discussed for the proposed stochastic system. Finally, an example is offered to demonstrate the obtained theoretical results.


Introduction
Stochastic differential equations have been used with great success in many application areas, including biology, epidemiology, mechanics, economics, and finance. For the fundamental theory of stochastic differential equations, we refer to [1][2][3][4]. Yang and Zhu [5] studied the existence, uniqueness, and stability of mild solutions for stochastic differential equations with Poisson jumps by using fixed point techniques. The fBm with Hurst parameter H ∈ (0, 1) is a self-similar, centered Gaussian process with stationary increments; it exhibits long-range dependence when H > 1/2. Many exciting applications of fBm have been established in diverse fields such as finance, economics, telecommunications, and hydrology. For more details on fBm, see [6][7][8][9] and the references cited therein. Boudaoui et al. [10] studied the existence and continuous dependence of mild solutions for impulsive stochastic differential equations driven by fBm.
In recent years, the differential equation with fixed moments of impulses (instantaneous impulses) has become the natural framework for modeling many evolving processes and phenomena studied in economics, population dynamics, and physics. For more details on differential equations with instantaneous impulses, one can see the papers [11][12][13][14] and the references cited therein. Deng et al. [15] discussed the existence and exponential stability of impulsive neutral stochastic functional differential equations driven by fBm with a non-compact semigroup. Zhu [16] obtained some sufficient conditions ensuring the pth moment exponential stability of impulsive stochastic functional differential equations with Markovian switching. However, the action of instantaneous impulses does not describe certain dynamics of evolution processes in pharmacotherapy. For example, consider the following simplified situation concerning the hemodynamic equilibrium of a person. In the case of a decompensation (for example, high or low levels of glucose) one can prescribe some intravenous drugs (insulin). Since the introduction of the drugs into the bloodstream and their consequent absorption by the body are gradual and continuous processes, we can interpret the situation as an impulsive action that starts abruptly and remains active over a finite time interval. For these reasons, Hernández and O'Regan [17] introduced a new class of abstract differential equations with non-instantaneous impulses (NII, for short) and investigated the existence of mild and classical solutions. For comprehensive details on differential equations with NII, see [18][19][20]. The qualitative properties of mild solutions for differential equations with NII have been investigated in several papers [21][22][23] and the references cited therein.
On the other hand, delay differential equations have been gaining much interest and attracting the attention of several researchers, because of their wide applications in various fields of science and engineering such as control theory, heat flow, mechanics, distributed networks, and neural networks. A delay that depends on the state variable is called a state-dependent delay (SDD, for short). For more details on SDD, we refer to [24][25][26][27][28]. In a neutral differential equation, the highest order derivative of the state variable appears both with and without delay. Ezzinbi et al. [29] discussed the existence and regularity of solutions for the neutral functional integro-differential equation with delay. Vijayakumar [30] investigated the approximate controllability of integro-differential inclusions by using resolvent operators. The optimal control problem plays an important role in many scientific fields, such as engineering, mathematics, and biomedicine. When a stochastic differential equation describes the performance index or cost functional and the system dynamics, an optimal control problem becomes a stochastic optimal control problem. Wei et al. [31] obtained the existence of optimal controls for the impulsive integro-differential equation of mixed type. Jiang et al. [32] discussed the existence of optimal controls for fractional evolution inclusions with Clarke subdifferential and nonlocal conditions. In particular, in [33,34], the authors analyzed the existence of optimal controls for fractional order differential equations, whereas in [35,36] the authors investigated the same type of problem for impulsive fractional stochastic integro-differential equations with delay.
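A state-dependent delay can be illustrated with a simple Euler scheme in which the lag looked up at each step depends on the current state. The equation, the delay map rho, and the constant history below are invented for illustration only:

```python
import numpy as np

def sdd_euler(T=2.0, dt=1e-3):
    """Euler scheme for a scalar equation with state-dependent delay,
        z'(t) = -z(t - rho(z(t))),   rho(z) = 0.5*|z| / (1 + |z|),
    with constant history z(t) = 1 for t <= 0.  The delay evaluated at
    each step depends on the current state (illustrative model only)."""
    n = int(T / dt)
    ts = np.linspace(0.0, T, n + 1)
    z = np.ones(n + 1)
    for i in range(n):
        rho = 0.5 * abs(z[i]) / (1.0 + abs(z[i]))   # state-dependent lag
        t_delayed = ts[i] - rho
        if t_delayed <= 0.0:
            z_delayed = 1.0                          # constant history
        else:
            z_delayed = np.interp(t_delayed, ts[:i + 1], z[:i + 1])
        z[i + 1] = z[i] - z_delayed * dt
    return ts, z
```

Since rho shrinks as the state decays, the effective lag changes along the trajectory, which is precisely what distinguishes SDD from a constant-delay equation.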
To the best of our knowledge, there is no manuscript considering the solvability and optimal controls of a non-instantaneous impulsive stochastic system driven by fBm with SDD. In order to fill this gap, we consider a non-instantaneous impulsive stochastic neutral integro-differential equation driven by fBm with SDD, referred to below as system (1.1), where z(·) takes values in a real separable Hilbert space Z and A is the generator of a C₀-semigroup of operators {S(t) : t ≥ 0} on Z. Here B^H = {B^H(t) : t ≥ 0} is an fBm with Hurst index H ∈ (1/2, 1), taking values in a Hilbert space Y. The initial datum Ω = {Ω(t), t ∈ (−∞, 0]} is a B-valued, F₀-adapted random variable independent of B^H, where B is the abstract phase space. The history function z_t : (−∞, 0] → Z, defined by z_t(θ) = z(t + θ) for all θ ∈ (−∞, 0], belongs to B. The control function v takes values in a separable reflexive Hilbert space T, and C is a bounded linear operator from T into Z. We suppose that G(t), t ∈ J₁, is a linear and bounded operator. The remaining functions, including the neutral term D, satisfy suitable conditions which will be specified later.
The manuscript is structured as follows. Section 2 introduces notation and preliminary facts. In Section 3, we discuss the solvability of the proposed stochastic system, and Section 4 is devoted to the existence of optimal control pairs for the Lagrange problem corresponding to the proposed stochastic system. In Section 5, an example is provided to illustrate the applications of the obtained results. The last section is devoted to our conclusions.

Preliminaries
In this segment, we present some mathematical tools which are required to prove the main results. Let (Ω, F, {F_t}_{t≥0}, P) be a complete filtered probability space, where F_t is the σ-algebra generated by the random variables {B^H(s), s ∈ [0, t]}. L(Y, Z) signifies the space of all bounded linear operators from Y into Z. The notation ‖·‖ represents the norms of the spaces Z, Y, and L(Y, Z). The collection of all square-integrable, strongly measurable, Z-valued random variables is denoted by L²(Ω, Z), and PC([r₁, r₂], Z) symbolizes the space of all F_t-adapted, measurable, normalized piecewise continuous processes from [r₁, r₂] into Z.
A centered Gaussian process {B^H(t), t ≥ 0} with covariance function R_H(t, s) = (1/2)(t^(2H) + s^(2H) − |t − s|^(2H)) is called a one-dimensional fBm, and H is the Hurst parameter.
The fBm B^H(t) with 1/2 < H < 1 has the integral representation B^H(t) = ∫₀ᵗ K_H(t, s) dw(s), where w(·) is a Wiener process and the kernel K_H(t, s) is defined, for t > s, by K_H(t, s) = c_H s^(1/2 − H) ∫ₛᵗ (u − s)^(H − 3/2) u^(H − 1/2) du, with c_H a normalizing constant. Let the operator Q ∈ L(Y, Y) be defined by Q e_i = λ_i e_i, where {λ_i ≥ 0 : i = 1, 2, …} are real numbers with trace Tr(Q) = Σ_{i=1}^∞ λ_i < ∞ and {e_i, i = 1, 2, …} is a complete orthonormal basis in Y. Next, we define the infinite-dimensional fBm B^H on Y with covariance Q by B^H(t) = Σ_{i=1}^∞ √λ_i e_i B_i^H(t), where the B_i^H(t) are real, independent fBms. We denote by L₂⁰(Y, Z) the separable Hilbert space of all Q-Hilbert–Schmidt operators from Y into Z, with norm ‖ψ‖²_{L₂⁰} = Σ_{i=1}^∞ ‖√λ_i ψ e_i‖² < ∞; the stochastic integral of a suitable L₂⁰(Y, Z)-valued process with respect to B^H is then a well-defined Z-valued random variable. Finally, let PC(Z) denote the space of processes z : J₁ → Z that are continuous at every t ≠ t_j, satisfy z(t_j⁻) = z(t_j), and for which z(t_j⁺) exists for all j = 1, 2, …, M, endowed with the norm ‖z‖_PC = sup_{t∈J₁} (E‖z(t)‖²)^(1/2). Then (PC(Z), ‖·‖_PC) is a Banach space.
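The trace-class condition Tr(Q) < ∞ and the second moment of the Y-valued fBm can be checked numerically on a truncated series. The eigenvalue choice λ_i = 1/i² (so Tr(Q) = π²/6) and all other constants below are illustrative, not from the paper; the sketch uses only the Gaussian marginal B_i^H(t) ~ N(0, t^(2H)) at a fixed time t:

```python
import numpy as np

# Truncated Y-valued fBm  B^H(t) = sum_i sqrt(lam_i) B_i^H(t) e_i
# with Q e_i = lam_i e_i.  Illustrative choice: lam_i = 1/i^2.
H, t = 0.7, 2.0
n_modes, n_samples = 200, 20000
rng = np.random.default_rng(0)

lam = 1.0 / np.arange(1, n_modes + 1, dtype=float) ** 2
trace_Q = lam.sum()                       # partial sum approximating pi^2/6

# Marginally B_i^H(t) ~ N(0, t^(2H)), so E||B^H(t)||^2 = t^(2H) * Tr(Q).
B = t**H * rng.standard_normal((n_samples, n_modes))
mean_sq_norm = ((np.sqrt(lam) * B) ** 2).sum(axis=1).mean()
```

The Monte-Carlo estimate of E‖B^H(t)‖² matches t^(2H) Tr(Q), which is exactly the quantity that the Q-Hilbert–Schmidt norm controls in the stochastic integral bounds.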
In the following, let T be a separable reflexive Hilbert space from which the controls v take their values. Let C ∈ L^∞(J₁, L(T, Z)), where L^∞(J₁, L(T, Z)) denotes the space of operator-valued functions which are measurable in the strong operator topology and uniformly bounded on the interval J₁, endowed with the norm ‖·‖_∞. Let L²_F(J₁, T) denote the space of all measurable, F_t-adapted, T-valued stochastic processes with finite second moment. We assume that the phase space (B, ‖·‖_B) is a seminormed linear space of F₀-measurable functions from (−∞, 0] into Z satisfying the following conditions.
[A1]: If z : (−∞, e + b] → Z is continuous on [e, e + b] and z_e ∈ B, then for each t ∈ [e, e + b] the following conditions hold:
(i) z_t ∈ B;
(ii) ‖z(t)‖ ≤ K₁ ‖z_t‖_B;
(iii) ‖z_t‖_B ≤ K₂(t − e) sup{‖z(s)‖ : e ≤ s ≤ t} + K₃(t − e) ‖z_e‖_B,
where K₁ > 0 is a constant, K₂ is a continuous function, K₃ is a locally bounded function, and K₁, K₂, K₃ are independent of z(·).
[A2]: The phase space B is complete. For more details on the phase space, we refer to [38,39].
Definition. A family {S(t)}_{t≥0} of bounded linear operators on Z is called a resolvent operator for the associated integro-differential equation if the following conditions hold:
1. S(0) = I and ‖S(t)‖ ≤ N e^(βt) for some constants β and N ≥ 1.
2. For all z ∈ Z, S(t)z is strongly continuous for t ∈ J₁.
3. For all z ∈ D(A), S(·)z is continuously differentiable and satisfies the resolvent equations associated with the linear part of the system.
For more details on the resolvent operator, we refer to [30,40,41].
Definition. A Z-valued stochastic process z is said to be a mild solution of system (1.1) if:
1. z(t) is measurable and adapted to F_t, t ≥ 0;
2. z satisfies the associated stochastic integral equation on J₁, together with the non-instantaneous impulsive conditions and the initial condition z₀ = Ω.

Solvability for Stochastic System
In this section, we prove the existence of mild solutions for the stochastic system (1.1). Let ρ be a continuous function determining the state-dependent delay. To prove our main results, we assume the following hypotheses.
[H1]: S(t), t > 0, is compact and there exists a constant N > 0 such that ‖S(t)‖ ≤ N for every t ∈ J₁.
(b) There exist a continuous function η : J₁ → [0, ∞) and a continuous nondecreasing function satisfying the required growth condition.
[H6]: The smallness inequality stated in the theorem holds.
Proof. Consider the space BPC = {z ∈ PC(Z) : z(0) = Ω(0)} endowed with the topology of uniform convergence, and for each l > 0 let B_l = {z ∈ BPC : E‖z(t)‖² ≤ l for all t ∈ J₁}. Let the operator F : B_l → BPC be specified by the mild-solution formula. From Hölder's inequality and [H1], together with the Bochner theorem, the functions s ↦ S(t − s)C(s)v(s) are integrable on (p_j, t), j = 0, 1, …, M, which shows that F is well defined on B_l. Now, we split F as F = F₁ + F₂. For the sake of convenience, we divide the proof into a sequence of steps.
Step 1. There exists l > 0 such that F(B_l) ⊂ B_l. If we assume that this assertion is false, then for every l > 0 we can choose z^l ∈ B_l and t ∈ J₁ such that E‖F(z^l)(t)‖² > l.
Step 5. The set Q(t) = {(F₂z)(t) : z ∈ B_l} is relatively compact in Z. Clearly, Q(0) = {0} is compact. Let ξ be a real number and t ∈ (p_j, t_{j+1}], j = 0, 1, …, M, be fixed with 0 < ξ < t. For z ∈ B_l, we define the truncated operator Q_ξ(t). The set Q_ξ(t) is relatively compact, and Q(t) is arbitrarily close to Q_ξ(t); hence Q(t) is relatively compact in Z. By using Steps 3–5 together with the Arzelà–Ascoli theorem, we obtain that F₂ is a completely continuous operator. Hence, by Krasnoselskii's theorem [43], the operator F₁ + F₂ has a fixed point, which is a mild solution of the stochastic system (1.1).
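The splitting F = F₁ + F₂ into a contraction plus a completely continuous (integral) part can be illustrated on a deterministic toy equation solved by successive approximations. The particular maps F₁, F₂ below are invented for illustration and are not the operators of the proof:

```python
import numpy as np

def solve_split_fixed_point(n=400, tol=1e-10, max_iter=200):
    """Successive approximations for a toy equation of Krasnoselskii type,
        z(t) = F1(z)(t) + F2(z)(t),  t in [0, 1],
    with F1(z)(t) = 0.25*z(t) (a contraction) and
    F2(z)(t) = 1 + int_0^t sin(z(s)) ds (completely continuous).
    Purely illustrative of the splitting F = F1 + F2 used in the proof."""
    t = np.linspace(0.0, 1.0, n)
    dt = t[1] - t[0]
    z = np.zeros(n)
    for _ in range(max_iter):
        # trapezoidal cumulative integral of sin(z) from 0 to t
        g = np.sin(z)
        integral = np.concatenate(
            ([0.0], np.cumsum(0.5 * (g[1:] + g[:-1]) * dt)))
        z_new = 0.25 * z + 1.0 + integral
        if np.max(np.abs(z_new - z)) < tol:
            return t, z_new
        z = z_new
    return t, z
```

At t = 0 the fixed-point relation reduces to z(0) = 0.25 z(0) + 1, i.e. z(0) = 4/3, which the iteration reproduces; the Volterra part makes the iteration converge even though the full map is not a global contraction in the sup norm.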

Existence of Stochastic Optimal Controls
In this section, we prove the existence of optimal controls for the stochastic system. Let z^v be the mild solution of the stochastic system (1.1) corresponding to v ∈ U_ad. We consider the Lagrange problem (LP): find an optimal state-control pair (z*, v*) ∈ BPC × U_ad such that J(z*, v*) ≤ J(z^v, v) for all v ∈ U_ad. To discuss problem (LP), we need the following additional hypotheses.
(b) For almost all t ∈ J₁, M(t, · , · , · ) is sequentially lower semicontinuous on B × Z × T.
(c) There exist constants ω₁, ω₂ ≥ 0, ω₃ > 0 and a non-negative function Φ ∈ L¹(J₁, R) such that the coercivity condition on M holds.
[H8]: The operator C is strongly continuous.
Proof. If inf{J(z^v, v) : v ∈ U_ad} = +∞, there is nothing to prove. Otherwise, set inf{J(z^v, v) : v ∈ U_ad} = ε < +∞; using hypothesis [H7], we obtain ε > −∞. By the definition of the infimum, there exists a minimizing sequence {(z^k, v^k)} ⊂ R_ad, where R_ad = {(z, v) : z is the mild solution of the stochastic system (1.1) corresponding to v ∈ U_ad}, such that J(z^k, v^k) → ε as k → ∞. Since {v^k} ⊆ U_ad, the sequence {v^k} is bounded in the space L²_F(J₁, T); hence there exist a subsequence, relabeled as {v^k}, and v* ∈ L²_F(J₁, T) such that v^k converges weakly to v* in L²_F(J₁, T) as k → ∞. Since U_ad is convex and closed, Mazur's lemma gives v* ∈ U_ad.
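The Lagrange problem can be illustrated numerically on a toy controlled scalar SDE with a quadratic cost, minimizing over a grid of admissible constant controls by Monte Carlo. The model, cost, admissible set [-2, 2], and all constants below are invented for illustration and are unrelated to the abstract system (1.1):

```python
import numpy as np

def cost(v_const, a=-1.0, sigma=0.3, z0=1.0, T=1.0, dt=1e-2,
         n_paths=2000, seed=0):
    """Monte-Carlo estimate of J(v) = E[ int_0^T (z^2 + v^2) dt ] for the
    toy controlled SDE  dz = (a*z + v) dt + sigma dW  with a constant
    control v.  A crude stand-in for the Lagrange problem (LP); all
    constants are illustrative.  Using a fixed seed gives common random
    numbers across control values, stabilizing the comparison."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    z = np.full(n_paths, z0)
    J = np.zeros(n_paths)
    for _ in range(n):
        J += (z**2 + v_const**2) * dt                      # running cost
        z += (a * z + v_const) * dt \
             + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
    return J.mean()

# Grid search over admissible constant controls U_ad = [-2, 2].
grid = np.linspace(-2.0, 2.0, 41)
costs = [cost(v) for v in grid]
v_star = grid[int(np.argmin(costs))]
```

The minimizer is a small negative control: pushing the state toward zero reduces the state cost faster than the control cost accrues, mirroring the trade-off the functional J encodes.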

Example
Consider a stochastic partial neutral integro-differential control system driven by fBm with NII and SDD, containing neutral terms of the form ∫ S(t − s)D(s, µ_s)(ε) ds and an associated cost functional, where 0 = t₀ = p₀ < t₁ < p₁ < · · · < t_M < p_M < t_{M+1} = b = 1, K : [0, π] × [0, 1] → R is continuous, and B^H is an fBm with Hurst index 1/2 < H < 1. In this system, A generates a C₀-semigroup S(t) which is compact and self-adjoint, and there exists a normalized set θ_n(v) = √(2/π) sin(nv), n ∈ N, of eigenvectors of A corresponding to the eigenvalues −n², n ∈ N. Since the resolvent operator S(t) is compact, there exists a constant N > 0 such that ‖S(t)‖ ≤ N; hence hypothesis [H1] is fulfilled.
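The semigroup in this example can be applied explicitly through its eigen-expansion. The following sketch (function name, grid, and mode count are ours) implements S(t)z = Σ_n e^(−n² t) ⟨z, θ_n⟩ θ_n with the eigenfunctions θ_n(v) = √(2/π) sin(nv) from the example:

```python
import numpy as np

def heat_semigroup(z_vals, v, t, n_modes=50):
    """Apply S(t), the compact C0-semigroup generated by A = d^2/dv^2
    with Dirichlet boundary conditions on [0, pi], via the expansion
        S(t) z = sum_n exp(-n^2 t) <z, theta_n> theta_n,
    theta_n(v) = sqrt(2/pi) sin(n v), as in the example.  z_vals holds
    the values of z on the uniform grid v (endpoints included)."""
    dv = v[1] - v[0]
    out = np.zeros_like(v)
    for n in range(1, n_modes + 1):
        theta_n = np.sqrt(2.0 / np.pi) * np.sin(n * v)
        # L^2(0, pi) inner product; the integrand vanishes at both endpoints
        coeff = dv * np.sum(z_vals * theta_n)
        out += np.exp(-n**2 * t) * coeff * theta_n
    return out
```

On the first eigenfunction this reduces to multiplication by e^(−t), and the rapidly decaying factors e^(−n² t) are exactly what make S(t) compact for t > 0.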

Conclusion
In this manuscript, we studied a stochastic optimal control problem for a class of non-instantaneous impulsive stochastic neutral integro-differential equations driven by fBm. We defined a concept of piecewise continuous mild solution for the proposed system, constructed a suitable operator from it, and applied a fixed point technique to derive the existence result. We also proved the existence of optimal controls for the proposed system. Finally, the obtained results were verified through an example. Two direct issues require further study. First, we will investigate optimal control problems for non-instantaneous impulsive stochastic delay differential equations driven by Lévy processes [45]. Second, we will study the approximate controllability of Markov and semi-Markov switched stochastic systems [46,47].