Linear Feedback of Mean-Field Stochastic Linear Quadratic Optimal Control Problems on Time Scales

This paper addresses a version of the linear quadratic control problem for mean-field stochastic differential equations with deterministic coefficients on time scales, which include discrete time and continuous time as special cases. Two coupled Riccati equations on time scales are given, and the optimal control can be expressed as a linear state feedback. Furthermore, we give a numerical example.


Introduction
The linear quadratic control problem is one of the most important issues in optimal control. The study of the mean-field linear quadratic optimal control problem has also received much attention [1, 2], and it has a wide range of applications in engineering and finance [3, 4]. By now, the mean-field linear quadratic control problem is well understood from both the continuous-time and discrete-time points of view. In this paper, the mean-field linear quadratic control problem is studied in the setting of time scales.
Time scales were first introduced by Hilger [5] in 1988 in order to unify differential and difference equations into a general framework. Recently, time scales theory has been extensively studied in many works [6-14]. It is well known that optimal control problems on time scales are an important field for both theory and applications. Since the calculus of variations on time scales was studied by Bohner [15], results on related topics and their applications have steadily accumulated. The existence of optimal controls for dynamic systems on time scales was discussed in [16-18]. Subsequently, maximum principles, which specify necessary conditions for optimality, were studied in several works [19, 20]. In addition, Bellman dynamic programming on time scales for deterministic optimal control problems was considered in [21]. At the same time, some results were obtained for linear quadratic control problems for deterministic linear systems on time scales in [22, 23]. In [24], the authors developed linear quadratic control problems for stochastic linear systems on time scales. To the best of our knowledge, optimal control problems for mean-field systems on time scales have not yet been established.
We are interested in the mean-field stochastic linear quadratic control problem on time scales (MF-SΔLQ for short). To deal with the well posedness of the state equation on time scales, we use an iteration method similar to that of [25]. Much as in the continuous and discrete cases, we can also derive the associated Riccati equations on time scales (see [26, 27] for the continuous and discrete cases, respectively), and the optimal control can be expressed as a linear state feedback through the solutions of the coupled Riccati equations. The organization of this paper is as follows. In Section 2, we give some preliminaries on time scales theory and the MF-SΔLQ problem. In Section 3, we study the well posedness of the state equation on time scales and give the feedback representation of the optimal control via the associated Riccati equations on time scales. Finally, an example is presented.

Preliminaries
A time scale T is a nonempty closed subset of the real numbers R, and we denote [0, T]_T := [0, T] ∩ T. In this paper, we always suppose T is bounded. The forward jump operator σ and the backward jump operator ρ are defined, respectively, by

σ(t) := inf{s ∈ T: s > t},  ρ(t) := sup{s ∈ T: s < t}

(supplemented by inf ∅ := sup T and sup ∅ := inf T, where ∅ denotes the empty set). If σ(t) = t (σ(t) > t, ρ(t) = t, and ρ(t) < t), the point t is called right-dense (right-scattered, left-dense, and left-scattered, respectively). Moreover, a point is called isolated if it is both left-scattered and right-scattered. For a function f, we denote f^σ = f ∘ σ and f^ρ = f ∘ ρ. The graininess function μ is defined by

μ(t) := σ(t) − t.

We now present some basic concepts and properties about time scales (see [10, 11]).
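As a concrete illustration of these definitions (a minimal sketch of our own; the finite time scale and function names below are not from the paper), the jump operators and graininess can be computed directly for a finite time scale represented as a sorted list:

```python
import bisect

def sigma(T, t):
    """Forward jump: smallest point of T strictly greater than t (sup T if none)."""
    i = bisect.bisect_right(T, t)
    return T[i] if i < len(T) else T[-1]

def rho(T, t):
    """Backward jump: largest point of T strictly less than t (inf T if none)."""
    i = bisect.bisect_left(T, t)
    return T[i - 1] if i > 0 else T[0]

def mu(T, t):
    """Graininess: mu(t) = sigma(t) - t."""
    return sigma(T, t) - t

# On the time scale T = {0, 1, 2, 4}, every point is right-scattered
# except sup T, so the graininess is strictly positive before sup T.
T = [0, 1, 2, 4]
print(sigma(T, 2), rho(T, 1), mu(T, 2))  # -> 4 0 2
```

For T = R every point is right-dense and μ ≡ 0, while for T = Z we get μ ≡ 1; this is how the framework interpolates between differential and difference equations.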

Remark 1.
If a function f is right-dense continuous, then f has an antiderivative F.
Define the set T^κ as follows: T^κ = T∖(ρ(sup T), sup T] if sup T < ∞, and T^κ = T otherwise.

Definition 2. Let f: T → R be a function and t ∈ T^κ. If for all ε > 0 there exists a neighborhood U of t such that

|f(σ(t)) − f(s) − f^Δ(t)(σ(t) − s)| ≤ ε|σ(t) − s| for all s ∈ U,

then we call f^Δ(t) the Δ derivative of f at t.
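At a right-scattered point the Δ derivative reduces to the forward difference quotient f^Δ(t) = (f(σ(t)) − f(t))/μ(t); at a right-dense point it is the ordinary derivative. As a small numerical check of our own (not from the paper), for f(t) = t² on a time scale one has f^Δ(t) = t + σ(t):

```python
def delta_derivative(f, t, sigma_t):
    """Delta derivative at a right-scattered point t (sigma_t > t):
    the forward difference quotient over the jump."""
    mu_t = sigma_t - t
    return (f(sigma_t) - f(t)) / mu_t

f = lambda t: t ** 2
# On T = {0, 1, 2, 4}, sigma(2) = 4, so
# f^Delta(2) = (16 - 4) / 2 = 6 = 2 + 4 = t + sigma(t).
print(delta_derivative(f, 2, 4))  # -> 6.0
```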

Remark 2.
If the functions f and g are differentiable at t, then the product fg is also differentiable at t, and the product rule is given by

(fg)^Δ(t) = f^Δ(t)g(t) + f^σ(t)g^Δ(t) = f(t)g^Δ(t) + f^Δ(t)g^σ(t).

In this paper, we adopt the stochastic integral defined by Bohner et al. [25]. Let (Ω, F, {F_t}_{t∈[0,T]_T}, P) be a complete probability space with an increasing and continuous filtration. A Brownian motion indexed by a time scale T was defined by Grow and Sanyal [13]. Although the Brownian motion on time scales is very similar to that on continuous time, there are also some differences between them. For example, the quadratic variation of a Brownian motion on time scales (see [14]) is still an increasing process, but it is not deterministic in general. Next, we give the definition of the stochastic Δ integral and its properties.
Definition 3 (see [25]). The random process X(t) is stochastic Δ integrable on [0, T]_T if the corresponding continuous-time process is integrable, and the value of the integral of X(t) is defined as in (7), where the Brownian motion on the right-hand side of (7) is indexed by continuous time.
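On a purely isolated time scale, the stochastic Δ integral becomes a left-point sum against the Brownian increments over the jumps, and the quadratic variation ⟨W⟩_T = Σ(ΔW)² is random path by path with mean T. The following simulation (our own illustration under these assumptions, not the paper's construction) makes both points concrete:

```python
import numpy as np

rng = np.random.default_rng(0)
h, T_end = 0.1, 1.0
grid = np.arange(0.0, T_end + h / 2, h)   # the time scale T = {0, 0.1, ..., 1}
n_paths, n_steps = 20000, len(grid) - 1

# Brownian increment over each jump [t, sigma(t)): variance mu(t) = h
dW = rng.normal(0.0, np.sqrt(h), size=(n_paths, n_steps))

# Quadratic variation <W>_T = sum of squared increments: random, E<W>_T = T
qv = (dW ** 2).sum(axis=1)
print(qv.mean())        # approximately 1.0
print(qv.std() > 0)     # True: the quadratic variation is not deterministic

# Stochastic Delta integral of X(t) = t: left-endpoint sum of t_i * Delta W_i
integral = (grid[:-1] * dW).sum(axis=1)
print(abs(integral.mean()) < 0.05)  # True: mean-zero (martingale property)
```

Note that for a Brownian motion on continuous time the quadratic variation is the deterministic process t, so the randomness of `qv` is genuinely a time-scales phenomenon coming from the scattered points.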
We also have the following properties, where the integral of X with respect to the quadratic variation ⟨W⟩_t of the Brownian motion is defined as a Stieltjes integral.

Notation 1. The following notation will be used:

Finally, we introduce our MF-SΔLQ problem. Consider the following stochastic Δ-differential equation, where the coefficients A(·), Ā(·), B(·), B̄(·), D(·), and D̄(·) are all deterministic matrix-valued functions, together with a cost functional in which G and Ḡ are symmetric matrices and Q(·), Q̄(·), R(·), and R̄(·) are given deterministic matrix-valued functions.

Problem (MF-SΔLQ). For any given initial state x, find a control u*(·) that minimizes the cost functional over the admissible controls.
u*(·) is called an optimal control of the MF-SΔLQ problem, and the corresponding X(·; x, u*(·)) is called an optimal state process.

Main Results
First, we introduce the following assumptions which are necessary for the proofs of our main results.
(H1) Assume that

(H2) Assume that

and, for some δ > 0,

Remark 3. Assumption (H1) guarantees the existence and uniqueness of the solution of the mean-field stochastic linear system (10). Under Assumptions (H1) and (H2), we can establish two coupled Riccati equations that yield the feedback control. Now, we show the well posedness of the state equation (10) by the iteration method, closely following [25].

Theorem 1. Let the initial state x and a standard {F_t}_{t∈[0,T]_T} Brownian motion W on the complete probability space (Ω, F, P) be given. Suppose that (H1) holds; then system (10) has a unique solution.
Proof. For the existence, we adopt the iteration method and define the iterates X_n, and we claim that (14) holds, where M is a generic constant and h_n denotes the generalized monomials defined in [28]. When n = 0, we obtain the claim directly. Suppose inequality (14) holds for n − 1; then it holds for n. This proves the claim. Similarly, we have (18). By a martingale inequality and inequality (18) (see [25] for details), one has the corresponding estimate, where C = 4TM. Note that a simple probability inequality is obtained from Markov's inequality: P(|Y| > a) ≤ (1/a^p)E[|Y|^p], where a > 0, p > 0, and Y is a random variable. Using this probability inequality and the Borel-Cantelli lemma (where "i.o." abbreviates "infinitely often"), it follows that X_0 + Σ_{i=0}^{n−1}(X_{i+1} − X_i) converges uniformly. Letting n → ∞, we obtain a solution of (10).
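The iteration can be visualized in the simplest discrete special case (a toy example of our own: T = hZ with the deterministic scalar equation x^Δ = a x, x(0) = x₀, so no noise term). The Picard iterates converge to the exact solution x(t_k) = x₀(1 + ah)^k, and on K grid points they reach it after K iterations:

```python
h, a, x0 = 0.5, 0.8, 1.0
K = 10                                  # grid points past 0: t_k = k * h

def picard_step(x):
    """One iteration X_{n+1}(t_k) = x0 + integral_0^{t_k} a X_n(s) Delta s,
    which on T = hZ is x0 + h * a * sum_{j<k} X_n(t_j)."""
    new = [x0]
    acc = 0.0
    for k in range(1, K + 1):
        acc += h * a * x[k - 1]
        new.append(x0 + acc)
    return new

x = [x0] * (K + 1)                      # X_0 is the constant initial guess
for _ in range(K):                      # K iterations suffice on K steps
    x = picard_step(x)

exact = [x0 * (1 + a * h) ** k for k in range(K + 1)]
print(max(abs(u - v) for u, v in zip(x, exact)))  # -> ~0 (machine precision)
```

Each iteration extends the range of indices on which the iterate agrees with the exact solution, mirroring the estimate (14) in which the error after n steps is controlled by the generalized monomial h_n.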

Mathematical Problems in Engineering
For the uniqueness, we assume X_1 and X_2 are both solutions. Then, estimating their difference, it follows that the difference satisfies a Gronwall-type integral inequality. By Gronwall's inequality [29], we obtain X_1 = X_2. □

We are now in a position to give the main results of the MF-SΔLQ optimal control problem. For this, we need a useful lemma. By some simple calculations, it is not hard to obtain the following product rule for stochastic processes on time scales, which is very similar to Du and Dieu [12].

Lemma 1. For any two n-dimensional stochastic processes X_1 and X_2 whose dynamics are determined by coefficients a_i, b_i: T × R^n → R^n, we have the corresponding product rule, in which μ is the graininess function defined in (3) on time scales.

Remark 4. Another form of the abovementioned product rule uses the formal multiplication rules ΔtΔt = μ(t)Δt, ΔtΔW = ΔWΔt = μ(t)ΔW, and ΔWΔW = Δ⟨W⟩_t. When T = R, it is consistent with Itô's formula.
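For orientation, here is the classical specialization in our own notation (a sketch assuming dX_i = a_i dt + b_i dW with a scalar Brownian motion W, where a_i, b_i are the coefficients of Lemma 1). When T = R the graininess vanishes, so the formal rules above collapse to dt·dt = dt·dW = 0 and dW·dW = d⟨W⟩_t = dt, and the product rule becomes the usual Itô formula:

```latex
% Classical case T = R (so \mu(t) = 0): Ito product rule
d\bigl(X_1(t)^{\top} X_2(t)\bigr)
  = X_1(t)^{\top}\, dX_2(t) + X_2(t)^{\top}\, dX_1(t)
    + d\langle X_1, X_2 \rangle_t,
\qquad
d\langle X_1, X_2 \rangle_t = b_1(t, X_1(t))^{\top}\, b_2(t, X_2(t))\, dt.
```

On a general time scale the correction term carries the extra μ(t)-weighted products, which is exactly what the formal rules in Remark 4 encode.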
Remark 5. As mentioned before, because the quadratic variation of a process depends not only on the process itself but also on the structure of the time scale, it is a little more complicated than the classical one. For instance, the quadratic variation of a deterministic continuous process is no longer necessarily zero. Therefore, we can have different forms of the product rule on time scales; for example, the product rule (6) admits an equivalent form.

Now, we use the square-completion technique to present a state-feedback optimal control via two coupled Riccati equations on time scales.
where K and K̄ are given as

Moreover, the cost functional can be rewritten as

Since K > 0 and K̄ > 0, the optimal control should satisfy the first-order optimality condition. After some calculations, we get the optimal control as (37). Substituting it into (47), the optimal cost functional can be expressed as (38). For the existence and uniqueness of the solutions to the RΔEs, it is asserted in [24] that Riccati equation (33) admits a unique positive semidefinite solution P since (H2) holds. It follows that K > 0.
Using a similar method, we can get the solvability of the RΔE (34). □

Remark 6. When Ā, B̄, D̄, Q̄, R̄, and Ḡ are all equal to zero, then P̄ = P. This recovers the result of the classical SΔLQ problem [24].
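To illustrate the structure of two coupled Riccati equations in the simplest setting, the following is a minimal sketch under assumptions of our own (T = Z, no diffusion, i.e., D = D̄ = 0; all function and variable names are ours). It follows the standard discrete-time mean-field LQ pattern in which the second recursion is driven by the summed coefficients A + Ā, B + B̄, Q + Q̄, R + R̄, G + Ḡ, and is not the paper's exact RΔEs:

```python
import numpy as np

def mf_lq_riccati(A, Abar, B, Bbar, Q, Qbar, R, Rbar, G, Gbar, N):
    """Backward recursions for two coupled Riccati difference equations of a
    deterministic discrete-time mean-field LQ problem over N steps.
    P penalizes X - E[X]; Pbar penalizes E[X] via the summed coefficients."""
    A2, B2, Q2, R2 = A + Abar, B + Bbar, Q + Qbar, R + Rbar
    P, Pbar = G.copy(), G + Gbar
    gains = []
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)              # on X - E[X]
        Kbar = np.linalg.solve(R2 + B2.T @ Pbar @ B2, B2.T @ Pbar @ A2)  # on E[X]
        P = Q + A.T @ P @ (A - B @ K)
        Pbar = Q2 + A2.T @ Pbar @ (A2 - B2 @ Kbar)
        gains.append((K, Kbar))
    return P, Pbar, gains[::-1]   # gains ordered forward in time

# Feedback form (sketch): u*_t = -K_t (X_t - E[X_t]) - Kbar_t E[X_t]
```

When the mean-field coefficients Ā, B̄, Q̄, R̄, Ḡ all vanish, the two recursions coincide (P̄ = P), matching the degeneration described in Remark 6.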
Remark 7. When T = R^+, the coupled RΔEs (33) and (34) reduce to the result in [26]. By contrast, when T = Z^+, the coupled RΔEs are consistent with the case in [27]. Similarly, we have the following theorem, which can be regarded as an equivalent form of Theorem 2.
× (B(t) + B̄(t))′ (P + P̄)^σ(t) (I + μ(t)(A(t) + Ā(t))),

where K and K̄ are given as before. For the MF-SΔLQ problem, the resulting feedback u* is an optimal control. Moreover, the optimal cost functional with respect to u* is

Proof. As in the proof of Theorem 2, we can obtain the solvability of the RΔEs (33) and (50). We need only to prove (51) and (42). Taking the integral of Δ(X′(t)P(t)X(t) + E[X(t)]′P̄(t)E[X(t)]) from 0 to T and taking expectation, we obtain
Consequently, by completing the squares, one has

Remark 8.
The solution of the RΔE (34) equals the sum of P and P̄, where P and P̄ are the solutions of the RΔEs (33) and (50), respectively.