Existence-uniqueness of positive solutions to nonlinear impulsive fractional differential systems and optimal control

In this thesis, we investigate a class of impulsive fractional-order differential systems involving control terms. By using a fixed point theorem for φ-concave-convex mixed monotone operators, we obtain a theorem on the existence and uniqueness of positive solutions for the impulsive fractional differential equation, and the optimal control problem for positive solutions is also studied. As an application, an example is given to illustrate our main results.


Introduction
In recent years, a growing body of experiments and theory has shown that many anomalous phenomena arising in engineering and the applied sciences can be described by fractional calculus, and fractional differential equations have proved to be valuable tools in various scientific fields, such as physics, biological engineering, mechanics, artificial intelligence, and chemical engineering (see [1][2][3][4][5]). In [5], Zhang and Tian investigated a fractional differential system with two nonlinear terms of order v, n - 1 < v < n, on (0, 1), subject to the initial conditions x^(i)(0) = 0, i = 0, 1, 2, . . . , n - 2, where the nonlinearity is continuous and k : [0, ∞) → [0, ∞) is continuous. By means of sum-type mixed monotone operator fixed point theorems, a unique positive solution was obtained, and the authors constructed two monotone iterative sequences to approximate it.
In addition, in order to model physical processes with discontinuous jumps or abrupt changes, as arise in disease prevention and control, earthquake and shock-absorption systems, and other areas, many researchers have investigated impulsive problems; see [6][7][8][9][10][11][12]. Moreover, in recent years, optimal control problems for all kinds of differential equations have attracted many researchers. For a small sample of such work, readers can refer to [13][14][15][16]. In [14], Zhang and Yamazaki investigated a class of second-order impulsive differential equations with nonlinearity f(t, x(t), x(t)) + u(t) for t ∈ (0, 1) \ {t_1, t_2, . . . , t_m}, together with the jump conditions Δx|_{t=t_k} = I_k(x(t_k), x(t_k)), k = 1, 2, . . . , m. By employing a fixed point theorem for ϕ-concave-convex mixed monotone operators, the existence and uniqueness of positive solutions to the initial value problem were obtained. In addition, the authors investigated the control problem for positive solutions and proved the existence and stability of an optimal control. In [16], Benchohra investigated Caputo fractional differential equations with impulsive terms, where C D^γ is the standard Caputo fractional derivative, f : J × E → E is a given function, I_k : E → E, k = 1, 2, . . . , m, and y_0 ∈ E. By using Mönch's fixed point theorem and the technique of measures of noncompactness, the existence of solutions for a class of initial value problems was investigated in an abstract Banach space. Inspired by the above literature, in this article we are devoted to studying the existence-uniqueness and optimal control of positive solutions to an impulsive fractional-order differential equation with a control term, denoted (1.1), where x(t_k^+) is the right limit and x(t_k^-) is the left limit of x(t) at t = t_k. Also, I_k ∈ C[R^+ × R^+, R^+], k = 1, 2, . . . , m. In addition, let J_0 = [0, t_1], J_1 = (t_1, t_2], . . . , J_{m-1} = (t_{m-1}, t_m], and J' = J \ {t_1, t_2, . . . , t_m}. Problem (OP).
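The general shape of problem (1.1) can be sketched from the ingredients named above and the derivation in Sect. 3 (an α-order integral of the first equation, a forcing term f(s, x(s), x(s)) + u(s), and jumps I_k at t_k); the following is a hedged sketch of that shape, assuming a Caputo derivative of order α and initial data x(0) = x_0, not the authors' exact display:

```latex
\begin{cases}
{}^{C}D^{\alpha} x(t) = f\bigl(t, x(t), x(t)\bigr) + u(t), & t \in J' = J \setminus \{t_1, \dots, t_m\},\\[2pt]
\Delta x\bigl|_{t=t_k} = x(t_k^{+}) - x(t_k^{-}) = I_k\bigl(x(t_k), x(t_k)\bigr), & k = 1, 2, \dots, m,\\[2pt]
x(0) = x_0 .
\end{cases}
```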
Find an optimal control u* ∈ U_M such that π(u*) = inf_{u ∈ U_M} π(u). Here, U_M := {u ∈ H : -M ≤ u(t) ≤ 0 for a.e. t ∈ J} is the control set, where M is a positive constant and π(u) is the cost functional; u ∈ U_M is a control function, x is a positive solution to (IP; u), and x_d ∈ L^2(0, 1) is the given desired target profile.
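The cost functional π(u) is of quadratic tracking type; a typical choice consistent with the description above is the following sketch, assuming the standard L^2 tracking form (the original display may include additional weights):

```latex
\pi(u) \;=\; \bigl\| x - x_d \bigr\|_{L^2(0,1)}^{2}
\;=\; \int_0^1 \bigl| x(t) - x_d(t) \bigr|^2 \, dt ,
```

where x = x(u) denotes the positive solution of (IP; u) associated with the control u.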
To the best of our knowledge, there are few studies that consider the existence-uniqueness and optimal control of positive solutions to Caputo fractional differential equations with impulsive terms. Therefore, in the sense of minimizing a cost functional, it is particularly important to study this kind of equation by nonlinear theory, which enriches and extends the existing body of literature. The main features of this article are as follows. Firstly, the equations in this paper generalize the equations studied in the works cited above with u(t) = 0. Secondly, in our work, the nonlinear term is mixed monotone, so by means of the fixed point theorem for ϕ-concave-convex mixed monotone operators, we can show the existence and uniqueness of the positive solution. Here, we should point out that the conditions imposed in this paper are weaker than those in [7], in which the operator is required to be completely continuous. Finally, compared with [15], the optimal control problems for integer-order differential equations are extended to fractional differential equations; compared with [5] and [14], in this paper we consider fractional differential equations with both impulsive terms and control terms. As is well known, in many applications systems with short-term perturbations are often described by impulsive fractional differential equations, and in the existing literature there is no paper studying a similar optimal control problem for fractional differential equations with impulsive terms. Hence our study is new and significant.
The structure of this paper is as follows. In Sect. 2, we briefly review some definitions, concepts, notations, and lemmas in a real Banach space partially ordered by a cone P, together with the set P_h. In Sect. 3, the existence and uniqueness of positive solutions are investigated. In Sect. 4, we study the optimal control problem for the fractional differential equation with impulsive terms (1.1). Finally, in Sect. 5, we give a specific example to illustrate our main results.

Preliminaries
Suppose that P is a nonempty closed convex subset of E. P is called a cone if it satisfies the following conditions: (i) x ∈ P and λ ≥ 0 imply λx ∈ P; (ii) x ∈ P and -x ∈ P imply x = θ. In addition, (E, ‖·‖) is a real Banach space which is partially ordered by a cone P ⊂ E, that is, y - x ∈ P implies that x ≤ y. If x ≤ y and x ≠ y, then we write x < y or y > x. We denote the zero element of E by θ. The cone P is called normal if there exists M > 0 such that, for all x, y ∈ E, θ ≤ x ≤ y implies ‖x‖ ≤ M‖y‖; the infimum of all such constants M is called the normality constant of P.
Furthermore, for given h > θ, set P_h = {x ∈ E | x ∼ h}, in which ∼ is an equivalence relation; i.e., for x, y ∈ E, x ∼ y means that there exist λ > 0 and μ > 0 such that μx ≤ y ≤ λx.
Throughout this paper, let PC[J, R] := {x | x : J → R, x(t) is continuous at t ≠ t_k, left continuous at t = t_k, and x(t_k^+) exists, k = 1, 2, . . . , m}. Then one easily checks that PC[J, R] is a Banach space under the norm ‖x‖_pc = sup_{t ∈ J} |x(t)|. Set H := L^2(J) with the usual Hilbert structure; in addition, ‖·‖ denotes the norm in H.

Definition 2.1 ([8])
The fractional integral of order α > 0 for a function f is defined as follows:

I^α f(t) = (1/Γ(α)) ∫_0^t (t - s)^{α-1} f(s) ds,

provided that such an integral exists.
The Caputo fractional derivative of order α > 0 for a function f is defined as follows:

C D^α f(t) = (1/Γ(n - α)) ∫_0^t (t - s)^{n-α-1} f^{(n)}(s) ds, n = [α] + 1,

where [α] denotes the integer part of the real number α.
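As a quick check of Definition 2.1, the Caputo derivative of a power function follows directly from the definition (a standard computation, recorded here for the reader's convenience):

```latex
{}^{C}D^{\alpha} t^{\beta}
= \frac{\Gamma(\beta + 1)}{\Gamma(\beta - \alpha + 1)}\, t^{\beta - \alpha},
\qquad \beta > n - 1,\ n = [\alpha] + 1,
```

and C D^α C = 0 for any constant C, since f^{(n)} ≡ 0. In particular, for 0 < α < 1 one has C D^α t = t^{1-α}/Γ(2 - α).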

Lemma 2.1 ([17])
Let P be a normal cone of a real Banach space E. Also, let A : P × P → P be a mixed monotone operator. Assume that (i) for each t ∈ (0, 1) there exists ϕ(t) ∈ (t, 1] such that A(tx, t^{-1}y) ≥ ϕ(t)A(x, y) for all x, y ∈ P, and (ii) there exists h > θ such that A(h, h) ∈ P_h. Then the operator A has a unique fixed point x* in P_h. Moreover, for any initial x_0, y_0 ∈ P_h, constructing successively the sequences x_n = A(x_{n-1}, y_{n-1}), y_n = A(y_{n-1}, x_{n-1}), n = 1, 2, . . . , we have ‖x_n - x*‖ → 0 and ‖y_n - x*‖ → 0 as n → ∞.
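The coupled iteration in Lemma 2.1 is easy to visualize numerically. The following sketch applies the sequences x_n = A(x_{n-1}, y_{n-1}), y_n = A(y_{n-1}, x_{n-1}) to a hypothetical scalar operator (chosen for illustration only, not taken from the paper) that is increasing in its first argument and decreasing in its second:

```python
import math

def A(x, y):
    # Hypothetical mixed monotone operator on (0, inf):
    # increasing in x, decreasing in y.
    return math.sqrt(x) + 1.0 / (1.0 + y)

def coupled_iteration(x0, y0, n_steps=60):
    """Run x_n = A(x_{n-1}, y_{n-1}), y_n = A(y_{n-1}, x_{n-1})."""
    x, y = x0, y0
    for _ in range(n_steps):
        # Update both sequences simultaneously from the previous pair.
        x, y = A(x, y), A(y, x)
    return x, y

x_star, y_star = coupled_iteration(0.5, 5.0)
```

In this instance both sequences converge to the same limit x*, a fixed point with A(x*, x*) = x*, mirroring the squeezing behavior the lemma describes for P_h.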

Initial value problem
In this section, we show the existence-uniqueness of the positive solution to (IP; u) by applying a fixed point theorem for mixed monotone operators (Lemma 2.1). Throughout this section, let P = {x ∈ PC[J, R] : x(t) ≥ 0, ∀t ∈ J}. Obviously, P is a normal cone in PC[J, R]; moreover, the normality constant of P is 1.
Proof If t ∈ J_0, applying the α-order fractional integral to both sides of the first equation of (1.1), we obtain: Then If t ∈ J_1, integrating both sides of the first equation of (1.1) in the same way, we have Since x(t) has a break point t = t_1 within (0, t), we get and Furthermore, we obtain Similarly, if t ∈ J_k, we have Finally, we get Then we know that (3.1) is equivalent to (1.1). Now, we prove that (3.1) satisfies the differential system (1.1). If t ∈ J_0, letting t = 0 in (3.1), we get x(0) = x_0. If t ∈ J_1, taking the derivative on both sides of (3.1), we have

= f(t, x(t), x(t)) + u(t).
In the first branch of (3.1), letting t → t_1^-, we have

[f(s, x(s), x(s)) + u(s)] ds.
In the second branch of (3.1), letting t → t_1^+, we have and then we know So, when t ∈ J_1, (3.1) satisfies every equation of (1.1). Likewise, if t ∈ J_k, (3.1) satisfies every equation of (1.1) too; i.e., (3.1) and (1.1) are completely equivalent. This completes the proof.
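The piecewise integration carried out above yields an integral representation of the solution. A form consistent with this derivation is the following sketch, assuming 0 < α ≤ 1 and the jump conditions Δx|_{t=t_k} = I_k(x(t_k), x(t_k)); the exact display (3.1) may differ in how the memory integral is split at the impulse points:

```latex
x(t) = x_0
     + \sum_{0 < t_k < t} I_k\bigl(x(t_k), x(t_k)\bigr)
     + \frac{1}{\Gamma(\alpha)} \int_0^t (t - s)^{\alpha - 1}
       \bigl[\, f\bigl(s, x(s), x(s)\bigr) + u(s) \,\bigr]\, ds,
\qquad t \in J_k .
```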

Optimal control problem (OP)
In this section, in order to investigate the optimal control problem (OP) for (IP; u), we assume that the following additional condition holds: (H_4) There exist two constants C_f > 0 and C_k > 0 such that Proof Since u_n → u weakly in H, it is easy to see that, for all t ∈ J, Moreover, Since (t - s)^{α-1} - (τ - s)^{α-1} → 0 and (t - τ)^α/α → 0 as t → τ, we get ( u_n)(t) - ( u_n)(τ) → 0 as t → τ, uniformly in n. Proof Obviously, x_n is a solution of (IP; u_n) if and only if By using the Gronwall inequality, we have for all t ∈ J_0, n = 1, 2, . . . .
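The Gronwall inequality invoked above, in its classical integral form, reads as follows (a standard statement, recorded for completeness):

```latex
v(t) \le c + \int_0^t g(s)\, v(s)\, ds \quad (t \in [0, T])
\;\;\Longrightarrow\;\;
v(t) \le c\, \exp\!\Bigl( \int_0^t g(s)\, ds \Bigr),
```

for continuous v ≥ 0, integrable g ≥ 0, and a constant c ≥ 0; fractional variants with the singular kernel (t - s)^{α-1} (e.g., the Henry-Gronwall inequality) yield the same type of a priori bound.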
It is easy to see that (H_1), (H_2), and (H_3) hold. Hence, for each u ∈ H with -M ≤ u(t) ≤ 0, Theorem 3.1 implies that there exists a unique positive solution on J, where M > 0 is a given constant. In addition, let C_f = C_k = 1. Then we can conclude that (H_4) holds. Finally, by means of Theorem 4.1, for each desired target profile x_d in H, problem (OP) for (5.1) has at least one optimal control.