Improved results on mixed passive and H∞ performance for uncertain neural networks with mixed interval time-varying delays via feedback control

Abstract: This paper studies the mixed passive and H∞ performance for uncertain neural networks with interval discrete and distributed time-varying delays via feedback control. The interval discrete and distributed time-varying delay functions are not assumed to be differentiable. Improved criteria for exponential stability with a mixed passive and H∞ performance are obtained for the uncertain neural networks by constructing a Lyapunov-Krasovskii functional (LKF) comprising single, double, triple, and quadruple integral terms and using a feedback controller. Furthermore, integral inequalities and a convex combination technique are applied to obtain less conservative results for a special case of neural networks. Using the Matlab LMI toolbox, the new exponential stability criteria with a mixed passive and H∞ performance are expressed in terms of linear matrix inequalities (LMIs), which cover both H∞ and passive performance by setting parameters in the general performance index. Numerical examples are given to demonstrate the benefits and effectiveness of the derived theoretical results. The method given in this paper is less conservative and more general than existing ones.

• The activation functions are different, and the output is general.
• By using the Lyapunov-Krasovskii stability theory, new results on exponential stability with a mixed passive and H∞ performance for the uncertain neural networks are obtained. Owing to the weighting parameter, the results are more general in that H∞ performance or passive performance for the uncertain neural networks is included as a special case.
• Different from the methods in [30][31][32], a Lyapunov-Krasovskii functional comprising single, double, triple, and quadruple integral terms and integral inequalities are employed. A convex combination idea and a zero equation are also used. The method used in this paper yields less conservative results when compared with the existing results in [30][31][32].
This paper is organized in five sections as follows. In Section 2, the network model and preliminaries are provided. Section 3 presents the exponential stability analysis with a mixed passive and H∞ performance for the uncertain neural network system, and the stability analysis of a special case of the neural network. Numerical examples are given in Section 4, and conclusions are addressed in Section 5.
(2) under the zero initial condition, there exists a scalar γ > 0 such that the following inequality is satisfied for any T_p ≥ 0 and any non-zero ω(t) ∈ L2[0, ∞):
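The inequality itself did not survive extraction. A typical mixed passive and H∞ performance index of the kind the abstract describes, with a weighting parameter β ∈ [0, 1], reads as follows (the symbols and exact form below are an assumption offered for orientation, not the paper's condition (2.3)):

```latex
\int_{0}^{T_p}\Big[\beta\,\gamma^{-1} z^{T}(s)\,z(s)
  - 2(1-\beta)\,z^{T}(s)\,\omega(s)\Big]\,ds
\;\le\; \gamma \int_{0}^{T_p} \omega^{T}(s)\,\omega(s)\,ds .
```

Setting β = 1 recovers the standard H∞ performance bound, while β = 0 recovers the passivity condition, which is how a single index can cover both performances.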
Lemma 2.5. [36] For given matrices P, Q, and R with RᵀR ≤ I and a scalar α > 0, the following inequality holds:
Lemma 2.6. [37] Let P, Q, and R be given matrices such that R > 0; then:
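Both inequalities were lost in extraction. The standard statements in the robust control literature that match the stated hypotheses are the following (these are plausible reconstructions, not the paper's exact text):

```latex
% Lemma 2.5 (uncertainty bounding), assuming R^{T}R \le I and \alpha > 0:
P R Q + Q^{T} R^{T} P^{T} \;\le\; \alpha\, P P^{T} + \alpha^{-1}\, Q^{T} Q .
% Lemma 2.6 (completion-of-squares bound), assuming R > 0:
P^{T} Q + Q^{T} P \;\le\; P^{T} R P + Q^{T} R^{-1} Q .
```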

wherein the block matrices are defined as above; then the NNs (3.1) is exponentially stable with a mixed passive and H∞ performance. Moreover, the controller is of the given form. Proof. Consider the model (3.1) with the following Lyapunov-Krasovskii functional. We compute the time derivatives of V_i(x(t), t), i = 1, 2, . . . , 9, along the trajectories of (3.1). Utilizing Lemma 2.4, the inequalities up to (3.16) are easily obtained. By utilizing Lemma 2.3 for i = 1, . . . , n, we obtain further inequalities, each stated in an equivalent form. We also introduce a zero equation involving the terms h(x(s)) ds + Eω(t).
Adding the above zero equation to V̇(x(t), t), we obtain from (2.3) and (3.5)-(3.22) an inequality whose left-hand side is a convex combination of Θ(1) and Θ(2); the combination is negative definite only if both Θ(1) and Θ(2) are. Under the zero initial condition, for any T_p we find that the condition (2.3) is guaranteed for any non-zero ω(t) ∈ L2[0, ∞). If ω(t) = 0, then in the sense of equation (3.26) there exists a scalar υ1 > 0 bounding V̇. By the definitions of V_i(x(t), t), it is easy to derive the inequalities (3.27), (3.28). We are now ready to deal with the exponential stability of (3.1). Consider the functional e^{2ct} V(x(t), t), where c is a constant. Using (3.27), (3.28), and λmax(X2), and taking c to be a constant satisfying c ≤ υ1/(2µ1), we obtain from (3.29), together with (3.4) and (3.28), the desired bound; with b2 = 2c, (3.31) can be rewritten accordingly. Hence the NNs (3.1) is exponentially stable with a mixed passive and H∞ performance index γ. The proof is completed.
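The explicit estimate in (3.31) did not survive extraction. Proofs of this type typically conclude with an exponential bound of the following shape, where µ1 and µ2 denote the lower and upper quadratic bounds on the functional and φ is the initial function (the symbols here are generic placeholders, not the paper's exact constants):

```latex
\|x(t)\| \;\le\; \sqrt{\tfrac{\mu_2}{\mu_1}}\;
e^{-c t}\, \sup_{-\max(\sigma_2,\,\delta_2)\,\le\, s\,\le\, 0} \|\phi(s)\| ,
\qquad t \ge 0 .
```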

Mixed passive and H ∞ analysis for uncertain neural networks
In the second part, the criteria of exponential stability with a mixed passive and H∞ performance for the uncertain neural networks are obtained by using a proof similar to that of Theorem 3.1 together with Lemmas 2.5 and 2.6.
In the third part, we investigate the stability of a special case of the neural network model, in order to compare the maximum allowable delay with existing results.
Proof. We choose the following Lyapunov-Krasovskii functional candidate for the system (3.36). By applying a proof similar to that of Theorem 3.1, the system (3.36) is shown to be exponentially stable.

Remark 3. Recently, the robust passivity problem of uncertain neural networks with interval discrete and distributed time-varying delays has been studied in [14]. Also, the robust reliable H∞ control problem of uncertain neural networks with mixed time delays has been discussed in [23]. However, the problem of mixed passive and H∞ performance for uncertain neural networks with interval discrete and distributed time-varying delays has not been investigated yet. The results in this paper provide sufficient conditions ensuring that the uncertain neural network is exponentially stable with a mixed passive and H∞ index γ. The conditions are obtained by constructing a Lyapunov-Krasovskii functional consisting of novel integral terms.
Remark 4. It is well known that time delay is a normal phenomenon in neural networks, since they consist of a large number of neurons connected with each other through axons of diverse sizes and lengths. In practice, time delays can occur in an irregular fashion; in particular, the time-varying delays are sometimes not differentiable. Therefore, in this work, the interval discrete and distributed time-varying delays are not required to be differentiable functions.
Remark 5. It is well known that H∞ theory is very important in control problems. The H∞ approaches are used in control theory to synthesize controllers achieving stabilization with an H∞ norm bound for disturbance attenuation. Passivity theory is widely used in system synthesis and analysis, as a system with passivity performance can effectively reduce the impact of noise. In fact, a passive system does not produce energy by itself, but only consumes the system's energy. The main property of passivity is that it can keep the system internally stable. Based on the above, the obtained results address the mixed passivity and H∞ problem for uncertain neural networks with mixed time-varying delays. Compared with the design of a single H∞ or passive controller, the control problem under a mixed H∞/passive performance is more general. For example, a mixed H∞ and passive performance index has been employed in handling the event-triggered reliable control issue for fuzzy Markov jump systems (FMJSs), which recovers the H∞ or passive event-triggered reliable control problem for FMJSs by tuning some fixed parameters. Hence, the results of this paper are more general and convenient than the existing individual passive and H∞ results.
Remark 6. In this work, the Lyapunov-Krasovskii functional consists of single, double, triple, and quadruple integral terms, which make full use of the information on the delays σ1, σ2, δ1, δ2 and the state variable x(t). Furthermore, more information on the activation functions has been fully taken into the stability and performance analysis; that is, the bounds H+_i are addressed in the calculation. Hence, the construction of the Lyapunov-Krasovskii functional and the technique for its computation are the main keys to the improved results of this work. In the proofs of Theorems 3.1, 3.2, and Corollary 3.3, integral inequalities and a convex combination technique are used to bound the derivative of the Lyapunov-Krasovskii functional, which provide tighter bounds than the inequalities in [30][31][32][38]. All of these lead to the improved results in our work, as shown by the comparisons with some existing works in the numerical examples. However, the complex computation of the Lyapunov-Krasovskii functional means the LMIs derived in this work contain much information about the system. They remain feasible for NNs with a large number of neurons and can be solved using the Matlab LMI toolbox. Hence, as further work, it would be interesting to improve the technique toward a simpler Lyapunov-Krasovskii functional while still achieving better results.

Numerical examples
In this section, we provide four numerical examples that illustrate the effectiveness of the proposed results. Moreover, two of the examples show less conservative results than existing ones.
Example 4.1. We consider the neural networks (3.36) with the matrix parameters in [30]. Taking parameters β1 = β2 = 1 and solving Example 4.1 using the LMIs in Corollary 3.3, we obtain the maximum allowable values of σ2 for different σ1 without requiring an upper bound µ on the delay derivative, as shown in Table 1. Table 1 shows that the results derived in this paper are less conservative than those in [30].
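The paper carries out these feasibility tests with MATLAB's LMI toolbox, and the matrix parameters of [30] are not reproduced in this excerpt. As an illustration of the simplest instance of such a test, the sketch below uses Python with SciPy and an illustrative Hurwitz matrix A (an assumption, not the paper's parameters): it solves the Lyapunov equation AᵀP + PA = -Q and verifies that the solution P is positive definite, which certifies feasibility of the basic stability LMI P > 0, AᵀP + PA < 0.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Illustrative stand-in system matrix (NOT the parameters of [30]);
# its eigenvalues have negative real parts, so it is Hurwitz.
A = np.array([[-2.0, 0.5],
              [0.3, -3.0]])
Q = np.eye(2)

# solve_continuous_lyapunov(a, q) solves a X + X a^H = q, so passing
# a = A.T and q = -Q yields P with A^T P + P A = -Q.
P = solve_continuous_lyapunov(A.T, -Q)

# P symmetric positive definite <=> the Lyapunov LMI is feasible.
eigs = np.linalg.eigvalsh(P)
print(eigs.min() > 0)  # True: exponential stability is certified
```

The full criteria of Corollary 3.3 involve many more decision variables and delay-dependent blocks, so in practice they are handled by a semidefinite programming solver rather than a closed-form Lyapunov solve; this sketch only conveys the feasibility-check idea.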
Remark 8. The stability criteria of Theorem 3.1, in the form of the LMIs (3.2) and (3.3), can easily be checked using the LMI toolbox in MATLAB [39]. The improved stability criteria based on the Lyapunov-Krasovskii functional are expressed as LMIs whose dimension depends on the number of neurons in the neural network. Thus, the computational burden grows with the network size; this remains an active issue in LMI optimization within applied mathematics and optimization research. Hence, in the future, new techniques should be considered to reduce the conservativeness caused by the time delays, such as the delay-fractioning approach.

Remark 9. In future work, it would be very challenging to apply some of the lemmas and the Lyapunov-Krasovskii functional used in this paper to the quaternion-valued case in order to obtain improved stability conditions.

Conclusion
The problem of mixed passive and H∞ analysis for uncertain neural networks with state feedback control has been investigated in this paper. We obtained new sufficient conditions guaranteeing exponential stability with a mixed passive and H∞ performance for the uncertain neural networks by using a Lyapunov-Krasovskii functional consisting of single, double, triple, and quadruple integral terms together with a feedback controller. Furthermore, integral inequalities and a convex combination technique were applied to obtain less conservative results for a special case of neural networks with interval discrete time-varying delays. The new criteria are expressed in terms of linear matrix inequalities (LMIs), which cover both H∞ and passive performance by setting parameters in the general performance index. Finally, numerical examples were given to show the effectiveness of the proposed results and the improvement over some existing results in the literature. In future work, the derived results and methods are expected to be applied to other systems, such as fuzzy control systems, complex dynamical networks, quaternion-valued neural networks, and so on [16,40,41].