Finite-time passivity for neutral-type neural networks with time-varying delays via auxiliary function-based integral inequalities

Abstract. In this paper, we investigate the problem of finite-time boundedness and finite-time passivity for neural networks with time-varying delays. Triple, quadruple and quintuple integral terms carrying the delay information are introduced in a new Lyapunov–Krasovskii functional (LKF). Based on the auxiliary function-based integral inequality, the Wirtinger integral inequality and Jensen's inequality, several sufficient conditions are derived. Finally, numerical examples are provided to verify the effectiveness of the proposed criteria, and the results are compared with existing ones.


Introduction
Recently, neural networks have received much attention because of their extensive applications in signal processing, optimization, pattern recognition, pattern classification, image processing, model identification and other engineering fields. The stability problem of neural networks with time-varying delays has been deeply investigated in [8-10, 12, 25, 37]. Time-delay phenomena are inevitable in real systems: the existence of a time delay degrades the dynamic performance of a system or even leads to instability. Therefore, the stability and control problems of time-delay systems have attracted a lot of scholars' attention, and some nice results have been obtained for linear and nonlinear time-delay neural networks during the past few decades. Moreover, delay-dependent stability conditions are generally less conservative than delay-independent ones; neural networks with time-varying delays have been investigated by employing the LMI technique in [22]. In [27], the authors studied finite-time stability of uncertain neural networks with neutral delay. Passivity analysis for neural networks of neutral type has been studied in [32], and passivity analysis for memristor-based stochastic BAM neural networks of neutral type was presented in [26]. To the best of the authors' knowledge, the finite-time passivity of neural networks with neutral-type time-varying delays has not been completely studied in the literature, which motivates the research in this paper.
With the above motivation, in this article the finite-time boundedness and finite-time passivity of neutral-type neural networks with time-varying delay are explored by means of the auxiliary function-based integral inequality technique. As a result, some of the conservatism remaining for neural networks with interval time-varying delay is further reduced. At the end, several numerical examples are addressed to show the effectiveness of the developed stability criteria. The highlights and major contributions of this paper are reflected in the following key points: (i) we consider a system with time-varying delays and additionally take the effect of the neutral delay into account; (ii) simple LMI-based criteria are established with the help of the auxiliary function-based integral inequality combined with the Wirtinger integral inequality and Jensen's inequality; (iii) finite-time boundedness, finite-time stability and finite-time passivity conditions are derived in the theorems; (iv) several examples are investigated to verify the correctness of the main theorems and the corollaries.
The outline of the paper is structured as follows. In Section 2, the system models and some necessary mathematical preliminaries are declared. In Section 3, we present the main results for the neural network model in which neutral delay is taken into account. Simulation examples are given in Section 4, and conclusions follow in Section 5.
Notations. R^n denotes the n-dimensional Euclidean space, and R^{m×n} is the set of all m × n real matrices. The superscript "T" denotes matrix transposition. For symmetric matrices A and B, A ≥ B (respectively, A > B) means that A − B is positive semidefinite (respectively, positive definite). ‖·‖ denotes the Euclidean norm in R^n. If Q is a square matrix, λ_max(Q) (respectively, λ_min(Q)) denotes the largest (respectively, smallest) eigenvalue of Q. The asterisk "∗" in a symmetric matrix denotes the term induced by symmetry, and diag{·} stands for a diagonal matrix.
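As a concrete illustration of the notation above, the following minimal Python sketch computes λ_max and λ_min for a symmetric 2 × 2 matrix in closed form and tests "A > B" (i.e., A − B positive definite) via Sylvester's criterion. The helper names `eig_sym2` and `is_pos_def2` and the example matrices are ours, not from the paper.

```python
import math

def eig_sym2(a, b, c):
    """Eigenvalues (lambda_min, lambda_max) of the symmetric matrix [[a, b], [b, c]]."""
    mean = (a + c) / 2.0
    r = math.hypot((a - c) / 2.0, b)   # radius of the eigenvalue pair around the mean
    return mean - r, mean + r

def is_pos_def2(a, b, c):
    """Sylvester's criterion for [[a, b], [b, c]]: both leading principal minors positive."""
    return a > 0 and a * c - b * b > 0

# lambda_min(Q), lambda_max(Q) for Q = [[2, 1], [1, 2]]
lam_min, lam_max = eig_sym2(2.0, 1.0, 2.0)   # -> (1.0, 3.0)

# "A > B" means A - B > 0: here A = diag{3, 3}, B = [[2, 1], [1, 2]],
# so A - B = [[1, -1], [-1, 1]] has determinant 0 and is NOT positive definite.
print(is_pos_def2(3.0 - 2.0, 0.0 - 1.0, 3.0 - 2.0))  # prints False
```

The closed form for the 2 × 2 case avoids any linear-algebra dependency; for the general n × n matrices appearing in the LMIs, an eigenvalue or Cholesky routine would be used instead.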

Problem formulation and preliminaries
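For orientation, one standard neutral-type network model consistent with the definitions used below (diagonal A, connection matrices B, C, D, E, delays h(t) and d, disturbance v(t)) can be written as follows. The exact form, including the output equation y(t) = f(x(t)), is an assumption on our part rather than necessarily the system studied in this paper.

```latex
\begin{aligned}
\dot{x}(t) &= -A x(t) + B f(x(t)) + C f\bigl(x(t - h(t))\bigr)
              + D \dot{x}(t - d) + E v(t), \\
y(t)       &= f(x(t)), \qquad
x(\theta)  = \phi(\theta), \quad \theta \in [-\max\{h, d\},\, 0].
\end{aligned}
```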
Consider the neutral-type neural networks with time-varying delays as follows:

where x(t) ∈ R^n is the neural state vector, v(t) ∈ L_2[0, ∞) is the exogenous disturbance input vector, y(t) is the output vector of the neural network, f(x(t)) is the neuron activation function, A = diag{a_1, a_2, . . . , a_n} > 0 is a diagonal matrix, and B, C, D and E are connection weight matrices. φ(θ) denotes the continuous vector-valued initial function, h(t) denotes the time-varying delay, and d is the neutral delay. We define the interval t_{k+1} − t_k = h_k + Δh_k ≤ h + Δ_h(t), where |Δ_h(t)| < ρ < h and ρ is a small scalar. The intervals can be written as

Assumption 1. For a given positive parameter δ, the external disturbance input w(t) is time varying and satisfies

For presentation convenience, we denote

(b) Under the zero initial condition, the following relation holds for a given scalar γ > 0:

Lemma 1. (See [20].) For a positive definite matrix M, a differentiable function x(u), u ∈ (α, β), and the polynomial auxiliary functions p_i(u) = (u − α)^i, the following inequality holds for 0 ≤ i ≤ 3:

Lemma 2. (See [21].) For any constant matrix M > 0, the following inequality holds for every continuously differentiable function x : [α, β] → R^n:

Lemma 3. For any vector function x(·) ∈ R^n such that the following integration is well defined, then

Main results

Finite-time boundedness
In this section, we investigate finite-time boundedness for the delayed neural networks (1)–(3), where φ(θ) is a continuous vector-valued initial function, and we define the following vectors:

Theorem 1. For given scalars h, µ, d, δ, α, β, c_1, c_2 and T, the neural network (4)–(5) is finite-time bounded if there exist positive symmetric matrices P, Q_i (i = 1, 2, . . . , 10), diagonal matrices U, S and matrices N_1, N_2 with appropriate dimensions such that the following LMIs hold:

Θ

where

Proof. Consider the following Lyapunov–Krasovskii functional:

Calculating the time derivative of V(x(t)) and using Lemma 2 in (11), we can get

and applying Lemma 1 in V̇_5(x(t)), we get

By applying Lemma 3, we get

By Lemma 3 we obtain

Also, by using Lemma 3 we can get

By Lemma 3 we get

Furthermore, the following equality holds for any real matrices N_1 and N_2 with compatible dimensions:

Based on Assumption 2, for i = 1, 2, . . . , n, we obtain

which is equivalent to

where m_i denotes the unit column vector having 1 in its ith row and zeros elsewhere. Let U = diag{u_1, u_2, . . . , u_n} and S = diag{s_1, s_2, . . . , s_n}.
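The Jensen-type bounds invoked repeatedly in the proof above can be sanity-checked numerically in the scalar case (M = 1), where the inequality reads (β − α) ∫ ẋ(u)² du ≥ (x(β) − x(α))². The helper name `jensen_gap` and the test function x(u) = sin u are illustrative choices of ours, not from the paper.

```python
import math

def jensen_gap(f, df, alpha, beta, n=10000):
    """Scalar Jensen integral inequality check:
    (beta - alpha) * integral of df(u)^2 over [alpha, beta]  >=  (f(beta) - f(alpha))^2.
    The integral is approximated with a midpoint Riemann sum; returns (lhs, rhs)."""
    h = (beta - alpha) / n
    integral = sum(df(alpha + (k + 0.5) * h) ** 2 for k in range(n)) * h
    lhs = (beta - alpha) * integral
    rhs = (f(beta) - f(alpha)) ** 2
    return lhs, rhs

# x(u) = sin u on [0, 1]: lhs = 1/2 + sin(2)/4 ~ 0.7273, rhs = sin(1)^2 ~ 0.7081
lhs, rhs = jensen_gap(math.sin, math.cos, 0.0, 1.0)
print(lhs >= rhs)  # prints True; equality would require an affine x
```

The positive gap lhs − rhs is exactly what the Wirtinger-based and auxiliary function-based refinements tighten by subtracting extra nonnegative terms.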
Proof. The proof is similar to that of Theorem 1, so it is omitted here.
Proof. By using the LKF and lines similar to those in the proof of Theorem 1, we can obtain ξ^T(t)Φξ(t) < 0.

Conclusion
In this article, we investigated the finite-time passivity of neutral-type neural networks with time-varying delays. By applying the Jensen-type integral inequality technique, a delay-dependent criterion is developed that achieves finite-time boundedness and finite-time stability for neutral-type neural networks. Based on the proposed multiple-integral forms of the Wirtinger-based integral inequality and the auxiliary function-based integral inequalities for the high-order case, a novel delay-dependent condition is established that achieves finite-time passivity of the neural networks. Numerical examples show the effectiveness of the theoretical results and their superiority over existing results. The proposed technique can be extended to related finite-time stabilization and synchronization problems: finite/fixed-time pinning synchronization of complex networks with stochastic disturbances [17]; discontinuous observer design for finite-time consensus of multiagent systems with external disturbances [16]; and nonsmooth finite-time synchronization of switched coupled neural networks [15]. These extensions will be pursued in future work.

http://www.journals.vu.lt/nonlinear-analysis