New convergence on inertial neural networks with time-varying delays and continuously distributed delays

Abstract: In this paper, a class of inertial neural networks with bounded time-varying delays and unbounded continuously distributed delays is explored by applying a non-reduced order method. Based on differential inequality techniques and the Lyapunov function method, a new sufficient condition is presented to ensure that all solutions of the addressed model, together with their derivatives, converge to the zero vector, which refines some previously known results. Moreover, a numerical example is provided to illustrate the analytical conclusions.


Introduction
Inertial neural networks form a class of second-order delay differential equations proposed by Babcock and Westervelt [1]. They are obtained by introducing an inertia term into multi-directional associative memory neural networks, and are widely used in the fields of optimization, associative memory, image processing, psychophysics, and adaptive pattern recognition [1]. Therefore, it is of great significance to study the dynamic behaviors (such as stability [2][3][4][5][6], dissipativity [7][8][9], Hopf bifurcation [10][11][12], Lagrange stability [13][14][15], synchronization [16][17][18][19][20], etc.) of inertial neural networks in applications. It is worth noting that the dynamics of inertial neural networks are usually analyzed by converting the model into a first-order differential system through a reduced order variable substitution, under the assumption that the activation functions are bounded [21][22][23][24]. In particular, the periodicity, stability and convergence of inertial neural network systems have been established in [25][26][27][28][29][30] by using this reduced order method. However, the method introduces new parameters and raises the dimension of the inertial neural network system, which greatly increases the amount of computation and is difficult to realize in practice [3,4,20]. Therefore, the authors of [3,4,20] developed non-reduced order methods to establish stability and synchronization conditions for inertial neural networks with constant or time-varying delays, respectively.
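For concreteness, the reduced order substitution mentioned above typically takes the following standard form (a sketch; the auxiliary constants $\xi_i > 0$ are chosen differently across [25][26][27][28][29][30]):

```latex
% Second-order model with inertia term, written schematically as
%   x_i''(t) = -a_i x_i'(t) - b_i x_i(t) + f_i(t, x(t), x(t - \tau(t))).
% Introduce the auxiliary variable
\[
  y_i(t) = x_i'(t) + \xi_i x_i(t), \qquad \xi_i > 0,
\]
% so that the n second-order equations become the 2n first-order equations
\[
  \begin{cases}
    x_i'(t) = -\xi_i x_i(t) + y_i(t), \\[2pt]
    y_i'(t) = (\xi_i - a_i)\, y_i(t)
              + \bigl[\xi_i(a_i - \xi_i) - b_i\bigr] x_i(t)
              + f_i\bigl(t, x(t), x(t - \tau(t))\bigr).
  \end{cases}
\]
```

The doubled dimension and the extra parameters $\xi_i$ are precisely the cost that the non-reduced order approach of [3,4,20], and of the present paper, avoids.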
Because there are many parallel paths with a variety of axon sizes and lengths in neural networks, it is necessary to introduce continuously distributed delays to describe the transmission of neuron signals. In recent years, a large number of works have studied the dynamic behaviors of inertial neural networks with unbounded distributed delays [31][32][33][34][35][36][37]. In particular, the author of [36] studied the global convergence of inertial neural networks with continuously distributed delays by a non-reduced order method:
\[
x_i''(t) = -a_i(t)x_i'(t) - b_i(t)x_i(t) + \sum_{j=1}^{n} c_{ij}(t)\tilde{P}_j(x_j(t)) + \sum_{j=1}^{n} h_{ij}(t)\int_0^{+\infty} K_{ij}(u)\tilde{R}_j(x_j(t-u))\,du + J_i(t), \quad t \ge 0, \tag{1.1}
\]
with initial values
\[
x_i(s) = \varphi_i(s), \quad x_i'(s) = \psi_i(s), \quad s \in (-\infty, 0], \quad \varphi_i, \psi_i \in BC((-\infty, 0], \mathbb{R}), \tag{1.2}
\]
where $BC((-\infty, 0], \mathbb{R})$ is the set of all continuous and bounded functions from $(-\infty, 0]$ to $\mathbb{R}$, $x(t) = (x_1(t), x_2(t), \cdots, x_n(t))$ is the state vector, $x_i''(t)$ is called the inertial term of (1.1), the time-varying connection weights $c_{ij}, h_{ij} : \mathbb{R} \to \mathbb{R}$ and $a_i, b_i : \mathbb{R} \to (0, +\infty)$ are bounded and continuous functions, the delay kernel $K_{ij} : [0, +\infty) \to \mathbb{R}$ is continuous, the external input $J_i(t)$ and the activation functions $\tilde{P}_j$ and $\tilde{R}_j$ are continuous, and $i, j \in S$. Unfortunately, in the initial value condition (1.2), which was also adopted in [24,35], the assumption that $\psi_i \in BC((-\infty, 0], \mathbb{R})$ is unnecessary. In fact, in system (1.1) the transmission term $a_i(t)x_i'(t)$ is not affected by the delays. Combined with the theory of delay differential equations, we can see that in the initial value problem (1.2) it is not necessary to assume that $x_i'(t)$ is bounded and continuous on $(-\infty, 0]$, which leaves room for further improvement.
On the other hand, the dynamic characteristics of inertial neural networks are usually affected by both time-varying delays and distributed delays. Therefore, it is especially significant to study the following inertial neural network system with bounded time-varying delays and unbounded continuously distributed delays:
\[
x_i''(t) = -a_i(t)x_i'(t) - b_i(t)x_i(t) + \sum_{j=1}^{n} c_{ij}(t)P_j(x_j(t)) + \sum_{j=1}^{n} d_{ij}(t)Q_j(x_j(t-\tau_{ij}(t))) + \sum_{j=1}^{n} h_{ij}(t)\int_0^{+\infty} K_{ij}(u)R_j(x_j(t-u))\,du + J_i(t), \tag{1.3}
\]
where $J_i, c_{ij}, d_{ij}, h_{ij} : \mathbb{R} \to \mathbb{R}$, $a_i, b_i : \mathbb{R} \to (0, +\infty)$ and $\tau_{ij} : \mathbb{R} \to \mathbb{R}^+$ are bounded and continuous functions, the delay kernel $K_{ij} \in C([0, +\infty), \mathbb{R})$ is continuous, the activation functions $P_i, Q_i, R_i$ are continuous, and $i, j \in S$. As is well known, the Lyapunov function structure for unbounded time-delay systems is more complex than for bounded time-delay systems; therefore, the stability of the former is more difficult to establish than that of the latter. In particular, there are few studies on the dynamic behaviors of inertial neural networks with bounded time-varying delays and unbounded distributed delays. So far, we have only found that the authors of [38] discussed the existence and exponential stability of the periodic solution of system (1.3) with periodic input functions. However, to the best of our knowledge, there has been no work on the global convergence analysis of system (1.3) by means of a non-reduced order method.
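To indicate why unbounded delays complicate the Lyapunov analysis, a kernel-weighted double-integral term of the following standard type is typically required (a sketch with hypothetical weights $\beta_{ij} > 0$, not the functional actually used in Section 2):

```latex
\[
  V(t) \;=\; \underbrace{(\text{quadratic terms in } x_i,\, x_i')}_{\text{bounded-delay part}}
  \;+\; \sum_{i,j} \beta_{ij} \int_0^{+\infty} |K_{ij}(u)| \int_{t-u}^{t} x_j^2(s)\, ds\, du.
\]
% Differentiating the double integral in t gives
\[
  \frac{d}{dt}\int_0^{+\infty} |K_{ij}(u)| \int_{t-u}^{t} x_j^2(s)\,ds\,du
  \;=\; \Bigl(\int_0^{+\infty} |K_{ij}(u)|\,du\Bigr) x_j^2(t)
        \;-\; \int_0^{+\infty} |K_{ij}(u)|\, x_j^2(t-u)\,du,
\]
% which is what cancels the distributed-delay cross terms -- provided the
% kernel is absolutely integrable, an integrability requirement with no
% analogue in the bounded-delay case.
```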
Regarding the above discussions, in this manuscript the initial value condition (1.2) is modified to
\[
x_i(s) = \varphi_i(s), \quad s \in (-\infty, 0], \qquad x_i'(0) = \psi_i, \tag{1.4}
\]
where $\varphi_i \in BC((-\infty, 0], \mathbb{R})$ and $\psi_i \in \mathbb{R}$, so that no boundedness or continuity is imposed on $x_i'$ over $(-\infty, 0]$.
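To make the model class concrete, the following is a minimal forward-Euler simulation of a hypothetical two-neuron instance of system (1.3). All parameter values are illustrative assumptions (they are not the paper's numerical example): $J_i \equiv 0$, $a_i \equiv 5$, $b_i \equiv 4$, $\tau_{ij}(t) \equiv 1$, $K_{ij}(u) = e^{-u}$, and $P_j = Q_j = R_j = \tanh$. With this exponential kernel the unbounded distributed-delay integral reduces to one extra ordinary differential equation per neuron.

```python
import math

# Forward-Euler simulation of a hypothetical two-neuron instance of system (1.3).
# Every numerical value below is an illustrative assumption, not the paper's
# example: J_i = 0, a_i = 5, b_i = 4, tau_ij(t) = 1, K_ij(u) = e^{-u},
# and P_j = Q_j = R_j = tanh.
n, dt, T = 2, 0.001, 20.0
a, b = [5.0, 5.0], [4.0, 4.0]
c = [[0.1, -0.1], [0.1, 0.1]]    # instantaneous weights c_ij
d = [[0.1, 0.1], [-0.1, 0.1]]    # discrete-delay weights d_ij
h = [[0.1, 0.0], [0.0, 0.1]]     # distributed-delay weights h_ij
act = math.tanh
tau_steps = int(1.0 / dt)        # tau_ij(t) = 1 in units of dt

# Constant initial function phi_i(s) on (-inf, 0]; only x_i'(0) is needed for
# the derivative, mirroring the weakened initial condition (1.4).
phi0 = [1.0, -0.8]
xbuf = [[phi0[i]] * (tau_steps + 1) for i in range(n)]  # xbuf[i][-1] = x_i(t)
v = [0.0, 0.0]                                          # x_i'(0) = psi_i

# For K(u) = e^{-u}, z_j(t) = int_0^inf e^{-u} R_j(x_j(t-u)) du satisfies
# z_j' = -z_j + R_j(x_j(t)); with a constant history, z_j(0) = tanh(phi0_j).
z = [act(phi0[j]) for j in range(n)]

for _ in range(int(T / dt)):
    cur = [xbuf[i][-1] for i in range(n)]
    delayed = [xbuf[i][0] for i in range(n)]            # x_i(t - 1)
    acc = [
        -a[i] * v[i] - b[i] * cur[i]
        + sum(c[i][j] * act(cur[j]) for j in range(n))
        + sum(d[i][j] * act(delayed[j]) for j in range(n))
        + sum(h[i][j] * z[j] for j in range(n))
        for i in range(n)
    ]
    for i in range(n):
        xbuf[i].append(cur[i] + dt * v[i])
        xbuf[i].pop(0)
    z = [z[j] + dt * (-z[j] + act(cur[j])) for j in range(n)]
    v = [v[i] + dt * acc[i] for i in range(n)]

final_x = [xbuf[i][-1] for i in range(n)]
print(final_x, v)   # states and derivatives decay toward the zero vector
```

With the damping terms dominating the small connection weights, both the states and their derivatives decay toward zero, which is the qualitative behavior Theorem 2.1 guarantees under conditions $(G_1)$–$(G_6)$.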

Global convergence of system (1.3)
In this section, we use the following Barbalat's lemma to prove the global convergence of system (1.3).

Lemma 2.1 (Barbalat's lemma). If $f : [0, +\infty) \to \mathbb{R}$ is uniformly continuous and $\int_0^{+\infty} f(s)\,ds$ exists and is finite, then $\lim_{t \to +\infty} f(t) = 0$.
It follows from $(G_2)$, $(G_3)$, $(G_6)$ and (1.3), the assumption $(G_1)$, and the elementary inequality $uv \le \frac{1}{2}(u^2 + v^2)$, which, together with $(G_4)$, (2.2) and (2.3), give $W'(t) \le 0$ on $[0, +\infty)$. This implies that $W(t) \le W(0)$ for all $t \in [0, +\infty)$. Since $\alpha_i |x_i'(t)| \le |\alpha_i x_i'(t) + \gamma_i x_i(t)| + |\gamma_i x_i(t)|$, it follows that $x_i(t)$ and $x_i'(t)$ are uniformly bounded on $[0, +\infty)$ for all $i \in S$. According to the continuity of the right-hand side functions in (1.3), it is easy to see that $x_i''(t)$ is also uniformly bounded on $[0, +\infty)$ for all $i \in S$, which, combined with $(G_4)$, leads to $x_i^2(t)$, $i \in S$, being uniformly continuous on $[0, +\infty)$.
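As one concrete instance of how the elementary inequality above is deployed (an illustration only; the actual cross terms depend on the functional $W$, and the supremum notation $d_{ij}^{+}$ is introduced here for the sketch), a delayed cross term is split into pure squares as:

```latex
\[
  2\,|x_i'(t)|\,|d_{ij}(t)|\,\bigl|Q_j\bigl(x_j(t-\tau_{ij}(t))\bigr)\bigr|
  \;\le\; d_{ij}^{+}\Bigl( x_i'^{\,2}(t) + Q_j^{2}\bigl(x_j(t-\tau_{ij}(t))\bigr) \Bigr),
  \qquad d_{ij}^{+} = \sup_{t \in \mathbb{R}} |d_{ij}(t)|.
\]
% Every mixed product arising in W'(t) is absorbed into pure squares in this
% way, which is what makes the sign condition on W'(t) checkable coefficient
% by coefficient.
```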

In addition, (2.4) entails that $\int_0^{+\infty} x_i^2(s)\,ds < +\infty$ for all $i \in S$, which, together with Lemma 2.1, leads to $\lim_{t \to +\infty} x_i(t) = \lim_{t \to +\infty} x_i'(t) = 0$ for all $i \in S$. The proof is complete.

Remark 2.3. Obviously, system (1.1) is a special case of system (1.3) with $d_{ij} \equiv 0$, $i, j \in S$, and the restrictions in the initial value condition (1.4) are weaker than those in (1.2); hence all the results in [30] can be derived from Theorem 2.1. Moreover, global Lipschitz conditions on the activation functions were crucial in [3,20,31], where the convergence of the state vector of inertial neural network systems was considered. In this paper, however, the global Lipschitz conditions have been abandoned, and the global convergence of the inertial neural network system with bounded time-varying delays and unbounded continuously distributed delays has been established. This implies that Theorem 2.1 generalizes and complements the main results of [3,20,30,31].

Conclusions
In this paper, applying differential inequality techniques coupled with the Lyapunov function method instead of the reduced order method, we have studied the global convergence of inertial neural networks with bounded time-varying delays and unbounded continuously distributed delays. A sufficient condition has been established to guarantee that every solution of the addressed model, together with its derivative, converges to the zero vector. It should be mentioned that the method applied in this paper provides a possible approach to studying the dynamical behaviours of other inertial neural network models with bounded time-varying delays and unbounded continuously distributed delays.