Improved delay-dependent robust stability criteria for recurrent neural networks with time-varying delays
Highlights
- This paper presents improved stability results for RNNs with time-varying delays.
- The results are expressed in terms of LMIs.
- The results are less conservative than existing ones.
- The restriction on the rate of change of the time-varying delays is relaxed.
- Numerical examples illustrate the effectiveness of the results.
Introduction
In recent years, neural networks (NNs) have attracted much research attention and have found successful applications in many areas such as pattern recognition, image processing, associative memory, and optimization problems [6], [16]. One of the important research topics is the global asymptotic stability of neural network models. However, in the implementation of artificial NNs, time delays are unavoidable due to the finite switching speed of amplifiers. It has been shown that the existence of time delays in recurrent neural networks (RNNs) may lead to oscillation, divergence, or instability. Therefore, the stability of RNNs with delay has become a topic of great theoretical and practical importance. Generally, when a neural network is applied to solve an optimization problem, it needs to have a unique and globally stable equilibrium point. Thus, it is of great interest to establish conditions that ensure the global asymptotic stability of a unique equilibrium point of RNNs with delay [1], [2], [4], [5], [7], [8], [9], [10], [11], [12], [13], [14], [15], [18], [19], [20], [21], [22], [23].
So far, the stability criteria for RNNs with time delay fall into two categories: delay-independent [1], [2], [4], [14], [18], [23] and delay-dependent [5], [7], [8], [10], [11], [12], [13], [15], [20], [21]. Generally speaking, delay-dependent stability criteria are less conservative than delay-independent ones when the time delay is small, so much recent work has focused on the delay-dependent type. Some less conservative stability criteria were proposed in [8] by considering some useful terms and using the free-weighting matrices method. Stability criteria for neural networks with time-varying delay were considered in [10], where the relationship between the time-varying delay and its lower and upper bounds was taken into account. By constructing a new augmented Lyapunov functional that contains a triple-integral term, an improved delay-dependent stability criterion was derived in [19]. However, these results are still conservative to some extent, leaving room for further improvement.
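For context, augmented Lyapunov–Krasovskii functionals with a triple-integral term, of the kind used in [19], typically take the following form (this is a representative template, not the exact functional of the cited paper; P, Q, R, Z are positive definite matrices and h is the upper bound of the delay):

```latex
V(x_t) = x^{T}(t)Px(t)
       + \int_{t-h}^{t} x^{T}(s)Qx(s)\,ds
       + \int_{-h}^{0}\!\int_{t+\theta}^{t} \dot{x}^{T}(s)R\,\dot{x}(s)\,ds\,d\theta
       + \int_{-h}^{0}\!\int_{\theta}^{0}\!\int_{t+\lambda}^{t} \dot{x}^{T}(s)Z\,\dot{x}(s)\,ds\,d\lambda\,d\theta
```

The triple-integral term is what distinguishes the augmented functional: when its derivative is bounded, it yields double-integral cross terms that tighten the resulting LMI conditions.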
In this paper, the problem of delay-dependent robust stability for recurrent neural networks with time-varying delay is considered. A sufficient condition for the solvability of this problem, which depends on the size of the time delay, is presented by means of a Lyapunov functional and the linear matrix inequality (LMI) approach. Furthermore, the proposed condition is less conservative than previously established ones and involves fewer decision variables, as shown by several numerical examples. All results are derived in the LMI framework and the solutions are obtained using the Matlab LMI toolbox. Finally, numerical examples are given to indicate significant improvements over the existing results.
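The criteria in the paper are LMIs solved with the Matlab LMI toolbox. As a minimal illustration of LMI-based stability certification (not the paper's actual conditions), the sketch below checks the classic delay-free Lyapunov LMI pair A^T P + P A < 0, P > 0 by solving the equivalent Lyapunov equation A^T P + P A = -Q for a chosen Q > 0; the example matrix A is an assumption, not taken from the paper.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Example Hurwitz matrix (illustrative assumption, not from the paper).
A = np.array([[-2.0, 0.5],
              [0.3, -1.5]])
Q = np.eye(2)

# Solve A^T P + P A = -Q (scipy solves a x + x a^H = q, so pass A.T).
P = solve_continuous_lyapunov(A.T, -Q)

# The LMI pair is feasible iff P is symmetric positive definite.
eigs = np.linalg.eigvalsh((P + P.T) / 2)
print("P positive definite:", bool(np.all(eigs > 0)))
```

For the full matrix-inequality conditions of the paper, a semidefinite programming front end (e.g. the Matlab LMI toolbox the authors use) is needed; this equation-based check only covers the delay-free case.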
Section snippets
Problem formulation
Consider the following recurrent neural network with time-varying delays and parameter uncertainties: where is the state vector of the n neurons; is an activation function indicating how the jth neuron responds to its input; is a diagonal matrix, with each entry controlling the rate at which the ith unit resets its potential to the resting
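The snippet above is truncated, but systems of this class commonly take the form x'(t) = -A x(t) + W0 f(x(t)) + W1 f(x(t - tau(t))) with a diagonal rate matrix A and sector-bounded activations. As a hedged sketch (all matrices and the delay profile below are illustrative assumptions, and the paper's uncertain terms are omitted), such a system can be integrated with forward Euler and a history buffer:

```python
import numpy as np

def simulate(A, W0, W1, x0, tau, T=10.0, dt=1e-3):
    """Euler-integrate a delayed RNN; tau(t) returns the delay at time t."""
    n_steps = int(T / dt)
    # Constant initial history equal to x0 on [-max delay, 0].
    hist = np.tile(np.asarray(x0, float), (n_steps + 1, 1))
    for k in range(n_steps):
        t = k * dt
        d = max(0, k - int(tau(t) / dt))   # buffer index of the delayed state
        x, xd = hist[k], hist[d]
        f = np.tanh                        # sector-bounded activation
        hist[k + 1] = x + dt * (-A @ x + W0 @ f(x) + W1 @ f(xd))
    return hist

A  = np.diag([2.0, 2.0])                   # self-feedback rates (assumption)
W0 = np.array([[0.1, -0.2], [0.2, 0.1]])   # connection weights (assumption)
W1 = np.array([[0.3, 0.1], [-0.1, 0.3]])   # delayed connection weights (assumption)
traj = simulate(A, W0, W1, x0=[1.0, -0.5], tau=lambda t: 0.5 + 0.4 * np.sin(t))
print("final state norm:", np.linalg.norm(traj[-1]))
```

With these small weights the trajectory contracts to the origin despite the time-varying delay, which is the qualitative behavior the stability criteria certify.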
Main results
In this section, we use the integral inequality approach (IIA) to obtain a stability criterion for a recurrent neural network with time-varying delays. First, we take up the case where and in system (4) as follows:
Based on the Lyapunov–Krasovskii stability theorem and integral inequality approach (IIA), the following result is obtained. Theorem 1 For given positive scalars and the recurrent neural network system with
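A representative integral inequality of the Jensen type, which underlies approaches like the IIA (stated here for context; the paper may use a refined variant), is, for any positive definite R and delay bound h > 0:

```latex
-\int_{t-h}^{t} \dot{x}^{T}(s)\,R\,\dot{x}(s)\,ds
\;\le\;
-\frac{1}{h}\left(\int_{t-h}^{t}\dot{x}(s)\,ds\right)^{\!T} R \left(\int_{t-h}^{t}\dot{x}(s)\,ds\right)
```

Since the inner integral evaluates to x(t) - x(t-h), this bounds the integral term in the functional's derivative by a quadratic form in the states, which is what allows the stability condition to be written as an LMI.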
Numerical examples
In this section, we provide three numerical examples to demonstrate the effectiveness and reduced conservatism of our delay-dependent stability criteria. Example 1 Consider a delayed recurrent neural network with the following parameters:
The neuron activation functions are assumed to satisfy Assumption 1 with Solution. Our purpose is to estimate the maximum allowable delay bound (MADB) under different such that the system (32)
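The MADB in such examples is typically estimated by bisecting on the delay bound h and testing LMI feasibility at each step. The sketch below shows that search logic; since the paper's LMI test is not reproducible here, the feasibility oracle is replaced by an analytic one (an assumption for illustration): the scalar system x'(t) = -b x(t - h) is asymptotically stable iff b*h < pi/2, so the true MADB is pi/(2b).

```python
import numpy as np

def madb(feasible, h_lo=0.0, h_hi=10.0, tol=1e-6):
    """Largest h with feasible(h) True, via bisection (feasible assumed monotone)."""
    while h_hi - h_lo > tol:
        h = 0.5 * (h_lo + h_hi)
        if feasible(h):
            h_lo = h      # stable at h: search larger delays
        else:
            h_hi = h      # infeasible at h: search smaller delays
    return h_lo

b = 1.0
# Analytic stability oracle standing in for an LMI feasibility check.
bound = madb(lambda h: b * h < np.pi / 2)
print(f"estimated MADB: {bound:.4f}")   # analytic value is pi/2
```

In the paper's setting, `feasible(h)` would call the LMI solver with the delay bound fixed at h; the bisection itself is unchanged.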
Conclusions
In this paper, we have proposed some new delay-dependent sufficient conditions for the robust stability analysis of a class of recurrent neural networks with time-varying delays and parameter uncertainties. These conditions are derived under a weak assumption on the neuron activation functions and are expressed in terms of LMIs. We have discussed the advantage of the assumption investigated in our paper over those in previous studies in the literature. It has been established that the
References (23)
- Global asymptotic stability of a larger class of neural networks with constant time delay, Physics Letters A (2003).
- Novel delay-dependent stability criteria of neural networks with time-varying delay, Neurocomputing (2009).
- Novel delay-dependent robust stability criterion of delayed cellular neural networks, Chaos, Solitons & Fractals (2007).
- New results on stability analysis of neural networks with time-varying delays, Physics Letters A (2006).
- Improved delay-dependent stability criterion for neural networks with time-varying delays, Physics Letters A (2009).
- Delay-dependent exponential stability analysis of delay neural networks: an LMI approach, Neural Networks (2002).
- Robust exponential stability for uncertain time-varying delay systems with delay dependence, Journal of The Franklin Institute (2009).
- Improved stability criteria for neural networks with time-varying delay, Physics Letters A (2009).
- Improved delay-dependent stability criterion for neural networks with time-varying delay, Applied Mathematics and Computation (2011).
- An analysis of global asymptotic stability of delayed neural networks, IEEE Transactions on Neural Networks (2002).
- Linear matrix inequalities in system and control theory, Society for Industrial and Applied Mathematics, Philadelphia, PA.