New LMI-based condition on global asymptotic stability concerning BAM neural networks of neutral type☆
Introduction
Recent years have witnessed the rapid development of bidirectional associative memory (BAM) neural networks, owing to their strong abilities of information memory and information association and their vast applications in pattern recognition, artificial intelligence, and automatic control engineering [1], [2]. Since stability is one of the most important behaviors of neural networks, the stability analysis of BAM networks has attracted considerable attention. A great number of results for BAM neural networks concerning the existence of an equilibrium point and global asymptotic or exponential stability have been derived (see, for example, [3], [4], [5], [6], [7], [8], [9], [10], [11], [30], [31], [32] and the references therein).
Usually, in the electronic implementation of analog neural networks, time delays inevitably occur in the communication and response of neurons because of the finite switching speed of amplifiers. Time delays may degrade system performance and cause oscillation, leading to instability, which motivates the investigation of the stability of delayed neural networks. So far, a great number of asymptotic and exponential stability conditions for neural networks with time delays have been derived; see [5], [6], [9], [10], [11], [12], [13], [14]. When both time delays and parameter uncertainties are taken into account in BAM neural networks, some stability conditions have been given in [7], [8]. In addition, because of the complicated dynamic properties of neural cells in the real world, existing neural network models in many cases cannot characterize the properties of a neural reaction process precisely. It is therefore natural and important that system models contain information about the derivative of the past state, so as to further describe and model the dynamics of such complex neural reactions. This new type of neural network is called a neutral neural network, or a neural network of neutral type. Since neutral neural networks carry important information about the derivative of the past state, it is important to study such complicated systems. However, to date, the stability analysis of neural networks of neutral type has been investigated only rarely [15], [16], [17], [18], [19], [20], [21], [22], [23], [33].
In [20], the authors considered a class of neutral-type delayed neural networks with discrete and distributed delays, denoted system (1.1). Under the assumptions that the activation functions are bounded and monotonically increasing, by introducing some new integral inequalities and constructing a Lyapunov–Krasovskii functional, new LMI conditions were established for the global asymptotic stability of system (1.1).
In [21], the authors discussed the delay-dependent asymptotic stability of BAM neural networks of neutral type, treating two cases according to whether or not the neutral delays equal the state delays. The neural networks of neutral type studied in [21] are of the following form:

dx_i(t)/dt = −a_i x_i(t) + Σ_{j=1}^m c_{ji} f_j(y_j(t − τ)) + Σ_{j=1}^m d_{ji} ẏ_j(t − h) + I_i,
dy_j(t)/dt = −b_j y_j(t) + Σ_{i=1}^m p_{ij} g_i(x_i(t − σ)) + Σ_{i=1}^m q_{ij} ẋ_i(t − r) + J_j,   i, j = 1, …, m,

where x_i(t) and y_j(t) are the states of the ith and the jth neurons, respectively; a_i and b_j denote the rates with which cells i and j reset their potentials to the resting states when isolated from the other cells and inputs; c_{ji}, d_{ji}, p_{ij}, and q_{ij} are the connection weights; I_i and J_j denote the constant external inputs; τ and σ are the time delays in the states; h and r are the neutral delays; and f_j and g_i are the activation functions.
Under the assumptions that the activation functions are bounded and globally Lipschitz continuous, by introducing some new integral inequalities and constructing a Lyapunov–Krasovskii functional, the authors of [21] established new LMI conditions for the global asymptotic stability of the neural network system (1.2).
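To make the neutral-type dynamics concrete, the following is a minimal simulation sketch of a two-neuron network of this general form, integrated by the explicit Euler method with stored derivative histories. All parameter values, delays, and activation choices here are hypothetical illustrations, not taken from [21]:

```python
import math

# Minimal Euler simulation of a two-neuron neutral-type BAM network of the
# general form described above (all parameter values are hypothetical):
#   x'(t) = -a*x(t) + c*f(y(t - tau)) + d*y'(t - h) + I
#   y'(t) = -b*y(t) + p*g(x(t - sigma)) + q*x'(t - r) + J
def simulate(T=50.0, dt=0.01):
    a, b = 2.0, 2.0          # self-decay (reset) rates
    c, p = 0.5, 0.4          # state-delay connection weights
    d, q = 0.1, 0.1          # neutral-term weights
    I, J = 1.0, -0.5         # constant external inputs
    tau = sigma = 0.5        # state delays
    h = r = 0.3              # neutral delays
    f = g = math.tanh        # globally Lipschitz activations (L = 1)

    lag = int(max(tau, sigma, h, r) / dt)
    # histories of states and derivatives (constant initial functions)
    xs, ys = [0.2] * (lag + 1), [-0.3] * (lag + 1)
    dxs, dys = [0.0] * (lag + 1), [0.0] * (lag + 1)
    for _ in range(int(T / dt)):
        x, y = xs[-1], ys[-1]
        dx = -a * x + c * f(ys[-1 - int(tau / dt)]) + d * dys[-1 - int(h / dt)] + I
        dy = -b * y + p * g(xs[-1 - int(sigma / dt)]) + q * dxs[-1 - int(r / dt)] + J
        xs.append(x + dt * dx); dxs.append(dx)
        ys.append(y + dt * dy); dys.append(dy)
    return xs[-1], ys[-1], dxs[-1], dys[-1]

x_inf, y_inf, dx_inf, dy_inf = simulate()
print(x_inf, y_inf, dx_inf, dy_inf)
```

For these (small) weights the trajectory settles at an equilibrium: the returned derivatives are essentially zero, and the final state approximates the fixed point of x* = (c·tanh(y*) + I)/a, y* = (p·tanh(x*) + J)/b.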
In this paper, we continue the study of the global asymptotic stability of system (1.2). Our purpose is to remove the boundedness assumptions of [20], [21] and the monotonicity assumption of [20] on the activation functions, and to establish a new LMI condition for the global asymptotic stability of system (1.2) under the assumption that the activation functions satisfy only global Lipschitz conditions.
The difficulties in dealing with the stability problem of the neural networks (1.2) by the LMI method include two points. (1) Handling the effects of the neutral terms on networks (1.2) is difficult: because of their presence, many more inequality techniques are needed in the proofs of existence, uniqueness, and global asymptotic stability of the equilibrium point. (2) It is a delicate task to arrange, within the LMI method, that the LMI condition in the existence and uniqueness result for the equilibrium point can be deduced from the LMI condition in the global asymptotic stability result. In [20], [21] and many other papers, the boundedness conditions on the activation functions ensure the existence of an equilibrium point, so the existence part of the proof disappears naturally. In our paper, however, the activation functions are required only to satisfy a global Lipschitz condition; thus we must simultaneously obtain an LMI condition for the existence and uniqueness of the equilibrium point and an LMI condition for its global asymptotic stability, and ensure that the two conditions are identical or that one can be deduced from the other. Therefore, many inequality techniques and matrix inequalities are used to construct the two conditions so that the stability condition implies the existence condition. Correspondingly, the main contributions of this paper are as follows. (1) By a suitable multiplication together with Claim 1 in the Appendix (see Section 2), we propose an LMI condition for the existence and uniqueness of the equilibrium point of neural networks (1.2). (2) By Claim 2 in the Appendix and matrix equations used in the proof of global asymptotic stability, we propose an LMI condition for the global asymptotic stability of neural networks (1.2).
(3) In view of contributions (1) and (2), a new and more intricate LMI condition is obtained for the global asymptotic stability of neural networks (1.2) under the assumption that the activation functions satisfy only global Lipschitz conditions. In almost all papers that discuss the stability of neural networks by the LMI method (see, for example, [24], [25]), very involved LMI conditions are obtained under the assumptions that the activation functions are monotone and bounded, or bounded and Lipschitz continuous. Recently, in [26], [27], [28], the authors obtained their LMI conditions under the assumption that the activation functions satisfy only global Lipschitz conditions; however, those LMI conditions are still not ideal, since their LMI matrices contain too many zero elements. Thus, to the best of our knowledge, we are the first to obtain such a full LMI condition for the stability of neural networks under the assumption that the activation functions satisfy only global Lipschitz conditions.
The rest of this paper is organized as follows. In Section 2, we prove the existence and uniqueness of an equilibrium point by using homeomorphism theory. In Section 3, the new delay-independent stability condition is established by using the existence and uniqueness result and constructing a Lyapunov functional. Section 4 provides an illustrative example to show the effectiveness of the theoretical result.
For the sake of convenience, we introduce some notation and definitions. I denotes the identity matrix. For any matrix A, A^T stands for the transpose of A and A^{−1} denotes the inverse of A. If A is a symmetric matrix, A > 0 (A ≥ 0) means that A is positive definite (positive semidefinite); similarly, A < 0 (A ≤ 0) means that A is negative definite (negative semidefinite). λ_max(A) and λ_min(A) denote the maximum and minimum eigenvalues of a square matrix A. Let R^m be the m-dimensional Euclidean space, endowed with a norm ‖·‖ and inner product ⟨·,·⟩, respectively. For a column vector x = (x_1, …, x_m)^T, the norm is the Euclidean vector norm, i.e., ‖x‖ = (Σ_{i=1}^m x_i^2)^{1/2}.

Definition 1
A point (x*, y*) is said to be an equilibrium point of system (1.2) if the right-hand sides of (1.2) vanish when all (delayed and non-delayed) state arguments are held at (x*, y*).

Lemma 1 (Forti and Tesi [29])
Let H: R^m → R^m be continuous. If H satisfies the following conditions: (i) H is injective on R^m; (ii) ‖H(x)‖ → ∞ as ‖x‖ → ∞;
then H is a homeomorphism of R^m onto itself.
Fact 1
If A ≥ 0 is a symmetric m × m matrix, then λ_min(A)‖x‖² ≤ x^T A x ≤ λ_max(A)‖x‖² for all x ∈ R^m.
Lemma 2
If ε > 0 and a, b are real constants, then 2ab ≤ εa² + ε^{−1}b².
Proof
This is a well-known inequality; its proof is omitted. □
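Lemma 2 is of the classical Young type; assuming the common version 2ab ≤ εa² + ε^{−1}b² for ε > 0 (which follows from (√ε·a − b/√ε)² ≥ 0), it can be spot-checked numerically. The sketch below is purely illustrative:

```python
import random

random.seed(0)

def gap(a, b, eps):
    # Slack of the inequality: eps*a^2 + b^2/eps - 2ab, which should be >= 0
    # (up to floating-point rounding) for every real a, b and every eps > 0.
    return eps * a * a + b * b / eps - 2 * a * b

# Minimum slack over many random samples of (a, b, eps).
min_gap = min(
    gap(random.uniform(-10, 10), random.uniform(-10, 10), random.uniform(1e-3, 10.0))
    for _ in range(10_000)
)
print(min_gap)
```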
Existence of an equilibrium point
In this section, we prove the existence and uniqueness of the equilibrium point of system (1.2) by applying homeomorphism theory and linear matrix inequalities. Theorem 1 Assume that the following assumptions hold: there exist positive constants L_j^f and L_i^g such that |f_j(u) − f_j(v)| ≤ L_j^f|u − v| and |g_i(u) − g_i(v)| ≤ L_i^g|u − v| for all u, v ∈ R and i, j = 1, …, m; and there exist positive diagonal matrices of order m and four matrices such that:
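In practice, an LMI condition of the kind appearing in Theorem 1 is checked by testing the definiteness of a candidate matrix, typically with a semidefinite-programming solver. As a minimal illustration, the following pure-Python sketch tests positive definiteness via Cholesky factorization on small hypothetical blocks (the matrices are examples, not the paper's LMI):

```python
def is_positive_definite(M):
    # Cholesky factorization (Cholesky-Crout) succeeds iff the symmetric
    # matrix M is positive definite; M is a row-major list of lists.
    n = len(M)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                d = M[i][i] - s          # pivot; must stay positive
                if d <= 0:
                    return False
                L[i][i] = d ** 0.5
            else:
                L[i][j] = (M[i][j] - s) / L[j][j]
    return True

# Hypothetical test blocks: the first is positive definite, the second is not.
print(is_positive_definite([[2.0, -1.0], [-1.0, 2.0]]))   # True
print(is_positive_definite([[1.0, 2.0], [2.0, 1.0]]))     # False
```

A negative-definiteness test (as needed for an LMI of the form Ω < 0) follows by applying the same routine to −Ω.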
Global asymptotic stability
In this section, we study the global asymptotic stability of system (1.2) by means of matrix inequalities and Lyapunov functionals. Theorem 2 Assume that the assumptions of Theorem 1 hold. Further assume that (h) there exist m-order positive definite matrices, m-order positive diagonal matrices, and matrices such that:
Example 1
Consider the following BAM neural network of neutral type with delays, in which the delays are positive constants:
Conclusions
In this paper, we discuss the global asymptotic stability of a BAM neural network of neutral type with delays. Under the assumption that the activation functions satisfy only global Lipschitz conditions, a new LMI condition is established for the global asymptotic stability of the above neutral neural network by means of homeomorphism theory, matrix inequalities, and Lyapunov functionals. In our result, the hypotheses of boundedness in [20], [21] and monotonicity in [20] on the activation functions are removed.
References (33)
- et al., Existence and stability of almost periodic solution for BAM neural networks with delays, Appl. Math. Comput. (2003)
- et al., Delay-dependent stability analysis for continuous-time BAM neural networks with Markovian jumping parameters, Neural Networks (2010)
- Global asymptotic stability of delayed bi-directional associative memory neural networks, Appl. Math. Comput. (2003)
- et al., A general framework for global asymptotic stability analysis of delayed neural networks based on LMI approach, Chaos Solitons Fractals (2005)
- et al., LMI-based criteria for global robust stability of bidirectional associative memory networks with time delay, Nonlinear Anal. (2007)
- et al., On the global robust asymptotic stability of BAM neural networks with time-varying delays, Neurocomputing (2006)
- Global asymptotic stability of hybrid BAM neural networks with time delays, Phys. Lett. A (2006)
- et al., Exponential stability of neural networks with variable delays via LMI approach, Chaos Solitons Fractals (2006)
- et al., Stability analysis on a neutral neural networks model
- et al., A new stability criterion for bidirectional associative memory neural networks of neutral-type, Appl. Math. Comput. (2008)
- LMI optimization approach on stability for delayed neural networks of neutral-type, Appl. Math. Comput.
- Delay-dependent exponential stability for a class of neural networks with time delays, J. Comput. Appl. Math.
- A new approach to exponential stability analysis of neural networks with time-varying delays, Neural Networks
- New delay-dependent asymptotic stability conditions concerning BAM neural networks of neutral type, Neurocomputing
- LMI-based approach for global exponential robust stability for reaction–diffusion uncertain neural networks with time-varying delay, Chaos Solitons Fractals
- New LMI conditions for global exponential stability of cellular neural networks with delays, Nonlinear Anal. Real World Appl.
Zhengqiu Zhang (1963) is a Professor, Ph.D. His research interests include neural network theory and applications, artificial intelligent control theory and engineering.
Kaiyu Liu (1964) is a teacher, Ph.D. Her research interests include neural network theory and applications, artificial intelligent control theory and engineering.
Yan Yang (1986) holds a master's degree. His research interests include neural network theory and applications.
☆ This project was sponsored by the Scientific Research Foundation for the Returned Overseas Chinese Scholars, State Education Ministry (2009) 1341, and by the National Natural Science Foundation of China (No. 61065008).