Elsevier

Neurocomputing

Volume 81, 1 April 2012, Pages 24-32

New LMI-based condition on global asymptotic stability concerning BAM neural networks of neutral type

https://doi.org/10.1016/j.neucom.2011.10.006

Abstract

In this paper, we discuss the global asymptotic stability of a class of BAM neural networks of neutral type with delays. Under the assumption that the activation functions satisfy only global Lipschitz conditions, a new LMI condition for the global asymptotic stability of these neutral neural networks is established by means of homeomorphism theory, matrix theory and Lyapunov functionals. In our result, the boundedness hypotheses of [20], [21] and the monotonicity hypothesis of [20] on the activation functions are removed. Moreover, the LMI condition differs from those in [20], [21]. Finally, an example is given to show the effectiveness of the theoretical result.

Introduction

Recent years have witnessed the rapid development of bidirectional associative memory (BAM) neural networks, owing to their strong abilities of information memory and information association and their wide applications in pattern recognition, artificial intelligence and automatic control engineering [1], [2]. Since stability is one of the most important behaviors of neural networks, the stability analysis of BAM networks has attracted considerable attention. A great number of results for BAM neural networks concerning the existence of equilibrium points and global asymptotic or exponential stability have been derived (see, for example, [3], [4], [5], [6], [7], [8], [9], [10], [11], [30], [31], [32] and the references therein).

Usually, in electronic implementations of analog neural networks, time delays inevitably occur in the communication and response of neurons because of the unavoidable finite switching speed of amplifiers. The existence of time delays may degrade system performance and cause oscillation, leading to instability, which motivates the investigation of the stability of delayed neural networks. So far, a great number of asymptotic or exponential stability conditions for neural networks with time delays have been derived; see [5], [6], [9], [10], [11], [12], [13], [14]. When both time delays and parameter uncertainties are taken into account in BAM neural networks, some stability conditions have been given in [7], [8]. In addition, because of the complicated dynamic properties of neural cells in the real world, existing neural network models in many cases cannot characterize the properties of a neural reaction process precisely. It is therefore natural and important that the system model contain some information about the derivative of the past state, in order to further describe and model the dynamics of such complex neural reactions. This new type of neural network is called a neutral neural network, or a neural network of neutral type. Since neutral neural networks carry important information about the derivative of the past state, it is important to study such complicated systems. However, to date, the stability analysis of neural networks of neutral type has been investigated only rarely [15], [16], [17], [18], [19], [20], [21], [22], [23], [33].
In [20], the authors considered the following neutral-type delayed neural networks with discrete and distributed delays:

$$\dot{x}_i(t)=-a_i x_i(t)+\sum_{j=1}^{m} w_{ij} f_j(x_j(t))+\sum_{j=1}^{m} v_{ij} f_j(x_j(t-\tau))+\sum_{j=1}^{m} b_{ij}\int_{-\infty}^{t} k_j(t-s) f_j(x_j(s))\,ds+\sum_{j=1}^{n} d_{ij}\dot{x}_j(t-\tau)+I_i,\quad i=1,2,\ldots,m. \tag{1.1}$$

Under the conditions that the activation functions are bounded and monotonically increasing, by introducing some new integral inequalities and constructing a Lyapunov–Krasovskii functional, new LMI conditions were established for the global asymptotic stability of system (1.1).

In [21], the authors discussed the delay-dependent asymptotic stability of BAM neural networks of neutral type. Two cases are considered, according to whether or not the neutral delays are equal to the state delays. The neural networks of neutral type studied in [21] take the following form:

$$\begin{aligned}
\dot{x}_i(t)&=-a_i x_i(t)+\sum_{j=1}^{m} w_{1ij} f_j(y_j(t-\tau))+\sum_{j=1}^{m} w_{2ij}\dot{x}_j(t-h)+c_i,\\
\dot{y}_j(t)&=-b_j y_j(t)+\sum_{i=1}^{m} v_{1ji} g_i(x_i(t-\sigma))+\sum_{i=1}^{m} v_{2ji}\dot{y}_i(t-d)+d_j,\quad i,j=1,2,\ldots,m,
\end{aligned} \tag{1.2}$$

where $x_i(t)$ and $y_j(t)$ are the states of the $i$th and the $j$th neuron, respectively; $a_i>0$ and $b_j>0$ denote the rates with which cells $i$ and $j$ reset their potential to the resting state when isolated from the other cells and inputs; $w_{1ij}$, $w_{2ij}$, $v_{1ji}$ and $v_{2ji}$ are the connection weights; $c_i$ and $d_j$ denote the constant external inputs; $\tau>0$ and $\sigma>0$ are time delays in the state, and $h>0$ and $d>0$ are neutral delays; $f_j$ and $g_i$ are the activation functions.
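To make the neutral terms concrete, the following sketch integrates a system of the form (1.2) with a fixed-step Euler scheme; since the neutral terms $\dot{x}_j(t-h)$ and $\dot{y}_i(t-d)$ require past *derivatives*, the derivative history is stored alongside the state history. The activation functions (`np.tanh`), initial data and parameter values in any call are illustrative assumptions, not values from the paper.

```python
import numpy as np

def simulate_neutral_bam(a, b, W1, W2, V1, V2, c, d_in,
                         tau, sigma, h, d, x0, y0,
                         f=np.tanh, g=np.tanh, dt=1e-3, T=20.0):
    """Fixed-step Euler integration of a neutral-type BAM system of form (1.2).

    The neutral terms need the derivative of the past state, so the
    derivative histories (dx, dy) are stored next to the state histories.
    A constant initial history equal to (x0, y0) with zero derivative is assumed.
    """
    n = int(T / dt)
    k_tau, k_sig = int(tau / dt), int(sigma / dt)
    k_h, k_d = int(h / dt), int(d / dt)
    m = len(a)
    x = np.tile(np.asarray(x0, float), (n + 1, 1))
    y = np.tile(np.asarray(y0, float), (n + 1, 1))
    dx = np.zeros((n + 1, m))
    dy = np.zeros((n + 1, m))
    for k in range(n):
        y_tau = y[k - k_tau] if k >= k_tau else y[0]   # y(t - tau)
        x_sig = x[k - k_sig] if k >= k_sig else x[0]   # x(t - sigma)
        dx_h = dx[k - k_h] if k >= k_h else np.zeros(m)  # x'(t - h)
        dy_d = dy[k - k_d] if k >= k_d else np.zeros(m)  # y'(t - d)
        dx[k] = -a * x[k] + W1 @ f(y_tau) + W2 @ dx_h + c
        dy[k] = -b * y[k] + V1 @ g(x_sig) + V2 @ dy_d + d_in
        x[k + 1] = x[k] + dt * dx[k]
        y[k + 1] = y[k] + dt * dy[k]
    return x, y
```

For small weights and decay rates $a_i, b_j \ge 1$, a trajectory started away from the origin contracts toward the (here unique) equilibrium, which is what the stability conditions of [21] and of this paper guarantee analytically.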

In [21], under the conditions that the activation functions are supposed to be bounded and globally Lipschitz continuous, by introducing some new integral inequalities and constructing a Lyapunov–Krasovskii functional, new LMI conditions are established on global asymptotic stability for the neural networks system (1.2).

In this paper, we continue the study of the global asymptotic stability of system (1.2). Our purpose is to remove the boundedness assumptions of [20], [21] and the monotonicity assumption of [20] on the activation functions, and, under the assumption that the activation functions satisfy only global Lipschitz conditions, to establish a new LMI condition for the global asymptotic stability of system (1.2).

The difficulties in dealing with the stability problem of the neural networks (1.2) by the LMI method are twofold. (1) Handling the effects of the neutral terms $\dot{x}_j(t-h)$ and $\dot{y}_i(t-d)$ on networks (1.2) is difficult: because of these neutral terms, many more inequality techniques are needed in the proofs of existence, uniqueness and global asymptotic stability of the equilibrium point. (2) Ensuring, by the LMI method, that the LMI condition in the existence and uniqueness result for the equilibrium point can be deduced from the LMI condition in the global asymptotic stability result is a hard task. In [20], [21] and many other papers, the boundedness conditions on the activation functions guarantee the existence of an equilibrium point, so the existence part of the proof disappears naturally. In our paper, however, the activation functions are required only to satisfy a global Lipschitz condition, so we must simultaneously obtain the LMI condition for the existence and uniqueness of the equilibrium point and the LMI condition for its global asymptotic stability, and ensure that the two LMI conditions are identical or that one can be deduced from the other. Therefore, many inequality techniques and matrix inequalities are used to construct $\Omega_1$ and $\Omega$ such that $\Omega<0$ implies $\Omega_1>0$. Correspondingly, the main contributions of this paper are as follows. (1) By multiplying $H(x,y)-H(\bar{x},\bar{y})$ by $[P(x-\bar{x})+P_2(x-\bar{x})+P_3(y-\bar{y}),\; Q(y-\bar{y})+M_2(y-\bar{y})+M_3(x-\bar{x})]^T$ and applying Claim 1 in the Appendix (see Section 2), we propose an LMI condition for the existence and uniqueness of the equilibrium point of the neural networks (1.2). (2) By Claim 2 in the Appendix and the use of matrix equations in the proofs of global asymptotic stability, we propose an LMI condition for the global asymptotic stability of the neural networks (1.2).
(3) In view of contributions (1) and (2), a new and more elaborate LMI condition is obtained for the global asymptotic stability of the neural networks (1.2) under the assumption that the activation functions satisfy only a global Lipschitz condition. So far, in almost all papers that discuss the stability of neural networks by the LMI method (for example, [24], [25]), very complicated LMI conditions have been obtained under the assumptions that the activation functions satisfy monotonicity and boundedness conditions, or boundedness and Lipschitz continuity conditions. Recently, in [26], [27], [28], the authors obtained their respective LMI conditions under the assumption that the activation functions satisfy only a global Lipschitz condition, but those LMI conditions are still not ideal, since their LMI matrices contain many zero elements. Thus, to the best of our knowledge, we are the first to obtain such a fully coupled LMI condition (one whose matrices contain few zero elements) for the stability of neural networks under the assumption that the activation functions satisfy only a global Lipschitz condition.

The rest of this paper is organized as follows. In Section 2, we prove the existence and uniqueness of an equilibrium point by using homeomorphism theory. In Section 3, a new delay-independent stability condition is established by using the existence and uniqueness result and by constructing a Lyapunov functional. Section 4 provides an illustrative example to show the effectiveness of the theoretical result.

For convenience, we introduce some notations and definitions. $I$ denotes the identity matrix; for any matrix $A$, $A^T$ stands for the transpose of $A$ and $A^{-1}$ denotes the inverse of $A$. If $A$ is a symmetric matrix, $A>0$ ($A\ge 0$) means that $A$ is positive definite (nonnegative definite); similarly, $A<0$ ($A\le 0$) means that $A$ is negative definite (negative semidefinite). $\lambda_M(A)$ and $\lambda_m(A)$ denote the maximum and minimum eigenvalues of a square matrix $A$. $|A|=(|a_{ij}|)$, $|x|=(|x_1|,\ldots,|x_m|)^T$, and $\|A\|=\sqrt{\lambda_M(A^TA)}$. Let $R^m$ be the $m$-dimensional Euclidean space, endowed with a norm $\|\cdot\|$ and inner product $(\cdot,\cdot)$, respectively. For a column vector $x=(x_1,x_2,\ldots,x_m)^T\in R^m$, the norm is the Euclidean vector norm, i.e., $\|x\|=\bigl(\sum_{i=1}^{m} x_i^2\bigr)^{1/2}$.
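The matrix norm $\|A\|=\sqrt{\lambda_M(A^TA)}$ defined above is exactly the spectral norm (largest singular value of $A$); a quick numerical sanity check:

```python
import numpy as np

# ||A|| = sqrt(lambda_max(A^T A)) is the spectral norm, i.e. the
# largest singular value of A; compare against NumPy's 2-norm.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
spec = np.sqrt(np.max(np.linalg.eigvalsh(A.T @ A)))
assert np.isclose(spec, np.linalg.norm(A, 2))
```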

Definition 1

A point $(x^*,y^*)^T\in R^m\times R^m$ is said to be an equilibrium point of system (1.2) if

$$a_i x_i^*-\sum_{j=1}^{m} w_{1ij} f_j(y_j^*)-c_i=0,\quad i=1,2,\ldots,m,$$
$$b_j y_j^*-\sum_{i=1}^{m} v_{1ji} g_i(x_i^*)-d_j=0,\quad j=1,2,\ldots,m,$$

where $x^*=(x_1^*,x_2^*,\ldots,x_m^*)^T$, $y^*=(y_1^*,y_2^*,\ldots,y_m^*)^T$.
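As a purely numerical illustration (the paper's existence proof instead uses homeomorphism theory, since fixed-point iteration needs a contraction), the equilibrium equations of Definition 1 can be solved by iterating the two equations in turn. The sketch below uses the activation functions and weights of Example 1 in Section 4 only in its test; the helper itself is generic.

```python
import numpy as np

def bam_equilibrium(a, b, W1, V1, c, d, f, g, tol=1e-12, max_iter=10_000):
    """Solve the equilibrium equations of Definition 1,
        a_i x_i = sum_j w1_ij f_j(y_j) + c_i,
        b_j y_j = sum_i v1_ji g_i(x_i) + d_j,
    by fixed-point iteration. Convergence is guaranteed only when the
    composite map is a contraction (small enough Lipschitz gains relative
    to a_i, b_j); the paper proves existence without this restriction.
    """
    x = np.zeros_like(np.asarray(a, dtype=float))
    y = np.zeros_like(np.asarray(b, dtype=float))
    for _ in range(max_iter):
        x_new = (W1 @ f(y) + c) / a
        y_new = (V1 @ g(x_new) + d) / b
        if max(np.abs(x_new - x).max(), np.abs(y_new - y).max()) < tol:
            return x_new, y_new
        x, y = x_new, y_new
    return x, y
```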

Lemma 1 Forti and Tesi [29]

Let $H:R^{2m}\to R^{2m}$ be continuous. If $H$ satisfies the following conditions:

  • (1)

    $H(u)$ is injective on $R^{2m}$.

  • (2)

    $\|H(u)\|\to\infty$ as $\|u\|\to\infty$.

Then H is a homeomorphism.

Fact 1

If $a>0$, $b>0$, then $2ab\le a^2+b^2$.

Lemma 2

If $a_k$ and $b_k$ ($k=1,2,\ldots,m$) are real constants, then $\bigl(\sum_{k=1}^{m} a_k b_k\bigr)^2\le \sum_{k=1}^{m} a_k^2 \sum_{k=1}^{m} b_k^2$.

Proof

This is the well-known Cauchy–Schwarz inequality, and its proof is omitted.


Existence of an equilibrium point

In this section, we prove the existence and uniqueness of the equilibrium point of system (1.2) by applying homeomorphism theory and linear matrix inequalities.

Theorem 1

Assume that the following assumptions hold:

(H1) There exist positive constants $\alpha_j,\beta_i$ such that for all $x,y\in R$ and $i,j=1,2,\ldots,m$,
$$|f_j(x)-f_j(y)|\le\alpha_j|x-y|,\qquad |g_i(x)-g_i(y)|\le\beta_i|x-y|.$$

(H2) There exist $m$-order positive diagonal matrices $P,Q,Y_1,Y_2$ and four matrices $P_2=(p_{ij})_{m\times m}$, $M_2=(m_{ij})_{m\times m}$, $P_3=(l_{ij})_{m\times m}$, $M_3=(n_{ij})_{m\times m}$ such that

$$\Omega_1=\begin{pmatrix} T_{11} & T_{12} & T_{13} & T_{14}\\ * & T_{22} & T_{23} & T_{24}\\ * & * & T_{33} & 0\\ * & * & * & T_{44} \end{pmatrix}>0,$$

where $*$ denotes the transpose of the corresponding upper-triangular block.
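Once the blocks $T_{ij}$ are assembled numerically, a condition such as $\Omega_1>0$ can be verified by checking that the smallest eigenvalue of the symmetric block matrix is positive. A minimal helper, independent of the specific blocks (an illustrative check, not the LMI solver one would use to search for $P,Q,Y_1,Y_2$):

```python
import numpy as np

def is_positive_definite(M, tol=0.0):
    """Check an LMI of the form M > 0: a symmetric matrix is positive
    definite iff its smallest eigenvalue is strictly positive."""
    M = np.asarray(M, dtype=float)
    if not np.allclose(M, M.T):
        raise ValueError("LMI matrices must be symmetric")
    return bool(np.min(np.linalg.eigvalsh(M)) > tol)
```

For example, `is_positive_definite(np.eye(3))` is `True`, while a symmetric matrix with a negative eigenvalue fails the check.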

Global asymptotic stability

In this section, we study the global asymptotic stability of system (1.2) by means of matrix analysis and by constructing Lyapunov functionals.

Theorem 2

Assume that (H1) in Theorem 1 holds. Further assume that

(h) There exist $m$-order positive definite matrices $R_i$ ($i=1,2$), $m$-order positive diagonal matrices $P,Q,Y_1,Y_2,Y_3,Y_4$ and matrices $P_2=(p_{ij})_{m\times m}$, $M_2=(m_{ij})_{m\times m}$, $P_3=(l_{ij})_{m\times m}$, $M_3=(n_{ij})_{m\times m}$, $P_1,P_4,P_5,P_6,M_1,M_4,M_5,M_6$ such that

$$\Omega=\begin{pmatrix}
a_{11}&a_{12}&a_{13}&a_{14}&a_{15}&a_{16}&a_{17}&a_{18}\\
*&a_{22}&a_{23}&a_{24}&a_{25}&a_{26}&a_{27}&a_{28}\\
*&*&a_{33}&0&a_{35}&a_{36}&a_{37}&a_{38}\\
*&*&*&a_{44}&a_{45}&a_{46}&a_{47}&a_{48}\\
*&*&*&*&a_{55}&a_{56}&a_{57}&a_{58}\\
*&*&*&*&*&a_{66}&a_{67}&a_{68}\\
*&*&*&*&*&*&a_{77}&a_{78}\\
*&*&*&*&*&*&*&a_{88}
\end{pmatrix}<0.$$

Example 1


Consider the following BAM neural networks of neutral type with delays:

$$\begin{aligned}
\dot{x}_i(t)&=-a_i x_i(t)+\sum_{j=1}^{m} w_{1ij} f_j(y_j(t-\tau))+\sum_{j=1}^{m} w_{2ij}\dot{x}_j(t-h)+c_i,\quad i=1,2,\ldots,m,\\
\dot{y}_j(t)&=-b_j y_j(t)+\sum_{i=1}^{m} v_{1ji} g_i(x_i(t-\sigma))+\sum_{i=1}^{m} v_{2ji}\dot{y}_i(t-d)+d_j,\quad j=1,2,\ldots,m,
\end{aligned}$$

where $\tau,\sigma,h,d$ are positive constants, $a_1=1$, $a_2=2$, $b_1=2.6$, $b_2=1$, $c_i=2$, $d_j=2$ ($i,j=1,2$), $f_i(y)=0.5|y|+1$, $g_i(x)=|x|+2$ ($i=1,2$), and

$$\begin{aligned}
&w_{111}=0.1,\quad w_{112}=1,\quad w_{121}=0.1,\quad w_{122}=0.05,\\
&w_{211}=0.05,\quad w_{212}=0.1,\quad w_{221}=0.05,\quad w_{222}=0.05,\\
&v_{111}=0.1,\quad v_{112}=0.2,\quad v_{121}=0.15,\quad v_{122}=0.1,\\
&v_{211}=0.25,\quad v_{212}=0.1,\quad v_{221}=0.02,\quad v_{222}=0.08.
\end{aligned}$$

Since the activation functions $f_i(y)=0.5|y|+1$ and $g_i(x)=|x|+2$ are globally Lipschitz continuous, with $\alpha_j=0.5$ and $\beta_i=1$, but unbounded, the results of [20], [21] cannot be applied to this example.
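As a numerical cross-check of the theoretical result, the sketch below integrates Example 1 with a fixed-step Euler scheme, storing the derivative history needed by the neutral terms, and verifies that the trajectory settles at an equilibrium (both derivatives vanish). The delay values $\tau=\sigma=h=d=1$ and the zero initial history are illustrative choices of ours, which is unproblematic here because the stability condition is delay-independent.

```python
import numpy as np

# Example 1 data
a = np.array([1.0, 2.0]); b = np.array([2.6, 1.0])
W1 = np.array([[0.1, 1.0], [0.1, 0.05]])
W2 = np.array([[0.05, 0.1], [0.05, 0.05]])
V1 = np.array([[0.1, 0.2], [0.15, 0.1]])
V2 = np.array([[0.25, 0.1], [0.02, 0.08]])
c = np.array([2.0, 2.0]); d_ext = np.array([2.0, 2.0])
f = lambda y: 0.5 * np.abs(y) + 1.0
g = lambda x: np.abs(x) + 2.0

dt, T = 1e-3, 40.0
kd = int(1.0 / dt)   # tau = sigma = h = d = 1 (illustrative; result is delay-independent)
n = int(T / dt)
x = np.zeros((n + 1, 2)); y = np.zeros((n + 1, 2))
dx = np.zeros((n + 1, 2)); dy = np.zeros((n + 1, 2))
for k in range(n):
    y_tau = y[k - kd] if k >= kd else y[0]
    x_sig = x[k - kd] if k >= kd else x[0]
    dx_h = dx[k - kd] if k >= kd else np.zeros(2)
    dy_d = dy[k - kd] if k >= kd else np.zeros(2)
    dx[k] = -a * x[k] + W1 @ f(y_tau) + W2 @ dx_h + c
    dy[k] = -b * y[k] + V1 @ g(x_sig) + V2 @ dy_d + d_ext
    x[k + 1] = x[k] + dt * dx[k]
    y[k + 1] = y[k] + dt * dy[k]

# Near the equilibrium both derivatives must vanish.
assert np.max(np.abs(dx[n - 1])) < 1e-3
assert np.max(np.abs(dy[n - 1])) < 1e-3
```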

Conclusions

In this paper, we discussed the global asymptotic stability of a class of BAM neural networks of neutral type with delays. Under the assumption that the activation functions satisfy only global Lipschitz conditions, a new LMI condition for the global asymptotic stability of these neutral neural networks was established by means of homeomorphism theory, matrix theory and Lyapunov functionals. In our result, the boundedness hypotheses of [20], [21] and the monotonicity hypothesis of [20] on the activation functions are removed.


Zhengqiu Zhang (1963) is a Professor, Ph.D. His research interests include neural network theory and applications, artificial intelligent control theory and engineering.

Kaiyu Liu (1964) is a teacher, Ph.D. Her research interests include neural network theory and applications, artificial intelligent control theory and engineering.

Yan Yang (1986) is a Master's degree candidate. His research interests include neural network theory and applications.

This project was sponsored by the Scientific Research Foundation for Returned Overseas Chinese Scholars, State Education Ministry (2009) 1341, and by the National Natural Science Foundation of China (No. 61065008).
