Physics Letters A

Volume 311, Issue 6, 26 May 2003, Pages 504-511

Global asymptotic stability of a larger class of neural networks with constant time delay

https://doi.org/10.1016/S0375-9601(03)00569-3

Abstract

This Letter presents some new sufficient conditions for the uniqueness and global asymptotic stability (GAS) of the equilibrium point for a larger class of neural networks with constant time delay. It is shown that the use of a more general type of Lyapunov–Krasovskii functional enables us to establish global asymptotic stability of a larger class of delayed neural networks than those considered in some previous papers.

Introduction

In recent years, the stability of various classes of neural networks with time delay, such as Hopfield neural networks, cellular neural networks, bidirectional associative memory neural networks, and Lotka–Volterra neural networks, has been extensively studied, and various stability conditions have been obtained for these models (see [1], [2], [3], [4], [5], [6], [7], [8], [9], [10], [11], [12], [13], [14], [15], [16], [17], [18], [19], [20], [21], [22], [23], [24], [25], [26], [27] and the references cited therein). In studying the stability properties of neural networks, the constraint conditions to be imposed on the network parameters depend on the intended application. For a neural network to function as an associative memory, the designed network must converge to a set of stable equilibrium points depending on the initial conditions. However, when a neural network is used to solve optimization problems, it must have a unique and globally asymptotically stable (GAS) equilibrium point, independently of the initial conditions. In this Letter, we focus on neural networks that possess a unique and GAS equilibrium point. In order to establish this type of stability, two aspects of the problem have to be taken into account: the character of the neuron activation functions and the conditions imposed on the network parameters. We point out that the conditions to be imposed on the network parameters are themselves partly determined by the character of the activation functions. In most of the previously published papers, GAS results were obtained for delayed neural networks with respect to the classes of bounded and strictly increasing activation functions, strictly increasing and slope-bounded activation functions, and increasing (not necessarily strictly) and slope-bounded activation functions. In a recent paper [28], some elegant GAS results for delayed neural networks with strictly increasing and slope-bounded activation functions were presented by using a Lyapunov–Krasovskii functional and the LMI (linear matrix inequality) approach. It is also shown in [28] that these results generalize previous stability results derived in the literature. In the present Letter, by employing a more general Lyapunov functional, we show that the results of [28] are also applicable to neural networks with increasing (not necessarily strictly) and slope-bounded activation functions.

The delayed neural network model we consider is defined by the following state equations:
$$\frac{du(t)}{dt}=-Au(t)+W_0\,g(u(t))+W_1\,g(u(t-\tau))+I, \tag{1}$$
where $u=[u_1,u_2,\ldots,u_n]^T$ is the neuron state vector, $A=\mathrm{diag}(a_i)$ is a positive diagonal matrix, $\tau$ is the transmission delay, $W_0=(w_{ij}^{0})_{n\times n}$ and $W_1=(w_{ij}^{1})_{n\times n}$ are the interconnection matrices representing the weight coefficients of the neurons, $I=[I_1,I_2,\ldots,I_n]^T$ is the constant external input vector, and $g(u)=[g_1(u_1),g_2(u_2),\ldots,g_n(u_n)]^T$ denotes the neuron activation functions.
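
For readers who wish to experiment with the model, the following minimal sketch integrates system (1) with a forward-Euler scheme and a delay buffer. All numerical values here (the matrices A, W0, W1, the input I, the delay τ, the step size, and the tanh activation) are illustrative assumptions, not parameters taken from the Letter.

```python
import numpy as np

# Minimal Euler simulation of the delayed network (1):
#   du/dt = -A u(t) + W0 g(u(t)) + W1 g(u(t - tau)) + I
# All numerical values below are illustrative, not from the Letter.

n, tau, dt, T = 2, 1.0, 0.01, 20.0
A  = np.diag([1.0, 1.0])                      # positive diagonal matrix A
W0 = np.array([[0.1, -0.2], [0.3, 0.1]])      # interconnection matrix W0
W1 = np.array([[0.2,  0.1], [-0.1, 0.2]])     # delayed interconnection matrix W1
I  = np.array([0.5, -0.3])                    # constant external input vector
g  = np.tanh                                  # activation satisfying (H) with sigma_j = 1

d = int(round(tau / dt))                      # delay expressed in integration steps
steps = int(round(T / dt))
u = np.zeros((steps + d + 1, n))
u[: d + 1] = np.array([0.5, -0.5])            # constant initial function on [-tau, 0]

for k in range(d, steps + d):
    du = -A @ u[k] + W0 @ g(u[k]) + W1 @ g(u[k - d]) + I
    u[k + 1] = u[k] + dt * du

print("state near t = T:", u[-1])             # settles to the equilibrium when the GAS condition holds
```

With these trial values the trajectory converges to a single point regardless of the initial function, which is the behaviour the stability condition of Section 2 is meant to guarantee.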

The usual assumptions on the activation functions are as follows:
$$(\mathrm{H})\quad 0\leqslant \frac{g_j(\xi_1)-g_j(\xi_2)}{\xi_1-\xi_2}\leqslant \sigma_j,\quad j=1,2,\ldots,n,$$
$$(\mathrm{H}^{*})\quad 0< \frac{g_j(\xi_1)-g_j(\xi_2)}{\xi_1-\xi_2}\leqslant \sigma_j,\quad j=1,2,\ldots,n,$$
for each $\xi_1,\xi_2\in\mathbb{R}$, $\xi_1\neq\xi_2$, where the $\sigma_j$ are positive constants.

It should be noted that activation functions satisfying condition (H*) are strictly increasing. On the other hand, activation functions satisfying condition (H) are increasing, but condition (H) does not impose strict increasingness on them. Therefore, the class of functions satisfying condition (H) is larger than the class of functions satisfying condition (H*). A numerical illustration of this distinction is sketched below.
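
In the sketch below (illustrative values, not from the Letter), the standard piecewise-linear saturation used in cellular neural networks satisfies (H) with σ_j = 1 but violates (H*), since its difference quotient vanishes outside [−1, 1], whereas tanh keeps the quotient strictly positive.

```python
import numpy as np

# The standard piecewise-linear (CNN) activation is increasing but constant
# outside [-1, 1]: it satisfies (H) with sigma_j = 1 yet violates (H*),
# because for xi1, xi2 > 1 the difference quotient is exactly 0.
pwl  = lambda x: 0.5 * (np.abs(x + 1) - np.abs(x - 1))
tanh = np.tanh   # strictly increasing: satisfies both (H*) and (H)

rng = np.random.default_rng(0)
xi1, xi2 = rng.uniform(-3, 3, 10_000), rng.uniform(-3, 3, 10_000)
mask = xi1 != xi2                             # condition requires xi1 != xi2

for name, g in [("pwl", pwl), ("tanh", tanh)]:
    q = (g(xi1[mask]) - g(xi2[mask])) / (xi1[mask] - xi2[mask])
    print(f"{name}: min quotient = {q.min():.4f}, max quotient = {q.max():.4f}")
# pwl attains a minimum quotient of 0 (condition (H) only);
# tanh stays strictly positive on these samples (consistent with (H*)).
```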

In the following, we shift the equilibrium point $u^{*}=[u_1^{*},u_2^{*},\ldots,u_n^{*}]^T$ of system (1) to the origin. The transformation $x(\cdot)=u(\cdot)-u^{*}$ puts system (1) into the form
$$\frac{dx(t)}{dt}=-Ax(t)+W_0\,f(x(t))+W_1\,f(x(t-\tau)), \tag{2}$$
where $x=[x_1,x_2,\ldots,x_n]^T$ is the state vector of the transformed system and $f_j(x_j)=g_j(x_j+u_j^{*})-g_j(u_j^{*})$, with $f_j(0)=0$ for all $j$. Note that the functions $f_j(\cdot)$ satisfy condition (H), that is,
$$0\leqslant \frac{f_j(\xi_1)-f_j(\xi_2)}{\xi_1-\xi_2}\leqslant \sigma_j,\quad j=1,2,\ldots,n,$$
for each $\xi_1,\xi_2\in\mathbb{R}$, $\xi_1\neq\xi_2$, where the $\sigma_j$ are positive constants.
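
As a quick numerical illustration of this shift (reusing the illustrative parameters from the earlier simulation sketch, and assuming SciPy is available), one can compute an equilibrium $u^{*}$ of (1) and verify that the origin is an equilibrium of the transformed system (2):

```python
import numpy as np
from scipy.optimize import fsolve

# Shifting the equilibrium of (1) to the origin; illustrative values as before.
A  = np.diag([1.0, 1.0])
W0 = np.array([[0.1, -0.2], [0.3, 0.1]])
W1 = np.array([[0.2,  0.1], [-0.1, 0.2]])
I  = np.array([0.5, -0.3])
g  = np.tanh

# An equilibrium u* solves -A u + (W0 + W1) g(u) + I = 0.
u_star = fsolve(lambda u: -A @ u + (W0 + W1) @ g(u) + I, np.zeros(2))

f = lambda x: g(x + u_star) - g(u_star)       # shifted activation, f(0) = 0
rhs = lambda x, x_tau: -A @ x + W0 @ f(x) + W1 @ f(x_tau)

print("f(0) =", f(np.zeros(2)))                           # ~ [0, 0]
print("rhs of (2) at origin =", rhs(np.zeros(2), np.zeros(2)))  # ~ [0, 0]
```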

Throughout this Letter we use the following notation: $B^T$, $B^{-1}$, $\lambda_m(B)$ and $\lambda_M(B)$ denote, respectively, the transpose, the inverse, the minimum eigenvalue and the maximum eigenvalue of a square matrix $B$. The notation $B>0$ ($B<0$) means that $B$ is symmetric and positive definite (negative definite).

Main stability result

In this section, we present a sufficient condition for the uniqueness and GAS of the equilibrium point of the delayed neural system defined by (2). The result is given in the following theorem.

Theorem 1

The origin of system (2) is the unique equilibrium point, and it is globally asymptotically stable, if condition (H) is satisfied and there exist a positive definite matrix $P$, a positive diagonal matrix $D$ and a positive constant $\beta$ such that
$$\Omega=-2DA\Sigma^{-1}+DW_0+W_0^T D+\beta W_1^T P W_1+\beta^{-1} D P^{-1} D<0,$$
where $\Sigma=\mathrm{diag}(\sigma_i)$ is a positive diagonal matrix.
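
Under the reconstruction of $\Omega$ given above, the condition can be checked numerically for a trial choice of $P$, $D$ and $\beta$. The sketch below reuses the illustrative parameters from the earlier simulation; it is a feasibility check for one hand-picked candidate, not the LMI search an actual design would employ.

```python
import numpy as np

# Numerical check of the theorem's matrix condition for one candidate
# (P, D, beta); all values are illustrative, not from the Letter.
A  = np.diag([1.0, 1.0])
W0 = np.array([[0.1, -0.2], [0.3, 0.1]])
W1 = np.array([[0.2,  0.1], [-0.1, 0.2]])
Sigma = np.eye(2)                        # sigma_j = 1 for tanh-type activations

P, D, beta = np.eye(2), np.eye(2), 1.0   # trial choices; an LMI solver could search instead

Omega = (-2 * D @ A @ np.linalg.inv(Sigma)
         + D @ W0 + W0.T @ D
         + beta * W1.T @ P @ W1
         + (1 / beta) * D @ np.linalg.inv(P) @ D)

eigs = np.linalg.eigvalsh(Omega)         # Omega is symmetric by construction
print("eigenvalues of Omega:", eigs)
print("condition Omega < 0 holds:", bool(eigs.max() < 0))
```

For these trial matrices the maximum eigenvalue of $\Omega$ is negative, consistent with the convergence observed in the simulation sketch of the Introduction.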

Conclusions

The equilibrium and stability properties of neural networks with constant time delay have been studied. Some stability criteria, independent of the delay parameter, have been derived by employing a more general type of Lyapunov–Krasovskii functional. It has been shown that the results obtained are applicable to a larger class of activation functions than the classes considered in previous works.

Acknowledgements

This work was supported in part by the Research Fund of Istanbul University under Project UDP-3/14062002, and in part by the Turkish Academy of Sciences in the framework of the Young Scientist Award Program (SA/TUBA-GEBIP/2002-1-2).

References (29)

  • K. Gopalsamy et al., Physica D (1994)
  • Y. Zhang, Int. J. Systems Sci. (1996)
  • H. Huang et al., Phys. Lett. A (2002)
  • C. Sun et al., Phys. Lett. A (2002)
  • T. Roska et al., IEEE Trans. Circuits Systems I (1993)
  • T. Roska et al., IEEE Trans. Circuits Systems I (1992)
  • H. Ye et al., Phys. Rev. E (1995)
  • M. Gilli et al., IEEE Trans. Circuits Systems I (1993)
  • S. Arik et al., IEEE Trans. Circuits Systems I (1998)
  • J.D. Cao, Phys. Rev. E (1999)
  • S. Arik, IEEE Trans. Circuits Systems I (2000)
  • S. Arik et al., IEEE Trans. Circuits Systems I (2000)
  • T.-L. Liao et al., IEEE Trans. Neural Networks (2000)
  • N. Takahashi, IEEE Trans. Circuits Systems I (2000)