Elsevier

ISA Transactions

Volume 52, Issue 1, January 2013, Pages 30-35
Improved delay-dependent robust stability criteria for recurrent neural networks with time-varying delays

https://doi.org/10.1016/j.isatra.2012.07.007

Abstract

In this paper, the problem of improved delay-dependent robust stability criteria for recurrent neural networks (RNNs) with time-varying delays is investigated. By combining a Lyapunov–Krasovskii functional with linear matrix inequality (LMI) techniques and an integral inequality approach (IIA), delay-dependent robust stability conditions for RNNs with time-varying delay, expressed in terms of quadratic forms of the state and LMIs, are derived. The proposed method involves fewer decision variables than existing ones while maintaining the effectiveness of the stability conditions. Both theoretical and numerical comparisons are provided to show the effectiveness and efficiency of the present method, and numerical examples confirm that it provides less conservative results.

Highlights

  • This paper presents improved results for RNNs with time-varying delays.
  • The results are expressed in terms of LMIs.
  • The results are less conservative than existing results.
  • The restriction on the change rate of time-varying delays is relaxed.
  • Numerical examples are given to illustrate the effectiveness of our results.

Introduction

In recent years, neural networks (NNs) have attracted much research attention and have found successful applications in many areas such as pattern recognition, image processing, associative memory, and optimization problems [6], [16]. One of the important research topics is the global asymptotic stability of neural network models. However, in the implementation of artificial NNs, time delays are unavoidable due to the finite switching speed of amplifiers. It has been shown that the existence of time delays in recurrent neural networks (RNNs) may lead to oscillation, divergence or instability. Therefore, the stability of RNNs with delay has become a topic of great theoretical and practical importance. Generally, when a neural network is applied to solve an optimization problem, it needs to have a unique and globally stable equilibrium point. Thus, it is of great interest to establish conditions that ensure the global asymptotic stability of a unique equilibrium point of RNNs with delay [1], [2], [4], [5], [7], [8], [9], [10], [11], [12], [13], [14], [15], [18], [19], [20], [21], [22], [23].

So far, the stability criteria for RNNs with time delay fall into two categories: delay-independent [1], [2], [4], [14], [18], [23] and delay-dependent [5], [7], [8], [10], [11], [12], [13], [15], [20], [21]. Generally speaking, delay-dependent stability criteria are less conservative than delay-independent ones when the time delay is small, so researchers usually consider the delay-dependent type. Some less conservative stability criteria were proposed in [8] by considering some useful terms and using the free-weighting matrices method. Stability criteria for neural networks with time-varying delay were considered in [10], where the relationship between the time-varying delay and its lower and upper bounds was taken into account. By constructing a new augmented Lyapunov functional containing a triple-integral term, an improved delay-dependent stability criterion was derived in [19]. However, these results are conservative to some extent, and there remains room for further improvement.

In this paper, the problem of a delay-dependent robust stability criterion for recurrent neural networks with time-varying delay is considered. A sufficient condition for the solvability of this problem, which depends on the size of the time delay, is presented by means of a Lyapunov functional and the linear matrix inequality (LMI) approach. Furthermore, the proposed condition is less conservative than previously established ones and involves the fewest variables, as shown by numerical examples. All results are derived in the LMI framework, and the solutions are obtained using the MATLAB LMI Toolbox. Finally, numerical examples are given to indicate significant improvements over existing results.

Section snippets

Problem formulation

Consider the following recurrent neural network with time-varying delays and parameter uncertainties:

u̇(t) = −(C + ΔC(t))u(t) + (A + ΔA(t))f(u(t)) + (B + ΔB(t))f(u(t − h(t))) + J,

where u(t) = [u1(t), …, un(t)]ᵀ ∈ ℝⁿ is the state vector associated with the n neurons; f(u(t)) = [f1(u1(t)), …, fn(un(t))]ᵀ ∈ ℝⁿ is the activation function indicating how the jth neuron responds to its input; C = diag(c1, …, cn) is a diagonal matrix with each ci > 0 controlling the rate with which the ith unit will reset its potential to the resting
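The snippet truncates before the uncertainty structure is stated. For models of this type, the uncertainties ΔC(t), ΔA(t), ΔB(t) are commonly assumed to be norm-bounded; the standard form (stated here as background, with D, E_c, E_a, E_b as illustrative matrix names, since the paper's exact assumption is not reproduced in the snippet) is:

```latex
\bigl[\Delta C(t)\;\; \Delta A(t)\;\; \Delta B(t)\bigr]
= D\,F(t)\,\bigl[E_{c}\;\; E_{a}\;\; E_{b}\bigr],
\qquad F^{T}(t)F(t) \le I \;\;\text{for all } t,
```

where D, E_c, E_a, E_b are known constant matrices characterizing the uncertainty structure and F(t) is an unknown, possibly time-varying matrix. Under this form, the uncertain terms can be eliminated from the LMIs via the standard S-procedure bounding argument.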

Main results

In this section, we use the integral inequality approach (IIA) to obtain a stability criterion for a recurrent neural network with time-varying delays. First, we take up the case where ΔC(t) = 0, ΔA(t) = 0 and ΔB(t) = 0 in system (4), as follows:

ẋ(t) = −Cx(t) + Ag(x(t)) + Bg(x(t − h(t))),
x(t) = ϕ(t), t ∈ [−h, 0].

Based on the Lyapunov–Krasovskii stability theorem and integral inequality approach (IIA), the following result is obtained.
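The snippet does not reproduce the paper's integral inequality lemma. The standard Jensen-type bound used in IIA-based proofs (given here as background, not necessarily the paper's exact lemma) is: for any matrix Z = Zᵀ > 0 and scalar h > 0,

```latex
-\int_{t-h}^{t} \dot{x}^{T}(s)\,Z\,\dot{x}(s)\,\mathrm{d}s
\;\le\;
-\frac{1}{h}\left(\int_{t-h}^{t}\dot{x}(s)\,\mathrm{d}s\right)^{T}
Z
\left(\int_{t-h}^{t}\dot{x}(s)\,\mathrm{d}s\right)
= -\frac{1}{h}\,\bigl[x(t)-x(t-h)\bigr]^{T} Z \bigl[x(t)-x(t-h)\bigr].
```

This bounds the cross terms that arise when differentiating the double-integral part of the Lyapunov–Krasovskii functional without introducing free-weighting matrices, which is what allows the resulting LMIs to involve fewer decision variables.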

Theorem 1

For given positive scalars h and hd, the recurrent neural network system with

Numerical examples

In this section, we provide three numerical examples to demonstrate the effectiveness and reduced conservatism of our delay-dependent stability criteria.

Example 1

Consider a delayed recurrent neural network with parameters as follows:

ẋ(t) = −Cx(t) + Ag(x(t)) + Bg(x(t − h(t))), where

C = [2 0; 0 2],  A = [1 1; −1 −1],  B = [0.88 1; 1 1].

The neuron activation functions are assumed to satisfy Assumption 1 with K=diag{0.4,0.8}.
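Before estimating delay-dependent bounds, it is instructive to see why this benchmark calls for a delay-dependent criterion. A classic delay-independent sufficient condition from the literature (a standard M-matrix test, not this paper's LMI criterion) requires C − (|A| + |B|)K to be a nonsingular M-matrix; a quick numeric check, sketched below under that assumption, shows the test fails for these parameters:

```python
# Delay-independent M-matrix test (illustrative, not the paper's LMI):
# require M = C - (|A| + |B|) K to be a nonsingular M-matrix.
C = [[2.0, 0.0], [0.0, 2.0]]
A = [[1.0, 1.0], [-1.0, -1.0]]
B = [[0.88, 1.0], [1.0, 1.0]]
K = [0.4, 0.8]  # activation slope bounds from Assumption 1

# M[i][j] = C[i][j] - (|A[i][j]| + |B[i][j]|) * K[j]
M = [[C[i][j] - (abs(A[i][j]) + abs(B[i][j])) * K[j] for j in range(2)]
     for i in range(2)]

# For a 2x2 matrix with non-positive off-diagonal entries, the M-matrix
# property holds iff both leading principal minors are positive.
minor1 = M[0][0]
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
is_m_matrix = minor1 > 0 and det > 0
print(M, det, is_m_matrix)
```

Here det ≈ −0.78 < 0, so the delay-independent test is inconclusive for this example, and stability can only be certified for bounded delays via a delay-dependent criterion such as Theorem 1.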

Solution

Our purpose is to estimate the maximum allowable delay bound (MADB) h̄ for different values of hd such that the system (32)
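Computing the MADB itself requires an SDP solver for the LMIs, but the stability prediction can be sanity-checked by direct simulation. The sketch below (an illustration, not the paper's computation) Euler-integrates Example 1 with a constant delay h = 0.5, assumed to lie below the MADB, taking g_i(x_i) = k_i tanh(x_i) as a representative activation satisfying Assumption 1:

```python
import math

# Euler integration of  x'(t) = -C x(t) + A g(x(t)) + B g(x(t - h))
# for Example 1, with g_i(x) = k_i * tanh(x) (one admissible choice
# under Assumption 1) and a constant delay h = 0.5.
C = [[2.0, 0.0], [0.0, 2.0]]
A = [[1.0, 1.0], [-1.0, -1.0]]
B = [[0.88, 1.0], [1.0, 1.0]]
K = [0.4, 0.8]

def g(x):
    return [K[i] * math.tanh(x[i]) for i in range(2)]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

h, dt, T = 0.5, 1e-3, 30.0
steps, delay = int(T / dt), int(h / dt)
hist = [[0.5, -0.5]] * (delay + 1)  # constant initial function phi

for _ in range(steps):
    x, xd = hist[-1], hist[-delay - 1]      # current and delayed state
    cx, ag, bg = matvec(C, x), matvec(A, g(x)), matvec(B, g(xd))
    hist.append([x[i] + dt * (-cx[i] + ag[i] + bg[i]) for i in range(2)])

final_norm = math.hypot(hist[-1][0], hist[-1][1])
print(final_norm)  # expected to be numerically near zero
```

The trajectory converging to the origin is consistent with the delay-dependent criterion certifying stability for delays up to the MADB, but the simulation is only a spot check for one delay value, not a proof.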

Conclusions

In this paper, we have proposed new delay-dependent sufficient conditions for the robust stability analysis of a class of recurrent neural networks with time-varying delays and parameter uncertainties. These conditions are derived under a weak assumption on the neuron activation functions and are expressed in terms of LMIs. We have discussed the advantage of the assumption investigated in our paper over those in previous studies in the literature. It has been established that the

References (23)

  • S. Boyd et al., Linear Matrix Inequalities in System and Control Theory, Society for Industrial and Applied Mathematics, Philadelphia, PA (1994)