Exponential Convergence for Cellular Neural Networks with Continuously Distributed Delays in the Leakage Terms

In this paper, we consider a class of cellular neural networks with continuously distributed delays in the leakage terms. By applying the Lyapunov functional method and differential inequality techniques, and without assuming boundedness of the activation functions, we establish new results ensuring that all solutions of the networks converge exponentially to the zero point.


Introduction
It is well known that the dynamical behaviors of delayed cellular neural networks (DCNNs) have received much attention due to their potential applications in associative memory, parallel computing, pattern recognition, signal processing and optimization problems (see [1,2,3]). In particular, since a neural network usually has a spatial nature due to the presence of a multitude of parallel pathways with a variety of axon sizes and lengths, it is desirable to model it by introducing continuously distributed delays over a certain duration of time [4,5,6]. On the other hand, a typical time delay, called a leakage (or "forgetting") delay, may exist in the negative feedback terms of the neural network system; these terms are variously known as forgetting or leakage terms (see [7,8,9]). Consequently, K. Gopalsamy [10] investigated the stability of equilibria for bidirectional associative memory (BAM) neural networks with a constant delay in the leakage term. Following this, the authors of [11−23] dealt with the existence and stability of equilibria and periodic solutions for neural network models involving constant or time-varying leakage delays. Moreover, by using the continuation theorem of coincidence degree theory and Lyapunov functionals, S.
Peng [24] established some delay-dependent criteria for the existence and global attractivity of periodic solutions of bidirectional associative memory neural networks with continuously distributed delays in the leakage terms. However, to the best of our knowledge, few authors have considered the exponential convergence behavior of all solutions of DCNNs with continuously distributed delays in the leakage terms. Motivated by the above arguments, in the present paper we consider the following DCNNs with time-varying coefficients and continuously distributed delays in the leakage terms:

$$x_i'(t) = -c_i(t)\int_0^{\infty} h_i(s)\,x_i(t-s)\,ds + \sum_{j=1}^{n} a_{ij}(t)\, f_j\big(x_j(t-\tau_{ij}(t))\big) + \sum_{j=1}^{n} b_{ij}(t)\int_0^{\infty} K_{ij}(u)\, g_j\big(x_j(t-u)\big)\,du + I_i(t), \qquad (1.1)$$

in which $n$ corresponds to the number of units in the network, $x_i(t)$ corresponds to the state of the $i$th unit at time $t$, $c_i(t) \ge 0$ represents the rate with which the $i$th unit resets its potential to the resting state in isolation when disconnected from the network and external inputs at time $t$, $a_{ij}(t)$ and $b_{ij}(t)$ are the connection weights at time $t$, $\tau_{ij}(t) \ge 0$ denotes the transmission delay, $K_{ij}(u)$ and $h_i(u) \ge 0$ correspond to the transmission delay kernels, $I_i(t)$ denotes the external bias on the $i$th unit at time $t$, $f_j$ and $g_j$ are the activation functions of signal transmission, and $i, j \in \{1, 2, \dots, n\}$.

The main purpose of this paper is to give new criteria for the convergence behavior of all solutions of system (1.1). By applying the Lyapunov functional method and differential inequality techniques, while avoiding boundedness conditions on the activation functions, we derive some new sufficient conditions ensuring that all solutions of system (1.1) converge exponentially to the zero point. We also assume that the following conditions (H1), (H2) and (H3) hold.
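To make the structure of system (1.1) concrete, the following sketch integrates a two-neuron instance with a forward-Euler scheme. All coefficients, delays, kernels, inputs and the truncation horizon below are illustrative assumptions, not data taken from this paper; the activations are the unbounded ones used later in Example 3.1.

```python
import math

# Forward-Euler sketch of a two-neuron instance of system (1.1).
# Every coefficient, kernel, delay and input below is a hypothetical
# choice; the infinite delay horizon is truncated at T_HIST.

N = 2
DT = 0.01                                    # Euler step size
T_HIST = 6.0                                 # truncation horizon for the kernels
H = int(T_HIST / DT)                         # history buffer length

c = [3.0, 3.0]                               # leakage rates c_i (constant here)
A = [[0.2, -0.1], [0.1, 0.2]]                # connection weights a_ij
B = [[0.1, 0.1], [-0.1, 0.1]]                # connection weights b_ij
tau = [[0.5, 1.0], [1.0, 0.5]]               # transmission delays tau_ij

f = lambda u: u * math.cos(u ** 3)           # activation f_j
g = lambda u: u * math.sin(u ** 2)           # activation g_j
h = lambda s: math.exp(-s)                   # leakage kernel h_i(s)
K = lambda u: math.exp(-u)                   # distributed-delay kernel K_ij(u)
I = lambda t: 0.5 * math.exp(-t)             # exponentially decaying input I_i(t)

def simulate(x0, t_end=10.0):
    """Integrate (1.1) with constant pre-history x(s) = x0 for s <= 0."""
    hist = [list(x0) for _ in range(H)]      # hist[k][i] ~ x_i(t - k*DT)
    t = 0.0
    while t < t_end:
        x = hist[0]
        new = []
        for i in range(N):
            # leakage term: integral of h_i(s) x_i(t - s) ds, truncated
            leak = DT * sum(h(k * DT) * hist[k][i] for k in range(H))
            # discrete-delay coupling: sum_j a_ij f_j(x_j(t - tau_ij))
            inst = sum(A[i][j] * f(hist[min(int(tau[i][j] / DT), H - 1)][j])
                       for j in range(N))
            # distributed-delay coupling: sum_j b_ij * integral K_ij(u) g_j du
            dist = sum(B[i][j] * DT * sum(K(k * DT) * g(hist[k][j])
                                          for k in range(H))
                       for j in range(N))
            dx = -c[i] * leak + inst + dist + I(t)
            new.append(x[i] + DT * dx)
        hist = [new] + hist[:-1]
        t += DT
    return hist[0]

x_end = simulate([1.0, -1.0])
```

With this strong leakage and small coupling weights, the trajectory shrinks toward the origin despite the unbounded activations, in line with the convergence result proved below.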
(H1) For each $i, j \in \{1, 2, \dots, n\}$, there exist nonnegative constants $L^f_j$ and $L^g_j$ such that

$$|f_j(u)| \le L^f_j\,|u|, \qquad |g_j(u)| \le L^g_j\,|u| \qquad \text{for all } u \in \mathbb{R}.$$

(H2) There exist constants $\eta > 0$, $\lambda > 0$ and $\xi_i > 0$, $i \in \{1, 2, \dots, n\}$, under which, for all $t > 0$, the coefficients, delays and kernels of (1.1) satisfy a uniform exponentially weighted dominance inequality with margin $-\eta$ (verified through the function $\Gamma_i(\omega)$ in Example 3.1 below).

(H3) The delay kernels $h_i(u)$ and $K_{ij}(u)$ are integrable against the exponential weight $e^{\lambda u}$, and the external inputs $I_i(t)$ decay exponentially as $t \to +\infty$.

The initial conditions associated with system (1.1) are of the form

$$x_i(s) = \varphi_i(s), \quad s \in (-\infty, 0], \qquad (1.2)$$

where $\varphi_i(\cdot)$ denotes a real-valued bounded continuous function defined on $(-\infty, 0]$. The remaining part of this paper is organized as follows. In Section 2, we present some new sufficient conditions ensuring that all solutions of system (1.1) converge exponentially to the zero point. In Section 3, we give some examples and remarks to illustrate the results obtained in the previous sections.
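As an illustration of (H1), the activation functions used later in Example 3.1, $f_j(u) = u\cos(u^3)$ and $g_j(u) = u\sin(u^2)$, are unbounded yet satisfy (H1) with $L^f_j = L^g_j = 1$, because $|\cos| \le 1$ and $|\sin| \le 1$. The following quick numerical sanity check (a sketch, not a proof) confirms the bound on a sample grid:

```python
import math

f = lambda u: u * math.cos(u ** 3)   # unbounded, but |f(u)| <= 1 * |u|
g = lambda u: u * math.sin(u ** 2)   # unbounded, but |g(u)| <= 1 * |u|

L_f = L_g = 1.0                      # (H1) constants, since |cos|, |sin| <= 1

# check |f(u)| <= L_f |u| and |g(u)| <= L_g |u| on a grid over [-50, 50]
grid = [-50.0 + 0.25 * k for k in range(401)]
h1_holds = all(abs(f(u)) <= L_f * abs(u) + 1e-12 and
               abs(g(u)) <= L_g * abs(u) + 1e-12 for u in grid)
```

Note that these bounds hold even though the activations themselves are unbounded: along $u_k = (2\pi k)^{1/3}$ one has $f(u_k) = u_k \to \infty$.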

Main Results
Theorem 2.1.
Let (H1), (H2) and (H3) hold. Then, for every solution $Z(t) = (x_1(t), x_2(t), \dots, x_n(t))^T$ of system (1.1), there exists a positive constant $K$ such that $|x_i(t)| \le K e^{-\lambda t}$ for all $t > 0$ and $i \in \{1, 2, \dots, n\}$.

Proof. Let $Z(t) = (x_1(t), x_2(t), \dots, x_n(t))^T$ be a solution of system (1.1) with any initial value of the form (1.2). In view of (1.1), together with (1.2), (H2) and (H3), we can choose a positive constant $K$ such that

$$|x_i(t)| < K e^{-\lambda t}\,\xi_i \quad \text{for all } t \in (-\infty, 0],\ i \in \{1, 2, \dots, n\}.$$

We now claim that

$$|x_i(t)| < K e^{-\lambda t}\,\xi_i \quad \text{for all } t > 0,\ i \in \{1, 2, \dots, n\}. \qquad (2.4)$$

If this is not valid, then one of the following two cases must occur.
Consequently, (2.4) holds. Thus, every solution of system (1.1) converges exponentially to the zero point, and the proof of Theorem 2.1 is now completed.

An Example
Example 3.1. Consider the following DCNNs with continuously distributed delays in the leakage terms, with activation functions $f_1(x) = f_2(x) = x\cos(x^3)$ and $g_1(x) = g_2(x) = x\sin(x^2)$:

EJQTDE, 2013 No. 10, p. 8

Noting that $|f_j(u)| \le |u|$ and $|g_j(u)| \le |u|$, condition (H1) holds with $L^f_j = L^g_j = 1$. Define a continuous function $\Gamma_i(\omega)$ in terms of the exponentially weighted coefficients of (3.1). Then we obtain $\Gamma_i(0) < 0$, which, together with the continuity of $\Gamma_i(\omega)$, implies that we can choose positive constants $\lambda > 0$ and $\eta > 0$ such that $\Gamma_i(\lambda) < -\eta < 0$ for all $t > 0$, where $\xi_i = 1$, $i = 1, 2$. This yields that system (3.1) satisfies (H1), (H2) and (H3). Hence, by Theorem 2.1, all solutions of system (3.1) converge exponentially to the zero point $(0, 0)^T$.
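The continuity argument used in Example 3.1 — $\Gamma_i(0) < 0$ plus continuity of $\Gamma_i$ yields positive constants $\lambda$ and $\eta$ with $\Gamma_i(\lambda) < -\eta$ — can also be carried out numerically. The sketch below performs this scan for a hypothetical function of the exponentially weighted form such conditions typically take; its coefficients (leakage strength 3.0, coupling weights 0.3, delay weights 0.5 and 1.0) are illustrative assumptions, not the actual data of system (3.1).

```python
import math

def Gamma(w):
    # hypothetical exponentially weighted function: negative at w = 0,
    # continuous, and increasing in w through the exponential terms
    return -(3.0 - w) + 0.3 * math.exp(0.5 * w) + 0.3 * math.exp(1.0 * w)

# hypothesis of the argument: Gamma(0) < 0
assert Gamma(0.0) < 0.0

# scan for an exponential rate lam with Gamma(lam) still negative
lam, step = 0.0, 0.01
while lam + step < 5.0 and Gamma(lam + step) < 0.0:
    lam += step

eta = -Gamma(lam) / 2.0     # any eta in (0, -Gamma(lam)) works
```

The scan stops just before the first sign change of $\Gamma$, and by construction $\Gamma(\lambda) = -2\eta < -\eta < 0$, exactly the kind of inequality required of $(\lambda, \eta)$ in (H2).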
Remark 3.1. Since $f_1(x) = f_2(x) = x\cos(x^3)$ and $g_1(x) = g_2(x) = x\sin(x^2)$ are unbounded activation functions, and DCNNs (3.1) is a very simple form of DCNNs with continuously distributed delays in the leakage terms, it is clear that none of the results in [10−23] and the references therein can be applied to prove that all solutions of system (3.1) converge exponentially to the zero point. To the best of our knowledge, results on DCNNs with continuously distributed delays in the leakage terms have appeared only in [24], which is restricted to the convergence of the neural network system and gives no information about global exponential convergence. One can therefore observe that the results in [24] and the references cited therein also cannot be applied to prove the global exponential convergence of system (3.1).
This implies that the results of this paper are essentially new. Moreover, we have proposed a new approach to prove the exponential convergence of DCNNs with continuously distributed delays in the leakage terms.