Robust passivity analysis of mixed delayed neural networks with interval nondifferentiable time-varying delay based on a multiple integral approach

Abstract: New results on the robust passivity analysis of neural networks with interval nondifferentiable and distributed time-varying delays are presented. The parameter uncertainties are assumed to be norm-bounded. By constructing an appropriate Lyapunov-Krasovskii functional containing single, double, triple and quadruple integrals, which fully utilizes information on the neuron activation function, and by using a refined Jensen's inequality, conditions for checking the passivity of the addressed neural networks are established in terms of linear matrix inequalities (LMIs). These results are less conservative than the existing results in the literature and can be checked numerically using the effective LMI toolbox in MATLAB. Three numerical examples are provided to demonstrate the effectiveness and merits of the proposed methods.


Introduction
Recently, neural networks (NNs) have drawn considerable attention in many fields of science and engineering, with applications such as associative memories, fixed-point computation, control, static image processing and combinatorial optimization Ref. [1][2][3]. However, time delay is common in various biological and physical phenomena, as demonstrated by the use of mathematical modelling with time delay in a wide range of applications, for instance mechanical transmission, fluid transmission, metallurgical processes and networked control systems, where it is frequently a source of chaos, instability and poor control performance. The main contributions of this paper are as follows:
• The challenge of this paper is studying a new result on the robust passivity analysis of NNs with non-differentiable mixed time-varying delays, which means that this work can be used for various systems with fast time-varying delays, in contrast to previous works that considered differentiable delays (ṙ(t) ≤ µ).
• The new Lyapunov-Krasovskii functional establishes more relationships among different vectors, avoids the extra conservatism arising from estimating the time-varying delays, and utilizes more information about the upper and lower bounds of the time delays existing in the systems.
• The new sufficient conditions, based on the refined Jensen-based inequalities proposed in Ref. [41], are less conservative than those proposed in Ref. [33][34][35][36][37][38][39][40], as shown in the comparison examples.
The rest of the paper is organized as follows: Section 2 provides some mathematical preliminaries and the network model. Section 3 presents the passivity analysis of uncertain NNs with interval and distributed time-varying delays. Numerical examples are given in Section 4. Finally, the conclusion is provided in Section 5.

Network model and mathematical preliminaries
Notations: R^n is the n-dimensional Euclidean space; R^{m×n} denotes the set of m × n real matrices; I_n represents the n-dimensional identity matrix. Let S+_n denote the set of symmetric positive definite matrices in R^{n×n}. We also denote by D+_n the set of positive diagonal matrices; a matrix D = diag{d_1, d_2, ..., d_n} ∈ D+_n if d_i > 0 (i = 1, 2, ..., n). The notation X ≥ 0 (respectively, X > 0) means that X is positive semi-definite (respectively, positive definite); diag(...) denotes a block diagonal matrix; in symmetric block matrices, [X Y; * Z] stands for [X Y; Y^T Z]. Matrix dimensions, if not explicitly stated, are assumed to be compatible for algebraic operations. Consider the following NNs with nondifferentiable interval and distributed time-varying delays in the form of (2.1), where n denotes the number of neurons in the network, p(t) = [p_1(t), p_2(t), ..., p_n(t)]^T ∈ R^n is the neuron state vector, q(t) ∈ R^n is the output vector, u(t) is the external input of the network, D = diag{d_1, d_2, ..., d_n} is a positive diagonal matrix, A, A_1, A_2 are the interconnection weight matrices, C_1, C_2, C_3, C_4 are real matrices, g(p(t)) = [g_1(p_1(t)), g_2(p_2(t)), ..., g_n(p_n(t))]^T ∈ R^n denotes the activation function, g(p(t − r(t))) = [g_1(p_1(t − r(t))), g_2(p_2(t − r(t))), ..., g_n(p_n(t − r(t)))]^T ∈ R^n, and φ(t) ∈ R^n is the initial function.
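The explicit equations of model (2.1) do not appear above. A standard form consistent with the matrices just listed is sketched below; the exact placement of C_1, C_2, C_3, C_4 in the output equation is an assumption:

```latex
% Sketch of model (2.1); the arrangement of C_1..C_4 is assumed.
\begin{aligned}
\dot{p}(t) &= -D\,p(t) + A\,g(p(t)) + A_1\,g(p(t-r(t)))
             + A_2 \int_{t-d(t)}^{t} g(p(s))\,\mathrm{d}s + u(t),\\
q(t)       &= C_1\,p(t) + C_2\,g(p(t)) + C_3\,g(p(t-r(t))) + C_4\,u(t),\\
p(t)       &= \phi(t), \qquad t \in [-\max\{r_2, d\},\, 0].
\end{aligned}
```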
The variables r(t) and d(t) represent the mixed delays of the model in (2.1) and satisfy condition (2.2), where r_1, r_2, and d are constants.
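The delay conditions themselves are not reproduced above; for interval and distributed delays with the constants named here, (2.2) presumably takes the standard form:

```latex
% Assumed form of the delay conditions (2.2)
0 \le r_1 \le r(t) \le r_2, \qquad 0 \le d(t) \le d .
```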
The neural activation functions g_i(p_i(t)) are continuous with g_i(0) = 0, and there exist constants l_i^-, l_i^+ (i = 1, 2, ..., n) such that condition (2.3) holds.
Definition 2.1. The neural network (2.1) is said to be passive if there exists a scalar γ > 0 such that the passivity inequality holds for all t_f ≥ 0 under the zero initial condition.
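The sector condition (2.3) and the passivity inequality of Definition 2.1 are omitted above; their standard forms, consistent with the surrounding text, read:

```latex
% Assumed sector condition (2.3)
l_i^{-} \le \frac{g_i(a)-g_i(b)}{a-b} \le l_i^{+},
\qquad \forall\, a \ne b,\ i = 1,\dots,n .

% Assumed passivity inequality of Definition 2.1
2\int_0^{t_f} q^{T}(s)\,u(s)\,\mathrm{d}s
 \ge -\gamma \int_0^{t_f} u^{T}(s)\,u(s)\,\mathrm{d}s .
```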
Lemma 2.2. For a given matrix Q ∈ S+_n and a function e : [u, v] → R^n whose derivative ė ∈ C([u, v], R^n), the inequalities (2.7)-(2.10) hold.
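The inequalities (2.7)-(2.10) are not reproduced above. As a hedged sketch, refined Jensen-type bounds of the kind attributed to Ref. [41] are commonly stated as follows (the exact constants and auxiliary vectors used in the paper may differ):

```latex
% One commonly used refined (Wirtinger-based) Jensen bound
\int_u^v \dot e^{T}(s)\,Q\,\dot e(s)\,\mathrm{d}s
 \ge \frac{1}{v-u}\left( \Omega_0^{T} Q\, \Omega_0 + 3\,\Omega_1^{T} Q\, \Omega_1 \right),
\quad \text{where}\quad
\Omega_0 = e(v)-e(u),\quad
\Omega_1 = e(v)+e(u)-\frac{2}{v-u}\int_u^v e(s)\,\mathrm{d}s .
```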

Main results
In this section, a new result on robust passivity analysis for NNs with interval nondifferentiable and distributed time-varying delays will be established. After setting the necessary notation, and based on the Lyapunov-Krasovskii functional approach, we present our new theorem for the passivity of NNs (2.1).
Theorem 3.1. The delayed neural network in (2.1) is passive in the sense of Definition 2.1 for any delays r(t) and d(t) satisfying (2.2) if there exist matrices of appropriate dimensions and a scalar γ > 0 satisfying the LMI (3.1).
Proof. Consider a Lyapunov-Krasovskii functional V(t, p_t) composed of single, double, triple and quadruple integral terms. Taking the derivative of V(t, p_t) along the solutions of system (2.1), splitting the integration intervals at t − r(t), bounding the resulting single- and double-integral terms according to Lemma 2.2 (that is, applying the inequalities (2.7) and (2.8)), and using, for λ_{1i} > 0 (i = 1, 2, ..., n), the sector condition (2.3), we obtain an upper bound on V̇(t, p_t). Then, to show that the NN (2.1) is passive, we define J(t_f) = −∫_0^{t_f} (2q^T(s)u(s) + γu^T(s)u(s)) ds. Under the zero initial condition, it can be deduced from (3.2)-(3.14) that V̇(t, p_t) − 2q^T(t)u(t) − γu^T(t)u(t) ≤ ξ^T(t)Φ(r(t))ξ(t) for an augmented state vector ξ(t), where Φ(r) is an affine function of r; hence Φ(r) < 0 for all r ∈ [r_1, r_2] if and only if Φ(r_1) < 0 and Φ(r_2) < 0. Therefore, if (3.1) holds for r = r_1 and r = r_2, then Φ(r(t)) < 0, and integrating the bound above from 0 to t_f yields J(t_f) < 0 for any t_f ≥ 0 whenever condition (3.1) is satisfied. Thus, the system of NNs (2.1) is passive. The proof is completed.

Remark 1.
The time delay in this work is a continuous function belonging to a given interval, which means that the lower and upper bounds of the time-varying delay are available. Moreover, the delay function need not be differentiable. Therefore, the delays considered in this brief are more general than those studied in [29,33,34,38].

Remark 2. The activation function in inequality (2.3), as studied in [40], is more general than those in [28,33,36,39] because the constants l_i^- and l_i^+ can be positive, zero or negative. We can see that the activation function under (2.3) can be unbounded, non-monotonic and non-differentiable. Hence, the passivity condition considered in this work is less conservative than those of Ref. [28,33,36,39].

Remark 3. The proof of Theorem 3.1 employs the refined Jensen-based inequalities in (3.7) and (3.8), which give a tighter upper bound than the Jensen's inequality used in Ref. [25,29,33,36].
Based on the passivity condition presented in Theorem 3.1, we will develop the passivity analysis of uncertain NNs as follows. Consider the uncertain network, where ∆D(t), ∆A(t), ∆A_1(t), ∆A_2(t) are the time-varying parameter uncertainties, which are assumed to be of the form (3.16), where M, N_1, N_2, N_3 and N_4 are known real constant matrices and F(·) is an unknown time-varying matrix function satisfying F^T(t)F(t) ≤ I. Then we have the following result.
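Equation (3.16) itself is not shown above; the norm-bounded uncertainty structure described in the text is conventionally written as:

```latex
% Assumed form of (3.16)
\bigl[\,\Delta D(t)\ \ \Delta A(t)\ \ \Delta A_1(t)\ \ \Delta A_2(t)\,\bigr]
 = M\,F(t)\,\bigl[\,N_1\ \ N_2\ \ N_3\ \ N_4\,\bigr],
\qquad F^{T}(t)F(t) \le I .
```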

Remark 5.
To illustrate how to compute the upper bound of r_2 for system (2.1) subject to the time-varying delays (2.2) and the neural activation functions (2.3), the following steps are performed.
Step 1: Given the positive diagonal matrix D, the real matrices A, A_1, A_2, C_1, C_2, C_3 and the positive constants r_1, d.
Step 2: Select a positive constant γ.
Step 3: Define the variable matrices with appropriate dimensions, P and the remaining decision variables of the LMI.
Step 4: Use MATLAB software to compute the values of the variables and search for the largest r_2 for which the LMI remains feasible.
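The feasibility test in Step 4 requires an SDP/LMI solver such as the MATLAB LMI toolbox. As a language-agnostic illustration of the same feasibility idea, the sketch below (the matrices are hypothetical, not taken from the paper's examples) solves the Lyapunov equation for the delay-free linear part ṗ = −Dp by Kronecker vectorization and checks the resulting matrix inequality numerically:

```python
import numpy as np

def solve_lyapunov(A, Q):
    """Solve A^T P + P A = -Q via the Kronecker vectorization
    (I kron A^T + A^T kron I) vec(P) = -vec(Q)."""
    n = A.shape[0]
    K = np.kron(np.eye(n), A.T) + np.kron(A.T, np.eye(n))
    P = np.linalg.solve(K, -Q.flatten(order="F")).reshape((n, n), order="F")
    return 0.5 * (P + P.T)  # symmetrize against round-off

# Hypothetical positive diagonal D (Step 1); linear delay-free part is p' = -D p.
D = np.diag([1.5, 2.0])
A = -D            # system matrix of the delay-free linear part
Q = np.eye(2)     # any Q > 0

P = solve_lyapunov(A, Q)

# Step 4 analogue: feasibility means P > 0 and A^T P + P A < 0.
assert np.all(np.linalg.eigvalsh(P) > 0)
assert np.all(np.linalg.eigvalsh(A.T @ P + P @ A) < 0)
print("matrix inequality feasible for the toy delay-free system")
```

For the full conditions of Theorem 3.1 one would declare all decision matrices as semidefinite variables and maximize r_2 by bisection over LMI feasibility.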

Numerical examples
In this section, three numerical examples are given to illustrate the merits of the proposed robust passivity results.
Example 4.1. Consider a neural network (2.1) with the following parameters. The neural activation functions are assumed to be g_i(p_i) = (1/2)(|p_i + 1| − |p_i − 1|) (i = 1, 2). It is easy to check that these activation functions satisfy (2.3) with l_i^- = 0 and l_i^+ = 1 (i = 1, 2). Using the Matlab LMI Toolbox, we conclude that the upper bound of r_2 for nondifferentiable delay (µ unknown) shown in Table 1 keeps the LMI in Theorem 3.1 feasible. In addition, the results from [36][37][38][39][40] without distributed delay are listed in Table 1. As shown in this table, the criterion of this paper is less conservative than those obtained in [36][37][38][39][40]. According to Figure 1, it can be confirmed that neural network (2.1) under zero input and the initial condition [p_1(t), p_2(t)]^T = [−1, 1]^T is stable.

Table 1. Maximum upper bounds of r_2 for Example 4.1.
Method       | µ = 0.5 | µ unknown
[36]         | 0.5227  | -
[39]         | 1.3752  | -
[40]         | 3.0430  | -
[38]         | 3.0835  | -
[37]         | 3.6566  | -
Theorem 3.1  | -       | 4.1010

Example 4.2. With these parameters, we conclude that the upper bounds of r_2 shown in Table 2 keep the LMI in Theorem 3.2 feasible. Moreover, the results from [33,35,40] are listed in Table 2. As shown in this table, the criterion of this paper is less conservative than those obtained in [33,35,40]. We take the activation functions as above.

Table 2. Maximum upper bounds of r_2 for Example 4.2.
Method       | µ = 0.1 | µ unknown
[33]         | 0.5005  | 0.4269
[40]         | 0.5504  | -
[35]         | 0.6621  | -
Theorem 3.2  | -       | 3.0420

Example 4.3. In this example, we conclude that the upper bounds of r_2 shown in Table 3 keep the LMI in Theorem 3.2 feasible. Moreover, the results from [33,34,40] without distributed delay are listed in Table 3. As shown in this table, the criterion of this paper is less conservative than those obtained in [33,34,40]. We take the activation functions as above and set ∆D(t) =

Remark 6.
An important property in linear circuit and system theory is passivity, which is applicable to the analysis of properties of immittance or hybrid matrices of various classes of neural networks, the inverse problem of linear optimal control, the Popov criterion, the circle criterion and spectral factorization [42]. In recent years, passivity properties have also been studied for neural networks [36][37][38][39][40]. It should be pointed out that the aforementioned results place restrictions on the derivative of the time-varying delays, which means that the delay conditions in this work are more applicable to real-world systems; this is achieved by establishing a Lyapunov-Krasovskii functional that makes full use of the information of the delays r_1, r_2 and d. On the other hand, in this work we use the refined Jensen's inequality to estimate single and double integrals. By applying the aforementioned techniques, we obtain less conservative results than the others [25,29,33,36].
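Returning to Example 4.1, the sector bounds of the activation g_i(p) = (1/2)(|p + 1| − |p − 1|) can be verified numerically. The small sketch below (an illustration, not part of the paper) samples difference quotients on a grid and confirms they all lie in [l^-, l^+] = [0, 1]:

```python
import itertools

def g(p):
    """Saturation-type activation from Example 4.1."""
    return 0.5 * (abs(p + 1) - abs(p - 1))

# Sample difference quotients (g(a) - g(b)) / (a - b) over a grid in [-3, 3].
grid = [k / 10 for k in range(-30, 31)]
quotients = [(g(a) - g(b)) / (a - b)
             for a, b in itertools.product(grid, grid) if a != b]

l_minus, l_plus = min(quotients), max(quotients)
# Sector condition (2.3): every quotient lies between l^- = 0 and l^+ = 1.
assert -1e-9 <= l_minus and l_plus <= 1 + 1e-9
print(f"sector bounds: l- = {l_minus:.1f}, l+ = {l_plus:.1f}")
```

The flat regions |p| > 1 give the lower bound 0 and the linear region |p| < 1 gives the upper bound 1, matching the sector constants used in Example 4.1.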

Conclusions
In this research, we focused on new results for the robust passivity analysis of NNs with interval nondifferentiable and distributed time-varying delays. Using refined Jensen's inequalities and a Lyapunov-Krasovskii functional containing single, double, triple and quadruple integrals, new conditions were obtained in terms of LMIs, which can be checked using the LMI toolbox in MATLAB. Compared with existing results, the obtained criteria are less conservative and more effective because of the refined Jensen-based inequality technique, which covers the estimation of both single and double integrals. Three numerical examples have been given to show the effectiveness of the methods. For further research, these methods may be extended to dynamic networks with Markovian jumping delayed complex networks or stochastic delayed complex networks.