Exponential stability analysis and application of parameter-switched neural networks via intermittent observation and feedback control

This paper deals with the exponential stability problem for switched neural networks with time-varying delays driven by Brownian noise. As a prerequisite to the main theorem, the existence and uniqueness of the solution to the main system are proved via contraction mapping theory. Based on intermittent observation control, the stability trajectory of the switched neural networks with time-varying delays is obtained. Employing stochastic analysis methods, exponential stability conditions are established by applying the Itô formula and the matched-pair technique. A numerical example for the main system under intermittent observation control is provided to illustrate the effectiveness of the results and the potential of the proposed techniques. Meanwhile, the feasibility of stability control in a multiagent system is verified by the obtained method.


I. INTRODUCTION
It is well known that neural networks can approximate any function, which allows them to accomplish a predetermined goal. From the point of view of a system, neural networks are equivalent to the output function of the system [1]. Therefore, the dynamical characteristics (including stable, unstable, oscillatory, and chaotic behavior) of neural networks with time delays have become a subject of intense research activity [2][3][4]. On the other hand, neural networks have a wide range of applications; see [5][6][7]. In particular, the application of integrated circuit chips with neuron units to the field of artificial intelligence is a good example.
Since artificial neural networks can exhibit complicated dynamics and even chaotic behaviors, the stability of stochastic neural networks has also become an important area of study. The theoretical research on stochastic neural networks mainly includes stability analysis (see [8][9][10][11][12]) and synchronization control (see [13][14][15]). Among these topics, the stability analysis of neural networks, such as asymptotic stability [16,17], mean square stability [18,19], and exponential stability [20,21], has always been an important focus in the field of neural network research, and it has attracted many scholars. For example, in [10], a new stability condition was derived for neural networks with a time-varying delay, which encompasses the conventional one as a special case, based on an improved generalized free-weighting-matrix integral inequality. In [16], a novel linear matrix inequality condition guarantees the existence and global asymptotic stability of a class of generalized bidirectional associative memory neural networks, and the results obtained can be applied to design globally asymptotically stable networks. The global exponential stability of delayed recurrent neural networks was studied in [21], and delay-dependent global exponential stability criteria were derived in terms of linear matrix inequalities.
Note that there are several methods for the stability analysis of neural networks, such as the root locus method and the Nyquist and Routh stability criteria, which must be applied to the solution of the system equation. For stochastic neural networks, however, it is not easy to obtain the solution of the system equation. Lyapunov instead conducted stability analysis from the perspective of energy; that is, the stability of the system is determined from the derivative of a positive definite function along trajectories. The stability of stochastic systems can thus be studied by the method of Lyapunov-Krasovskii functionals, which has attracted a large number of scholars to the stability problem of neural networks (see [12,16,19,20] and [22][23][24][25][26]). For example, in [12], some sufficient conditions for p-moment stability of the equilibrium of neural networks with time-varying self-regulating parameters are obtained, and the main stability results, based on the properties of solutions of neural networks, are acquired by using the Lyapunov method. The mean square exponential input-to-state stability of stochastic delay reaction-diffusion neural networks is investigated in [20] with the help of the Lyapunov-Krasovskii functional method and a Wirtinger-type inequality. In [26], a class of discontinuous BAM neural networks with hybrid time-varying delays and D operator is studied based on the Lyapunov approach and non-smooth analysis theory, and some novel sufficient conditions are derived to guarantee the existence, uniqueness, and global exponential stability of the almost-periodic solution of the proposed neural network model.
It is noted that the literature cited above focuses only on stability results and ignores process control. It should be pointed out that, in the stability simulations of the above neural networks, the system trajectories evolve from the initial value to the final result as a black box. That is to say, the design of observation points for the stability control of neural networks has not received much research attention. Up to now, there have been very few results based on intermittent observation for the analysis of stochastic neural networks. In order to address this problem, an exponential stability control method based on intermittent observation for state-switched neural networks is proposed in this paper.
This paper is organized as follows. In Section II, we present some notation, a definition, some assumptions, and some lemmas, and introduce the main system. Section III contains the main results and proofs: we prove the existence of the solution of the system equation and obtain two sufficient conditions for exponential stability. Section IV shows the numerical examples and their simulations, which illustrate the feasibility and effectiveness of this model. The application to the stability control of a multiagent system is exhibited via the neural network approach in Section V, and the conclusions of this paper are given in Section VI.

II. PRELIMINARIES AND SYSTEM DESCRIPTION
For convenience, we first introduce some notations for this paper.
∥ · ∥ denotes the vector norm; Diag{· · · } denotes a diagonal matrix; (Ω, F, P) denotes a complete probability space.
In the complete probability space (Ω, F, P), we consider a stochastic neural network with time-varying delays driven by Brownian noise, described as follows:

dx(t) = [−Cx(t) + Af(x(t)) + Bf(x(t − τ(t)))]dt + g(x(t), x(t − τ(t)))dω(t), (1)

where x(t) = [x1(t), x2(t), · · · , xn(t)]^T ∈ R^n is the state vector of the neural network, C = diag{c1, c2, · · · , cn} is a diagonal matrix with entries ci > 0, (i = 1, · · · , n), and A = (aij)n×n, B = (bij)n×n are the connection weight matrix and the delayed connection weight matrix, respectively. τ(t) is the time-varying delay, f(·) = (f1(·), f2(·), · · · , fn(·))^T denotes the neuron activation functions, g(·) = (g1(·), g2(·), · · · , gn(·))^T is the vector function of disturbance intensity, and ω(t) = [ω1(t), · · · , ωm(t)]^T is an m-dimensional adapted Brownian motion. Note that changes of subsystem interconnections and environmental disturbances, as well as component repairs or failures, can cause abrupt changes of the system structure or jumps of the system parameters in all kinds of neural networks. In particular, the state change of a neural network may be treated as a change among finitely many modes of one system, and a Markov chain can describe a system whose state jumps from one mode to another at different times. Therefore, adding a Markov switched state to system (1) is more reasonable for stochastic neural networks.
Therefore, we give the concept of a Markov chain as follows. Let {r(t), t ≥ 0} be a right-continuous Markov chain in the complete probability space (Ω, F, P) taking values in a finite state set S = {1, 2, · · · , N} with generator Π = (πij)N×N given by

P{r(t + Δ) = j | r(t) = i} = πijΔ + o(Δ), if i ≠ j; 1 + πiiΔ + o(Δ), if i = j,

where Δ > 0, πij ≥ 0 (i ≠ j) is the transition rate from mode i to mode j, and πii = −Σ_{j≠i} πij. On the other hand, in order to conveniently observe the stability of the network system, we employ an intermittent controller, designed not only with the system state x(t) sampled at times 0, δ, 2δ, 3δ, · · · , (δ > 0), but also associated with the Markov switched information r(t). Therefore, the controller adopts the form

u(t) = K_{r(t)} x([t/δ]δ).

Above all, the system (1) we consider can be presented as follows:

dx(t) = [−Cx(t) + Af(x(t)) + Bf(x(t − τ(t))) + u(t)]dt + g(x(t), x(t − τ(t)))dω(t). (2)
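The switching signal r(t) generated by Π can be sampled numerically by drawing exponential sojourn times; below is a minimal sketch, in which the two-state generator is an illustrative assumption rather than a value from the paper.

```python
import numpy as np

def simulate_markov_chain(Pi, r0, T, rng):
    """Sample a right-continuous Markov chain on {0,...,N-1} with generator Pi
    up to time T, returning jump times and the piecewise-constant mode path."""
    times, states = [0.0], [r0]
    t, r = 0.0, r0
    while True:
        rate = -Pi[r, r]                  # holding (exit) rate in mode r
        t += rng.exponential(1.0 / rate)  # exponential sojourn time
        if t >= T:
            break
        probs = Pi[r].copy()
        probs[r] = 0.0
        probs /= rate                     # jump distribution pi_ij / (-pi_ii)
        r = rng.choice(len(probs), p=probs)
        times.append(t)
        states.append(r)
    return np.array(times), np.array(states)

# assumed two-state generator for illustration
Pi = np.array([[-2.0,  2.0],
               [ 3.0, -3.0]])
rng = np.random.default_rng(0)
times, states = simulate_markov_chain(Pi, 0, 10.0, rng)
```

The resulting piecewise-constant path (times, states) can then drive the mode of the switched system in simulation.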
The closed-loop system equation can also be written as follows:

dx(t) = [−Cx(t) + Af(x(t)) + Bf(x(t − τ(t))) + K_{r(t)} x([t/δ]δ)]dt + g(x(t), x(t − τ(t)))dω(t), (3)

where K_{r(t)} is the gain matrix of the controller carrying the Markov switched information r(t).
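The closed-loop dynamics with the sampled-state feedback K x([t/δ]δ) can be simulated with an Euler-Maruyama scheme. The following is a minimal sketch in which all matrices, the constant delay, the tanh activation, and the noise intensity are illustrative assumptions (not the paper's example values), and the Markov mode is held fixed for brevity.

```python
import numpy as np

# Illustrative parameters (assumed): 2 neurons, constant delay tau.
C = np.diag([1.0, 1.0])
A = np.array([[0.2, -0.1], [0.1, 0.2]])
B = np.array([[-0.1, 0.1], [0.2, -0.1]])
K = -1.5 * np.eye(2)                          # observation-feedback gain (assumed)
tau, delta, dt, T = 0.5, 0.2, 1e-3, 5.0

f = np.tanh                                   # assumed activation function
g = lambda x, xd: 0.3 * (x + xd)              # assumed noise intensity

rng = np.random.default_rng(1)
n_steps = int(round(T / dt))
lag = int(round(tau / dt))                    # steps corresponding to tau
stride = int(round(delta / dt))               # steps between observations
x = np.zeros((n_steps + 1, 2))
x[0] = [1.0, -1.0]
for k in range(n_steps):
    xd = x[max(k - lag, 0)]                   # delayed state x(t - tau)
    x_obs = x[(k // stride) * stride]         # last observed sample x([t/delta]delta)
    drift = -C @ x[k] + A @ f(x[k]) + B @ f(xd) + K @ x_obs
    dw = rng.normal(0.0, np.sqrt(dt), 2)
    x[k + 1] = x[k] + drift * dt + g(x[k], xd) * dw
```

Replacing K by mode-dependent gains driven by a sampled Markov chain recovers the switched controller of (3).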
In order to analyze the stability of the stochastic switched neural networks with time-varying delays driven by Brownian noise, we make the following assumptions.

Assumption 1: The activation function f(·) satisfies a Lipschitz condition: there exists a constant L > 0 such that ∥f(x) − f(y)∥ ≤ L∥x − y∥ for all x, y ∈ R^n, and f(0) = 0.

Assumption 2: The disturbance intensity function g(·) satisfies a linear growth condition: there exist constants α1 > 0, α2 > 0 such that ∥g(x, y)∥² ≤ α1∥x∥² + α2∥y∥² for all x, y ∈ R^n.
We are now in a position to provide a corresponding notion for the stability analysis of switched neural networks (3).
Definition 1: The stochastic neural networks with time-varying delays and Markov switched parameters are said to be exponentially stable in mean square if the system trajectory x(t, ϕ(0)) = x(t) satisfies the following: there exist constants G > 0, λ > 0 such that

E∥x(t)∥² ≤ G e^{−λt} sup_{−τ ≤ s ≤ 0} E∥ϕ(s)∥², t ≥ 0, (7)

or, equivalently,

lim sup_{t→∞} (1/t) ln E∥x(t)∥² ≤ −λ.

Next, we supply some preliminary assertions which will play an important role in the proofs of the main results.
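The decay estimate in Definition 1 can be checked numerically on a scalar test equation dx = −a x dt + σ x dω, for which E[x²(t)] = x0² e^{−(2a−σ²)t} is known in closed form. The sketch below estimates the mean-square decay rate by Monte Carlo and compares it with 2a − σ²; all parameter values are illustrative.

```python
import numpy as np

# Scalar test SDE dx = -a x dt + sigma x dW with known mean-square rate.
a, sigma, x0, dt, T, n_paths = 1.0, 0.5, 1.0, 1e-3, 2.0, 20000
rng = np.random.default_rng(2)
n_steps = int(round(T / dt))
x = np.full(n_paths, x0)
msq = np.empty(n_steps + 1)
msq[0] = np.mean(x ** 2)
for k in range(n_steps):
    dw = rng.normal(0.0, np.sqrt(dt), n_paths)
    x = x + (-a * x) * dt + sigma * x * dw    # Euler-Maruyama step
    msq[k + 1] = np.mean(x ** 2)              # Monte Carlo estimate of E[x^2]

t_grid = np.linspace(0.0, T, n_steps + 1)
lam_hat = -np.polyfit(t_grid, np.log(msq), 1)[0]  # fitted decay rate lambda
lam_true = 2 * a - sigma ** 2                     # exact rate 2a - sigma^2
```

The same regression on log E∥x(t)∥² can be applied to sample paths of system (3) to read off an empirical λ.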

III. MAIN RESULTS AND PROOFS
In this section, we present the main results for the switched neural networks with time-varying delays driven by Brownian noise. For the stability analysis to be meaningful, the system equation must admit a solution; therefore the following Theorem 1 is a prerequisite for the stability result of Theorem 2.
Theorem 1: Let the functions f(·) and g(·) satisfy Assumption 1 and Assumption 2, respectively. If there exists a bounded stopping time T > 0 such that the following inequality holds, then system (3) has a unique solution in the space S². Here the notations λCK and λAB are defined as follows, α1, α2 are constants satisfying Assumption 2, and γ, γ1, γ2 are constants given in the following proof.
Proof: Integrating the system equation (3), we obtain (11), which can be rewritten as follows. By the Hölder inequality and Assumption 2, we obtain the following estimate. We now establish a map Φ as follows, and show that it is a contraction in the following space. Therefore, Φ is a contraction map on [0, T]. From the fixed point theorem, it follows that system (3) has a unique solution.

In this subsection, for system (3), we give the criteria of exponential stability in mean square for the Markov switched neural networks with time-varying delays driven by Brownian noise.
Theorem 2: Let the functions f(·) and g(·) satisfy Assumption 1 and Assumption 2, respectively. If there exist symmetric positive definite matrices P_{r(t)}, r(t) ∈ S = {1, 2, · · · , N}, such that for all t ≥ 0 the following inequalities hold, then system (3) is exponentially stable in mean square. That is to say, the Markov switched neural networks (3) are exponentially stable in mean square based on intermittent observation via feedback control. Here the notations and R(δ) are defined as follows (involving Σ_i π_{ir(t)} P_{r(t)}), and γ1, γ2, γ3 are constants.
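Since the conditions of Theorem 2 are stated in terms of the matrices P_{r(t)}, a candidate set of Lyapunov matrices can be screened numerically. The sketch below checks, for illustrative mode-dependent data (not the theorem's inequalities verbatim), positive definiteness of each P_r and negative definiteness of the symmetrized closed-loop drift term M_r^T P_r + P_r M_r.

```python
import numpy as np

def is_pos_def(M, tol=1e-10):
    """Positive definiteness of the symmetric part via eigenvalues."""
    return bool(np.all(np.linalg.eigvalsh((M + M.T) / 2) > tol))

# Illustrative data (assumed): two modes, closed-loop drift M_r = -C + K_r.
C = np.diag([1.0, 1.2])
K = {0: -1.5 * np.eye(2), 1: -2.0 * np.eye(2)}  # mode-dependent gains
P = {0: np.eye(2), 1: 1.5 * np.eye(2)}          # candidate Lyapunov matrices

checks = []
for r in (0, 1):
    M = -C + K[r]
    lmi = M.T @ P[r] + P[r] @ M                 # Lyapunov-type matrix
    checks.append(is_pos_def(P[r]) and is_pos_def(-lmi))
```

In practice the full conditions of the theorem, including the delay and noise terms, would be assembled the same way and checked mode by mode.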
Proof: For a well-organized presentation, we give this proof in two parts.
Part I. We first prove a conclusion that will be used in the exponential stability analysis. Fix any integer k ≥ 0 and, for all t ∈ [kδ, (k + 1)δ], (δ > 0), let [t/δ]δ = kδ. According to the theory of integration, along the trajectory x(t) of system (3) and with the intermittent observation state x(kδ), we obtain the following. From (10) of Lemma 2, setting n = 5 and p = 2, we have the following inequality. From the notation of Theorem 1 and the properties of integral inequalities, we obtain the next estimate. Taking the right endpoint of [kδ, (k + 1)δ] for t in the upper limit of the integral, we derive the following inequality. By inequality (5) of Assumption 1, we have the bound below. Note that the state trajectory x(s − τ(s)) of system (3) differs from x(s) only by a constant; therefore we have the following inequality, where γ1 is a constant.
In particular, if ω(t) is a standard Brownian motion, then the fifth term F5(t) is a martingale, and the following equation holds.
From (10) and the Hölder inequality, letting n = 2 and p = 2, we have the following. For simplicity, we introduce the notation below, and note that x(kδ) = x([s/δ]δ) in (22). From Lemma 3, together with (22) and (23), we derive the next estimate. By the Hölder inequality, setting n = 2 and p = 2, we have the following, and consequently, solving the inequality (25), we obtain the desired result.

Part II. We now analyze the exponential stability of the stochastic neural networks under the feedback controller with intermittent observation.
Let P_{r(t)}, (r(t) ∈ S), be symmetric positive definite matrices. We choose the Lyapunov functional V(t, x(t), r(t)) = x^T(t) P_{r(t)} x(t) ∈ C^{2,1}(R+ × R^n × S; R+). Applying the generalized Itô formula to V(t, x(t), r(t)), we obtain (27). By Lemma 1, we have the following estimate. From Assumption 1, (28) can be rewritten as follows. In the same way, we obtain the next inequality. From Assumption 2, we derive the following, where α1 and α2 are positive constants. Substituting (29), (30), and (31) into (27), while adding and subtracting the term 2x^T(t) P_{r(t)} K_{r(t)} x(t) in (27), we obtain (33), which involves Σ_i π_{ir(t)} P_{r(t)}. In order to acquire the exponential stability conditions, we set λ as below; it is easy to see that λ > 0. The quantities θ, α, and βD are given in the conditions of Theorem 1.
Following the above results, we apply the generalized Itô formula again to the functional e^{λt} V(t, x(t), r(t)). Note that, by the martingale property, the following equation holds. Integrating both sides of (35) and taking expectations, we obtain (36), where the parameters βd, βD, θ, MPK are given in the conditions of Theorem 1. By Lemma 1, 2ab ≤ εa² + ε⁻¹b², (ε > 0), holds; setting ε accordingly, we obtain the corresponding bound. Note that, by inequality (26) of Part I, we obtain the following estimate, so that (38) can be written as inequality (40). Substituting (40) into (36), we derive (41). Note also that, from (34), the following equality holds. Therefore, we obtain the required result; that is,

E∥x(t)∥² ≤ φ(x0) e^{−λt},

where φ(x0) = (βD/βd) E∥x0∥² is a positive constant. From Definition 1, the proof is complete. That is to say, the Markov switched neural networks (3) are exponentially stable in mean square based on intermittent observation via feedback control.
In this section, we consider the system without Markov switched information and Brownian noise perturbation. The neural network is therefore modified to the following equation as a special case of system (3):

dx(t)/dt = −Cx(t) + Af(x(t)) + Bf(x(t − τ(t))) + U(·), (42)

where x(t) = [x1(t), · · · , xn(t)]^T ∈ R^n is the state vector of the neural network, C = diag{c1, c2, · · · , cn} is a diagonal matrix with entries ci > 0, (i = 1, · · · , n), A = (aij)n×n and B = (bij)n×n are the connection weight matrix and the delayed connection weight matrix, respectively, τ(t) is the time-varying delay, f(·) = (f1(·), f2(·), · · · , fn(·))^T denotes the activation functions, and U(·) = Kx([t/δ]δ) is the intermittent observation controller.
For system (42), we have the following results with respect to exponential stability in mean square.
Theorem 3: Let the function f(·) satisfy Assumption 1. If there exists a symmetric positive definite matrix P such that for all t ≥ 0 the following inequalities hold, then system (42) is exponentially stable in mean square based on intermittent observation via feedback control, where the notations MA, MB, MC, MK, MPK, βD, βd, θ, α, Γ, and R(δ) are defined as follows, and γ1, γ2 are constants.
Proof: First, fix any integer k ≥ 0 and, for all t ∈ [kδ, (k + 1)δ], (δ > 0), let [t/δ]δ = kδ. According to the theory of integration, along the trajectory x(t) of system (42) and with the intermittent observation state x(kδ), we obtain the following. The rest of the proof is similar to that of Theorem 2 and is omitted for simplicity. By the same method as in the proof of Theorem 2, we obtain the following result:

E∥x(t)∥² ≤ φ(x0) e^{−λt},

where φ(x0) = (βD/βd) E∥x0∥² is a positive constant. From Definition 1, the neural networks (42) are exponentially stable in mean square based on intermittent observation control. This completes the proof.

IV. NUMERICAL SIMULATION
In this section, we present an example and some figures to demonstrate the effectiveness of the exponential stability results obtained for the stochastic neural networks with Markov jumps. Choose the neuron activation functions as follows. Select the perturbation intensity functions as g1(·) = 0.48(x1(t) + x1(t − τ(t))), g2(·) = 0.46(x2(t) + x2(t − τ(t))).
Take the time delay as τ(t) = 0.9|sin(t)|. It can be checked that the Assumptions and the conditions of Theorem 2 are satisfied. Therefore, system (3) is exponentially stable in mean square based on intermittent observation.
In order to illustrate the effectiveness of the proposed results, we plot some figures, which are presented in Fig. 2. From the figures, system (3) is exponentially stable in mean square. It can likewise be checked that the Assumptions and the conditions of Theorem 2 are satisfied; therefore system (3) is exponentially stable in mean square based on intermittent observation.
To further illustrate the effectiveness of the proposed results, we plot the figures shown in Fig. 3. From the figures, system (3) is exponentially stable in mean square.

V. APPLICATION TO THE MULTIAGENT SYSTEM
We present the application to the stability control of a multiagent system, which shows the feasibility of the obtained results. The leader agent system is described as follows:
The N follower agents are described as follows, where fi(xi) is an unknown nonlinear function, and wi and ui are the disturbance and the control input, respectively. We next use the neural network presented in this paper to approximate each smooth unknown nonlinearity fi(xi) in the follower multiagent system. The main goal is to achieve cooperative tracking by constructing a controller whose input signal is related to the neural network. The controller is designed as follows, where δik is the synchronization error, β is the control gain, and bi denotes the communication weight with bi > 0, which means that follower agent i has access to the information of the leader agent; the remaining quantities are the activation function, the adaptive disturbance parameter σ, and the Brownian motion W(t).
For convenience, we apply a third-order leader agent in the following form, where wi = 2sin(t + iπ/8), i = 1, 2, · · · , 5, denote the disturbances. Twenty-five neurons are used for each agent, and the initial neural network weights are all set to zero. The activation functions of the neural network are selected as follows, j = 1, 2, · · · , 25, i = 1, 2, · · · , 5, where the initial state of the leader agent is chosen as x0(0) = [9 18 10], cj ∈ [−1, 1], and the initial states of the five follower agents are chosen to be x . We select the controller parameters as ρ = 20, Ŵ_i^T = 50, bi = 1, σ = 1. The Brownian motion W(t) is generated by the algorithm for stationary Gaussian processes with known covariance function. Fig. 4 shows the state tracking errors between the leader agent and the follower agents. We can easily see that the states of all five follower agents synchronize to the state of the leader agent very well. Moreover, the synchronization error is smaller than that in Fig. 4 of [27].
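A much-simplified sketch of the leader-follower tracking loop is given below: scalar agents, proportional feedback on the synchronization error, and the stated disturbance wi = 2sin(t + iπ/8). The neural-network approximator, the Brownian term, and the specific dynamics and gains of the example are replaced here by illustrative assumptions.

```python
import numpy as np

# Simplified tracking sketch (assumed dynamics): error e_i = x_i - x_0 obeys
# de_i/dt = w_i - beta * e_i under the proportional controller u_i = -beta*e_i.
beta, dt, T = 20.0, 1e-3, 4.0
n_agents = 5
rng = np.random.default_rng(4)
n = int(round(T / dt))
x0 = 1.0                                   # leader state (assumed scalar)
xi = rng.uniform(-5.0, 5.0, n_agents)      # follower initial states (assumed)
err = np.empty((n + 1, n_agents))
err[0] = xi - x0
for k in range(n):
    t = k * dt
    x0 = x0 + np.cos(t) * dt               # assumed leader dynamics
    w = 2.0 * np.sin(t + np.arange(1, n_agents + 1) * np.pi / 8)
    u = -beta * (xi - x0)                  # tracking controller
    xi = xi + (np.cos(t) + w + u) * dt     # followers share the leader drift
    err[k + 1] = xi - x0
```

The steady-state error magnitude is roughly bounded by max|wi|/β, consistent with the small tracking errors reported in Fig. 4.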

VI. CONCLUSIONS
In this paper, we have proved the existence and uniqueness of the solution of stochastic neural networks with time-varying delays via fixed point theory, and we have discussed the exponential stability problem based on intermittent observation control for switched neural networks driven by Brownian noise. By resorting to the Hölder inequality, the Gronwall inequality, the Itô formula, the Lyapunov functional approach, and the matched-pair technique, we have obtained exponential stability conditions in mean square, based on the given initial data, for the Markov switched neural networks with time-varying delays. We have also analyzed exponential stability criteria for time-varying delay neural networks without Markov switching and stochastic perturbation. In particular, we have presented a numerical example and some evolution figures to support the proposed results and the techniques used for the main system. Finally, we have applied the model to the stability control of a multiagent system, and the results exhibit its effectiveness.