Article

Periodic Intermittent Adaptive Control with Saturation for Pinning Quasi-Consensus of Heterogeneous Multi-Agent Systems with External Disturbances

School of Mechanical Engineering, Xihua University, Chengdu 610039, China
* Author to whom correspondence should be addressed.
Entropy 2023, 25(9), 1266; https://doi.org/10.3390/e25091266
Submission received: 24 May 2023 / Revised: 14 August 2023 / Accepted: 15 August 2023 / Published: 27 August 2023
(This article belongs to the Special Issue Intelligent Modeling and Control)

Abstract
In this paper, a periodic intermittent adaptive control method with saturation is proposed to achieve pinning quasi-consensus of nonlinear heterogeneous multi-agent systems with external disturbances. A new periodic intermittent adaptive control protocol with saturation is designed to adjust both the internal coupling weights between the follower agents and the feedback gains between the leader and the followers. In particular, we use a saturated adaptive law: once the quasi-consensus error converges to a prescribed range, the adaptive coupling edge weights and the adaptive feedback gains are no longer updated. Furthermore, we propose three saturated adaptive pinning control protocols, under which quasi-consensus is achieved by pinning only part of the agents, as long as the agents remain connected to each other. Using the Lyapunov function method and inequality techniques, the convergence range of the quasi-consensus error of a heterogeneous multi-agent system is obtained. Finally, the validity of the proposed control protocols is verified through numerical simulation. Theoretical derivation and simulation results show that the proposed periodic intermittent adaptive control method with saturation successfully achieves pinning quasi-consensus of nonlinear heterogeneous multi-agent systems.

1. Introduction

Scientists have conducted extensive research on the clustering behavior of various organisms in nature [1] and put forward the concept of multi-agent systems (MASs). Owing to the high robustness of distributed coordinated control, MASs have been widely used in practical engineering, including UAV and robot formation, satellite orbit control, smart grids, collaborative monitoring and other fields [2,3,4,5,6,7,8]; in addition, in-depth theoretical research has been conducted in control theory, physics, computer science and other fields [9].
For distributed coordinated control of MASs, the consensus problem is the typical basis of multi-agent coordination research. Methods for distributed consensus control of MASs have developed rapidly in recent years, for example, consensus under switching topologies [10]; the consensus problem with communication delays [11]; second-order, high-order and even fractional-order consensus [12]; and the finite-time consensus problem [13]. Multi-agent consensus control includes leaderless consensus and leader-following consensus. In the leaderless case, the final agreement value is determined by the agents' initial states, whereas leader-following consensus has the advantage of a predetermined consensus value, so it has been studied extensively in recent years. In [14], the authors use Lyapunov stability theory to realize leader-following consensus of second-order MASs. A unified framework for complex network synchronization and MAS consensus is established in [15]. In [16], output feedback and state feedback are used to study the leader-following consensus problem. In [17], a sliding-mode method is used to handle bounded unknown inputs of the leader. In [18], the authors realize leader-following consensus for a single-integrator system.
However, the above research on MASs is based on the premise that all agents share the same dynamics. In practical engineering, it is unrealistic to require each agent to satisfy identical dynamics, so research on heterogeneous multi-agent systems (HMASs) is more practical. Complete consensus of HMASs is not yet well studied. In [19], the authors parameterize the unknown linear dynamics of the agents to realize consistent tracking of HMASs. In [20], the distributed consensus problem of HMASs with asymmetric input saturation is studied. In other studies, heterogeneity was transformed into homogeneity [21,22], or complex compensators were added to eliminate heterogeneity [23,24,25,26]; the complexity of these methods limits their use in practical engineering. In fact, practical engineering applications usually tolerate a certain error range, so complete consensus between heterogeneous agents is not required. Therefore, the study of quasi-consensus (QC) of HMASs is of more practical significance. In [27,28], the definition of HMAS QC is proposed and extended.
Subsequently, researchers began to achieve the QC of multi-agent systems through sampled-data control, impulsive control, adaptive control and other methods [29,30,31]. In particular, adaptive control is favored in the control field because of its many advantages in realizing collaborative control: low operating cost, few system requirements, high robustness and strong adaptability. Yu et al. [29] further studied leader-following consensus of second-order multi-agent systems via adaptive control. In [31], two new inequalities are proposed and an adaptive controller is designed to realize the QC of HMASs. However, the above adaptive control methods rely on continuous control, which hinders their application in practical engineering, so a cost-saving adaptive control method would have great advantages. Periodic intermittent control is a discontinuous control method that does not require continuous actuation, so the control cost is greatly reduced, which is clearly more in line with actual needs. Since intermittent control is active only during the working time, its fault tolerance is also increased, which matters when dealing with HMAS problems. Combining the adaptive method with periodic intermittent control is therefore an attractive approach. Motivated by the application of adaptive control in integer-order and fractional-order complex dynamic networks [32,33,34,35], this paper proposes a periodic intermittent adaptive control method to realize the QC of nonlinear HMASs with external disturbances. This method realizes discontinuous control and offers further opportunities for reducing control costs.

2. Preliminary Preparation and Model Description

2.1. Graph Theory

A graph can be used to represent the topological relationships in an HMAS. A graph $G = \{V, E, A\}$ on $N$ nodes consists of the node set $V = \{v_1, v_2, \ldots, v_N\}$, the edge set $E \subseteq \{(i, j) \mid i, j \in V,\ i \neq j\}$ connecting different agents, and the adjacency matrix $A = (a_{ij})_{N \times N}$. In a directed graph, an edge $(i, j) \in E$ indicates that agent $j$ can obtain information from agent $i$, but agent $i$ cannot obtain information from agent $j$. In an undirected graph, an edge $(i, j) \in E$ indicates that agents $i$ and $j$ can exchange information with each other. In this paper we assume that the topology is an undirected graph, and connected nodes $i$ and $j$ are called neighbor nodes. The undirected edge between nodes $i$ and $j$ is denoted $(v_i, v_j)$, and $a_{ij} = a_{ji} > 0$ is its weight; when node $i$ is not connected to node $j$, $a_{ij} = a_{ji} = 0$. The degree matrix is $D = \mathrm{diag}(d_i)$ with $d_i = \sum_{j \in N_i} a_{ij}$, and the Laplacian matrix $L = (L_{ij})_{N \times N}$ is defined by $L_{ij} = -a_{ij}$ for $i \neq j$ and $L_{ii} = \sum_{j=1, j \neq i}^{N} a_{ij}$, so that $L = D - A$.
Lemma 1
[36]. The Laplacian matrix $L$ of an undirected graph $G$ satisfies:
1. The Laplacian matrix $L$ is positive semi-definite, and its eigenvalues are $0$ and positive reals.
2. The smallest nonzero eigenvalue $\lambda_2(L)$ satisfies
$$\lambda_2(L) = \min_{x^T \mathbf{1}_N = 0,\ x \neq 0} \frac{x^T L x}{x^T x}$$
3. For any vector $\eta = (\eta_1, \eta_2, \ldots, \eta_N)^T \in \mathbb{R}^N$,
$$\eta^T L \eta = \frac{1}{2} \sum_{i=1}^{N} \sum_{j=1}^{N} a_{ij} (\eta_i - \eta_j)^2$$
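The properties in Lemma 1 are easy to check numerically. The sketch below builds the Laplacian of a small illustrative undirected graph (an arbitrary weighted 4-node cycle chosen here only for demonstration) and verifies the semi-definiteness, the pairwise-difference identity, and the Rayleigh-quotient characterization of $\lambda_2(L)$:

```python
import numpy as np

# Numeric spot-check of Lemma 1 on a small illustrative undirected graph.
A = np.array([[0., 1., 0., 1.],
              [1., 0., 2., 0.],
              [0., 2., 0., 1.],
              [1., 0., 1., 0.]])
D = np.diag(A.sum(axis=1))          # degree matrix D = diag(d_i)
L = D - A                           # Laplacian L = D - A

eigvals = np.sort(np.linalg.eigvalsh(L))
lam2 = eigvals[1]                   # smallest nonzero eigenvalue lambda_2(L)

# Property 3: eta^T L eta = 1/2 * sum_ij a_ij * (eta_i - eta_j)^2
rng = np.random.default_rng(0)
eta = rng.standard_normal(4)
quad = eta @ L @ eta
pairwise = 0.5 * sum(A[i, j] * (eta[i] - eta[j]) ** 2
                     for i in range(4) for j in range(4))

# Property 2: lambda_2 minimizes the Rayleigh quotient over {x : 1^T x = 0}
x = eta - eta.mean()                # project eta onto the zero-sum subspace
rayleigh = (x @ L @ x) / (x @ x)
```

Since the graph is connected, $\lambda_2(L) > 0$, and any zero-sum vector gives a Rayleigh quotient no smaller than $\lambda_2(L)$.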
Lemma 2
[37]. Let $V : [\mu, \infty) \to [0, \infty)$ be a continuous function satisfying
$$\dot V(t) \le -g_1 V(t) + \omega_2$$
If $g_1 > 0$, $\omega_2 > 0$, then for $t \ge a$:
$$V(t) \le V(a) \exp\{-g_1 (t - a)\} + \frac{\omega_2}{g_1}, \quad t \ge a$$
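Lemma 2 can be illustrated by integrating the worst case of the differential inequality, $\dot V(t) = -g_1 V(t) + \omega_2$, and comparing against the closed-form bound (all parameter values below are arbitrary):

```python
import math

# Explicit-Euler integration of V'(t) = -g1*V(t) + w2 (the extreme case of
# the differential inequality in Lemma 2), checked against the bound
# V(t) <= V(0)*exp(-g1*t) + w2/g1.
g1, w2 = 2.0, 0.5
dt, steps = 1e-3, 5000              # integrate on [0, 5]
V0 = 3.0
V, t = V0, 0.0
within_bound = True
for _ in range(steps):
    V += dt * (-g1 * V + w2)
    t += dt
    bound = V0 * math.exp(-g1 * t) + w2 / g1
    within_bound = within_bound and (V <= bound + 1e-6)
```

As the lemma predicts, the trajectory decays into the residual set $V \le \omega_2 / g_1$, which is exactly the mechanism that later yields a bounded quasi-consensus error rather than exact consensus.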
Lemma 3
[38]. For vectors $x, y \in \mathbb{R}^n$ and any constant $\gamma > 0$, the following inequality holds:
$$2 x^T y \le \gamma x^T x + \gamma^{-1} y^T y$$
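A quick randomized check of this Young-type inequality (dimensions and ranges are arbitrary):

```python
import numpy as np

# Randomized check of Lemma 3: 2 x^T y <= gamma*x^T x + gamma^{-1}*y^T y
# for every gamma > 0; the inequality follows from ||sqrt(g)x - y/sqrt(g)||^2 >= 0.
rng = np.random.default_rng(1)
violations = 0
for _ in range(1000):
    x = rng.standard_normal(5)
    y = rng.standard_normal(5)
    gamma = rng.uniform(0.1, 10.0)
    lhs = 2.0 * (x @ y)
    rhs = gamma * (x @ x) + (y @ y) / gamma
    if lhs > rhs + 1e-12:
        violations += 1
```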
Assumption 1.
Suppose the nonlinear function $f(\cdot, t)$ satisfies, for all vectors $\bar\alpha, \bar\beta \in \mathbb{R}^n$,
$$\| f(\bar\alpha, t) - f(\bar\beta, t) \| \le l \| \bar\alpha - \bar\beta \|$$
where $l$ is a positive constant.
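Assumption 1 holds, for instance, for the Chua-type nonlinearity used later in Example 1, $f(z, t) = (0.5(|z_1+1| - |z_1-1|), 0, 0)^T$, whose first component is a unit saturation function, so $l = 1$ works. A randomized spot-check:

```python
import numpy as np

# Lipschitz check of the Chua-type piecewise-linear nonlinearity with l = 1.
def f(z):
    return np.array([0.5 * (abs(z[0] + 1.0) - abs(z[0] - 1.0)), 0.0, 0.0])

rng = np.random.default_rng(2)
worst_excess = 0.0                       # how much ||f(a)-f(b)|| exceeds l*||a-b||
for _ in range(2000):
    a = rng.uniform(-3.0, 3.0, 3)
    b = rng.uniform(-3.0, 3.0, 3)
    lhs = np.linalg.norm(f(a) - f(b))
    rhs = 1.0 * np.linalg.norm(a - b)    # l = 1
    worst_excess = max(worst_excess, lhs - rhs)
```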
Assumption 2.
Suppose that the external disturbances are bounded and satisfy
$$\| \psi(t) - \varpi_i(t) \| \le S_i$$
where $S_i > 0$.
Assumption 3.
Suppose the communication topology among the follower agents is undirected ($L_{ij} = L_{ji}$), and that each follower can obtain, at any time, the state information of the agents it is coupled with and of the leader agent.
Definition 1.
For an HMAS, if, under any initial conditions, the state of every agent satisfies the following inequality, the entire HMAS is said to have reached QC:
$$\lim_{t \to +\infty} \| z_i(t) - z_0(t) \| \le \delta, \quad i = 1, 2, \ldots, N$$
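Definition 1 suggests a simple empirical test: measure the largest follower-leader error over the tail of a trajectory. The helper below does this for synthetic toy trajectories (the data here are made up purely for illustration, not taken from the systems of Section 4):

```python
import numpy as np

# Empirical QC level per Definition 1: the largest follower-leader error
# over the final fraction `tail` of a simulated trajectory.
def quasi_consensus_delta(z, z0, tail=0.2):
    """z: (steps, N, n) follower states; z0: (steps, n) leader states."""
    start = int((1.0 - tail) * len(z0))
    errs = np.linalg.norm(z[start:] - z0[start:, None, :], axis=2)
    return errs.max()

t = np.linspace(0.0, 10.0, 1000)
z0 = np.stack([np.sin(t), np.cos(t)], axis=1)                 # leader, n = 2
offsets = np.array([[1.0, 0.0], [0.0, -0.5], [0.5, 0.5]])     # N = 3 followers
# followers approach the leader but keep a small residual oscillation,
# so they reach quasi-consensus rather than exact consensus
z = (z0[:, None, :] + offsets[None, :, :] * np.exp(-t)[:, None, None]
     + 0.01 * np.sin(5.0 * t)[:, None, None])
delta = quasi_consensus_delta(z, z0)
```

The returned `delta` is small but nonzero: the trajectories satisfy Definition 1 for any $\delta$ above the residual oscillation level, without ever achieving complete consensus.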

2.2. Model Description

Consider an HMAS consisting of $1+N$ agents, where the subscripts $0$ and $i$ denote the leader and the $i$th follower, respectively. The dynamics of the leader and follower agents are described as follows:
$$\dot z_0(t) = C_0 z_0(t) + D_0 f(z_0(t), t) + \psi(t) \tag{7}$$
$$\dot z_i(t) = C_i z_i(t) + D_i f(z_i(t), t) + \varpi_i(t) + u_i(t) \tag{8}$$
where $z_i \in \mathbb{R}^n$ is the state of follower $i$, $z_0 \in \mathbb{R}^n$ is the leader state, $f : \mathbb{R}^n \times \mathbb{R}^+ \to \mathbb{R}^n$ is a nonlinear function, $C_0, D_0 \in \mathbb{R}^{n \times n}$ are the leader parameter matrices, $C_i, D_i \in \mathbb{R}^{n \times n}$ are the parameter matrices of the $i$th follower, $\varpi_i(t), \psi(t) \in \mathbb{R}^n$ are the external time-varying disturbances, and $u_i(t) \in \mathbb{R}^n$ is the control protocol.
The error is defined as $\xi_i(t) = z_i(t) - z_0(t)$; as a result, the error model is as follows:
$$\dot \xi_i(t) = C_i \xi_i(t) + D_i f(\xi_i(t), t) + h_i(z_0(t), t) + \varpi_i(t) - \psi(t) + u_i(t) \tag{9}$$
where
$$f(\xi_i(t), t) = f(z_i(t), t) - f(z_0(t), t), \quad h_i(z_0(t), t) = (C_i - C_0) z_0(t) + (D_i - D_0) f(z_0(t), t)$$
Consider the following control protocol for achieving QC between the leader (7) and the followers (8):
$$u_i(t) = \begin{cases} c \sum_{j=1}^{N} L_{ij}(t) \big(z_j(t) - z_i(t)\big) - r_i(t)\big(z_i(t) - z_0(t)\big), & nT \le t \le nT + \sigma T \\ 0, & nT + \sigma T \le t \le (n+1)T \end{cases} \tag{10}$$
where c represents the coupling strength of the communication topology in the HMASs.
For the control protocol (10), we design the following adaptive laws:
$$\dot r_i(t) = \gamma_i \pi_i e^{2\beta t} \big(z_i(t) - z_0(t)\big)^T \big(z_i(t) - z_0(t)\big) \tag{11}$$
$$\dot L_{ij}(t) = \kappa_{ij} \pi_i e^{2\beta t} \big(z_i(t) - z_j(t)\big)^T \big(z_i(t) - z_j(t)\big), \quad L_{ij}(0) = L_{ji}(0) > 0, \ (i, j) \in E \tag{12}$$
$$\pi_i = \begin{cases} 1, & \text{when } \|\xi\| > \varepsilon \\ 0, & \text{otherwise} \end{cases} \tag{13}$$
where $\gamma_i$ and $\kappa_{ij} = \kappa_{ji}$ are positive constants.
Remark 1.
Using this adaptive law, the convergence condition can be reached quickly. In this quasi-consensus study, however, the error converges to a certain range and does not become zero, so once the error reaches the allowable range an unsaturated adaptive law would continue to increase rapidly, raising the control cost in practical applications. We therefore design an adaptive control protocol with saturation: when the error converges to the allowable range, the adaptive law becomes zero. In this case, the adaptive feedback gains and adaptive coupling edge weights are no longer updated, which greatly reduces the practical application cost. Moreover, once the error has converged to the allowable range, it is already small enough that, under the frozen feedback gains and coupling edge weights, it can be kept within that range.
In combination with (10)–(12), the error model (9) becomes
$$\dot \xi_i(t) = \begin{cases} C_i \xi_i(t) + D_i f(\xi_i(t), t) + h_i(z_0(t), t) + \varpi_i(t) - \psi(t) - c \sum_{j=1}^{N} L_{ij}(t) \xi_j(t) - r_i(t) \xi_i(t), & nT \le t \le nT + \sigma T \\ C_i \xi_i(t) + D_i f(\xi_i(t), t) + h_i(z_0(t), t) + \varpi_i(t) - \psi(t), & nT + \sigma T \le t \le (n+1)T \end{cases} \tag{14}$$
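To make the switching structure concrete, the following sketch simulates protocol (10) with the saturated adaptive laws (11)–(13) on a deliberately simple toy system: scalar agents ($n = 1$), a constant leader, three followers on a path graph and bounded sinusoidal disturbances. All dynamics, gain values and graph choices here are illustrative assumptions, not the Chua example of Section 4.

```python
import numpy as np

# Toy instance of protocol (10) with saturated adaptive laws (11)-(13):
# scalar states, a constant leader, three followers on a path graph.
N = 3
T, sigma = 0.5, 0.8              # control period and working-time ratio
beta, eps = 0.1, 0.05            # decay rate and saturation threshold
c = 1.0                          # coupling strength
gamma = np.full(N, 1.0)          # gains gamma_i in adaptive law (11)
kappa = 1.0                      # kappa_ij, taken equal on every edge
edges = [(0, 1), (1, 2)]         # undirected path graph

dt, steps = 1e-3, 20000          # Euler integration on [0, 20]
z0 = 1.0                         # leader state: dz0/dt = 0
z = np.array([3.0, -2.0, 2.5])   # follower initial states
r = np.full(N, 0.5)              # adaptive feedback gains r_i(t)
Lw = {e: 0.5 for e in edges}     # adaptive coupling weights L_ij(t)

for k in range(steps):
    t = k * dt
    xi = z - z0                              # quasi-consensus errors
    on = (t % T) < sigma * T                 # inside the working interval?
    pi = 1.0 if np.linalg.norm(xi) > eps else 0.0   # saturation term (13)
    u = np.zeros(N)
    if on:
        for (i, j) in edges:                 # coupling part of protocol (10)
            u[i] += c * Lw[(i, j)] * (z[j] - z[i])
            u[j] += c * Lw[(i, j)] * (z[i] - z[j])
        u -= r * xi                          # leader-feedback part of (10)
        # adaptive laws (11) and (12); frozen once ||xi|| <= eps
        r += dt * gamma * pi * np.exp(2 * beta * t) * xi ** 2
        for (i, j) in edges:
            Lw[(i, j)] += dt * kappa * pi * np.exp(2 * beta * t) * (z[i] - z[j]) ** 2
    w = 0.05 * np.array([np.sin(t), np.cos(t), np.sin(2 * t)])  # disturbances
    z = z + dt * (u + w)

final_err = np.abs(z - z0).max()
```

In runs of this sketch, the errors settle into a small neighborhood of the leader, and the gains stop updating once the stacked error norm drops below $\varepsilon$, which mirrors the cost-saving effect described in Remark 1.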

3. Main Result

3.1. Adaptive Control Protocol

Theorem 1.
If the HMAS satisfies Assumptions 1–3, the HMAS can achieve QC under the adaptive control protocol (10) and adaptive laws (11) and (12).
Proof: 
When $\|\xi\| > \varepsilon$, $\pi_i = 1$. We construct the following Lyapunov function to achieve the QC of the leader-following HMAS:
$$V(t) = \frac{1}{2}\sum_{i=1}^{N}\xi_i^T(t)\xi_i(t) + \frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N}\frac{c\,e^{-2\beta t}\big(L_{ij}(t)+L_{ij}\big)^2}{2\kappa_{ij}} + \frac{1}{2}\sum_{i=1}^{N}\frac{c\,e^{-2\beta t}\big(r_i(t)-r_i\big)^2}{\gamma_i} \tag{15}$$
when $nT \le t \le nT + \sigma T$.
Taking the derivative of (15) along (14), we can obtain
$$\begin{aligned} \dot V(t) ={}& \sum_{i=1}^{N}\xi_i^T(t)\dot\xi_i(t) + \frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N}\frac{(-2\beta c)e^{-2\beta t}\big(L_{ij}(t)+L_{ij}\big)^2}{2\kappa_{ij}} + \sum_{i=1}^{N}\sum_{j=1}^{N}\frac{c\,e^{-2\beta t}\big(L_{ij}(t)+L_{ij}\big)}{2\kappa_{ij}}\dot L_{ij}(t) \\ &+ \frac{1}{2}\sum_{i=1}^{N}\frac{(-2\beta c)e^{-2\beta t}\big(r_i(t)-r_i\big)^2}{\gamma_i} + \sum_{i=1}^{N}\frac{c\,e^{-2\beta t}\big(r_i(t)-r_i\big)}{\gamma_i}\dot r_i(t) \end{aligned} \tag{16}$$
Substituting (11), (12) and (14) gives
$$\begin{aligned} \dot V(t) ={}& \sum_{i=1}^{N}\xi_i^T(t)C_i\xi_i(t) + \sum_{i=1}^{N}\xi_i^T(t)D_i f(\xi_i(t),t) + \sum_{i=1}^{N}\xi_i^T(t)h_i(z_0(t),t) + \sum_{i=1}^{N}\xi_i^T(t)\big(\varpi_i(t)-\psi(t)\big) \\ &- \sum_{i=1}^{N}\xi_i^T(t)\Big(c\sum_{j=1}^{N}L_{ij}(t)\xi_j(t) + r_i(t)\xi_i(t)\Big) + \frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N}\frac{(-2\beta c)e^{-2\beta t}\big(L_{ij}(t)+L_{ij}\big)^2}{2\kappa_{ij}} \\ &+ \sum_{i=1}^{N}\sum_{j=1}^{N}\frac{c\,e^{-2\beta t}\big(L_{ij}(t)+L_{ij}\big)}{2\kappa_{ij}}\,\kappa_{ij}e^{2\beta t}\big(z_i(t)-z_j(t)\big)^T\big(z_i(t)-z_j(t)\big) \\ &+ \frac{1}{2}\sum_{i=1}^{N}\frac{(-2\beta c)e^{-2\beta t}\big(r_i(t)-r_i\big)^2}{\gamma_i} + \sum_{i=1}^{N}\frac{c\,e^{-2\beta t}\big(r_i(t)-r_i\big)}{\gamma_i}\,\gamma_i e^{2\beta t}\big(z_i(t)-z_0(t)\big)^T\big(z_i(t)-z_0(t)\big) \\ ={}& \sum_{i=1}^{N}\xi_i^T(t)C_i\xi_i(t) + \sum_{i=1}^{N}\xi_i^T(t)D_i f(\xi_i(t),t) + \sum_{i=1}^{N}\xi_i^T(t)h_i(z_0(t),t) + \sum_{i=1}^{N}\xi_i^T(t)\big(\varpi_i(t)-\psi(t)\big) \\ &- \sum_{i=1}^{N}\xi_i^T(t)\Big(c\sum_{j=1}^{N}L_{ij}(t)\xi_j(t) + r_i(t)\xi_i(t)\Big) + \frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N}\frac{(-2\beta c)e^{-2\beta t}\big(L_{ij}(t)+L_{ij}\big)^2}{2\kappa_{ij}} \\ &+ \frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N}c\big(L_{ij}(t)+L_{ij}\big)\big(z_i(t)-z_j(t)\big)^T\big(z_i(t)-z_j(t)\big) \\ &+ \frac{1}{2}\sum_{i=1}^{N}\frac{(-2\beta c)e^{-2\beta t}\big(r_i(t)-r_i\big)^2}{\gamma_i} + \sum_{i=1}^{N}c\big(r_i(t)-r_i\big)\big(z_i(t)-z_0(t)\big)^T\big(z_i(t)-z_0(t)\big) \end{aligned}$$
We can define the Laplacian matrix $\Omega = (\tau_{ij})_{N\times N}$, where $\tau_{ij} = -L_{ij}$ for $i \neq j$ and $\tau_{ii} = -\sum_{j=1, j\neq i}^{N}\tau_{ij}$. Through Lemma 1, one can obtain
$$\frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N}c\big(L_{ij}(t)+L_{ij}\big)\big(z_i(t)-z_j(t)\big)^T\big(z_i(t)-z_j(t)\big) = c\sum_{i=1}^{N}\sum_{j=1}^{N}L_{ij}(t)\,\xi_i^T\xi_j + c\sum_{i=1}^{N}\sum_{j=1}^{N}\tau_{ij}\,\xi_i^T\xi_j$$
Then one can obtain
$$\begin{aligned} \dot V(t) ={}& \sum_{i=1}^{N}\xi_i^T(t)C_i\xi_i(t) + \sum_{i=1}^{N}\xi_i^T(t)D_i f(\xi_i(t),t) + \sum_{i=1}^{N}\xi_i^T(t)h_i(z_0(t),t) + \sum_{i=1}^{N}\xi_i^T(t)\big(\varpi_i(t)-\psi(t)\big) \\ &+ \frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N}\frac{(-2\beta c)e^{-2\beta t}\big(L_{ij}(t)+L_{ij}\big)^2}{2\kappa_{ij}} - c\sum_{i=1}^{N}\sum_{j=1}^{N}\tau_{ij}\,\xi_i^T\xi_j \\ &+ \frac{1}{2}\sum_{i=1}^{N}\frac{(-2\beta c)e^{-2\beta t}\big(r_i(t)-r_i\big)^2}{\gamma_i} - c\sum_{i=1}^{N}r_i\,\xi_i^T\xi_i \end{aligned}$$
Through Lemma 3 and Assumption 1, one can obtain
$$\sum_{i=1}^{N}\xi_i^T(t)D_i f(\xi_i(t),t) = \sum_{i=1}^{N}\xi_i^T(t)D_i\big(f(z_i(t),t)-f(z_0(t),t)\big) \le \frac{1}{2}\sum_{i=1}^{N}\xi_i^T(t)D_iD_i^T\xi_i(t) + \frac{1}{2}\sum_{i=1}^{N}\big\|f(z_i(t),t)-f(z_0(t),t)\big\|_2^2 \le \frac{1}{2}\sum_{i=1}^{N}\xi_i^T(t)\big(D_iD_i^T + l^2 I_n\big)\xi_i(t)$$
and
$$\sum_{i=1}^{N}\xi_i^T(t)h_i(z_0(t),t) \le \frac{1}{2}\sum_{i=1}^{N}\xi_i^T(t)\xi_i(t) + \frac{1}{2}\sum_{i=1}^{N}\big\|h_i(z_0(t),t)\big\|_2^2$$
and
$$\sum_{i=1}^{N}\xi_i^T(t)\big(\varpi_i(t)-\psi(t)\big) \le \frac{1}{2}\sum_{i=1}^{N}\xi_i^T(t)\xi_i(t) + \frac{1}{2}\sum_{i=1}^{N}S_i^2$$
then
$$\begin{aligned} \dot V(t) \le{}& \sum_{i=1}^{N}\xi_i^T(t)C_i\xi_i(t) + \frac{1}{2}\sum_{i=1}^{N}\xi_i^T(t)\big(D_iD_i^T + (l^2+2)I_n - 2r_i I_n\big)\xi_i(t) + \frac{1}{2}\sum_{i=1}^{N}\big\|h_i(z_0(t),t)\big\|_2^2 + \frac{1}{2}\sum_{i=1}^{N}S_i^2 \\ &- c\sum_{i=1}^{N}\sum_{j=1}^{N}\tau_{ij}\,\xi_i^T(t)\xi_j(t) + \frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N}\frac{(-2\beta c)e^{-2\beta t}\big(L_{ij}(t)+L_{ij}\big)^2}{2\kappa_{ij}} + \frac{1}{2}\sum_{i=1}^{N}\frac{(-2\beta c)e^{-2\beta t}\big(r_i(t)-r_i\big)^2}{\gamma_i} \end{aligned}$$
We define $\omega_2 = \frac{1}{2}\sum_{i=1}^{N}\|h_i(z_0(t),t)\|_2^2 + \frac{1}{2}\sum_{i=1}^{N}S_i^2$, and let $\Lambda$ be the diagonal matrix of the eigenvalues of $\Omega$. There is a unitary matrix $U = (u_1, \ldots, u_N)$ such that $U^T\Omega U = \Lambda$. Let $y(t) = (U^T \otimes I_n)\xi(t)$, so that
$$\begin{aligned} \dot V(t) \le{}& \xi^T(t)\Big(C + \frac{1}{2}\big(DD^T + I_N\otimes(l^2+2)I_n - 2(R\otimes I_n)\big) - c(\Omega\otimes I_n)\Big)\xi(t) + \omega_2 \\ &+ \frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N}\frac{(-2\beta c)e^{-2\beta t}\big(L_{ij}(t)+L_{ij}\big)^2}{2\kappa_{ij}} + \frac{1}{2}\sum_{i=1}^{N}\frac{(-2\beta c)e^{-2\beta t}\big(r_i(t)-r_i\big)^2}{\gamma_i} \\ ={}& \xi^T(t)\Big(C + \frac{1}{2}\big(DD^T + I_N\otimes(l^2+2)I_n - 2(R\otimes I_n)\big)\Big)\xi(t) - c\,y^T(t)(\Lambda\otimes I_n)y(t) + \omega_2 \\ &+ \frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N}\frac{(-2\beta c)e^{-2\beta t}\big(L_{ij}(t)+L_{ij}\big)^2}{2\kappa_{ij}} + \frac{1}{2}\sum_{i=1}^{N}\frac{(-2\beta c)e^{-2\beta t}\big(r_i(t)-r_i\big)^2}{\gamma_i} \end{aligned}$$
where $C = \mathrm{diag}(C_1, C_2, \ldots, C_N)$, $D = \mathrm{diag}(D_1, D_2, \ldots, D_N)$ and $R = \mathrm{diag}(r_1, r_2, \ldots, r_N)$.
Through Lemma 1, since $I_n$ is positive definite, we can obtain $y^T(t)(\Lambda \otimes I_n)y(t) \ge \lambda_2(\Omega)\, y^T(t)(I_N \otimes I_n)y(t)$; hence,
$$\begin{aligned} \dot V(t) \le{}& \xi^T(t)\Big(C + \frac{1}{2}\big(DD^T + I_N\otimes(l^2+2)I_n - 2(R\otimes I_n)\big)\Big)\xi(t) - c\lambda_2(\Omega)\,y^T(t)(I_N\otimes I_n)y(t) + \omega_2 \\ &+ \frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N}\frac{(-2\beta c)e^{-2\beta t}\big(L_{ij}(t)+L_{ij}\big)^2}{2\kappa_{ij}} + \frac{1}{2}\sum_{i=1}^{N}\frac{(-2\beta c)e^{-2\beta t}\big(r_i(t)-r_i\big)^2}{\gamma_i} \\ ={}& \xi^T(t)\Big(C + \frac{1}{2}\big(DD^T + I_N\otimes(l^2+2)I_n - 2(R\otimes I_n)\big)\Big)\xi(t) - c\lambda_2(\Omega)\,\xi^T(t)(U\otimes I_n)(I_N\otimes I_n)(U^T\otimes I_n)\xi(t) + \omega_2 \\ &+ \frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N}\frac{(-2\beta c)e^{-2\beta t}\big(L_{ij}(t)+L_{ij}\big)^2}{2\kappa_{ij}} + \frac{1}{2}\sum_{i=1}^{N}\frac{(-2\beta c)e^{-2\beta t}\big(r_i(t)-r_i\big)^2}{\gamma_i} \\ ={}& \xi^T(t)\Big(C + \frac{1}{2}\big(DD^T + I_N\otimes(l^2+2)I_n - 2(R\otimes I_n)\big) - c\lambda_2(\Omega)(I_N\otimes I_n)\Big)\xi(t) + \omega_2 \\ &+ \frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N}\frac{(-2\beta c)e^{-2\beta t}\big(L_{ij}(t)+L_{ij}\big)^2}{2\kappa_{ij}} + \frac{1}{2}\sum_{i=1}^{N}\frac{(-2\beta c)e^{-2\beta t}\big(r_i(t)-r_i\big)^2}{\gamma_i} \end{aligned}$$
$$\dot V(t) \le \xi^T(t)\Big(C + \frac{1}{2}\big(DD^T + I_N\otimes(l^2+2)I_n - 2(R\otimes I_n)\big) - c\lambda_2(\Omega)(I_N\otimes I_n) + \beta(I_N\otimes I_n)\Big)\xi(t) - \beta\xi^T(t)\xi(t) + \omega_2 + \frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N}\frac{(-2\beta c)e^{-2\beta t}\big(L_{ij}(t)+L_{ij}\big)^2}{2\kappa_{ij}} + \frac{1}{2}\sum_{i=1}^{N}\frac{(-2\beta c)e^{-2\beta t}\big(r_i(t)-r_i\big)^2}{\gamma_i}$$
We can choose $L_{ij}$ and $r_i$ large enough to meet the condition
$$C + \frac{1}{2}\big(DD^T + I_N\otimes(l^2+2)I_n - 2(R\otimes I_n)\big) - c\lambda_2(\Omega)(I_N\otimes I_n) + \beta(I_N\otimes I_n) \le 0$$
One can obtain
$$\begin{aligned} \dot V(t) \le{}& -\beta\sum_{i=1}^{N}\xi_i^T(t)\xi_i(t) + \frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N}\frac{(-2\beta c)e^{-2\beta t}\big(L_{ij}(t)+L_{ij}\big)^2}{2\kappa_{ij}} + \frac{1}{2}\sum_{i=1}^{N}\frac{(-2\beta c)e^{-2\beta t}\big(r_i(t)-r_i\big)^2}{\gamma_i} + \omega_2 \\ ={}& -\beta\sum_{i=1}^{N}\xi_i^T(t)\xi_i(t) - \sum_{i=1}^{N}\sum_{j=1}^{N}\frac{\beta c\,e^{-2\beta t}\big(L_{ij}(t)+L_{ij}\big)^2}{2\kappa_{ij}} - \sum_{i=1}^{N}\frac{\beta c\,e^{-2\beta t}\big(r_i(t)-r_i\big)^2}{\gamma_i} + \omega_2 \\ ={}& -2\beta\Big(\frac{1}{2}\sum_{i=1}^{N}\xi_i^T(t)\xi_i(t) + \frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N}\frac{c\,e^{-2\beta t}\big(L_{ij}(t)+L_{ij}\big)^2}{2\kappa_{ij}} + \frac{1}{2}\sum_{i=1}^{N}\frac{c\,e^{-2\beta t}\big(r_i(t)-r_i\big)^2}{\gamma_i}\Big) + \omega_2 \\ ={}& -g_1 V(t) + \omega_2 \end{aligned}$$
where $g_1 = 2\beta$.
when $nT + \sigma T \le t \le (n+1)T$.
Similar to the discussion above, we have
$$\dot V(t) \le \xi^T(t)\Big(C + \frac{1}{2}\big(DD^T + I_N\otimes(l^2+2)I_n\big)\Big)\xi(t) + \omega_2 + \frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N}\frac{(-2\beta c)e^{-2\beta t}\big(L_{ij}(t)+L_{ij}\big)^2}{2\kappa_{ij}} + \frac{1}{2}\sum_{i=1}^{N}\frac{(-2\beta c)e^{-2\beta t}\big(r_i(t)-r_i\big)^2}{\gamma_i}$$
$$\dot V(t) \le \xi^T(t)\Big(C + \frac{1}{2}\big(DD^T + I_N\otimes(l^2+2)I_n\big) + (\beta-d)(I_N\otimes I_n)\Big)\xi(t) + (d-\beta)\xi^T(t)\xi(t) + \omega_2 + \frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N}\frac{(-2\beta c)e^{-2\beta t}\big(L_{ij}(t)+L_{ij}\big)^2}{2\kappa_{ij}} + \frac{1}{2}\sum_{i=1}^{N}\frac{(-2\beta c)e^{-2\beta t}\big(r_i(t)-r_i\big)^2}{\gamma_i}$$
where d β > 0 . One can obtain
$$\begin{aligned} \dot V(t) \le{}& \xi^T(t)\Big(C + \frac{1}{2}\big(DD^T + I_N\otimes(l^2+2)I_n\big) + (\beta-d)(I_N\otimes I_n)\Big)\xi(t) + (d-\beta)\xi^T(t)\xi(t) + \omega_2 \\ &+ \sum_{i=1}^{N}\sum_{j=1}^{N}\frac{(-\beta c)e^{-2\beta t}\big(L_{ij}(t)+L_{ij}\big)^2}{2\kappa_{ij}} + \sum_{i=1}^{N}\frac{(-\beta c)e^{-2\beta t}\big(r_i(t)-r_i\big)^2}{\gamma_i} \\ &+ \sum_{i=1}^{N}\sum_{j=1}^{N}\frac{d c\,e^{-2\beta t}\big(L_{ij}(t)+L_{ij}\big)^2}{2\kappa_{ij}} + \sum_{i=1}^{N}\frac{d c\,e^{-2\beta t}\big(r_i(t)-r_i\big)^2}{\gamma_i} \end{aligned}$$
We choose $d$ and $\beta$ to meet the condition $C + \frac{1}{2}\big(DD^T + I_N\otimes(l^2+2)I_n\big) + (\beta-d)(I_N\otimes I_n) \le 0$. One can obtain
$$\begin{aligned} \dot V(t) \le{}& \xi^T(t)\Big(C + \frac{1}{2}\big(DD^T + I_N\otimes(l^2+2)I_n\big) + (\beta-d)(I_N\otimes I_n)\Big)\xi(t) + (d-\beta)\xi^T(t)\xi(t) + \omega_2 \\ &+ \sum_{i=1}^{N}\sum_{j=1}^{N}\frac{(d-\beta)c\,e^{-2\beta t}\big(L_{ij}(t)+L_{ij}\big)^2}{2\kappa_{ij}} + \sum_{i=1}^{N}\frac{(d-\beta)c\,e^{-2\beta t}\big(r_i(t)-r_i\big)^2}{\gamma_i} \\ \le{}& (d-\beta)\sum_{i=1}^{N}\xi_i^T(t)\xi_i(t) + \sum_{i=1}^{N}\sum_{j=1}^{N}\frac{(d-\beta)c\,e^{-2\beta t}\big(L_{ij}(t)+L_{ij}\big)^2}{2\kappa_{ij}} + \sum_{i=1}^{N}\frac{(d-\beta)c\,e^{-2\beta t}\big(r_i(t)-r_i\big)^2}{\gamma_i} + \omega_2 \\ ={}& 2(d-\beta)\Big(\frac{1}{2}\sum_{i=1}^{N}\xi_i^T(t)\xi_i(t) + \frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N}\frac{c\,e^{-2\beta t}\big(L_{ij}(t)+L_{ij}\big)^2}{2\kappa_{ij}} + \frac{1}{2}\sum_{i=1}^{N}\frac{c\,e^{-2\beta t}\big(r_i(t)-r_i\big)^2}{\gamma_i}\Big) + \omega_2 \\ ={}& g_2 V(t) + \omega_2 \end{aligned}$$
where $g_2 = 2(d-\beta)$.
Combining (22) and (24), we have
$$\begin{cases} \dot V(t) \le -g_1 V(t) + \omega_2, & nT \le t \le nT+\sigma T \\ \dot V(t) \le g_2 V(t) + \omega_2, & nT+\sigma T \le t \le (n+1)T \end{cases}$$
Through Lemma 2, when $nT \le t \le nT+\sigma T$, we have the following:
$$V(\xi(t)) \le V(\xi(nT))\exp\big(-g_1(t-nT)\big) + \frac{\omega_2}{g_1}$$
Simultaneously, when $nT+\sigma T \le t \le (n+1)T$, we have
$$V(\xi(t)) \le \Big(V(\xi(nT+\sigma T)) + \frac{\omega_2}{g_2}\Big)\exp\big(g_2(t-nT-\sigma T)\big) - \frac{\omega_2}{g_2}$$
Combining this with (27), we have
$$\begin{cases} V(t) \le V(nT)\exp\big(-g_1(t-nT)\big) + \dfrac{\omega_2}{g_1}, & nT \le t \le nT+\sigma T \\ V(t) \le \Big(V(nT+\sigma T) + \dfrac{\omega_2}{g_2}\Big)\exp\big(g_2(t-nT-\sigma T)\big) - \dfrac{\omega_2}{g_2}, & nT+\sigma T \le t \le (n+1)T \end{cases}$$
Similar to the discussion in [37], when $t \ge 0$, if $1 > g_1\sigma - g_2(1-\sigma) > 0$, we obtain the following inequality:
$$V(t) \le V(0)\exp\big(-g_1\sigma t + g_2(1-\sigma)t\big) + \frac{\omega_2}{g_1}\Big(1 + \sum_{i=1}^{n}\exp\big(i g_2(1-\sigma)T - i g_1\sigma T\big)\Big)$$
So, we can obtain:
$$\frac{1}{2}\sum_{i=1}^{N}\xi_i^T(t)\xi_i(t) + \frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N}\frac{c\,e^{-2\beta t}\big(L_{ij}(t)+L_{ij}\big)^2}{2\kappa_{ij}} + \frac{1}{2}\sum_{i=1}^{N}\frac{c\,e^{-2\beta t}\big(r_i(t)-r_i\big)^2}{\gamma_i} \le V(0)\exp\big(-g_1\sigma t + g_2(1-\sigma)t\big) + \frac{\omega_2}{g_1}\Big(1 + \sum_{i=1}^{n}\exp\big(i g_2(1-\sigma)T - i g_1\sigma T\big)\Big)$$
and hence
$$\frac{1}{2}\|\xi(t)\|_2^2 \le V(0)\exp\big(-g_1\sigma t + g_2(1-\sigma)t\big) + \frac{\omega_2}{g_1}\Big(1 + \sum_{i=1}^{n}\exp\big(i g_2(1-\sigma)T - i g_1\sigma T\big)\Big)$$
When $n \to +\infty$, one has:
$$1 + \sum_{i=1}^{\infty}\exp\big(i g_2(1-\sigma)T - i g_1\sigma T\big) = \frac{1}{1 - \exp\big(g_2(1-\sigma)T - g_1\sigma T\big)}$$
Then, the margin of error convergence can be obtained:
$$\|\xi(t)\| \le \sqrt{\frac{2\omega_2}{g_1}\cdot\frac{1}{1 - \exp\big(g_2(1-\sigma)T - g_1\sigma T\big)}}$$
So far, it has been proved that the leader (7) and follower system (8) achieve QC, and the error range bounds of consensus are obtained. □
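A quick numeric helper (a sketch, assuming the convergence margin takes the form $\|\xi\| \le \sqrt{(2\omega_2/g_1)\,/\,(1-\exp(g_2(1-\sigma)T - g_1\sigma T))}$, as derived above) shows how the margin depends on the working-time ratio $\sigma$:

```python
import math

def qc_error_bound(g1, g2, sigma, T, w2):
    """Quasi-consensus margin from Theorem 1; needs g1*sigma > g2*(1-sigma)."""
    if g1 * sigma - g2 * (1.0 - sigma) <= 0:
        raise ValueError("need g1*sigma > g2*(1 - sigma)")
    q = math.exp(g2 * (1.0 - sigma) * T - g1 * sigma * T)  # series ratio < 1
    return math.sqrt(2.0 * w2 / g1 / (1.0 - q))

# the longer the control stays active in each period, the tighter the margin
loose = qc_error_bound(g1=2.0, g2=0.5, sigma=0.6, T=1.0, w2=0.1)
tight = qc_error_bound(g1=2.0, g2=0.5, sigma=0.9, T=1.0, w2=0.1)
```

When $\sigma$ is too small, the condition $g_1\sigma > g_2(1-\sigma)$ fails and no finite margin is guaranteed, which matches the role of the working-time ratio in the theorem.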

3.2. Adaptive Pinning Control

The control protocol (10) and adaptive laws (11) and (12) act on the whole network: every follower exchanges information with the leader. This is impractical and costly in engineering applications. Under large-scale tracking control, maintaining information exchange between the leader and all followers is exceedingly expensive and greatly hinders the application of distributed control methods. Therefore, this paper further studies pinning control protocols, in which the adaptive laws are applied only to part of the coupling topology and the leader interacts with only some of the followers. We propose three pinning control schemes.
Pinning 1.
We use the following control protocol together with adaptive laws (11) and (12):
$$u_i(t) = \begin{cases} c \sum_{j=1}^{N} L_{ij}(t)\big(z_j(t) - z_i(t)\big) - \ell_i r_i(t)\big(z_i(t) - z_0(t)\big), & nT \le t \le nT+\sigma T \\ 0, & nT+\sigma T \le t \le (n+1)T \end{cases} \tag{33}$$
where $\ell_i = 1$ for $i = 1, 2, \ldots, N_S$ and $\ell_i = 0$ for $i = N_S+1, \ldots, N$.
Theorem 2.
If the HMAS satisfies Assumptions 1–3, the HMAS can achieve QC under the adaptive control protocol (33) and adaptive laws (11) and (12).
Proof. 
When $\|\xi\| > \varepsilon$, $\pi_i = 1$. We construct the following Lyapunov function to achieve the QC of the leader-following HMAS:
$$V(t) = \frac{1}{2}\sum_{i=1}^{N}\xi_i^T(t)\xi_i(t) + \frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N}\frac{c\,e^{-2\beta t}\big(L_{ij}(t)+L_{ij}\big)^2}{2\kappa_{ij}} + \frac{1}{2}\sum_{i=1}^{N_S}\frac{c\,e^{-2\beta t}\big(r_i(t)-r_i\big)^2}{\gamma_i} \tag{34}$$
when $nT \le t \le nT+\sigma T$.
Taking the derivative of (34), we have
$$\dot V(t) = \sum_{i=1}^{N}\xi_i^T(t)\dot\xi_i(t) + \frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N}\frac{(-2\beta c)e^{-2\beta t}\big(L_{ij}(t)+L_{ij}\big)^2}{2\kappa_{ij}} + \sum_{i=1}^{N}\sum_{j=1}^{N}\frac{c\,e^{-2\beta t}\big(L_{ij}(t)+L_{ij}\big)}{2\kappa_{ij}}\dot L_{ij}(t) + \frac{1}{2}\sum_{i=1}^{N_S}\frac{(-2\beta c)e^{-2\beta t}\big(r_i(t)-r_i\big)^2}{\gamma_i} + \sum_{i=1}^{N_S}\frac{c\,e^{-2\beta t}\big(r_i(t)-r_i\big)}{\gamma_i}\dot r_i(t)$$
Proceeding as in Theorem 1,
$$\dot V(t) \le \sum_{i=1}^{N}\xi_i^T(t)C_i\xi_i(t) + \frac{1}{2}\sum_{i=1}^{N}\xi_i^T(t)\big(D_iD_i^T + (l^2+2)I_n - 2\ell_i r_i I_n\big)\xi_i(t) - c\sum_{i=1}^{N}\sum_{j=1}^{N}\tau_{ij}\,\xi_i^T(t)\xi_j(t) + \omega_2 + \frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N}\frac{(-2\beta c)e^{-2\beta t}\big(L_{ij}(t)+L_{ij}\big)^2}{2\kappa_{ij}} + \frac{1}{2}\sum_{i=1}^{N_S}\frac{(-2\beta c)e^{-2\beta t}\big(r_i(t)-r_i\big)^2}{\gamma_i}$$
$$\begin{aligned} \dot V(t) \le{}& \xi^T(t)\Big(C + \frac{1}{2}\big(DD^T + I_N\otimes(l^2+2)I_n - 2(\tilde R\otimes I_n)\big) - c\lambda_2(\Omega)(I_N\otimes I_n) + \beta(I_N\otimes I_n)\Big)\xi(t) - \beta\xi^T(t)\xi(t) + \omega_2 \\ &+ \frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N}\frac{(-2\beta c)e^{-2\beta t}\big(L_{ij}(t)+L_{ij}\big)^2}{2\kappa_{ij}} + \frac{1}{2}\sum_{i=1}^{N_S}\frac{(-2\beta c)e^{-2\beta t}\big(r_i(t)-r_i\big)^2}{\gamma_i} \\ \le{}& -2\beta\Big(\frac{1}{2}\sum_{i=1}^{N}\xi_i^T(t)\xi_i(t) + \frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N}\frac{c\,e^{-2\beta t}\big(L_{ij}(t)+L_{ij}\big)^2}{2\kappa_{ij}} + \frac{1}{2}\sum_{i=1}^{N_S}\frac{c\,e^{-2\beta t}\big(r_i(t)-r_i\big)^2}{\gamma_i}\Big) + \omega_2 = -g_1 V(t) + \omega_2 \end{aligned}$$
where $\tilde R = \mathrm{diag}(r_1, r_2, \ldots, r_{N_S}, 0, \ldots, 0)$.
When n T + σ T t ( n + 1 ) T , it is similar to Theorem 1. Hence, we have
$$\begin{cases} \dot V(t) \le -g_1 V(t) + \omega_2, & nT \le t \le nT+\sigma T \\ \dot V(t) \le g_2 V(t) + \omega_2, & nT+\sigma T \le t \le (n+1)T \end{cases}$$
The rest of the proof is the same as Theorem 1. □
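As a sanity check on the core idea of Pinning 1 — that feeding the leader's state to only $N_S$ followers can still steer the whole group — the sketch below simulates the linearized error dynamics with fixed gains and no intermittency or adaptation. This is a deliberate simplification for illustration only; the actual protocol (33) is adaptive and periodic, and the graph and gain values here are arbitrary.

```python
import numpy as np

# Pinned consensus with fixed gains: only follower 0 receives the leader
# feedback; the others are reached through the graph coupling alone.
N, Ns = 4, 1
c, r = 1.0, 2.0
L = np.array([[ 1., -1.,  0.,  0.],    # Laplacian of a 4-node path graph
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [ 0.,  0., -1.,  1.]])
ell = np.zeros(N)
ell[:Ns] = 1.0                         # pinning indicator ell_i from (33)
M = c * L + r * np.diag(ell)           # error dynamics: dxi/dt = -M xi

xi = np.array([2.0, -1.5, 1.0, 3.0])   # initial quasi-consensus errors
xi0 = np.abs(xi).max()
dt = 0.01
for _ in range(6000):                  # Euler integration on [0, 60]
    xi = xi + dt * (-M @ xi)
final = np.abs(xi).max()
```

Because the graph is connected and one node is pinned, $M$ is positive definite, so every error channel decays even though three of the four followers never see the leader directly.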
Pinning 2.
We use the control protocol (10), the adaptive law (11) and the following adaptive law:
$$\dot L_{ij}(t) = \kappa_{ij}\pi_i e^{2\beta t}\big(z_i(t)-z_j(t)\big)^T\big(z_i(t)-z_j(t)\big), \quad L_{ij}(0) = L_{ji}(0) > 0,\ (i,j)\in\tilde E \tag{36}$$
$$\pi_i = \begin{cases} 1, & \text{when } \|\xi\| > \varepsilon \\ 0, & \text{otherwise} \end{cases}$$
where $\tilde E$ is a subset of $E$ such that the graph $(V, \tilde E)$ is connected.
Theorem 3.
If the HMAS satisfies Assumptions 1–3, the HMAS can achieve QC under the adaptive control protocol (10) and adaptive laws (11) and (36).
Proof. 
When $\|\xi\| > \varepsilon$, $\pi_i = 1$. We construct the following Lyapunov function to achieve the QC of the leader-following HMAS:
$$V(t) = \frac{1}{2}\sum_{i=1}^{N}\xi_i^T(t)\xi_i(t) + \frac{1}{2}\sum_{i=1}^{N}\sum_{(i,j)\in\tilde E}\frac{c\,e^{-2\beta t}\big(L_{ij}(t)+\tilde L_{ij}\big)^2}{2\kappa_{ij}} + \frac{1}{2}\sum_{i=1}^{N}\frac{c\,e^{-2\beta t}\big(r_i(t)-r_i\big)^2}{\gamma_i} \tag{39}$$
where $\tilde L_{ij} = \tilde L_{ji} > 0$ for $(i,j)\in\tilde E$ and $\tilde L_{ij} = 0$ otherwise. Let $\tilde\Omega = (\tilde\tau_{ij})_{N\times N}$, where $\tilde\tau_{ij} = -\tilde L_{ij}$ for $i\neq j$ and $\tilde\tau_{ii} = -\sum_{j=1, j\neq i}^{N}\tilde\tau_{ij}$; then,
$$G_{ij} = \begin{cases} L_{ij}(0), & (i,j)\in E\setminus\tilde E \\ -\sum_{k=1, k\neq i}^{N}L_{ik}(0), & i = j \\ 0, & \text{otherwise} \end{cases}$$
when $nT \le t \le nT+\sigma T$.
Taking the derivative of (39), we have
$$\dot V(t) = \sum_{i=1}^{N}\xi_i^T(t)\dot\xi_i(t) + \frac{1}{2}\sum_{i=1}^{N}\sum_{(i,j)\in\tilde E}\frac{(-2\beta c)e^{-2\beta t}\big(L_{ij}(t)+\tilde L_{ij}\big)^2}{2\kappa_{ij}} + \sum_{i=1}^{N}\sum_{(i,j)\in\tilde E}\frac{c\,e^{-2\beta t}\big(L_{ij}(t)+\tilde L_{ij}\big)}{2\kappa_{ij}}\dot L_{ij}(t) + \frac{1}{2}\sum_{i=1}^{N}\frac{(-2\beta c)e^{-2\beta t}\big(r_i(t)-r_i\big)^2}{\gamma_i} + \sum_{i=1}^{N}\frac{c\,e^{-2\beta t}\big(r_i(t)-r_i\big)}{\gamma_i}\dot r_i(t)$$
Proceeding as in Theorem 1,
$$\dot V(t) \le \sum_{i=1}^{N}\xi_i^T(t)C_i\xi_i(t) + \frac{1}{2}\sum_{i=1}^{N}\xi_i^T(t)\big(D_iD_i^T + (l^2+2)I_n - 2r_i I_n\big)\xi_i(t) + c\sum_{i=1}^{N}\sum_{j=1}^{N}G_{ij}\,\xi_i^T(t)\xi_j(t) - c\sum_{i=1}^{N}\sum_{(i,j)\in\tilde E}\tilde\tau_{ij}\,\xi_i^T(t)\xi_j(t) + \omega_2 + \frac{1}{2}\sum_{i=1}^{N}\sum_{(i,j)\in\tilde E}\frac{(-2\beta c)e^{-2\beta t}\big(L_{ij}(t)+\tilde L_{ij}\big)^2}{2\kappa_{ij}} + \frac{1}{2}\sum_{i=1}^{N}\frac{(-2\beta c)e^{-2\beta t}\big(r_i(t)-r_i\big)^2}{\gamma_i}$$
$$\begin{aligned} \dot V(t) \le{}& \xi^T(t)\Big(C + \frac{1}{2}\big(DD^T + I_N\otimes(l^2+2)I_n - 2(R\otimes I_n)\big) + c(G\otimes I_n) - c\lambda_2(\tilde\Omega)(I_N\otimes I_n) + \beta(I_N\otimes I_n)\Big)\xi(t) - \beta\xi^T(t)\xi(t) + \omega_2 \\ &+ \frac{1}{2}\sum_{i=1}^{N}\sum_{(i,j)\in\tilde E}\frac{(-2\beta c)e^{-2\beta t}\big(L_{ij}(t)+\tilde L_{ij}\big)^2}{2\kappa_{ij}} + \frac{1}{2}\sum_{i=1}^{N}\frac{(-2\beta c)e^{-2\beta t}\big(r_i(t)-r_i\big)^2}{\gamma_i} \\ \le{}& -2\beta\Big(\frac{1}{2}\sum_{i=1}^{N}\xi_i^T(t)\xi_i(t) + \frac{1}{2}\sum_{i=1}^{N}\sum_{(i,j)\in\tilde E}\frac{c\,e^{-2\beta t}\big(L_{ij}(t)+\tilde L_{ij}\big)^2}{2\kappa_{ij}} + \frac{1}{2}\sum_{i=1}^{N}\frac{c\,e^{-2\beta t}\big(r_i(t)-r_i\big)^2}{\gamma_i}\Big) + \omega_2 = -g_1 V(t) + \omega_2 \end{aligned}$$
where G = ( G i j ) N × N .
When n T + σ T t ( n + 1 ) T , it is similar to Theorem 1. And the rest of the proof is the same as Theorem 1. □
Pinning 3.
We consider the control protocol (33) and adaptive laws (11) and (36).
Theorem 4.
If the HMAS satisfies Assumptions 1–3, the HMAS can achieve QC under the adaptive control protocol (33) and adaptive laws (11) and (36).
Proof. 
When $\|\xi\| > \varepsilon$, $\pi_i = 1$. We construct the following Lyapunov function to achieve the QC of the leader-following HMAS:
$$V(t) = \frac{1}{2}\sum_{i=1}^{N}\xi_i^T(t)\xi_i(t) + \frac{1}{2}\sum_{i=1}^{N}\sum_{(i,j)\in\tilde E}\frac{c\,e^{-2\beta t}\big(L_{ij}(t)+\tilde L_{ij}\big)^2}{2\kappa_{ij}} + \frac{1}{2}\sum_{i=1}^{N_S}\frac{c\,e^{-2\beta t}\big(r_i(t)-r_i\big)^2}{\gamma_i} \tag{40}$$
when $nT \le t \le nT+\sigma T$.
Taking the derivative of (40), we have
$$\dot V(t) = \sum_{i=1}^{N}\xi_i^T(t)\dot\xi_i(t) + \frac{1}{2}\sum_{i=1}^{N}\sum_{(i,j)\in\tilde E}\frac{(-2\beta c)e^{-2\beta t}\big(L_{ij}(t)+\tilde L_{ij}\big)^2}{2\kappa_{ij}} + \sum_{i=1}^{N}\sum_{(i,j)\in\tilde E}\frac{c\,e^{-2\beta t}\big(L_{ij}(t)+\tilde L_{ij}\big)}{2\kappa_{ij}}\dot L_{ij}(t) + \frac{1}{2}\sum_{i=1}^{N_S}\frac{(-2\beta c)e^{-2\beta t}\big(r_i(t)-r_i\big)^2}{\gamma_i} + \sum_{i=1}^{N_S}\frac{c\,e^{-2\beta t}\big(r_i(t)-r_i\big)}{\gamma_i}\dot r_i(t)$$
Proceeding as in Theorem 1,
$$\dot V(t) \le \sum_{i=1}^{N}\xi_i^T(t)C_i\xi_i(t) + \frac{1}{2}\sum_{i=1}^{N}\xi_i^T(t)\big(D_iD_i^T + (l^2+2)I_n - 2\ell_i r_i I_n\big)\xi_i(t) + c\sum_{i=1}^{N}\sum_{j=1}^{N}G_{ij}\,\xi_i^T(t)\xi_j(t) - c\sum_{i=1}^{N}\sum_{(i,j)\in\tilde E}\tilde\tau_{ij}\,\xi_i^T(t)\xi_j(t) + \omega_2 + \frac{1}{2}\sum_{i=1}^{N}\sum_{(i,j)\in\tilde E}\frac{(-2\beta c)e^{-2\beta t}\big(L_{ij}(t)+\tilde L_{ij}\big)^2}{2\kappa_{ij}} + \frac{1}{2}\sum_{i=1}^{N_S}\frac{(-2\beta c)e^{-2\beta t}\big(r_i(t)-r_i\big)^2}{\gamma_i}$$
$$\begin{aligned} \dot V(t) \le{}& \xi^T(t)\Big(C + \frac{1}{2}\big(DD^T + I_N\otimes(l^2+2)I_n - 2(\tilde R\otimes I_n)\big) + c(G\otimes I_n) - c\lambda_2(\tilde\Omega)(I_N\otimes I_n) + \beta(I_N\otimes I_n)\Big)\xi(t) - \beta\xi^T(t)\xi(t) + \omega_2 \\ &+ \frac{1}{2}\sum_{i=1}^{N}\sum_{(i,j)\in\tilde E}\frac{(-2\beta c)e^{-2\beta t}\big(L_{ij}(t)+\tilde L_{ij}\big)^2}{2\kappa_{ij}} + \frac{1}{2}\sum_{i=1}^{N_S}\frac{(-2\beta c)e^{-2\beta t}\big(r_i(t)-r_i\big)^2}{\gamma_i} \\ \le{}& -2\beta\Big(\frac{1}{2}\sum_{i=1}^{N}\xi_i^T(t)\xi_i(t) + \frac{1}{2}\sum_{i=1}^{N}\sum_{(i,j)\in\tilde E}\frac{c\,e^{-2\beta t}\big(L_{ij}(t)+\tilde L_{ij}\big)^2}{2\kappa_{ij}} + \frac{1}{2}\sum_{i=1}^{N_S}\frac{c\,e^{-2\beta t}\big(r_i(t)-r_i\big)^2}{\gamma_i}\Big) + \omega_2 = -g_1 V(t) + \omega_2 \end{aligned}$$
where $G = (G_{ij})_{N\times N}$ and $\tilde R = \mathrm{diag}(r_1, r_2, \ldots, r_{N_S}, 0, \ldots, 0)$.
When n T + σ T t ( n + 1 ) T , it is similar to Theorem 1. And the rest of the proof is the same as Theorem 1. □

4. Numerical Examples

In this part, we verify the effectiveness of the proposed control protocols using several simulation examples. Assume that the HMAS contains one leader agent and five follower agents. For the leader system (7), the external disturbance is $\psi(t) = (0, 0, 0)^T$; for the follower system (8), the external disturbance is $\varpi_i(t) = (0.1\sin t\cos t,\ 0.2\sin t,\ 0.3\cos t)^T$.
Example 1.
Assume the agent dynamics are described by a classical Chua circuit system model.
In the leader system (7), the state is $z_0(t) = (z_{01}, z_{02}, z_{03})^T$ and the nonlinear part is $f(z_0, t) = \big(0.5(|z_{01}+1| - |z_{01}-1|),\ 0,\ 0\big)^T$. The system matrices are chosen as
$$C_0 = \begin{pmatrix} -2.5 & 10 & 0 \\ 1 & -1 & 1 \\ 0 & -18 & 0 \end{pmatrix}, \quad D_0 = \begin{pmatrix} \frac{35}{6} & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}$$
For the follower system (8), the state is $z_i(t) = (z_{i1}, z_{i2}, z_{i3})^T$ and the nonlinear part is $f(z_i, t) = \big(0.5(|z_{i1}+1| - |z_{i1}-1|),\ 0,\ 0\big)^T$. Assume that the follower system matrices are
$$C_i = \begin{pmatrix} -2.5+0.2i & 10+0.3i & 0 \\ 1+0.1i & -1+0.1i & 1+0.1i \\ 0 & -18+0.3i & 0 \end{pmatrix}, \quad D_i = \begin{pmatrix} \frac{35}{6} & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}$$
For the state variables, we choose the initial values $z_0(0) = (2.9, 0.75, 0.1)^T$ and $z_i(0) = (10+2i, 4+i, 5+2i)^T$. The state trajectories of the HMAS without the control protocol are shown in Figure 1, from which it can be concluded that, without a control protocol, the quasi-consensus errors of the system keep growing.
The simulation results after adding the control protocol are shown in Figure 2.
Figure 2a shows the errors under the adaptive control protocol (10) and adaptive laws (11) and (12). We pick the arbitrary initial values $r(0) = (3.3, 4.1, 2.8, 1.6, 1.9)^T$, $\beta = 0.1$, $\gamma_i = (0.010, 0.011, 0.012, 0.013, 0.014)$, $\kappa_{12} = \kappa_{21} = 1.7$, $\kappa_{13} = \kappa_{31} = 1.5$, $\kappa_{14} = \kappa_{41} = 1.9$, $\kappa_{15} = \kappa_{51} = 1.3$, $\kappa_{24} = \kappa_{42} = 1.5$, and $\kappa_{45} = \kappa_{54} = 1.5$. It can be seen from Figure 2a that the errors between the leader system (7) and the follower system (8) converge to a bounded range, so the HMAS achieves QC via the adaptive control protocol (10) and adaptive laws (11) and (12).
Figure 2b shows the errors under the adaptive control protocol (33) and adaptive laws (11) and (12). Assume that the leader only exchanges information with nodes 1 and 2. We pick the arbitrary initial values $r(0) = (2.9, 3.6)^T$, $\beta = 0.1$, $\gamma_i = (0.10, 0.11)$, $\kappa_{12} = \kappa_{21} = 1.7$, $\kappa_{13} = \kappa_{31} = 1.5$, $\kappa_{14} = \kappa_{41} = 1.9$, $\kappa_{15} = \kappa_{51} = 1.3$, $\kappa_{24} = \kappa_{42} = 1.5$, and $\kappa_{45} = \kappa_{54} = 1.5$.
Figure 2c shows the errors under the adaptive control protocol (10) and adaptive laws (11), (12), and (36). We pick the arbitrary initial values $r(0) = (1.5, 1.3, 1.6, 1.1, 1.2)^T$, $\beta = 0.1$, $\gamma_i = (0.010, 0.011, 0.012, 0.013, 0.014)$, $\kappa_{14} = \kappa_{41} = 1.9$, $\kappa_{15} = \kappa_{51} = 1.3$, and $\kappa_{24} = \kappa_{42} = 1.5$.
Figure 2d shows the errors under the adaptive control protocol (33) and adaptive laws (11) and (36). Assume that the leader only exchanges information with nodes 1 and 2. We pick the arbitrary initial values $r(0) = (2.7, 3.5, 3.3)^T$, $\beta = 0.1$, $\gamma_i = (0.10, 0.11, 0.12)$, $\kappa_{14} = \kappa_{41} = 1.9$, $\kappa_{15} = \kappa_{51} = 1.5$, and $\kappa_{24} = \kappa_{42} = 1.5$.
It can be seen from Figure 2b–d that the errors between the leader system (7) and the follower system (8) converge to a bounded range, so the HMAS achieves QC under all three saturated adaptive pinning control protocols.
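The saturation mechanism at the heart of the adaptive laws (11) and (12) can be sketched as a single gain-update step. The snippet below is a schematic reconstruction, not the paper's exact protocol: the work/rest split (`period`, `duty`), the saturation threshold `eps`, and the quadratic update term are illustrative stand-ins chosen by us; only the qualitative behavior (gain frozen inside the error ball and during rest intervals) follows the description in this paper.

```python
def intermittent_gain_update(r, xi_norm, t, gamma,
                             period=1.0, duty=0.6, eps=0.05, dt=1e-3):
    """One Euler step of a saturated, periodically intermittent adaptive gain.

    r        : current adaptive feedback gain (or coupling weight)
    xi_norm  : current quasi-consensus error norm ||xi_i(t)||
    Work interval: t mod period < duty * period -> adaptation active.
    Saturation: once the error is inside the ball of radius eps,
    the gain is no longer updated, which bounds the control cost.
    """
    in_work_interval = (t % period) < duty * period
    if in_work_interval and xi_norm > eps:
        r = r + dt * gamma * xi_norm ** 2  # error-driven adaptive growth
    return r
```

Freezing the gain both outside the work interval and inside the error ball is what distinguishes this saturated intermittent law from a standard continuously updated adaptive gain, which would grow without bound under persistent bounded disturbances.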
Example 2.
Assume the agent dynamics are described by the classical Chen chaotic system model.
For the leader system (7), the state vector is $z_0(t) = (z_{01}, z_{02}, z_{03})^T$ and the nonlinear part is $f(z_0, t) = (0,\ z_{01} z_{03},\ z_{01} z_{02})^T$. The system matrices are chosen as
$$
C_0 = \begin{pmatrix} 28 & -28 & 0 \\ 7 & 35 & 0 \\ 0 & 3 & 0 \end{pmatrix}, \qquad
D_0 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}.
$$
For the follower system (8), the state vector is $z_i(t) = (z_{i1}, z_{i2}, z_{i3})^T$ and the nonlinear part is $f(z_i, t) = (0,\ z_{i1} z_{i3},\ z_{i1} z_{i2})^T$. The follower system matrices are
$$
C_i = \begin{pmatrix} 28 + i & -28 + i & 0 \\ 7 + i & 35 + 2i & 0 \\ 0 & 3 + i & 0 \end{pmatrix}, \qquad
D_i = D_0.
$$
For the state variables, we choose the initial values $z_0(0) = (9, 14, 20)^T$ and $z_i(0) = (9 + 2i,\ -14 + i,\ 20i)^T$. The state trajectories of the HMAS without the control protocol are shown in Figure 3. It can be seen from Figure 3 that, without the control protocol, the errors between the agents' states grow over time.
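The heterogeneity of the followers in Example 2 enters only through index-dependent matrix entries, so every $C_i$ can be built from the leader matrix plus an increment matrix. A minimal sketch (the helper name `follower_C` and the factoring into `Delta` are ours; the entries are those printed above):

```python
import numpy as np

# Leader matrix of Example 2 and the per-agent heterogeneity increment,
# read off the definition of C_i: entries 28+i, -28+i, 7+i, 35+2i, 3+i.
C0 = np.array([[28.0, -28.0, 0.0],
               [7.0, 35.0, 0.0],
               [0.0, 3.0, 0.0]])
Delta = np.array([[1.0, 1.0, 0.0],
                  [1.0, 2.0, 0.0],
                  [0.0, 1.0, 0.0]])

def follower_C(i):
    # C_i = C_0 + i * Delta for follower index i = 1..5.
    return C0 + i * Delta
```

Writing the followers as $C_i = C_0 + i\,\Delta$ makes explicit that the mismatch between leader and follower dynamics grows linearly with the agent index, which is what the quasi-consensus error bound has to absorb.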
The simulation results after adding the control protocol are shown in Figure 4.
Figure 4a shows the errors under the adaptive control protocol (10) and adaptive laws (11) and (12). We pick the arbitrary initial values $r(0) = (2.7, 3.2, 2.4, 3.1, 1.5)^T$, $\beta = 0.1$, $\gamma_i = (0.010, 0.011, 0.012, 0.013, 0.014)$, $\kappa_{12} = \kappa_{21} = 1.7$, $\kappa_{13} = \kappa_{31} = 1.5$, $\kappa_{14} = \kappa_{41} = 1.9$, $\kappa_{15} = \kappa_{51} = 1.3$, $\kappa_{24} = \kappa_{42} = 1.5$, and $\kappa_{45} = \kappa_{54} = 1.5$. It can be seen from Figure 4a that the errors between the leader system (7) and the follower system (8) converge to a bounded range, so the HMAS achieves QC via the adaptive control protocol (10) and adaptive laws (11) and (12).
Figure 4b shows the errors under the adaptive control protocol (33) and adaptive laws (11) and (12). Assume that the leader only exchanges information with nodes 1 and 2. We pick the arbitrary initial values $r(0) = (2.7, 3.2)^T$, $\beta = 0.1$, $\gamma_i = (0.10, 0.11)$, $\kappa_{12} = \kappa_{21} = 1.7$, $\kappa_{13} = \kappa_{31} = 1.5$, $\kappa_{14} = \kappa_{41} = 1.9$, $\kappa_{15} = \kappa_{51} = 1.3$, $\kappa_{24} = \kappa_{42} = 1.5$, and $\kappa_{45} = \kappa_{54} = 1.5$.
Figure 4c shows the errors under the adaptive control protocol (10) and adaptive laws (11), (12), and (36). We pick the arbitrary initial values $r(0) = (2.7, 3.2, 2.4, 3.1, 1.5)^T$, $\beta = 0.1$, $\gamma_i = (0.010, 0.011, 0.012, 0.013, 0.014)$, $\kappa_{14} = \kappa_{41} = 1.9$, $\kappa_{15} = \kappa_{51} = 1.3$, and $\kappa_{24} = \kappa_{42} = 1.5$.
Figure 4d shows the errors under the adaptive control protocol (33) and adaptive laws (11) and (36). Assume that the leader only exchanges information with nodes 1 and 2. We pick the arbitrary initial values $r(0) = (2.7, 3.2, 2.4)^T$, $\beta = 0.1$, $\gamma_i = (0.10, 0.11, 0.12)$, $\kappa_{14} = \kappa_{41} = 1.9$, $\kappa_{15} = \kappa_{51} = 1.5$, and $\kappa_{24} = \kappa_{42} = 1.5$.
It can be seen from Figure 4b–d that the errors between the leader system (7) and the follower system (8) converge to a bounded range, so the HMAS achieves QC under all three saturated adaptive pinning control protocols.
Example 3.
Assume the agent dynamics are described by a manipulator model with flexible joints, whose dynamics are as follows:
$$
\begin{cases}
\dot{\theta}_m = \omega_m \\[2pt]
\dot{\omega}_m = \dfrac{k}{J_m}\,(\theta_1 - \theta_m) - \dfrac{B}{J_m}\,\omega_m + \dfrac{k_\tau}{J_m}\,u \\[4pt]
\dot{\theta}_1 = \omega_1 \\[2pt]
\dot{\omega}_1 = \dfrac{k}{J_1}\,(\theta_m - \theta_1) - \dfrac{mgh}{J_1}\,\sin(\theta_1)
\end{cases}
$$
where $\theta_m, \omega_m$ denote the motor angle and angular velocity, and $\theta_1, \omega_1$ denote the link angle and angular velocity.
Let $z = (\theta_m, \omega_m, \theta_1, \omega_1)^T$. The nonlinear dynamics of the robotic arm system are then equivalent to
$$
\dot{z}(t) = A z(t) + f(z) + g(y)\,u(t).
$$
Let $g(y) = I_n$. For the leader system (7), the state vector is $z_0(t) = (z_{01}, z_{02}, z_{03}, z_{04})^T$ and the nonlinear part is $f(z_0, t) = \big(0,\ 0,\ 0,\ \tfrac{1}{3}\sin(\tfrac{z_{03}}{3})\big)^T$. The system matrices are chosen as
$$
C_0 = \begin{pmatrix} 0 & 1 & 0 & 0 \\ -48.6 & -1.25 & 48.6 & 0 \\ 0 & 0 & 0 & 1 \\ 19.5 & 0 & -19.5 & 0 \end{pmatrix}, \qquad
D_0 = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}.
$$
For the follower system (8), the state vector is $z_i(t) = (z_{i1}, z_{i2}, z_{i3}, z_{i4})^T$ and the nonlinear part is $f(z_i, t) = \big(0,\ 0,\ 0,\ \tfrac{1}{3}\sin(\tfrac{z_{i3}}{3})\big)^T$. The follower system matrices are
$$
C_i = \begin{pmatrix} 0 & 1 + 0.1i & 0 & 0 \\ -48.6 + 0.5i & -1.25 + 0.1i & 48.6 + 0.6i & 0 \\ 0 & 0 & 0 & 1 + 0.2i \\ 19.5 + 0.3i & 0 & -19.5 + 0.2i & 0 \end{pmatrix}, \qquad
D_i = D_0.
$$
For the state variables, we choose the initial values $z_0(0) = (0.2, 0.5, 0.7, 0.3)^T$ and $z_i(0) = (0.3 + 1.2i,\ 0.5 + 1.1i,\ 0.8 + 1.3i,\ 0.4 + 1.5i)^T$. The state trajectories of the HMAS without the control protocol are shown in Figure 5.
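The flexible-joint leader dynamics can be rolled out directly from the state-space form $\dot{z} = Cz + f(z)$ with $u = 0$. The sketch below is our own reconstruction: the minus signs in the matrix follow the spring-damper structure of the flexible-joint model above (the published rendering of the matrices loses some signs, so treat the sign pattern as our assumption), and the step size and horizon are illustrative.

```python
import numpy as np

# State z = (theta_m, omega_m, theta_1, omega_1). Entries follow Example 3;
# the sign pattern is restored from the spring (k/J) and damping (B/J)
# terms of the flexible-joint dynamics -- an assumption on our part.
C0 = np.array([[0.0, 1.0, 0.0, 0.0],
               [-48.6, -1.25, 48.6, 0.0],
               [0.0, 0.0, 0.0, 1.0],
               [19.5, 0.0, -19.5, 0.0]])

def f_nl(z):
    # Nonlinear part (0, 0, 0, (1/3) sin(z3/3))^T as given in Example 3.
    return np.array([0.0, 0.0, 0.0, np.sin(z[2] / 3.0) / 3.0])

def leader_step(z, dt=1e-3):
    # Forward-Euler step of the uncontrolled leader (u = 0, psi(t) = 0).
    return z + dt * (C0 @ z + f_nl(z))

def rollout(z0, T=2.0, dt=1e-3):
    z = np.array(z0, dtype=float)
    for _ in range(int(T / dt)):
        z = leader_step(z, dt)
    return z
```

With this sign pattern the spring couples motor and link angles and the motor damping dissipates energy, so a short roll-out from the small initial condition stays bounded, consistent with a physically meaningful manipulator model.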
The simulation results after adding the control protocol are shown in Figure 6.
Figure 6a shows the errors under the adaptive control protocol (10) and adaptive laws (11) and (12). We pick the arbitrary initial values $r(0) = (1.3, 2.1, 0.4, 0.7, 0.1)^T$, $\beta = 0.1$, $\gamma_i = (0.010, 0.009, 0.010, 0.008, 0.012)$, $\kappa_{12} = \kappa_{21} = 0.3$, $\kappa_{13} = \kappa_{31} = 0.3$, $\kappa_{14} = \kappa_{41} = 0.45$, $\kappa_{15} = \kappa_{51} = 0.55$, $\kappa_{23} = \kappa_{32} = 0.65$, $\kappa_{25} = \kappa_{52} = 0.6$, and $\kappa_{34} = \kappa_{43} = 0.4$. It can be seen from Figure 6a that the errors between the leader system (7) and the follower system (8) converge to a bounded range, so the HMAS achieves QC via the adaptive control protocol (10) and adaptive laws (11) and (12).
Figure 6b shows the errors under the adaptive control protocol (33) and adaptive laws (11) and (12). Assume that the leader only exchanges information with nodes 1 and 2. We pick the arbitrary initial values $r(0) = (1.3, 2.1)^T$, $\beta = 0.1$, $\gamma_i = (0.010, 0.009)$, $\kappa_{12} = \kappa_{21} = 0.3$, $\kappa_{13} = \kappa_{31} = 0.3$, $\kappa_{14} = \kappa_{41} = 0.45$, and $\kappa_{15} = \kappa_{51} = 0.55$.
Figure 6c shows the errors under the adaptive control protocol (10) and adaptive laws (11), (12), and (36). We pick the arbitrary initial values $r(0) = (1.3, 2.1, 0.4, 0.7, 0.1)^T$, $\beta = 0.1$, $\gamma_i = (0.010, 0.009, 0.010, 0.008, 0.012)$, $\kappa_{14} = \kappa_{41} = 1.9$, $\kappa_{15} = \kappa_{51} = 1.3$, and $\kappa_{24} = \kappa_{42} = 1.5$.
Figure 6d shows the errors under the adaptive control protocol (33) and adaptive laws (11) and (36). Assume that the leader only exchanges information with nodes 1 and 2. We pick the arbitrary initial values $r(0) = (2.7, 3.5, 3.3)^T$, $\beta = 0.1$, $\gamma_i = (0.010, 0.009, 0.010)$, $\kappa_{14} = \kappa_{41} = 1.9$, $\kappa_{15} = \kappa_{51} = 1.3$, and $\kappa_{24} = \kappa_{42} = 1.5$.
It can be seen from Figure 6b–d that the errors between the leader system (7) and the follower system (8) converge to a bounded range, so the HMAS achieves QC under all three saturated adaptive pinning control protocols.
The above three simulations demonstrate the effectiveness of the proposed saturated adaptive control protocol and its applicability to a variety of system models. The simulations of the three saturated adaptive pinning control protocols also clearly show the high robustness of distributed control: the error caused by removing control from some agents can be compensated for by appropriately increasing the coupling strength or feedback gain.
Remark 2.
As in existing works [11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31], we address this open problem through theoretical derivation and numerical simulation. It is worth noting that, to broaden the application scope of the proposed control method, we chose a universal, unified multi-agent system model rather than a system model for a specific application. At the same time, to reduce the complexity of the control method and the theoretical derivation, we simplified the system appropriately and adopted a more idealized, normalized model. From the perspective of control-method research, it is reasonable and feasible to verify the effectiveness and correctness of the proposed control methods via numerical simulation. Within our research framework, one can construct a system model for a specific application, such as multi-robot or multi-UAV formation, by considering a suitable environment and detailed parameters. In addition, to facilitate engineering applications, more complex factors and more realistic working conditions should be considered, such as limited communication, unpredictable states, unknown parameters, perturbations, and time delays. In this way, the proposed control methods could be tested and verified through real-world experiments.

5. Conclusions

The QC of nonlinear HMASs with external disturbances is studied in this paper. Firstly, we design a periodic intermittent adaptive control protocol with saturation, which controls both the internal coupling between follower agents and the communication between the leader and the followers. Then, the feasibility of the control protocol is proved theoretically using the Lyapunov function method, and the convergence range of the QC error is obtained using lemma and inequality techniques. Furthermore, three cost-saving saturated adaptive pinning control protocols are proposed: adaptive feedback gain is applied to only some of the followers, and only some of the followers interact with each other. Owing to the coupling effect inside the HMAS, the QC of the whole HMAS can be achieved as long as each agent remains connected. The saturated adaptive pinning control protocol greatly reduces control costs and demonstrates the high robustness of distributed control. Finally, the correctness of the control protocol is verified through numerical simulations. Although this paper presents a novel and reasonable control method, it is still difficult to apply it directly in practical engineering, mainly because the model is not fully consistent with real systems. Therefore, in future work, to make the research method and results more practical, we will focus on applying the theoretical method to the collaborative control of UAVs and robots.

Author Contributions

Conceptualization, B.D.; Project administration, Q.X. and J.Z.; Software, L.W., Y.T. and R.Y.; Methodology, Y.Y. and J.A.; Writing—review and editing, Q.X. and J.Z.; Writing—original draft, B.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Open Research Project of the State Key Laboratory of Industrial Control Technology, Zhejiang University, China, grant number ICT2022B45; the Sichuan Province Scientific and Technological Achievements Transfer and Transformation Demonstration Project, grant number 2020ZHCG0076; the Cooperative Research for the Ministry of Education, under the “Chunhui Plan”, grant number 191657; and the Key Scientific Research Fund Project of Xihua University, under grant number Z17124.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Reynolds, C.W. Flocks, herds and schools: A distributed behavioral model. ACM SIGGRAPH Comput. Graph. 1987, 21, 25–34.
  2. Jadhav, A.M.; Patne, N.R. Priority-Based Energy Scheduling in a Smart Distributed Network With Multiple Microgrids. IEEE Trans. Ind. Inform. 2017, 13, 3134–3143.
  3. Hector, R.; Vaidyanathan, R.; Sharma, G.; Trahan, J.L. Optimal Convex Hull Formation on a Grid by Asynchronous Robots With Lights. IEEE Trans. Parallel Distrib. Syst. 2022, 33, 3532–3545.
  4. Xue, D.; Yao, J.; Wang, J. H-infinity Formation Control and Obstacle Avoidance for Hybrid Multi-Agent Systems. J. Appl. Math. 2013, 2013, 123072.
  5. Meng, D.; Jia, Y. Formation control for multi-agent systems through an iterative learning design approach. Int. J. Robust Nonlinear Control 2014, 24, 340–361.
  6. Li, W.; Chen, Z.; Liu, Z. Formation control for nonlinear multi-agent systems by robust output regulation. Neurocomputing 2014, 140, 114–120.
  7. Han, D.; Panagou, D. Robust Multitask Formation Control via Parametric Lyapunov-Like Barrier Functions. IEEE Trans. Autom. Control 2019, 64, 4439–4453.
  8. Chai, X.; Liu, J.; Yu, Y.; Xi, J.; Sun, C. Practical Fixed-Time Event-Triggered Time-Varying Formation Tracking Control for Disturbed Multi-Agent Systems with Continuous Communication Free. Unmanned Syst. 2021, 9, 23–34.
  9. Fan, W.; Chen, P.; Shi, D.; Guo, X.; Kou, L. Multi-Agent Modeling and Simulation in the AI Age. Tsinghua Sci. Technol. 2021, 26, 608–624.
  10. Jiang, X.; Xia, G.; Feng, Z.; Jiang, Z. H-infinity delayed tracking protocol design of nonlinear singular multi-agent systems under Markovian switching topology. Inf. Sci. 2021, 545, 280–297.
  11. Jiang, X.; Xia, G.; Feng, Z.; Jiang, Z. Consensus Tracking of Data-Sampled Nonlinear Multi-Agent Systems With Packet Loss and Communication Delay. IEEE Trans. Netw. Sci. Eng. 2021, 8, 126–137.
  12. Yang, H.-Y.; Guo, L.; Xu, B.; Gu, J.-Z. Collaboration Control of Fractional-Order Multiagent Systems with Sampling Delay. Math. Probl. Eng. 2013, 2013, 854960.
  13. Liu, K.; Mu, X. Consensusability of multi-agent systems via observer with limited communication data rate. Int. J. Syst. Sci. 2016, 47, 3591–3597.
  14. Hu, G. Robust consensus tracking of a class of second-order multi-agent dynamic systems. Syst. Control Lett. 2012, 61, 134–142.
  15. Li, Z.; Duan, Z.; Chen, G.; Huang, L. Consensus of Multiagent Systems and Synchronization of Complex Networks: A Unified Viewpoint. IEEE Trans. Circuits Syst. I-Regul. Pap. 2010, 57, 213–224.
  16. Zhang, H.; Lewis, F.L.; Das, A. Optimal Design for Synchronization of Cooperative Systems: State Feedback, Observer and Output Feedback. IEEE Trans. Autom. Control 2011, 56, 1948–1952.
  17. Li, Z.; Liu, X.; Ren, W.; Xie, L. Distributed Tracking Control for Linear Multiagent Systems With a Leader of Bounded Unknown Input. IEEE Trans. Autom. Control 2013, 58, 518–523.
  18. Hong, Y.; Hu, J.; Gao, L. Tracking control for multi-agent consensus with an active leader and variable topology. Automatica 2006, 42, 1177–1182.
  19. Sun, J.; Geng, Z. Adaptive consensus tracking for linear multi-agent systems with heterogeneous unknown nonlinear dynamics. Int. J. Robust Nonlinear Control 2016, 26, 154–173.
  20. Fu, J.; Wen, G.; Huang, T.; Duan, Z. Consensus of Multi-Agent Systems With Heterogeneous Input Saturation Levels. IEEE Trans. Circuits Syst. II-Express Briefs 2019, 66, 1053–1057.
  21. Li, C.-J.; Liu, G.-P. Consensus for heterogeneous networked multi-agent systems with switching topology and time-varying delays. J. Frankl. Inst.-Eng. Appl. Math. 2018, 355, 4198–4217.
  22. Cruz-Ancona, C.D.; Martinez-Guerra, R.; Perez-Pinacho, C.A. A leader-following consensus problem of multi-agent systems in heterogeneous networks. Automatica 2020, 115, 108899.
  23. Luo, S.; Xu, X.; Liu, L.; Feng, G. Output consensus of heterogeneous linear multi-agent systems with communication, input and output time-delays. J. Frankl. Inst.-Eng. Appl. Math. 2020, 357, 12825–12839.
  24. Han, J.; Zhang, H.; Jiang, H.; Sun, X. H-infinity consensus for linear heterogeneous multi-agent systems with state and output feedback control. Neurocomputing 2018, 275, 2635–2644.
  25. Gong, P.; Lan, W. Adaptive robust tracking control for uncertain nonlinear fractional-order multi-agent systems with directed topologies. Automatica 2018, 92, 92–99.
  26. Cai, Y.; Zhang, H.; Gao, Z.; Sun, S. The distributed output consensus control of linear heterogeneous multi-agent systems based on event-triggered transmission mechanism under directed topology. J. Frankl. Inst.-Eng. Appl. Math. 2020, 357, 3267–3298.
  27. Yu, W.; Chen, G.; Cao, M.; Ren, W. Delay-Induced Consensus and Quasi-Consensus in Multi-Agent Dynamical Systems. IEEE Trans. Circuits Syst. I-Regul. Pap. 2013, 60, 2679–2687.
  28. Wang, Z.; Cao, J. Quasi-consensus of second-order leader-following multi-agent systems. IET Control Theory Appl. 2012, 6, 545–551.
  29. Yu, W.; Ren, W.; Zheng, W.X.; Chen, G.; Lu, J. Distributed control gains design for consensus in multi-agent systems with second-order nonlinear dynamics. Automatica 2013, 49, 2107–2115.
  30. Ye, D.; Shao, Y. Quasi-synchronization of heterogeneous nonlinear multi-agent systems subject to DOS attacks with impulsive effects. Neurocomputing 2019, 366, 131–139.
  31. An, J.; Yang, W.; Xu, X.; Chen, T.; Du, B.; Tang, Y.; Xu, Q. Decentralized Adaptive Control for Quasi-Consensus in Heterogeneous Nonlinear Multiagent Systems. Discret. Dyn. Nat. Soc. 2021, 2021, 2230805.
  32. Cheng, L.; Qiu, J.; Chen, X.; Zhang, A.; Yang, C.; Chen, X. Adaptive aperiodically intermittent control for pinning synchronization of directed dynamical networks. Int. J. Robust Nonlinear Control 2019, 29, 1909–1925.
  33. Wang, J.-L.; Wu, H.-N.; Huang, T.; Ren, S.-Y.; Wu, J. Passivity Analysis of Coupled Reaction-Diffusion Neural Networks With Dirichlet Boundary Conditions. IEEE Trans. Syst. Man Cybern.-Syst. 2017, 47, 2148–2159.
  34. Xu, Q.; Xu, X.; Zhuang, S.; Xiao, J.; Song, C.; Che, C. New complex projective synchronization strategies for drive-response networks with fractional complex-variable dynamics. Appl. Math. Comput. 2018, 338, 552–566.
  35. Xu, Q.; Zhuang, S.; Liu, S.; Xiao, J. Decentralized adaptive coupling synchronization of fractional-order complex-variable dynamical networks. Neurocomputing 2016, 186, 119–126.
  36. Yu, W.; DeLellis, P.; Chen, G.; di Bernardo, M.; Kurths, J. Distributed Adaptive Control of Synchronization in Complex Networks. IEEE Trans. Autom. Control 2012, 57, 2153–2158.
  37. Zhang, W.; Huang, J.; Wei, P. Weak synchronization of chaotic neural networks with parameter mismatch via periodically intermittent control. Appl. Math. Model. 2011, 35, 612–620.
  38. D’Angeli, D.; Donno, A. Shuffling matrices, Kronecker product and Discrete Fourier Transform. Discret. Appl. Math. 2017, 233, 1–18.
Figure 1. The errors $\|\xi_i(t)\|_2$ without the control protocol.
Figure 2. The errors $\|\xi_i(t)\|_2$ under the control protocol: (a) errors without pinning; (b) errors under pinning protocol 1; (c) errors under pinning protocol 2; (d) errors under pinning protocol 3.
Figure 3. The errors $\|\xi_i(t)\|_2$ without the control protocol.
Figure 4. The errors $\|\xi_i(t)\|_2$ under the control protocol: (a) errors without pinning; (b) errors under pinning protocol 1; (c) errors under pinning protocol 2; (d) errors under pinning protocol 3.
Figure 5. The errors $\|\xi_i(t)\|_2$ without the control protocol.
Figure 6. The errors $\|\xi_i(t)\|_2$ under the control protocol: (a) errors without pinning; (b) errors under pinning protocol 1; (c) errors under pinning protocol 2; (d) errors under pinning protocol 3.

Share and Cite

Du, B.; Xu, Q.; Zhang, J.; Tang, Y.; Wang, L.; Yuan, R.; Yuan, Y.; An, J. Periodic Intermittent Adaptive Control with Saturation for Pinning Quasi-Consensus of Heterogeneous Multi-Agent Systems with External Disturbances. Entropy 2023, 25, 1266. https://doi.org/10.3390/e25091266
