Article

Integral Reinforcement-Learning-Based Optimal Containment Control for Partially Unknown Nonlinear Multiagent Systems

School of Automation, Guangdong University of Technology, Guangzhou 510006, China
* Author to whom correspondence should be addressed.
Entropy 2023, 25(2), 221; https://doi.org/10.3390/e25020221
Submission received: 20 December 2022 / Revised: 21 January 2023 / Accepted: 22 January 2023 / Published: 23 January 2023
(This article belongs to the Topic Complex Systems and Artificial Intelligence)

Abstract: This paper focuses on the optimal containment control problem for nonlinear multiagent systems with partially unknown dynamics via an integral reinforcement learning algorithm. By employing integral reinforcement learning, the requirement of the drift dynamics is relaxed. The integral reinforcement learning method is proved to be equivalent to model-based policy iteration, which guarantees the convergence of the proposed control algorithm. For each follower, the Hamilton–Jacobi–Bellman equation is solved by a single critic neural network with a modified updating law, which guarantees that the weight error dynamics are asymptotically stable. Using input–output data, the approximate optimal containment control protocol of each follower is obtained from the critic neural network. The closed-loop containment error system is guaranteed to be stable under the proposed optimal containment control scheme. Simulation results demonstrate the effectiveness of the presented control scheme.

1. Introduction

Distributed coordination control of multiagent systems (MASs) has drawn extensive interest due to its potential applications in agricultural irrigation [1], disaster rescue [2], microgrid scheduling [3], marine surveying [4] and wireless communication [5]. Distributed coordination control aims to guarantee that all agents, which exchange local information by communicating with their neighbors, reach an agreement on some variables of interest [6]. Over the last decade, containment control has received increasing attention because of its remarkable performance in safety-critical control tasks such as hazardous material handling [7] and fire rescue [8]. The goal of containment control is to drive the followers to enter and remain within the convex hull spanned by multiple leaders. Numerous interesting and significant results on containment control have been presented. Reference [9] developed a fuzzy-observer-based backstepping control to achieve the containment of MASs. An adaptive funnel containment control was proposed in [10], where the containment errors converge to an adjustable funnel boundary. In practical applications, containment control has been developed for autonomous surface vehicles [4], unmanned aerial vehicles [11] and spacecraft [12]. Notice that most of the aforementioned works ignore the control performance issue of minimizing energy consumption.
It is well known that the Riccati equation or the Hamilton–Jacobi–Bellman equation (HJBE) must be solved to acquire the optimal control for linear or nonlinear systems [13], respectively; in other words, the Riccati equation is a particular case of the HJBE. As a classical optimization algorithm, dynamic programming (DP) [14] is regarded as an effective way to obtain the optimal solution of the HJBE. However, as the dimension of the state variables increases, the computational burden of the DP approach grows geometrically, which leads to the dilemma of the “curse of dimensionality”. With the success of AlphaGo, reinforcement learning (RL) has stimulated increasing enthusiasm among scholars for tackling the “curse of dimensionality” problem [15]. As a synonym of RL, adaptive DP (ADP) [16] solves the optimal control problem forward in time with the aid of neural network (NN)-based approximators. Moreover, ADP has been increasingly exploited for the optimal coordination control of MASs. Reference [17] established a cooperative policy iteration (PI) algorithm to solve the differential graphical games of linear MASs. In the nonlinear case, Reference [18] investigated the consensus problem via model-based PI with a generalized fuzzy hyperbolic critic structure. An event-triggered ADP-based optimal coordination control was proposed in [19], by which the communication load and computational consumption were reduced. To tackle the optimal containment control (OCC) problem, a finite-time fault-tolerant control was proposed via model-based PI [20]. In the presence of state constraints, Reference [21] presented a proper barrier function to transform the state-constrained problem into an unconstrained one, after which the event-triggered OCC protocols were obtained. In Reference [22], distributed RL was applied to handle an OCC problem with collision avoidance for nonholonomic mobile robots. When an accurate model of the plant is not available, system identification is usually employed. It should be pointed out that system identification struggles to respond to dynamic changes of the system in time, which brings inevitable identification errors.
Recently, the integral RL (IRL) method was adopted to relax the accurate-model requirement of the plant by constructing the integral Bellman equation [23,24]. An actor–critic architecture was adopted to execute the IRL algorithm, in which an actor NN learned the optimal control strategy and a critic NN approximated the optimal value function. For heterogeneous linear MASs (HLMASs), the IRL method was developed to handle the robust OCC problem [25]. An adaptive output-feedback method was developed for the containment control of HLMASs via the IRL algorithm [26]. In Reference [27], an off-policy IRL-based OCC scheme was presented for unknown HLMASs with active leaders. However, the OCC problem of nonlinear MASs with partially unknown dynamics has rarely been investigated via the IRL method. Moreover, the actor–critic architecture requires constructing an actor NN, which makes the control structure more complex; it is therefore crucial to develop an IRL-based OCC scheme with a simplified control structure. In addition, most of the aforementioned OCC approaches only ensure that the weight estimation error of the critic NN is uniformly ultimately bounded (UUB), which may degrade the control performance. All of the above concerns motivated our research.
Inspired by the aforementioned works, we developed an IRL-based OCC scheme with an asymptotically stable critic structure for partially unknown nonlinear MASs. The main contributions are summarized as follows.
(1)
Different from existing control schemes [9,20], an IRL method is introduced to construct the integral Bellman equation without system identification. Furthermore, the IRL algorithm is proved to be equivalent to model-based PI, which guarantees the convergence of the developed control algorithm.
(2)
The IRL-based OCC scheme is implemented by a critic-only architecture for nonlinear MASs with unknown drift dynamics, rather than by an actor–critic architecture for linear MASs [25,26,27]. Thus, the proposed scheme simplifies the control structure.
(3)
In contrast to the existing OCC schemes [20,21,22], which guarantee the weight errors to be UUB only, a modified weight-updating law is presented to tune the critic NN weights, under which the weight error dynamics are asymptotically stable.
This paper is organized as follows. In Section 2, graph theory and its application to the containment of MASs are outlined. In Section 3, the IRL-based OCC scheme and its convergence proof are presented for nonlinear MASs. Then, the stability of the closed-loop containment error systems is analyzed in detail. In Section 4, two simulation examples demonstrate the effectiveness of the proposed scheme. In Section 5, concluding remarks are drawn.

2. Preliminaries and Problem Description

2.1. Graph Theory

For a network with N agents, the information interactions among agents are described by a weighted graph $\mathcal{G} = (\mathcal{V}, \mathcal{E}, \mathcal{A})$ with the nonempty finite node set $\mathcal{V} = \{\upsilon_1, \ldots, \upsilon_N\}$, the edge set $\mathcal{E} \subseteq \mathcal{V} \times \mathcal{V}$ and the nonnegative weighted adjacency matrix $\mathcal{A} = [a_{ip}]$. If node $\upsilon_i$ links to node $\upsilon_p$, the edge $(\upsilon_i, \upsilon_p) \in \mathcal{E}$ is available with $a_{ip} > 0$; otherwise, $a_{ip} = 0$. For a node $\upsilon_i$, the node $\upsilon_p$ is called a neighbor of $\upsilon_i$ when $(\upsilon_p, \upsilon_i) \in \mathcal{E}$. In this way, $N_i = \{\upsilon_p \in \mathcal{V} : (\upsilon_p, \upsilon_i) \in \mathcal{E}\}$ represents the set of all neighbors of $\upsilon_i$. Denote the Laplacian matrix as $L = D - \mathcal{A} = [l_{ip}]$, where $D = \operatorname{diag}\{d_{11}, d_{22}, \ldots, d_{NN}\}$ with $d_{ii} = \sum_{p \in N_i} a_{ip}$, and $l_{ip}$ satisfies
$$l_{ip} = \begin{cases} \sum_{q \in N_i} a_{iq}, & i = p, \\ -a_{ip}, & i \neq p. \end{cases}$$
This implies that each row sum of $L$ equals zero. A sequence of edges of the form $(\upsilon_1, \upsilon_2), (\upsilon_2, \upsilon_3), \ldots$ with $\upsilon_i \in \mathcal{V}$ is called a directed path. A directed graph is strongly connected if, for arbitrary $\upsilon_i, \upsilon_p \in \mathcal{V}$, there is a directed path from $\upsilon_i$ to $\upsilon_p$, while a directed graph is said to contain a spanning tree if there exists a directed path from a root node to every other node of $\mathcal{G}$. This paper focuses on a strongly connected digraph with a spanning tree.
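As a small illustration of these definitions, the Laplacian can be assembled directly from the adjacency matrix; the following is a minimal NumPy sketch with a hypothetical three-node weighted digraph (not a topology from this paper):

```python
import numpy as np

# Hypothetical weighted adjacency matrix for N = 3 nodes;
# A[i, p] > 0 means node p is a neighbor of node i.
A = np.array([[0.0, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.0]])

D = np.diag(A.sum(axis=1))  # degree matrix: d_ii = sum of a_ip over neighbors
L = D - A                   # graph Laplacian L = D - A

# Each row of L sums to zero, as noted above.
assert np.allclose(L.sum(axis=1), 0.0)
```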

2.2. Problem Description

Consider the leader–follower nonlinear MAS on the graph $\mathcal{G}$ with M leaders and N followers, where the node dynamics of the ith follower are modeled by
$$\dot{x}_i = f(x_i(t)) + g_i(x_i(t))\mu_i(t),$$
where $x_i \in \mathbb{R}^n$ is the state vector of the ith follower, $\mu_i \in \mathbb{R}^m$ is the control input vector, $i = 1, 2, \ldots, N$, and the nonlinear functions $f(x_i) \in \mathbb{R}^n$ and $g_i(x_i) \in \mathbb{R}^{n \times m}$ represent the unknown drift dynamics and the control input matrix, respectively. Denote the global state vector as $x = [x_1^T, x_2^T, \ldots, x_N^T]^T \in \mathbb{R}^{Nn}$.
Assumption 1.
$f(x_i)$ and $g_i(x_i)$ are Lipschitz continuous on the compact set $\Omega_i$ with $f(0) = 0$, and the system (1) is controllable.
Define the node dynamic of the jth leader as
$$\dot{r}_j = h_j(r_j(t)),$$
where $r_j \in \mathbb{R}^n$ stands for the state vector of the jth leader, $j = 1, 2, \ldots, M$, and $h_j(r_j) \in \mathbb{R}^n$ satisfies Lipschitz continuity.
Definition 1
(Convex hull [8]). A set $C \subseteq \mathbb{R}^n$ is convex if, for any $y_1, y_2 \in C$ and $\rho \in (0, 1)$, $(1 - \rho)y_1 + \rho y_2 \in C$. The convex hull of a finite set $Y = \{y_1, y_2, \ldots, y_M\}$ is the minimal convex set containing $Y$, i.e., $Co(Y) = \big\{\sum_{j=1}^{M} \rho_j y_j \mid y_j \in Y,\ \rho_j \in \mathbb{R},\ \rho_j \geq 0,\ \sum_{j=1}^{M} \rho_j = 1\big\}$.
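Numerically, membership in $Co(Y)$ reduces to a linear feasibility problem: find $\rho_j \geq 0$ with $\sum_j \rho_j = 1$ and $\sum_j \rho_j y_j = x$. A minimal sketch using scipy.optimize.linprog (the leader states below are placeholders, not values from this paper):

```python
import numpy as np
from scipy.optimize import linprog

def in_convex_hull(x, Y):
    """Return True if x lies in the convex hull of the rows of Y.

    Feasibility LP: find rho >= 0 with Y^T rho = x and sum(rho) = 1.
    """
    M = Y.shape[0]
    A_eq = np.vstack([Y.T, np.ones((1, M))])  # stack both equality constraints
    b_eq = np.concatenate([x, [1.0]])
    res = linprog(c=np.zeros(M), A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * M)
    return res.success

# Placeholder leader states (rows) and a test point:
Y = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
print(in_convex_hull(np.array([0.2, 0.2]), Y))  # True
```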
The containment control aims to find a set of distributed control protocols $\mu = \{\mu_1, \mu_2, \ldots, \mu_N\}$ such that all followers enter and stay in the convex hull formed by the leaders, i.e., $x_i(t)$ converges to $Co(Y)$ with $Y = \{r_1, r_2, \ldots, r_M\}$. For the ith follower, the local neighborhood containment error $e_i$ is formulated as
$$e_i = \sum_{p \in N_i} a_{ip}(x_i - x_p) + \sum_{j=1}^{M} b_{ij}(x_i - r_j) = d_{ii}x_i - \sum_{p \in N_i} a_{ip}x_p + \sum_{j=1}^{M} b_{ij}(x_i - r_j),$$
where $e_i \in \mathbb{R}^n$ and $b_{ij} \geq 0$ represents the pinning gain. Define $B_j = \operatorname{diag}\{b_{1j}, \ldots, b_{ij}, \ldots, b_{Nj}\} \in \mathbb{R}^{N \times N}$. In fact, the connection between the ith follower and the jth leader is available if and only if $b_{ij} > 0$. Denote the communication graph as $\mathcal{G}_x = (\mathcal{G}, x)$. The global containment error vector of $\mathcal{G}_x$ is
$$e = (G \otimes I_n)x - \big((B(I_M \otimes 1_N)) \otimes I_n\big)\bar{r},$$
where $e = [e_1^T, e_2^T, \ldots, e_N^T]^T \in \mathbb{R}^{Nn}$, $\bar{r} = [r_1^T, r_2^T, \ldots, r_M^T]^T \in \mathbb{R}^{Mn}$, $G = L + B(1_M \otimes I_N)$, $I_n$ represents the $n$-dimensional identity matrix, $1_M$ ($1_N$) stands for the $M$-dimensional ($N$-dimensional) column vector whose every element equals 1 and $B = [B_1, B_2, \ldots, B_M] \in \mathbb{R}^{N \times NM}$. Considering (1)–(3), the local neighborhood containment error dynamics of the ith follower are formulated as
$$\dot{e}_i = F_i + c_ig_i(x_i)\mu_i - \sum_{p \in N_i} a_{ip}g_p(x_p)\mu_p,$$
where $c_i = d_{ii} + \sum_{j=1}^{M} b_{ij}$ and $F_i = c_if(x_i) - \sum_{p \in N_i} a_{ip}f(x_p) - \sum_{j=1}^{M} b_{ij}h_j(r_j)$. For the ith follower, the local neighborhood containment error is dominated not only by the local state and control input, but also by the information from its neighbors and the leaders. In order to achieve containment of the partially unknown nonlinear MAS (i.e., $e_i \to 0$), an IRL-based OCC scheme is designed in the next section.
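For concreteness, the local neighborhood containment error (3) can be computed from locally available information only; a minimal NumPy sketch (all states, weights and gains are placeholders supplied by the caller):

```python
import numpy as np

def local_containment_error(x_i, neighbor_states, a_i, leader_states, b_i):
    """Local neighborhood containment error e_i of Equation (3).

    x_i:             state of follower i, shape (n,)
    neighbor_states: states x_p of the neighbors of follower i
    a_i:             corresponding edge weights a_ip
    leader_states:   states r_j of the leaders
    b_i:             pinning gains b_ij (zero entries may be omitted)
    """
    e_i = np.zeros_like(x_i)
    for a_ip, x_p in zip(a_i, neighbor_states):
        e_i += a_ip * (x_i - x_p)      # neighbor terms a_ip (x_i - x_p)
    for b_ij, r_j in zip(b_i, leader_states):
        e_i += b_ij * (x_i - r_j)      # pinning terms b_ij (x_i - r_j)
    return e_i
```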

3. IRL-Based OCC Scheme

3.1. Optimal Containment Control

For the local neighborhood containment error dynamic (4), define the cost function as
$$J_i(e_i(0)) = \int_0^{\infty} P_i\big(e_i(\xi), \mu_i(\xi), \mu_{-i}(\xi)\big)\,d\xi,$$
where $P_i(e_i, \mu_i, \mu_{-i}) = e_i^TQ_ie_i + \sum_{p \in \{N_i, i\}} \mu_p^TR_{ip}\mu_p$ is a utility function, $\mu_{-i} = \{\mu_p \mid p \in N_i\}$ represents the set of the local control protocols of the neighbors of node $\upsilon_i$, and $Q_i \in \mathbb{R}^{n \times n}$ and $R_{ip} \in \mathbb{R}^{m \times m}$ are positive definite matrices.
Definition 2
(Admissible control policies [17]). The feedback control policies $\mu_i(e_i)$ $(i = 1, \ldots, N)$ are defined to be admissible with respect to (5) on a compact set $\Omega_i$, denoted by $\mu_i(e_i) \in \mathcal{A}(\Omega_i)$, if $\mu_i(e_i)$ is continuous on $\Omega_i$ with $\mu_i(0) = 0$, $\mu_i(e_i)$ stabilizes (4) on $\Omega_i$ and $J_i(e_i(0))$ is finite for all $e_i(0) \in \Omega_i$.
Definition 3
(Nash equilibrium [17]). An N-tuple of admissible control policies $\mu^*(e) = \{\mu_1^*(e_1), \mu_2^*(e_2), \ldots, \mu_N^*(e_N)\}$ is said to constitute a Nash equilibrium solution on the graph $\mathcal{G}_x$ if the following N inequalities are satisfied:
$$J_i(e_i, \mu_i^*, \mu_{-i}^*) \leq J_i(e_i, \mu_i, \mu_{-i}^*), \quad i = 1, 2, \ldots, N,$$
where $\mu_{-i}^* = \{\mu_1^*, \ldots, \mu_{i-1}^*, \mu_{i+1}^*, \ldots, \mu_N^*\}$.
This paper aims to find an N-tuple optimal admissible control policy $\mu^*(e)$ that minimizes the cost function (5) for each follower, so that the Nash equilibrium solution on $\mathcal{G}_x$ (i.e., the set of OCC protocols) is obtained.
For an arbitrary $\mu_i(e_i) \in \mathcal{A}(\Omega_i)$ of the ith follower, define the value function
$$C_i(e_i(t)) = \int_t^{\infty} P_i\big(e_i(\xi), \mu_i(\xi), \mu_{-i}(\xi)\big)\,d\xi.$$
When (6) is finite, the Bellman equation is
$$0 = e_i^TQ_ie_i + \sum_{p \in \{N_i, i\}} \mu_p^TR_{ip}\mu_p + \nabla C_i^T(e_i)\Big(F_i + c_ig_i(x_i)\mu_i - \sum_{p \in N_i} a_{ip}g_p(x_p)\mu_p\Big),$$
where $C_i(0) = 0$ and $\nabla C_i(e_i) = \partial C_i(e_i)/\partial e_i$. For the ith follower, the local Hamiltonian is
$$H_i\big(e_i, \mu_i, \mu_{-i}, \nabla C_i(e_i)\big) = e_i^TQ_ie_i + \sum_{p \in \{N_i, i\}} \mu_p^TR_{ip}\mu_p + \nabla C_i^T(e_i)\Big(F_i + c_ig_i(x_i)\mu_i - \sum_{p \in N_i} a_{ip}g_p(x_p)\mu_p\Big).$$
Define the optimal value function as
$$C_i^*(e_i) = \min_{\mu_i \in \mathcal{A}(\Omega_i)} C_i(e_i).$$
According to [13], the optimal value function $C_i^*(e_i)$ satisfies the HJBE
$$0 = \min_{\mu_i \in \mathcal{A}(\Omega_i)} H_i\big(e_i, \mu_i, \mu_{-i}, \nabla C_i^*(e_i)\big).$$
The local OCC protocol is
$$\mu_i^*(e_i) = \arg\min_{\mu_i \in \mathcal{A}(\Omega_i)} H_i\big(e_i, \mu_i, \mu_{-i}, \nabla C_i^*(e_i)\big) = -\frac{1}{2}c_iR_{ii}^{-1}g_i^T(x_i)\nabla C_i^*(e_i).$$
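The closed form in (10) follows from the stationarity condition of the Hamiltonian with respect to $\mu_i$; a short reconstruction of this standard step:

```latex
\frac{\partial H_i}{\partial \mu_i}
  = 2R_{ii}\,\mu_i + c_i\, g_i^{T}(x_i)\,\nabla C_i^{*}(e_i) = 0
\;\Longrightarrow\;
\mu_i^{*} = -\frac{1}{2}\, c_i\, R_{ii}^{-1} g_i^{T}(x_i)\,\nabla C_i^{*}(e_i).
```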
It should be mentioned that the analytical solution of the HJBE is intractable to obtain, since $C_i^*(e_i)$ is unknown. According to [15], the solution of the HJBE is successively approximated through a sequence of iterations of policy evaluation
$$0 = e_i^TQ_ie_i + \sum_{p \in \{N_i, i\}} \mu_p^{(k-1)T}R_{ip}\mu_p^{(k-1)} + \nabla C_i^{(k)T}(e_i)\Big(F_i + c_ig_i(x_i)\mu_i^{(k-1)} - \sum_{p \in N_i} a_{ip}g_p(x_p)\mu_p^{(k-1)}\Big),$$
and policy improvement
$$\mu_i^{(k)} = -\frac{1}{2}c_iR_{ii}^{-1}g_i^T(x_i)\nabla C_i^{(k)}(e_i),$$
where the superscript $(k)$ represents the $k$th iteration index with $k \in \mathbb{N}^+$.
From (11), we can see that the policy evaluation requires the accurate mathematical model of (1). However, the accurate mathematical model is always difficult to obtain in practice. To break this bottleneck, the IRL method is developed to relax the requirement of the accurate model in the policy evaluation.

3.2. Integral Reinforcement Learning

For $t_\tau > 0$, (6) can be rewritten as
$$C_i(e_i(t)) = \int_t^{t+t_\tau} \Big(e_i^T(\xi)Q_ie_i(\xi) + \sum_{p \in \{N_i, i\}} \mu_p^T(\xi)R_{ip}\mu_p(\xi)\Big)d\xi + C_i(e_i(t + t_\tau)).$$
Based on the integral Bellman Equation (13), $C_i^*(e_i)$ and $\mu_i^*$ satisfy
$$0 = \int_t^{t+t_\tau} \Big(e_i^T(\xi)Q_ie_i(\xi) + \sum_{p \in \{N_i, i\}} \mu_p^{*T}(\xi)R_{ip}\mu_p^*(\xi)\Big)d\xi + C_i^*(e_i(t + t_\tau)) - C_i^*(e_i(t)).$$
Compared with (7), the policy evaluation (14) does not require the accurate system dynamics in (1).
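In implementation, the integral in (14) is evaluated from sampled input–output data along the trajectory, so no knowledge of $F_i$ is needed. A minimal sketch using trapezoidal quadrature (the sampled signals, the block cost matrices and the candidate value function are placeholders):

```python
import numpy as np

def integral_bellman_residual(t_grid, e_traj, u_traj, Q, R, value_fn):
    """Residual of the integral Bellman equation (14) over [t, t + t_tau].

    t_grid:   sample times covering the interval, shape (K,)
    e_traj:   sampled containment errors e_i, shape (K, n)
    u_traj:   stacked sampled inputs of agent i and its neighbors, shape (K, m_total)
    Q, R:     state weight and block-diagonal input weight matrices
    value_fn: candidate value function C_i(e)
    """
    # Running cost e^T Q e + u^T R u at each sample instant.
    cost = (np.einsum('ki,ij,kj->k', e_traj, Q, e_traj)
            + np.einsum('ki,ij,kj->k', u_traj, R, u_traj))
    integral = np.trapz(cost, t_grid)  # quadrature over [t, t + t_tau]
    # Zero for the exact value function; nonzero values drive critic training.
    return integral + value_fn(e_traj[-1]) - value_fn(e_traj[0])
```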
Theorem 1.
Let $C_i^{(k)}(e_i) \geq 0$ with $C_i^{(k)}(0) = 0$ and $\mu_i^{(k)} \in \mathcal{A}(\Omega_i)$. Then $C_i^{(k)}(e_i)$ is the solution of the integral Bellman equation
$$0 = \int_t^{t+t_\tau} e_i^T(\xi)Q_ie_i(\xi)\,d\xi + \int_t^{t+t_\tau} \sum_{p \in \{N_i, i\}} \mu_p^{(k-1)T}(\xi)R_{ip}\mu_p^{(k-1)}(\xi)\,d\xi + C_i^{(k)}(e_i(t + t_\tau)) - C_i^{(k)}(e_i(t))$$
if and only if $C_i^{(k)}(e_i)$ is the unique solution of (11).
Proof of Theorem 1.
Considering (11), the time derivative of $C_i^{(k)}(e_i)$ along the trajectories of (4) can be written as
$$\frac{dC_i^{(k)}(e_i)}{dt} = \nabla C_i^{(k)T}(e_i)\Big(F_i + c_ig_i(x_i)\mu_i^{(k-1)} - \sum_{p \in N_i} a_{ip}g_p(x_p)\mu_p^{(k-1)}\Big) = -e_i^TQ_ie_i - \sum_{p \in \{N_i, i\}} \mu_p^{(k-1)T}R_{ip}\mu_p^{(k-1)}.$$
Integrating both sides of (16) over $[t, t + t_\tau]$ gives
$$C_i^{(k)}(e_i(t + t_\tau)) - C_i^{(k)}(e_i(t)) = -\int_t^{t+t_\tau} e_i^T(\xi)Q_ie_i(\xi)\,d\xi - \int_t^{t+t_\tau} \sum_{p \in \{N_i, i\}} \mu_p^{(k-1)T}(\xi)R_{ip}\mu_p^{(k-1)}(\xi)\,d\xi.$$
According to the derivation of (16) and (17), if $C_i^{(k)}(e_i)$ is the solution of (11), then $C_i^{(k)}(e_i)$ satisfies the integral Bellman Equation (15). Next, we verify the uniqueness of the solution $C_i^{(k)}(e_i)$.
Suppose that $\Upsilon_i^{(k)}(e_i)$ is another solution of (11) with $\Upsilon_i^{(k)}(0) = 0$. Similar to the derivation of (16), we have
$$\frac{d\Upsilon_i^{(k)}(e_i)}{dt} = -e_i^TQ_ie_i - \sum_{p \in \{N_i, i\}} \mu_p^{(k-1)T}R_{ip}\mu_p^{(k-1)}.$$
Subtracting (16) from (18) yields
$$\frac{d}{dt}\Big(\Upsilon_i^{(k)}(e_i) - C_i^{(k)}(e_i)\Big) = 0.$$
Solving (19) gives $\Upsilon_i^{(k)}(e_i) - C_i^{(k)}(e_i) = \varsigma_i$, where $\varsigma_i \in \mathbb{R}$ is a real constant. For $e_i = 0$, we have $\varsigma_i = \Upsilon_i^{(k)}(0) - C_i^{(k)}(0) = 0$; that is, $\Upsilon_i^{(k)}(e_i) = C_i^{(k)}(e_i)$, so $C_i^{(k)}(e_i)$ is the unique solution. In summary, $C_i^{(k)}(e_i)$ is the unique solution of (15) if and only if it is the unique solution of (11). □
Theorem 1 reveals that the IRL algorithm composed of (15) and (12) is theoretically equivalent to the model-based PI algorithm, whose convergence analysis was provided in [15]. Hence, the IRL algorithm is guaranteed to converge.
Theorem 2.
Consider the nonlinear MAS with partially unknown dynamics (1), the local neighborhood containment error dynamics (4) and the optimal value function $C_i^*(e_i)$ in (8). The closed-loop containment error system is guaranteed to be asymptotically stable under the local OCC protocol (10). Furthermore, containment control is achieved with the set of OCC protocols $\{\mu_1^*, \mu_2^*, \ldots, \mu_N^*\}$ if the directed graph contains a spanning tree.
Proof of Theorem 2.
Select the Lyapunov function candidate as $C_i^*(e_i)$. Combining (7), (8) and (10), we obtain
$$\nabla C_i^{*T}(e_i)F_i = -\nabla C_i^{*T}(e_i)\Big(c_ig_i(x_i)\mu_i^* - \sum_{p \in N_i} a_{ip}g_p(x_p)\mu_p^*\Big) - e_i^TQ_ie_i - \sum_{p \in \{N_i, i\}} \mu_p^{*T}R_{ip}\mu_p^*.$$
Substituting (20) into the time derivative of $C_i^*(e_i)$ yields
$$\dot{C}_i^*(e_i) = \nabla C_i^{*T}(e_i)\Big(F_i + c_ig_i(x_i)\mu_i^* - \sum_{p \in N_i} a_{ip}g_p(x_p)\mu_p^*\Big) = -e_i^TQ_ie_i - \sum_{p \in \{N_i, i\}} \mu_p^{*T}R_{ip}\mu_p^*.$$
Therefore, $\dot{C}_i^*(e_i) \leq 0$. One can conclude that the closed-loop containment error system (4) is asymptotically stable under the local OCC protocol (10). Since a spanning tree exists in the directed graph, the containment control of the nonlinear MAS with partially unknown dynamics can be achieved. □

3.3. Critic NN Implementation

Based on the Stone–Weierstrass approximation theorem, on the compact set $\Omega_i$ the optimal value function $C_i^*(e_i)$ and its gradient can be represented by a critic NN as
$$C_i^*(e_i) = \phi_i^{*T}\sigma_i(e_i) + \omega_i(e_i),$$
$$\nabla C_i^*(e_i) = \nabla\sigma_i^T(e_i)\phi_i^* + \nabla\omega_i(e_i),$$
where $\phi_i^* \in \mathbb{R}^{l_i}$ is the ideal weight vector, $\sigma_i(\cdot) \in \mathbb{R}^{l_i}$ is the activation function vector, $l_i$ is the number of hidden neurons and $\omega_i(e_i)$ stands for the reconstruction error.
Since the ideal weight vector is unknown, the approximations of $C_i^*(e_i)$ and $\nabla C_i^*(e_i)$ are expressed as
$$\hat{C}_i(e_i) = \hat{\phi}_i^T\sigma_i(e_i), \qquad \nabla\hat{C}_i(e_i) = \nabla\sigma_i^T(e_i)\hat{\phi}_i,$$
where $\nabla\sigma_i(e_i) = \partial\sigma_i(e_i)/\partial e_i$ and $\hat{\phi}_i \in \mathbb{R}^{l_i}$ represents the estimate of $\phi_i^*$. Then, the local OCC protocol (10) can be approximated by
$$\hat{\mu}_i(e_i) = -\frac{1}{2}c_iR_{ii}^{-1}g_i^T(x_i)\nabla\sigma_i^T(e_i)\hat{\phi}_i.$$
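With the polynomial activations used later in the simulation study, the critic evaluation (23) and the approximate protocol (24) reduce to a few lines; a sketch in which $g_i$, $c_i$ and $R_{ii}^{-1}$ are placeholders to be supplied by the caller:

```python
import numpy as np

def sigma(e):
    """Activation vector sigma_i(e_i) used in the examples (5 hidden neurons)."""
    e1, e2 = e
    return np.array([e1**2, e1*e2, e2**2, e1**2 * e2, e2**2 * e1])

def grad_sigma(e):
    """Jacobian d sigma / d e of the activation vector, shape (5, 2)."""
    e1, e2 = e
    return np.array([[2*e1,    0.0    ],
                     [e2,      e1     ],
                     [0.0,     2*e2   ],
                     [2*e1*e2, e1**2  ],
                     [e2**2,   2*e1*e2]])

def mu_hat(e, phi_hat, g_i, c_i, R_ii_inv):
    """Approximate OCC protocol (24): -1/2 c_i R_ii^{-1} g_i^T (grad sigma)^T phi_hat."""
    return -0.5 * c_i * R_ii_inv @ g_i.T @ grad_sigma(e).T @ phi_hat
```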
The approximate local Hamiltonian is
$$e_{ci} = \int_t^{t+t_\tau} \Big(e_i^T(\xi)Q_ie_i(\xi) + \sum_{p \in \{N_i, i\}} \hat{\mu}_p^T(\xi)R_{ip}\hat{\mu}_p(\xi)\Big)d\xi + \hat{\phi}_i^T\theta_i,$$
where $\theta_i = \sigma_i(e_i(t + t_\tau)) - \sigma_i(e_i(t))$.
Combining (14) and (21) with (25) yields
$$\begin{aligned} e_{ci} ={}& \int_t^{t+t_\tau} \Big(e_i^TQ_ie_i + \sum_{p \in \{N_i, i\}} \hat{\mu}_p^TR_{ip}\hat{\mu}_p\Big)d\xi - \int_t^{t+t_\tau} \Big(e_i^TQ_ie_i + \sum_{p \in \{N_i, i\}} \mu_p^{*T}R_{ip}\mu_p^*\Big)d\xi \\ & + \hat{\phi}_i^T\theta_i - \phi_i^{*T}\theta_i - \omega_i(e_i(t + t_\tau)) + \omega_i(e_i(t)) \\ ={}& \int_t^{t+t_\tau} \sum_{p \in \{N_i, i\}} \big(\hat{\mu}_p + \mu_p^*\big)^TR_{ip}\big(\hat{\mu}_p - \mu_p^*\big)\,d\xi - \tilde{\phi}_i^T\theta_i - \omega_i(e_i(t + t_\tau)) + \omega_i(e_i(t)) \\ ={}& -\tilde{\phi}_i^T\theta_i + \Phi_i, \end{aligned}$$
where $\tilde{\phi}_i = \phi_i^* - \hat{\phi}_i$ represents the weight estimation error and $\Phi_i = \int_t^{t+t_\tau} \sum_{p \in \{N_i, i\}} (\hat{\mu}_p + \mu_p^*)^TR_{ip}(\hat{\mu}_p - \mu_p^*)\,d\xi - \omega_i(e_i(t + t_\tau)) + \omega_i(e_i(t))$.
Assumption 2.
$\Phi_i$ is bounded by $\eta_i$, i.e., $\|\Phi_i\| \leq \eta_i$ with $\eta_i > 0$.
In order to tune $\hat{\phi}_i$, the steepest descent algorithm is employed to minimize $E_{ci} = \frac{1}{2}e_{ci}^2$. A modified updating law for $\hat{\phi}_i$ is
$$\dot{\hat{\phi}}_i = -l_{ci}\,\frac{\theta_i}{(1 + \theta_i^T\theta_i)^2}\big(e_{ci} - \hat{\eta}_i\big),$$
where $l_{ci} > 0$ and $\hat{\eta}_i$, the estimate of $\eta_i$, is updated by
$$\dot{\hat{\eta}}_i = l_{si}\,\frac{\tilde{\phi}_i^T\theta_i}{(1 + \theta_i^T\theta_i)^2},$$
where $l_{si} > 0$ is a design constant. Considering (26) and (27), the weight estimation error is updated by
$$\dot{\tilde{\phi}}_i = -l_{ci}\,\frac{\theta_i}{(1 + \theta_i^T\theta_i)^2}\big(\tilde{\phi}_i^T\theta_i - \Phi_i + \hat{\eta}_i\big).$$
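A discrete-time (forward-Euler) sketch of the updating laws (27) and (28) is given below; note that (28) is stated in terms of the weight estimation error $\tilde{\phi}_i$, which is not directly measurable, so it is passed in explicitly here purely to mirror the stated law:

```python
import numpy as np

def critic_update(phi_hat, eta_hat, theta, e_c, phi_tilde, l_c, l_s, dt):
    """One Euler step of the critic updating laws (27) and (28).

    theta:     sigma_i(e_i(t + t_tau)) - sigma_i(e_i(t))
    e_c:       integral Bellman residual e_ci of (25)
    phi_tilde: weight estimation error phi* - phi_hat, as it appears in (28)
    """
    norm = (1.0 + theta @ theta) ** 2
    phi_hat = phi_hat - dt * l_c * theta / norm * (e_c - eta_hat)  # law (27)
    eta_hat = eta_hat + dt * l_s * (phi_tilde @ theta) / norm      # law (28)
    return phi_hat, eta_hat
```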
Theorem 3.
Consider the nonlinear MAS with partially unknown dynamics (1), the local neighborhood containment error dynamics (4) and the critic NN with the modified updating laws (27) and (28). Then the weight estimation error $\tilde{\phi}_i$ is guaranteed to be asymptotically stable.
Proof of Theorem 3.
Define $\tilde{\eta}_i = \eta_i - \hat{\eta}_i$ and choose the Lyapunov function candidate as
$$\Xi_{ci} = \frac{1}{2l_{ci}}\tilde{\phi}_i^T\tilde{\phi}_i + \frac{1}{2l_{si}}\tilde{\eta}_i^2.$$
According to (28), $\tilde{\eta}_i$ is updated by
$$\dot{\tilde{\eta}}_i = -l_{si}\,\frac{\tilde{\phi}_i^T\theta_i}{(1 + \theta_i^T\theta_i)^2}.$$
Considering (29) and (31), the time derivative of (30) is
$$\begin{aligned}\dot{\Xi}_{ci} &= \frac{1}{l_{ci}}\tilde{\phi}_i^T\dot{\tilde{\phi}}_i + \frac{1}{l_{si}}\tilde{\eta}_i\dot{\tilde{\eta}}_i \\ &= -\frac{\tilde{\phi}_i^T\theta_i}{(1 + \theta_i^T\theta_i)^2}\big(\tilde{\phi}_i^T\theta_i - \Phi_i + \hat{\eta}_i\big) - \frac{\tilde{\phi}_i^T\theta_i}{(1 + \theta_i^T\theta_i)^2}\tilde{\eta}_i \\ &= -\tilde{\phi}_i^T\Psi_i\tilde{\phi}_i + \frac{\tilde{\phi}_i^T\theta_i}{(1 + \theta_i^T\theta_i)^2}\big(\Phi_i - \hat{\eta}_i - \tilde{\eta}_i\big),\end{aligned}$$
where $\Psi_i = \theta_i\theta_i^T/(1 + \theta_i^T\theta_i)^2$. Noting that $\hat{\eta}_i + \tilde{\eta}_i = \eta_i$ and using Assumption 2, (32) becomes
$$\dot{\Xi}_{ci} \leq -\lambda_{\min}(\Psi_i)\|\tilde{\phi}_i\|^2 + \bigg|\frac{\tilde{\phi}_i^T\theta_i}{(1 + \theta_i^T\theta_i)^2}\bigg|\big(\|\Phi_i\| - \eta_i\big) \leq -\lambda_{\min}(\Psi_i)\|\tilde{\phi}_i\|^2.$$
This indicates $\dot{\Xi}_{ci} \leq 0$. Therefore, one can conclude that $\tilde{\phi}_i$ is ensured to be asymptotically stable. □
Under the framework of the critic-only architecture, the IRL-based OCC scheme is presented. For each follower, the local neighborhood containment error (3) is established by communicating with its neighbors and the leaders. The value function of each follower is approximated by the critic NN (23), whose weights are tuned by a modified weight updating law (27). Based on (1), (3) and (23), the local OCC protocol (24) is obtained. The structural diagram of the developed IRL-based OCC scheme is shown in Figure 1.
Remark 1. 
In the actor–critic architecture, the optimal value function and the optimal control policy are approximated by a critic NN and an actor NN, respectively. In the critic-only architecture, by contrast, the optimal value function is approximated by a critic NN and the optimal control policy is directly obtained by combining (10) and (22). Hence, the critic-only architecture achieves the same performance as the actor–critic one while using only a single critic NN, which simplifies the control structure and reduces the computational burden.

3.4. Stability Analysis

Assumption 3.
$\phi_i^*$, $\tilde{\phi}_i$, $\nabla\sigma_i(\cdot)$, $\nabla\omega_i(\cdot)$ and $g_i(\cdot)$ are norm-bounded, i.e.,
$$\|\phi_i^*\| \leq \phi_{iM}, \quad \|\tilde{\phi}_i\| \leq \bar{\phi}_{iM}, \quad \|\nabla\sigma_i(\cdot)\| \leq \bar{\sigma}_{iM}, \quad \|\nabla\omega_i(\cdot)\| \leq \bar{\omega}_{iM}, \quad \|g_i(\cdot)\| \leq \bar{g}_{iM},$$
where $\phi_{iM}$, $\bar{\phi}_{iM}$, $\bar{\sigma}_{iM}$, $\bar{\omega}_{iM}$ and $\bar{g}_{iM}$ are positive constants.
Theorem 4. 
Consider the nonlinear MAS with partially unknown dynamics (1), the local neighborhood containment error dynamics (4), the optimal value function (8) and the critic NN updated by (27) and (28). The local containment control protocol (24) guarantees that the closed-loop containment error system (4) is UUB.
Proof of Theorem 4.
The Lyapunov function candidate is chosen as
$$\Xi_i = C_i^*(e_i).$$
Considering (20), (21) and Assumption 3, the time derivative of (33) along (4) is
$$\begin{aligned}\dot{\Xi}_i &= \dot{C}_i^*(e_i) = \nabla C_i^{*T}(e_i)\Big(F_i + c_ig_i(x_i)\hat{\mu}_i - \sum_{p \in N_i} a_{ip}g_p(x_p)\hat{\mu}_p\Big) \\ &= \nabla C_i^{*T}(e_i)\Big(c_ig_i(x_i)(\hat{\mu}_i - \mu_i^*) - \sum_{p \in N_i} a_{ip}g_p(x_p)(\hat{\mu}_p - \mu_p^*)\Big) - e_i^TQ_ie_i - \sum_{p \in \{N_i, i\}} \mu_p^{*T}R_{ip}\mu_p^* \\ &\leq \big(\bar{\sigma}_{iM}\phi_{iM} + \bar{\omega}_{iM}\big)\Big(c_i\bar{g}_{iM}\|\hat{\mu}_i - \mu_i^*\| + \sum_{p \in N_i} a_{ip}\bar{g}_{pM}\|\hat{\mu}_p - \mu_p^*\|\Big) - \lambda_{\min}(Q_i)\|e_i\|^2.\end{aligned}$$
Notice that
$$\begin{aligned}\|\hat{\mu}_i - \mu_i^*\| &= \Big\|{-\frac{1}{2}}c_iR_{ii}^{-1}g_i^T(x_i)\nabla\sigma_i^T(e_i)\hat{\phi}_i + \frac{1}{2}c_iR_{ii}^{-1}g_i^T(x_i)\big(\nabla\sigma_i^T(e_i)\phi_i^* + \nabla\omega_i(e_i)\big)\Big\| \\ &= \frac{1}{2}c_i\Big\|R_{ii}^{-1}g_i^T(x_i)\big(\nabla\sigma_i^T(e_i)\tilde{\phi}_i + \nabla\omega_i(e_i)\big)\Big\| \leq \frac{c_i\bar{g}_{iM}}{2\lambda_{\min}(R_{ii})}\big(\bar{\sigma}_{iM}\bar{\phi}_{iM} + \bar{\omega}_{iM}\big).\end{aligned}$$
Then, (34) becomes
$$\dot{\Xi}_i \leq \big(\bar{\sigma}_{iM}\phi_{iM} + \bar{\omega}_{iM}\big)\bigg(\frac{c_i^2\bar{g}_{iM}^2}{2\lambda_{\min}(R_{ii})}\big(\bar{\sigma}_{iM}\bar{\phi}_{iM} + \bar{\omega}_{iM}\big) + \sum_{p \in N_i}\frac{c_pa_{ip}\bar{g}_{pM}^2}{2\lambda_{\min}(R_{pp})}\big(\bar{\sigma}_{pM}\bar{\phi}_{pM} + \bar{\omega}_{pM}\big)\bigg) - \lambda_{\min}(Q_i)\|e_i\|^2.$$
Let $\Pi_{i1} = \frac{c_i^2\bar{g}_{iM}^2}{2\lambda_{\min}(R_{ii})}(\bar{\sigma}_{iM}\bar{\phi}_{iM} + \bar{\omega}_{iM}) + \sum_{p \in N_i}\frac{c_pa_{ip}\bar{g}_{pM}^2}{2\lambda_{\min}(R_{pp})}(\bar{\sigma}_{pM}\bar{\phi}_{pM} + \bar{\omega}_{pM})$. Thus, (35) turns to
$$\dot{\Xi}_i \leq \big(\bar{\sigma}_{iM}\phi_{iM} + \bar{\omega}_{iM}\big)\Pi_{i1} - \lambda_{\min}(Q_i)\|e_i\|^2 = \Pi_{i2} - \lambda_{\min}(Q_i)\|e_i\|^2,$$
where $\Pi_{i2} = (\bar{\sigma}_{iM}\phi_{iM} + \bar{\omega}_{iM})\Pi_{i1}$.
It shows that $\dot{\Xi}_i < 0$ if $e_i$ lies outside the compact set
$$\Omega_{e_i} = \bigg\{e_i : \|e_i\| \leq \sqrt{\frac{\Pi_{i2}}{\lambda_{\min}(Q_i)}}\bigg\}.$$
Therefore, the closed-loop containment error system (4) is UUB under the local containment control protocol (24). □
Remark 2.
From Assumption 1, the nonlinear functions $f(x_i)$ and $g_i(x_i)$ are Lipschitz continuous on a compact set $\Omega_i$ containing the origin, with $f(0) = 0$. This indicates that the developed control scheme is effective on the compact set $\Omega_i$; if the system states leave this compact set, the scheme might be invalid. In Theorem 4, the system stability was analyzed within such a compact set via the Lyapunov direct method, which means the closed-loop system is stable on the compact set under the developed IRL-based OCC scheme.

4. Simulation Study

This section provides two simulation examples to support the developed IRL-based OCC scheme.

4.1. Example 1

Consider a six-node network containing three leader nodes. The directed topology of the graph is displayed in Figure 2.
As displayed in Figure 2, nodes 1–3 stand for leaders 1–3 and nodes 4–6 represent followers 1–3. In (3), the edge weights and pinning gains were set to 0.5. The node dynamics of the jth leader are described by $\dot{r}_j = \bar{A}r_j$, where $r_j = [r_{j1}, r_{j2}]^T \in \mathbb{R}^2$ represents the state vector, $j = 1, 2, 3$, and
$$\bar{A} = \begin{bmatrix} -0.1 & 1 \\ -1 & -0.1 \end{bmatrix}.$$
For the ith follower, the node dynamics are formulated as $\dot{x}_i = \bar{A}x_i + \bar{B}_i\mu_i$, where $x_i = [x_{i1}, x_{i2}]^T \in \mathbb{R}^2$ and $\mu_i \in \mathbb{R}$ with $i = 1, 2, 3$, $\bar{B}_1 = [1.5, 1]^T$, $\bar{B}_2 = [1, 1]^T$ and $\bar{B}_3 = [1, 0.5]^T$. The local neighborhood containment error vector $e_i = [e_{i1}, e_{i2}]^T \in \mathbb{R}^2$ is calculated by (3).
In the simulation, $C_i(e_i)$ was reconstructed by a critic NN with a 2–5–1 structure. The activation function was chosen as $\sigma_i(e_i) = [e_{i1}^2, e_{i1}e_{i2}, e_{i2}^2, e_{i1}^2e_{i2}, e_{i2}^2e_{i1}]^T$. The initial node states were characterized as $x_1(0) = [0.50, 1.00]^T$, $x_2(0) = [1.00, 0.50]^T$, $x_3(0) = [0.80, 0.30]^T$, $r_1(0) = [0.62, 0.83]^T$, $r_2(0) = [0.45, 0.40]^T$ and $r_3(0) = [0.30, 0.22]^T$. The related parameters were chosen as $Q_i = 5I_2$, $R_{ip} = R_{ii} = 1$, $l_{ci} = 0.1$ and $l_{si} = 0.1$.
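Reusing the sketches of (3) and (24) above, a closed-loop skeleton of Example 1 might look as follows; the follower-graph topology below is a placeholder for the one in Figure 2, the sign pattern of $\bar{A}$ is an assumption, and the critic weights are left at their initial values rather than trained:

```python
import numpy as np

A_bar = np.array([[-0.1,  1.0],
                  [-1.0, -0.1]])                      # drift matrix (signs assumed)
B_bar = [np.array([[1.5], [1.0]]),
         np.array([[1.0], [1.0]]),
         np.array([[1.0], [0.5]])]
nbrs = {0: [1], 1: [2], 2: [0]}                       # placeholder follower edges
pins = {0: [0], 1: [1], 2: [2]}                       # placeholder leader pinning

r = [np.array([0.62, 0.83]), np.array([0.45, 0.40]), np.array([0.30, 0.22])]
x = [np.array([0.50, 1.00]), np.array([1.00, 0.50]), np.array([0.80, 0.30])]
phi_hat = [np.zeros(5) for _ in range(3)]             # critic weights, trained online

dt = 1e-2
for _ in range(int(25.0 / dt)):
    r = [rj + dt * (A_bar @ rj) for rj in r]          # leader dynamics
    x_next = []
    for i in range(3):
        e_i = local_containment_error(
            x[i], [x[p] for p in nbrs[i]], [0.5] * len(nbrs[i]),
            [r[j] for j in pins[i]], [0.5] * len(pins[i]))
        c_i = 0.5 * (len(nbrs[i]) + len(pins[i]))     # c_i = d_ii + sum_j b_ij
        u_i = mu_hat(e_i, phi_hat[i], B_bar[i], c_i, np.eye(1))  # R_ii = 1
        x_next.append(x[i] + dt * (A_bar @ x[i] + (B_bar[i] @ u_i)))
    x = x_next
```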
The simulation results under the developed IRL-based OCC protocols are shown in Figure 3, Figure 4 and Figure 5. The evolution of the local neighborhood containment errors of the three followers is shown in Figure 3, which indicates that the errors were regulated to zero under the developed control protocols; thus, the containment control of the MAS was achieved. Figure 4 and Figure 5 depict the state curves of the leaders and the followers, where all followers moved into and stayed within the region formed by the envelope curves, implying that satisfactory containment performance was acquired. The state curves of the followers and the leaders are displayed as a 2-D phase plane plot in Figure 6, where the region enveloped by the three leaders $\upsilon_1$, $\upsilon_2$ and $\upsilon_3$ is shown at three different instants ($t = 16.0$ s, $20.3$ s and $25.0$ s). We can observe from Figure 6 that the followers converged to the convex hull.

4.2. Example 2

Consider a nonlinear MAS consisting of three single-link robot arms as followers and three leader nodes. Each robot arm consists of a rigid link coupled through a gear train to a direct-current motor [28]. The directed topology among these robot arms is shown in Figure 2. All edge weights and pinning gains were chosen as 1.
The state trajectories of the leaders are given by $r_1 = [0.6\sin(t), 0.6\cos(t)]^T$, $r_2 = [0.4\sin(t + \frac{\pi}{6}), 0.4\cos(t + \frac{\pi}{6})]^T$ and $r_3 = [0.2\sin(t - \frac{\pi}{6}), 0.2\cos(t - \frac{\pi}{6})]^T$. The single-link robot arm of each follower can be described as
$$J\ddot{z}_i + \bar{B}\dot{z}_i + \bar{M}gl\sin(z_i) = u_i,$$
where $J = 9\ \mathrm{kg \cdot m^2}$, $\bar{B} = 30.5$, $\bar{M} = 1\ \mathrm{kg}$, $l = 1\ \mathrm{m}$, $g = 9.8\ \mathrm{m/s^2}$ and $i = 1, 2, 3$. The notations of the model (36) are defined in Table 1.
Define $x_i = [x_{i1}, x_{i2}]^T = [z_i, \dot{z}_i]^T \in \mathbb{R}^2$ and $\mu_i = u_i$. For the ith follower, the model (36) can be rewritten as
$$\begin{bmatrix}\dot{x}_{i1}\\ \dot{x}_{i2}\end{bmatrix} = \begin{bmatrix} x_{i2} \\ -\frac{\bar{M}gl}{J}\sin(x_{i1}) - \frac{\bar{B}}{J}x_{i2}\end{bmatrix} + \begin{bmatrix}0\\ \frac{1}{J}\end{bmatrix}\mu_i.$$
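A direct encoding of the state-space model above, with the parameter values from the text (a sketch; the integration step in the usage example is arbitrary):

```python
import numpy as np

# Single-link arm of Example 2, state x = [z_i, z_i_dot].
J, B_BAR, M_BAR, L_ARM, G = 9.0, 30.5, 1.0, 1.0, 9.8

def arm_dynamics(x, mu):
    """x_dot = f(x) + g(x) * mu for one follower arm."""
    z, z_dot = x
    f = np.array([z_dot,
                  -(M_BAR * G * L_ARM / J) * np.sin(z) - (B_BAR / J) * z_dot])
    g_vec = np.array([0.0, 1.0 / J])
    return f + g_vec * mu

# Example: one Euler step from rest under a unit torque input.
x = np.array([0.0, 0.0])
x = x + 0.01 * arm_dynamics(x, 1.0)
```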
Similar to Example 1, the local neighborhood containment error vector is given as $e_i = [e_{i1}, e_{i2}]^T \in \mathbb{R}^2$.
The critic NN structure and the related activation functions were initialized as in Example 1. The critic NN weights were initialized with random values within $(0, 36)$, and the initialization and control parameters were chosen as $r_1(0) = [0, 0.6]^T$, $r_2(0) = [0.4\sin(\frac{\pi}{6}), 0.4\cos(\frac{\pi}{6})]^T$, $r_3(0) = [-0.2\sin(\frac{\pi}{6}), 0.2\cos(\frac{\pi}{6})]^T$, $x_1(0) = [0.8, 0.1]^T$, $x_2(0) = [0.6, 0.5]^T$, $x_3(0) = [0.7, 0.3]^T$, $Q_i = 18I_2$, $R_{ip} = 5$, $t_\tau = 0.1$ s, $l_{ci} = 0.1$ and $l_{si} = 0.1$.
Figure 7, Figure 8, Figure 9, Figure 10 and Figure 11 show the simulation results. As depicted in Figure 7, the local neighborhood containment errors converged to a small region around zero, which shows that the containment control of the nonlinear MAS was achieved. From Figure 8 and Figure 9, it can be found that the state trajectories of the single-link robot arms (36) entered and stayed within the region enveloped by the leader nodes as time progressed, indicating the satisfactory performance of the developed scheme. The evolution curves of all agents are illustrated as a 2-D phase plane plot in Figure 10. We can see that the convex hull formed by the leaders $\upsilon_1$, $\upsilon_2$ and $\upsilon_3$ contains the followers at the time instants $t = 5.0$ s, $10.0$ s, $14.5$ s and $26.0$ s, which implies that the followers converged to the convex hull. Figure 11 shows the containment control inputs, reflecting the regulation process of the containment error system.

5. Conclusions

This paper investigated the OCC problem of nonlinear MASs with partially unknown dynamics via the IRL method. Based on the IRL method, the integral Bellman equation was constructed to relax the requirement of the drift dynamics. The proposed control algorithm was guaranteed to converge by analyzing the convergence of IRL. With the aid of the universal approximation capability of NNs, the solution of the HJBE was acquired by a critic NN with a modified weight-updating law, which guaranteed the asymptotic stability of the weight error dynamics. By using the Lyapunov stability theorem, we showed that the closed-loop containment error system is UUB. The simulation results of two examples illustrated the effectiveness of the proposed IRL-based OCC scheme. In the considered MASs, the information among all agents is transmitted over a communication network, which is always confronted with security issues such as attacks and packet dropouts. The focus of our future work is to develop a novel distributed resilient containment control scheme for MASs subject to attacks and packet dropouts.

Author Contributions

Conceptualization, Q.W. and Y.W. (Yonghua Wang); methodology, Q.W.; software, Q.W.; investigation, Q.W.; writing—original draft preparation, Q.W.; writing—review and editing, Y.W. (Yongheng Wu); visualization, Y.W. (Yongheng Wu); supervision, Y.W. (Yonghua Wang); funding acquisition, Y.W. (Yonghua Wang). All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by Open Research Fund of The State Key Laboratory for Management and Control of Complex Systems under grant no. 20220118.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data is contained within this manuscript.

Acknowledgments

We appreciate all the authors for their contributions and the support of the foundation.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Jimenez, A.F.; Cardenas, P.F.; Jimenez, F. Intelligent IoT-multiagent precision irrigation approach for improving water use efficiency in irrigation systems at farm and district scales. Comput. Electron. Agric. 2022, 192, 106635.
  2. Vallejo, D.; Castro-Schez, J.; Glez-Morcillo, C.; Albusac, J. Multi-agent architecture for information retrieval and intelligent monitoring by UAVs in known environments affected by catastrophes. Eng. Appl. Artif. Intell. 2020, 87, 103243.
  3. Liu, Y.; Wang, Y.; Li, Y.; Gooi, H.B.; Xin, H. Multi-agent based optimal scheduling and trading for multi-microgrids integrated with urban transportation networks. IEEE Trans. Power Syst. 2021, 36, 2197–2210.
  4. Deng, Q.; Peng, Y.; Qu, D.; Han, T.; Zhan, X. Neuro-adaptive containment control of unmanned surface vehicles with disturbance observer and collision-free. ISA Trans. 2022, 129, 150–156.
  5. Hamani, N.; Jamont, J.P.; Occello, M.; Ben-Yelles, C.B.; Lagreze, A.; Koudil, M. A multi-cooperative-based approach to manage communication in wireless instrumentation systems. IEEE Syst. J. 2018, 12, 2174–2185.
  6. Ren, W.; Beard, R.W. Consensus seeking in multiagent systems under dynamically changing interaction topologies. IEEE Trans. Autom. Control 2005, 50, 655–661.
  7. Luo, K.; Guan, Z.H.; Cai, C.X.; Zhang, D.X.; Lai, Q.; Xiao, J.W. Coordination of nonholonomic mobile robots for diffusive threat defense. J. Frankl. Inst. 2019, 356, 4690–4715.
  8. Yu, Z.; Liu, Z.; Zhang, Y.; Qu, Y.; Su, C.Y. Distributed finite-time fault-tolerant containment control for multiple unmanned aerial vehicles. IEEE Trans. Neural Netw. Learn. Syst. 2020, 31, 2077–2091.
  9. Li, Y.; Qu, F.; Tong, S. Observer-based fuzzy adaptive finite-time containment control of nonlinear multiagent systems with input delay. IEEE Trans. Cybern. 2021, 51, 126–137.
  10. Li, Z.; Xue, H.; Pan, Y.; Liang, H. Distributed adaptive event-triggered containment control for multi-agent systems under a funnel function. Int. J. Robust Nonlinear Control 2022.
  11. Li, Y.; Liu, M.; Lian, J.; Guo, Y. Collaborative optimal formation control for heterogeneous multi-agent systems. Entropy 2022, 24, 1440.
  12. Zhao, L.; Yu, J.; Shi, P. Command filtered backstepping-based attitude containment control for spacecraft formation. IEEE Trans. Syst. Man Cybern. Syst. 2021, 51, 1278–1287.
  13. Liu, D.; Wei, Q.; Wang, D.; Yang, X.; Li, H. Adaptive Dynamic Programming with Applications in Optimal Control; Springer: Cham, Switzerland, 2017.
  14. Bellman, R.E. Dynamic Programming; Princeton University Press: Princeton, NJ, USA, 1957.
  15. Abu-Khalaf, M.; Lewis, F.L. Nearly optimal control laws for nonlinear systems with saturating actuators using a neural network HJB approach. Automatica 2005, 41, 779–791.
  16. Liu, D.; Xue, S.; Zhao, B.; Luo, B.; Wei, Q. Adaptive dynamic programming for control: A survey and recent advances. IEEE Trans. Syst. Man Cybern. Syst. 2021, 51, 142–160.
  17. Vamvoudakis, K.G.; Lewis, F.L.; Hudas, G.R. Multi-agent differential graphical games: Online adaptive learning solution for synchronization with optimality. Automatica 2012, 48, 1598–1611.
  18. Zhang, H.; Zhang, J.; Yang, G.; Luo, Y. Leader-based optimal coordination control for the consensus problem of multiagent differential games via fuzzy adaptive dynamic programming. IEEE Trans. Fuzzy Syst. 2015, 23, 152–163.
  19. Zhao, W.; Zhang, H. Distributed optimal coordination control for nonlinear multi-agent systems using event-triggered adaptive dynamic programming method. ISA Trans. 2019, 91, 184–195.
  20. Cui, J.; Pan, Y.; Xue, H.; Tan, L. Simplified optimized finite-time containment control for a class of multi-agent systems with actuator faults. Nonlinear Dyn. 2022, 109, 2799–2816.
  21. Xu, J.; Wang, L.; Liu, Y.; Xue, H. Event-triggered optimal containment control for multi-agent systems subject to state constraints via reinforcement learning. Nonlinear Dyn. 2022, 109, 1651–1670.
  22. Xiao, W.; Zhou, Q.; Liu, Y.; Li, H.; Lu, R. Distributed reinforcement learning containment control for multiple nonholonomic mobile robots. IEEE Trans. Circuits Syst. I Regul. Pap. 2022, 69, 896–907.
  23. Chen, C.; Lewis, F.L.; Xie, K.; Xie, S.; Liu, Y. Off-policy learning for adaptive optimal output synchronization of heterogeneous multi-agent systems. Automatica 2020, 119, 109081.
  24. Yu, D.; Ge, S.S.; Li, D.; Wang, P. Finite-horizon robust formation-containment control of multi-agent networks with unknown dynamics. Neurocomputing 2021, 458, 403–415.
  25. Zuo, S.; Song, Y.; Lewis, F.L.; Davoudi, A. Optimal robust output containment of unknown heterogeneous multiagent system using off-policy reinforcement learning. IEEE Trans. Cybern. 2018, 48, 3197–3207.
  26. Mazouchi, M.; Naghibi-Sistani, M.B.; Hosseini Sani, S.K.; Tatari, F.; Modares, H. Observer-based adaptive optimal output containment control problem of linear heterogeneous multiagent systems with relative output measurements. Int. J. Adapt. Control Signal Process. 2019, 33, 262–284.
  27. Yang, Y.; Modares, H.; Wunsch, D.C.; Yin, Y. Optimal containment control of unknown heterogeneous systems with active leaders. IEEE Trans. Control Syst. Technol. 2019, 27, 1228–1236.
  28. Zhang, H.; Lewis, F.L.; Qu, Z. Lyapunov, adaptive, and optimal design techniques for cooperative systems on directed communication graphs. IEEE Trans. Ind. Electron. 2012, 59, 3026–3041.
Figure 1. Structural diagram of the developed IRL-based OCC scheme.
Figure 2. The directed topology of Example 1.
Figure 3. Local neighborhood containment errors $e_i$.
Figure 4. Performance of containment control ($r_{j1}$ and $x_{i1}$).
Figure 5. Performance of containment control ($r_{j2}$ and $x_{i2}$).
Figure 6. State trajectories.
Figure 7. Local neighborhood containment errors of the three followers.
Figure 8. Performance of containment control ($r_{j1}$ and $x_{i1}$).
Figure 9. Performance of containment control ($r_{j2}$ and $x_{i2}$).
Figure 10. State trajectories.
Figure 11. Containment control inputs of the three followers.
Table 1. Notations of the single-link robot arm.

Symbol | Notation
$z_i$ | Link angle
$\dot{z}_i$ | Angular velocity of the link
$J$ | Total rotational inertia of the link and motor
$\bar{B}$ | Overall damping coefficient
$\bar{M}$ | Total mass of the link
$l$ | Distance from the joint axis to the mass center of the link
$u_i$ | Control input
