Can Decentralized Control Outperform Centralized? The Role of Communication Latency

In this article, we examine the influence of communication latency on the performance of networked control systems. Even though distributed control architectures offer advantages in terms of communication, maintenance costs, and scalability, it remains an open question how communication latency that varies with network topology influences closed-loop performance. For networks in which delays increase with the number of links, we establish the existence of a fundamental performance trade-off induced by the control architecture. In particular, we use consensus dynamics with single- and double-integrator agents to show that, if the delays grow fast enough, a sparse controller with nearest-neighbor interactions can outperform a centralized one with an all-to-all communication topology.


I. INTRODUCTION
IT IS widely accepted that modern multi-agent systems cannot rely on centralized control architectures. This conclusion stems from issues related to gathering all decision making at a central node, ranging from lack of robustness and proneness to failures, to maintenance costs and communication overhead. Indeed, large-scale networks have experienced a net shift towards decentralized and distributed architectures [1], [2]. Moreover, the recent deployment of powerful communication protocols for massive networks, e.g., 5G [3], [4], and advances in embedded electronics [5], [6] and in algorithms for low-power devices (e.g., TinyML [7]), which allow computational tasks to be spread across network nodes according to edge- and fog-computing paradigms [8]-[10], are making such networked systems grow at an unprecedented scale, further stressing the importance of distributed controller architectures.
A challenging issue in large-scale wireless network systems is the latency arising from channel constraints, such as limited bandwidth or packet retransmissions. To address this problem, research efforts have moved along two main directions.
Related work in control theory deals with control design for distributed architectures, as classical methods such as LQG or H2/H∞ control require an all-to-all information exchange, which is infeasible for large-scale systems.
A large body of work focuses on stability, e.g., [11], [12] are concerned with finite-time delay-dependent stability of discrete-time systems, [13] finds sufficient conditions for uniform stability of linear delay systems, [14] characterizes stability and consensus conditions with homogeneous and heterogeneous feedback delays, and [15], [16] analyze consensus and error compensation for vehicular platoons. Another line of work deals with maximizing performance for structured controllers, e.g., [17]-[19] study H2-norm minimization for time-delay network systems, [20] proposes a cyber-physical architecture with LQR for wide-area power systems, [21] develops a procedure for time-varying dead-time compensation by adapting the Filtered Smith Predictor, and [22] investigates sensor-and-processing selection for optimal estimation in star networks.
A more recent trend is the optimization of the controller architecture itself. For large-scale systems, this means sparsifying the structure to enhance communication and scalability, which is achieved by introducing penalty terms that trade performance for controller complexity [23]-[30]. In particular, [29] proposes the Regularization for Design framework, addressing the optimization of communication links, while [30] investigates communication locality and its relation to control design within the System Level Synthesis framework.
Related work in optimization theory is concerned with the minimization of distributed cost functions, which are only partially accessible at each agent. A large body of literature has been devoted to the study of suitable algorithms, a short list of which is represented by [31]-[36]. In particular, a line of work has been concerned specifically with the design of algorithms in the presence of communication delays, the main issues being related to convergence conditions. For example, [37]-[41] study consensus of multi-agent systems with additive or multiplicative time-delays under various network topologies and agent dynamics. This approach usually assumes that the communication network is given and focuses on the information exchange and processing performed by the agents from an optimization standpoint.
Addressed Problem. Even though both control design for delay-dependent dynamics and design of controller architectures are well-studied topics, it remains unclear how network connectivity affects closed-loop performance in the presence of architecture-dependent communication latency. When the total available bandwidth does not increase with the size of the network [41], or when multi-hop communication is used among low-power devices [42], the number of active communication links may affect such latency in a non-negligible way. In this case, it is important that the control design accounts for the increase in delays when new communication links are introduced.

In Sections III-IV, we lay the groundwork for our main result. In Section III, we derive conditions for mean-square stability and compute the steady-state variance of continuous-time stochastically forced systems using Stochastic Delay Differential Equations (SDDEs). In Section IV, we prove that the control design problem is convex, and in Section V we present our main results: by numerically computing the optimal controller gains, we show that the closed-loop performance is optimized by sparse architectures. Furthermore, we derive the analytical expression (I.1) for continuous-time single-integrator dynamics, which demonstrates that the minimizer is in general nontrivial.
To address wireless communication, we study discrete-time systems in Section VI and show that the fundamental behavior of the system does not change. Table I summarizes our technical results and the theoretical tools used throughout the paper. Apart from classical control techniques such as the Jury stability criterion, we also leverage more unconventional tools from mathematical literature, such as exponential polynomials [46]. Concluding remarks are given in Section VII.

II. PROBLEM SETUP
We consider an undirected network with N agents in which the state of the ith agent at time t is given by x̄_i(t) ∈ R, with control input u_i(t) ∈ R. For notational convenience, we introduce the aggregate state x̄(t) and the aggregate control input u(t) by stacking the states x̄_i(t) and control inputs u_i(t) of each subsystem.
Problem Statement. The agents aim to reach consensus towards a common state trajectory. The ith component of the vector x(t) := Ω x̄(t) represents the mismatch between the state of agent i and the average network state at time t [44], where Ω := I_N - (1/N) 1_N 1_N^T and 1_N ∈ R^N is the vector of all ones, such that Ω 1_N = 0.

Ring Topology. We focus on the ring topology to obtain analytical insight into optimal control design and fundamental performance trade-offs in the presence of communication delays. While some of our notation is tailored to this topology (e.g., see equations (II.2) and (II.5)), in Section IV-A we discuss the extension of the optimal control design to generic undirected networks and complement these developments with computational experiments in Section V.
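The consensus-error map can be sketched numerically. This is a minimal sketch assuming the standard averaging projector Ω = I_N - (1/N) 1_N 1_N^T, which is consistent with the stated property Ω 1_N = 0:

```python
import numpy as np

def consensus_projector(N: int) -> np.ndarray:
    """Omega = I - (1/N) * ones * ones^T removes the network average,
    so (Omega @ xbar)[i] is agent i's deviation from the mean state."""
    ones = np.ones((N, 1))
    return np.eye(N) - (ones @ ones.T) / N

Omega = consensus_projector(5)
xbar = np.array([1.0, 2.0, 3.0, 4.0, 5.0])  # aggregate state (mean = 3.0)
x = Omega @ xbar                            # consensus error: deviations from mean
```

Note that Ω is an orthogonal projector (Ω² = Ω), so the error vector x(t) lives in the subspace orthogonal to the consensus direction 1_N.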
Assumption 1 (Communication model). Data are exchanged through a shared wireless channel in a symmetric fashion. Agent i receives state measurements from all agents within n communication hops. All measurements are received with delay τ_n := f(n), where f(·) is a positive increasing sequence. In particular, in the ring topology, agent i receives state measurements from the 2n closest agents, that is, from the n pairs of agents at distances 1, . . . , n, with 1 ≤ n < N/2.

Remark 1 (Architecture parametrization). The parameter n plays a crucial role throughout our discussion. In particular, we use it to (i) evaluate the optimal performance for a given budget of links (see Problem 1); and to (ii) compare the optimal performance of different control architectures. In the first part of the paper, we examine circular formations and n represents how many neighbor pairs communicate with each agent. For general undirected networks, n determines the number of communication hops for each agent. In general, n characterizes the sparsity of a controller architecture: sparse controllers correspond to small n, while highly connected ones correspond to large n.
Feedback Control. Agent i uses the received information to compute the state mismatches y_i,±(t) relative to its neighbors, and the proportional control input is given in (II.3), where measurements are delayed according to Assumption 1.
For networks with double-integrator agents, the control input u_i(t) may also include a derivative term, as in (II.4). The derivative term in (II.4) is delay free because it only requires measurements coming from the agent itself, which we assume to be available instantaneously. The proportional input can be compactly written as u(t) = -K x̄(t - τ_n). With the ring topology, the feedback gain matrix is

K = circ(k, -k_1, . . . , -k_n, 0, . . . , 0, -k_n, . . . , -k_1), (II.5)

where circ(a_1, . . . , a_n) denotes the circulant matrix in R^{n×n} with elements a_1, . . . , a_n in its first row.
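The circulant structure (II.5) can be instantiated numerically. Equation (II.5) leaves the diagonal entry k implicit; this sketch assumes k = 2(k_1 + ... + k_n), so that K has zero row sums and a zero eigenvalue along the consensus direction:

```python
import numpy as np

def ring_gain_matrix(N: int, gains) -> np.ndarray:
    """Build K = circ(k, -k_1, ..., -k_n, 0, ..., 0, -k_n, ..., -k_1)
    for a ring of N agents, assuming k = 2 * sum(gains) (zero row sums)."""
    first_row = np.zeros(N)
    first_row[0] = 2.0 * sum(gains)
    for ell, k_ell in enumerate(gains, start=1):
        first_row[ell] = -k_ell       # neighbor at distance +ell
        first_row[N - ell] = -k_ell   # neighbor at distance -ell
    # circulant matrix: row i is the first row cyclically shifted by i
    return np.array([np.roll(first_row, i) for i in range(N)])

K = ring_gain_matrix(8, [1.0, 0.5])   # n = 2 neighbor pairs
lam = np.sort(np.linalg.eigvalsh(K))  # real spectrum, since K is symmetric
```

Because the first row is symmetric, K is a symmetric circulant matrix; its eigenvalues are real, with one zero eigenvalue for the consensus mode and the rest positive for positive gains.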
For agents with additive stochastic disturbances (see Sections III and VI), we consider the following problem for each n.

Problem 1 (P_control). Design the feedback gains in order to minimize the steady-state variance of the consensus error x(t), where w.l.o.g. we assume E[x(·)] ≡ E[x(0)] = 0.

III. CONTINUOUS-TIME AGENT DYNAMICS
We now examine continuous-time networks with single- (Section III-A) and double-integrator (Section III-B) agent dynamics, derive conditions for mean-square stability, and compute the steady-state variance of a stochastically forced system. These developments are instrumental for the formulation of the control design problem which is used to compare different control architectures. In the optimal control problem, the steady-state variance determines the objective function and stability conditions represent constraints. While we first formulate and solve the problem for continuous-time dynamics, our results also hold for discrete-time systems; see Section VI. Also, all results in this section hold for generic undirected topologies.

A. Single Integrator Model
The dynamics of the ith agent are described by a first-order stochastic differential equation driven by standard Brownian noise w̄_i(·),

d x̄_i(t) = u_i(t) dt + d w̄_i(t). (III.1)

The network error dynamics are

dx(t) = -K x(t - τ_n) dt + dw(t), (III.2)

where the process noise is given by dw(t) ∼ N(0, ΩΩ^T dt). Exploiting the symmetry of the matrix K, we employ a change of variables that decouples (III.2) into the scalar subsystems

d x̃_j(t) = -λ_j x̃_j(t - τ_n) dt + d w̃_j(t), j = 1, . . . , N, (III.3)

where λ_j is the jth eigenvalue of K. The subsystem with λ_1 = 0 is a single integrator driven by standard Brownian noise.

Stability Analysis. Mean-square stability of scalar stochastic delay differential equations of the form (III.3) has been addressed in the literature. We build on the classical result in [45] to characterize consensus stability for the multi-agent formation.

Proposition 1 (Stability of CT single integrators). The network error x(t) is mean-square stable if and only if

0 < λ_j τ_n < π/2, j = 2, . . . , N. (III.4)

In this case, x(t) is a Gaussian process and its steady-state variance is given by

σ² = Σ_{j=2}^{N} σ²_I(λ_j), (III.5)

where σ²_I(λ_j) is the variance of the trivial solution of (III.3).

Sketch of Proof. In view of the decoupling, stability of (III.2) amounts to stability of all subsystems (III.3), j = 2, . . . , N, with the variances of x(t) and x̃(t) being equal. Condition (III.4) and expression (III.5) were derived in [45].
While the variance of delay-free systems is bounded for any positive eigenvalues λ_2, . . . , λ_N, the presence of delay constrains a stabilizing control according to (III.4). In fact, longer delays τ_n induce smaller upper bounds on the eigenvalues.
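The stability threshold in (III.4) can be probed by direct simulation of a decoupled subsystem of the form d x̃(t) = -λ x̃(t - τ) dt + dw(t). This Euler-Maruyama sketch uses illustrative values λτ = 1 < π/2 (stable) and λτ = 2 > π/2 (unstable):

```python
import numpy as np

def simulate_delayed_sde(lam, tau, T=200.0, dt=0.01, seed=0):
    """Euler-Maruyama for d x(t) = -lam * x(t - tau) dt + dw(t).
    Returns the sample variance over the final quarter of the run."""
    rng = np.random.default_rng(seed)
    d = int(round(tau / dt))                  # delay expressed in time steps
    steps = int(round(T / dt))
    x = np.zeros(steps + 1)                   # zero initial history
    for k in range(steps):
        delayed = x[k - d] if k >= d else 0.0
        x[k + 1] = x[k] - lam * delayed * dt + np.sqrt(dt) * rng.standard_normal()
    return float(np.var(x[3 * steps // 4:]))

tau = 1.0
v_stable = simulate_delayed_sde(lam=1.0, tau=tau)    # lam*tau = 1.0 < pi/2
v_unstable = simulate_delayed_sde(lam=2.0, tau=tau)  # lam*tau = 2.0 > pi/2
```

In the stable case the empirical variance settles near a finite value; past the π/2 threshold the trajectory exhibits growing oscillations, consistent with the delay-induced bound on the eigenvalues.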
The following result will prove useful in the control design.
Corollary 1. Let λ satisfy (III.4). Then the function σ²_I(λ) is strictly convex and the minimizer λ* is determined by (III.6).

Proof. Follows from standard computations on the derivatives of σ²_I(·). See Appendices A-B in the technical report [53].
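Corollary 1 can be illustrated numerically. The closed form of σ²_I is not reproduced in this section, so the sketch below assumes the classical expression σ²_I(λ) = (1 + sin(λτ))/(2λ cos(λτ)) for the stationary variance on the stability region 0 < λτ < π/2 (consistent with the blow-up at the boundaries {0, π/2} discussed later; the paper defers the exact formulas to [53]). Under this assumed form, the minimizer found by a golden-section search satisfies the fixed-point relation λ*τ = cos(λ*τ):

```python
import math

def sigma2_I(lam: float, tau: float) -> float:
    """Assumed stationary variance of d x(t) = -lam x(t - tau) dt + dw(t),
    valid on the mean-square stability region 0 < lam*tau < pi/2."""
    u = lam * tau
    assert 0.0 < u < math.pi / 2.0, "outside the stability region"
    return tau * (1.0 + math.sin(u)) / (2.0 * u * math.cos(u))

def argmin_lambda(tau: float) -> float:
    """Golden-section search for the minimizer of sigma2_I on (0, pi/(2*tau))."""
    phi = (math.sqrt(5.0) - 1.0) / 2.0
    a, b = 1e-6 / tau, (math.pi / 2.0 - 1e-6) / tau
    for _ in range(200):
        c, d = b - phi * (b - a), a + phi * (b - a)
        if sigma2_I(c, tau) < sigma2_I(d, tau):
            b = d
        else:
            a = c
    return 0.5 * (a + b)

tau = 2.0
lam_star = argmin_lambda(tau)   # numerically satisfies lam*tau = cos(lam*tau)
```

Note that under this assumed form, σ²_I(λ*) is proportional to τ, which matches the scaling σ²_I(λ*) = C* τ_n used in Section V-A.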

B. Double Integrator Model
We now examine networks in which each agent obeys second-order dynamics (III.7) with the PD control input (II.4). For simplicity, we normalize the delay by rescaling (III.7) into (III.8). Stacking the agent errors and their derivatives in the formation vector, the error dynamics can be decoupled as before, yielding (III.9).

Stability Analysis. We have the following result.
Proof. The proof is based on [46]. See Appendix A.
Remark 2 (Non-normalized delay). Under the original delay τ_n in (III.7), for j = 2, . . . , N condition (III.10) becomes a corresponding delay-dependent bound on the gains. Similar to the single-integrator case, Proposition 2 states that the presence of delay requires more restrictive conditions than positivity of the gains. In words, the system is stable if the instantaneous component of the control input in (II.4) is sufficiently "strong" compared to the delayed one. The steady-state variance of x̃_j(t) for j ≠ 1 can be computed via the integral expression (III.14) from [48, Section 4].

Model Approximation. Because embedding the integral (III.14) into an optimization problem is computationally challenging, we provide an alternative tractable formulation that can be used to gain insight into fundamental performance trade-offs. As shown in Appendix B, when the feedback gain η is sufficiently high, separation of time scales [49] allows us to approximate (III.9) with the first-order dynamics (III.15), where the variance of the Brownian motion n(t) is inversely proportional to η. In words, when the damping is high enough, the derivative of x̃_j(t) converges to zero much faster than x̃_j(t), which represents the dominant component of the dynamics. The utility of this approximation is illustrated in Fig. 2: with fixed η, the minimizer of the corresponding 1D variance curve, i.e., argmin_{λ_j} σ²_II(η, λ_j) (solid black line), approaches the minimizer λ* of the single-integrator model (dashed black, see Corollary 1) as η increases. We also note that the variance decreases with η.

IV. CONTROL DESIGN
Single Integrator Model. For system (III.2), Problem 1 amounts to (IV.1), and parameterization (III.3) allows us to rewrite it as (IV.2), with the stability condition given by (III.4). Linear dependence of the eigenvalues of K on the feedback gains [54] and Corollary 1 guarantee convexity of optimization problem (IV.2). Thus, the optimal feedback gains can be computed efficiently.
To make analytical progress and gain intuition, we also consider the approximation (IV.3) of (IV.2), which squeezes the spectrum of K about the "optimal" eigenvalue λ*. The variance σ²_I(·) can be approximated by a quadratic function around its minimum because it is strictly convex, differentiable in the stability region, and blows up at the boundaries {0, π/2}; see Fig. 3.
Proposition 3 (Near-optimal proportional control). The solution of problem (IV.3) is given by a spatially-constant feedback gain k̃*.

Proof. The result follows by applying properties of the DFT to (IV.3). See Appendices C-D in the technical report [53].
Proposition 3 shows that spatially-constant feedback gains provide good performance even when spatially-varying feedback gains are allowed. According to Corollary 1, the suboptimal gain k̃* decreases with the delay τ_n and with the number of agents involved in the feedback loops, thereby reflecting the benefits of communication.
Remark 3 (Convexity enables comparison). Convexity of the optimal control design problems (IV.2)-(IV.4) enables both efficient numerical computations of the optimal feedback gains for given n and fair comparison of the best achievable performance for different values of n.
Remark 5 (Optimal design for double integrators). A local minimizer of the original problem approximated by (IV.4) can be computed using the gradient-based method proposed in [17]. However, this approach has no guarantees of global optimality and its computational complexity is impractical for large-scale systems. In contrast, the convex approximation (IV.4) draws a parallel to the optimal design for the single-integrator model and provides insight into the centralized-decentralized trade-off.

A. General Symmetric Network Topology
Even though we utilize the ring topology to derive analytical results (see Section V-A), the control design can be extended to general undirected networks with symmetric feedback gain matrices K; for the single-integrator model, this yields (IV.5). The steady-state network error variance σ²(K) is a convex function if and only if σ²_I(λ_j) is convex [56], which is proved in Corollary 1 for continuous-time and in Appendix E for discrete-time systems. The optimal gains can then be found numerically via gradient-based methods, where the gradients of the eigenvalues can be computed using analytical [57], [58] or numerical [59] methods. On the other hand, the derivative feedback gain in σ²_II(η, λ_j) prevents us from establishing convexity for second-order systems in general. However, if σ²_II(η, λ_j) is convex in each coordinate, the design problem can be solved by alternately optimizing the proportional and derivative gains, and the centralized-decentralized trade-off can be studied irrespective of the particular topology.

V. THE CENTRALIZED-DISTRIBUTED TRADE-OFF
In the previous sections, we formulated the optimal control problem for a given controller architecture (i.e., the number of links) parametrized by n and showed how to compute the minimum-variance objective function and the corresponding constraints. In this section, we present our main result: we solve the optimal control problem for each n and compare the best achievable closed-loop performance across control architectures. For delays that increase linearly with n, i.e., f(n) ∝ n, we demonstrate that distributed controllers with few communication links outperform controllers with a larger number of communication links.

Figure 4a shows the steady-state variances obtained with single-integrator dynamics (IV.1) and the quadratic approximation (IV.3) for the ring topology with N = 50 nodes. The best performance is achieved by a sparse architecture with n = 2, in which each agent communicates with the two closest pairs of neighboring nodes. This should be contrasted with the nearest-neighbor and all-to-all communication topologies, which induce higher closed-loop variances. Thus, the advantage of introducing additional communication links diminishes beyond a certain threshold because of communication delays.

Figure 4b shows that the use of approximation (IV.4) with η* = 70 identifies nearest-neighbor information exchange as the near-optimal architecture for a double-integrator model with ring topology. This can be explained by noting that the variance of the process noise n(t) in the reduced model (III.15) is proportional to 1/η and thereby to τ_n, according to (III.8), making the variance scale with the delay.

Figures 4c-4d show the results obtained by solving the optimal control problem for discrete-time dynamics. The oscillations about the minimum in Fig. 4d are compatible with the investigated centralized-decentralized trade-off (I.1): in general, the sum of two monotone functions does not have a unique local minimum.
Details about discrete-time systems are deferred to Section VI. Interestingly, double integrators with continuous-time (Fig. 4b) and discrete-time (Fig. 4d) dynamics exhibit very different trade-off curves, whereby performance deteriorates monotonically for the former and oscillates for the latter. While a clear interpretation is difficult because there is no explicit expression of the variance as a function of n, one possible explanation might be the first-order approximation used to compute the gains in the continuous-time case.
Finally, Fig. 5 shows the optimization results for a random graph topology with discrete-time single-integrator agents. Here, n denotes the number of communication hops in the "original" network shown in Fig. 5: as n increases, each agent can first communicate with its nearest neighbors, then with its neighbors' neighbors, and so on. For a control architecture that utilizes different feedback gains for each communication link (i.e., we only require K = K^T), we demonstrate that, in this case, two communication hops provide the optimal closed-loop performance.
Additional computational experiments performed with different rates f(·) show that the optimal number of links increases for slower rates: for example, the optimal number of links is larger for f(n) = √n than for f(n) = n. These results are not reported because of space limitations.

A. Ring Topology: Analytical Insight into the Trade-Off
For a ring topology with continuous-time single-integrator agent dynamics, the centralized-decentralized trade-off can be explicitly quantified. By utilizing Proposition 3 to compute the feedback gains, the objective function can be factorized as the sum of a network term J_network(n) and a latency term J_latency(n), where σ²_I(λ̃*_j) = C̃*_j(n) τ_n and C̃*_j(n) only depends on n and can be computed exactly; see Appendix C. This holds because the suboptimal eigenvalues can be expressed as λ̃*_j = c̃*_j(n) λ* (cf. Proposition 3). Such a decomposition can be interpreted as a decoupling of the network (c̃*_j(n)) and latency (λ*) effects on the control design. By inspection, it can be seen that J_network(n) is a decreasing function of n and that J_latency(n) is determined by f(n). Furthermore, when f(·) is sublinear, the above expression can be equivalently written in the form (I.1), where σ²_I(λ*) = C* τ_n is the optimal variance according to (III.5) and Corollary 1. Indeed, the summation decreases at a superlinear rate, so that J_network(n) is a decreasing sequence. The terms in J_network(n), each associated with a decoupled subsystem (III.3), illustrate the benefits of communication: as n increases, the eigenvalues of K gain more degrees of freedom and can squeeze more tightly about λ*, reducing the performance gaps between the subsystems and the theoretical optimum. We note that J_network(n) vanishes for the fully connected architecture.
Even though analogous expressions could not be obtained for other dynamics, the curves in Fig. 4 exhibit trade-offs which are consistent with the above analysis.
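The trade-off can be reproduced in a small numerical sketch. The parameters below are illustrative (N = 50, linear delays f(n) = τ0·n with a hypothetical τ0 = 0.1), the design is restricted to a spatially-constant gain in the spirit of Proposition 3, and the per-mode variance uses an assumed closed form σ²_I(λ) = (1 + sin(λτ))/(2λ cos(λτ)) on the stability region, not the paper's exact expressions:

```python
import math

def sigma2_I(lam, tau):
    """Assumed stationary variance of a decoupled subsystem,
    valid on the stability region 0 < lam*tau < pi/2."""
    u = lam * tau
    return tau * (1.0 + math.sin(u)) / (2.0 * u * math.cos(u))

def ring_eigs(N, n, k):
    """Nonzero eigenvalues of the circulant gain matrix for a ring in which
    each agent uses constant gain k on its n closest neighbor pairs."""
    return [2.0 * k * sum(1.0 - math.cos(2.0 * math.pi * ell * j / N)
                          for ell in range(1, n + 1)) for j in range(1, N)]

def best_variance(N, n, tau, grid=400):
    """Sweep the scalar gain k up to its stability limit and return the
    smallest total steady-state variance sum_j sigma2_I(lam_j, tau)."""
    eig1 = ring_eigs(N, n, 1.0)                  # eigenvalues scale linearly with k
    k_max = (math.pi / 2.0) / (max(eig1) * tau)  # mean-square stability limit
    best = float("inf")
    for i in range(1, grid):
        k = k_max * i / grid
        best = min(best, sum(sigma2_I(lam * k, tau) for lam in eig1))
    return best

N, tau0 = 50, 0.1                                # linear delays: f(n) = tau0 * n
J = {n: best_variance(N, n, tau0 * n) for n in range(1, N // 2)}
n_opt = min(J, key=J.get)                        # sparse interior optimum
```

Under these assumptions, the best variance is attained at a small n, far from the all-to-all architecture, reproducing the qualitative shape of the trade-off curves in Fig. 4a.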

VI. DISCRETE-TIME AGENT DYNAMICS
We now consider discrete-time agent dynamics to illustrate that the fundamental trade-offs established above hold in this case as well. In what follows, we denote time instants by {k}_{k∈N} := {kT}_{k∈N}, with T being the sampling time. Similarly, we redefine the delay as the number of delay steps τ_n := f(n)/T.

Agent Models. The discrete-time versions of the agent dynamics considered in Section III are given by (VI.1) for the single-integrator model, with w̄_i(·) ∼ N(0, 1), and by (VI.2) for the double-integrator model, with u_{P,i}(k) defined in (II.3).

In general, given a delay τ_n, stability conditions with respect to the control gains can be derived in the form of polynomial inequalities through the Jury criterion. For the single-integrator case, one simple condition can be computed analytically. The upper bound in (VI.3) approaches its continuous-time counterpart (III.4) from below as the number of delay steps tends to infinity (see Fig. 6). A discussion of general stability conditions and the proof of Proposition 4 are provided in Appendix D. The basic argument is the same as in the continuous-time case.
Performance Evaluation. With fixed parameters, the steady-state variance of each decoupled subsystem can be computed numerically via the Wiener-Khintchine formula. Also, for any given value of τ_n, a closed-form expression of the variance can be obtained via moment matching through a recursive formula; see Appendix E. Such closed-form expressions have been used for our computational experiments illustrated in Fig. 4. Figure 7 shows the typical profiles of the variance function for decoupled subsystems with single- and double-integrator dynamics (see (D.1) and (D.3) in Appendix D, respectively).
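The Wiener-Khintchine computation can be sketched for the single-integrator subsystem, whose characteristic polynomial is h(z) = z^(τ+1) - z^τ + λ (Appendix D): with unit-variance white noise, the steady-state variance is (1/2π) times the integral of 1/|h(e^{jθ})|² over one period, evaluated here by a midpoint rule:

```python
import cmath
import math

def variance_wk(lam: float, tau: int, M: int = 8192) -> float:
    """Steady-state variance of x(k+1) = x(k) - lam*x(k-tau) + w(k), w ~ N(0,1),
    via the Wiener-Khintchine formula with characteristic polynomial
    h(z) = z^(tau+1) - z^tau + lam."""
    total = 0.0
    for m in range(M):
        t = -math.pi + (m + 0.5) * (2.0 * math.pi / M)  # midpoint quadrature node
        z = cmath.exp(1j * t)
        h = z ** (tau + 1) - z ** tau + lam
        total += 1.0 / abs(h) ** 2
    return total / M          # (1/2pi) * (2pi/M) * total

v0 = variance_wk(0.5, 0)      # delay-free check: AR(1) gives 1/(1-(1-lam)^2) = 4/3
v3 = variance_wk(0.2, 3)      # delayed subsystem, stable for this gain
```

For a smooth periodic integrand, the midpoint rule converges extremely fast, so a few thousand nodes already match the delay-free closed form to machine precision.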

VII. CONCLUSION AND FUTURE RESEARCH
We study the minimum-variance control design problem for undirected networks with both continuous- and discrete-time agent dynamics in the presence of communication delays. When feedback delays increase with the number of communication links, we identify fundamental performance trade-offs and show that distributed control architectures can offer superior performance to centralized ones that utilize all-to-all information exchange. Our hope is to pave the way for a new body of research that will enable control design with a deeper understanding of the fundamental behavior and limitations of large-scale wireless network systems. Future work will focus on extending our results to other classes of control problems, including more complex system dynamics and communication models, more realistic information about the structure of delays in a distributed scenario, as well as different cost functions.

B. Derivation of First-Order Reduced Model for Continuous-Time Double Integrators
We now show that subsystem (III.9) can be approximated by first-order dynamics when the gain η is sufficiently high. Let us consider (A.2) with state s(t) = [x̃(t), z̃(t)]^T. Assume that the feedback gain η is large, so that the variable z̃(t) evolves faster than x̃(t). We can then approximate the dynamics of z̃(t) by letting x̃(t - 1) ≡ x̃_0 be constant over time, which yields (B.1). Eq. (B.1) defines a standard Ornstein-Uhlenbeck process, whose solution is given in (B.2). In view of the time-scale separation, we assume that (B.2) holds (with x̃(t - 1) constant) until z̃(t) settles at steady state.
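The time-scale-separation argument can be checked numerically. This sketch assumes the fast variable obeys an Ornstein-Uhlenbeck dynamics d z̃ = -η z̃ dt + dw with unit-intensity noise (the exact coefficients of (B.1)-(B.2) are not reproduced here); its stationary variance 1/(2η) shrinks as η grows, consistent with the claim that the reduced model's noise variance is inversely proportional to η:

```python
import numpy as np

def ou_variance(eta: float, T: float = 50.0, dt: float = 1e-3, seed: int = 1) -> float:
    """Euler-Maruyama for the fast variable d z = -eta * z dt + dw(t):
    the stationary variance is 1/(2*eta), so it vanishes as eta grows."""
    rng = np.random.default_rng(seed)
    steps = int(T / dt)
    z = np.zeros(steps + 1)
    noise = np.sqrt(dt) * rng.standard_normal(steps)
    for k in range(steps):
        z[k + 1] = (1.0 - eta * dt) * z[k] + noise[k]
    # discard the first half as transient, estimate the stationary variance
    return float(np.var(z[steps // 2:]))

v_slow = ou_variance(eta=5.0)    # theory: 1/(2*5) = 0.1
v_fast = ou_variance(eta=40.0)   # theory: 1/(2*40) = 0.0125
```

The larger η is, the faster z̃(t) forgets its initial condition and the tighter its stationary fluctuations, which is exactly the regime in which the first-order reduction of (III.9) is justified.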

D. Stability Conditions for Discrete-Time Systems
General Case. In the following, we replace τ_n with τ for the sake of readability. For the single-integrator case, decoupling the error dynamics yields scalar subsystems of the form

x̃(k + 1) = x̃(k) - λ x̃(k - τ) + w̃(k). (D.1)

The characteristic polynomial h(z) of (D.1) is obtained by applying the lag operator z such that x̃(k) h(z) = w̃(k),

h(z) = z^(τ+1) - z^τ + λ. (D.2)

Similarly, the double-integrator decoupled subsystems are given by (D.3). The characteristic polynomial (D.2) can be studied as a root locus by varying the gain λ. In particular, λ = 0 yields a multiple root at z*_1 = 0 and a simple root at z*_2 = 1. Negative values of λ are discarded as they push the latter outside the unit circle. As λ increases, the branches leave the unit disc along their asymptotes. The admissible values for λ are upper bounded by a threshold gain λ_th beyond which some roots leave the unit disc. In particular, we are interested in the minimum gain for which at least one root lies exactly on the unit circle. Thus, we look for roots of (D.2) of the form z = e^(jθ),

e^(j(τ+1)θ) - e^(jτθ) + λ = 0. (D.5)

Eq. (D.5) can be equivalently written as the system

cos((τ + 1)θ) - cos(τθ) + λ = 0,
sin((τ + 1)θ) = sin(τθ).
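The root-locus argument can be verified numerically. Solving the trigonometric system above at θ = π/(2τ + 1) gives the candidate threshold λ_th = 2 sin(π/(2(2τ + 1))), which this sketch cross-checks against a bisection on the largest root modulus of (D.2):

```python
import math
import numpy as np

def max_root_modulus(lam: float, tau: int) -> float:
    """Largest modulus among the roots of h(z) = z^(tau+1) - z^tau + lam."""
    coeffs = np.zeros(tau + 2)
    coeffs[0] += 1.0     # z^(tau+1)
    coeffs[1] += -1.0    # -z^tau
    coeffs[-1] += lam    # constant term (+= handles the tau = 0 case)
    return float(np.max(np.abs(np.roots(coeffs))))

def threshold_gain(tau: int, iters: int = 60) -> float:
    """Bisection for the gain at which a root first reaches the unit circle,
    assuming a single stable-to-unstable crossing on (0, 2)."""
    lo, hi = 0.0, 2.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if max_root_modulus(mid, tau) < 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

At λ = 0 the roots are {0 (multiplicity τ), 1}, matching the root-locus starting points, and for τ = 0 the bisection recovers the delay-free bound λ < 2.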
Numerator. We demonstrate the formula for odd delays τ = 2k + 1, k ∈ N. The other case can be obtained similarly and is thus omitted.