Designing communication networks for discrete-time consensus for performance and privacy guarantees

Discrete-time consensus plays a key role in multi-agent systems and distributed protocols. Unfortunately, due to the self-loop dynamics of the agents (an agent’s current state depends only on its own immediately previous state, i.e., one time-step in the past), they often lack privacy guarantees. Therefore, in this paper, we propose a novel design that consists of a network augmentation, where each agent uses the previous iteration values and the newly received ones to increase the privacy guarantees. To formally evaluate the privacy of a network of agents, we define the concept of privacy index, which intuitively measures the minimum number of agents that should work in coalition to recover all the initial states. Moreover, we aim to explore if there is a trade-off between privacy and accuracy (rate of convergence) or if we can increase both. We unveil that, with the proposed method, we can design networks with higher privacy index and faster convergence rates. Remarkably, we further ensure that the network always reaches consensus even when the original network does not. Finally, we illustrate the proposed method with examples and present networks that lead to higher privacy levels and, in the majority of the cases, to faster consensus rates.


Introduction
The pervasiveness of interconnected devices with communication capabilities has triggered a growing interest in distributed systems and distributed methods. These large-scale systems of devices (or, generically speaking, agents) are usually spatially distributed. Hence, there is frequent interest in jointly computing a function of the data from all the agents in the system via vicinity interactions, i.e., where the agents transmit/receive data only from their neighbors [1][2][3][4][5][6][7][8][9].
It is of utmost importance to study and ensure properties beyond accuracy in this type of distributed agents' systems, such as privacy [10][11][12]. The common approaches that aim to study/achieve privacy in consensus methods may be categorized into one of the following classes: homomorphic encryption-based (HE-based), differential privacy-based (DP-based), and observability-based (O-based).
Briefly, HE-based average consensus methods demand costly computations and communications, resulting in a potentially prohibitive cost in applications with limited computation and communication power [13][14][15][16][17][18][19][20]. DP-based approaches try to gain privacy by introducing uncertainty through the addition of noise to shared information [21][22][23][24][25][26][27][28][29]. In this scenario, consensus is obtained only in expected value, which may carry uncertainty, may not be suitable for proper decision making, and whose implementation and finite-time analysis become harder problems to study [30,31]. Also, noise generation is usually achieved via pseudo-random generation that depends on an initial seed. Consequently, the privacy assurances depend on the seed used (which should be secret) or on the use of an expensive random number generator device [32].
With a different approach to privacy, in [33], the authors introduce a privacy-preserving finite-transmission event-triggered quantized average consensus algorithm for battery-powered or energy-harvesting wireless networks. The algorithm ensures efficient communication and transmission ceasing, thereby preserving available energy. The study establishes topological conditions for maintaining node privacy and compares the method with existing algorithms.
In contrast, our work is aligned with the O-based approaches that focus on curious agents trying to retrieve other agents' states by considering the dynamics evolution, and, therefore, estimating the states that were deemed to be private. In this setting, observability (in dynamical systems) yields necessary and sufficient conditions to obtain an estimator capable of retrieving the agents' initial states that the agents wanted to be unknown to the remaining parties [11,34].
In [35], the authors propose a distributed average information consensus algorithm that ensures the confidentiality of each agent's initial state without introducing noise to the state values. They achieve privacy using concealing factors assigned to the agents by a central authority before initiating the consensus algorithm. The method also requires a balancing constraint on the edge weights. In contrast, our approach does not rely on a central authority and eliminates the need for a balancing constraint.
The work in [29] derives closed-form expressions for both the optimal distributed estimation and privacy parameters. Moreover, in [31], the authors propose a privacy-preserving approach based on state decomposition for the network average consensus problem, where each node decomposes its state into sub-states with random initial values. Our method differs by not requiring closed-form expressions or state decomposition.
In [36], a dynamic average consensus algorithm is proposed, which ensures accuracy and privacy of initial values under topological restrictions. However, their algorithm creates a virtual network with O(n^2) nodes, whereas our method requires only O(n) nodes. The work in [11] analyzes the interplay between network topologies and the observability subspace.
Finally, in [37], the authors use observability and optimization techniques to present an algorithm for network synthesis with privacy guarantees. Their method optimizes the communication graph weights to maximize node privacy. However, the design complexity and a privacy guarantee for all agents are challenging to achieve, which distinguishes our method.

Main contributions.
We propose a novel design that consists of a network augmentation, where each agent uses the previous iteration values and the newly received ones to increase the privacy guarantees. We define the concept of privacy index to formally assess the privacy of a network of agents, which intuitively measures the minimum number of agents that should work in coalition to recover all the initial states. Furthermore, we explore whether there exists a trade-off between privacy and accuracy (rate of convergence) or whether we can improve both. We unveil that, with the proposed method, we can design networks with a higher privacy index and attain faster convergence rates. Furthermore, we ensure that the network always reaches consensus even when the original network does not.
Paper structure. In Section 1.1, we summarize the concepts and notation used in this paper. In Section 2, we formally state the problem we aim to address. In Section 3, we present a discrete-time consensus method that allows us to augment the privacy level of consensus networks. We show illustrative examples in Section 4, and Section 5 closes the paper with future research directions.

Preliminaries & terminology
We denote vectors with lower-case letters (e.g., x) and matrices with upper-case letters (e.g., A). We denote the set of integers from 1 to n by [n] = {i ∈ Z : 1 ≤ i ≤ n}. We denote the ith entry of a vector x ∈ R^n by x_i, with i ∈ [n], the ith row of a matrix A ∈ R^{n×m} by A_i, and we use A_ij to denote the jth entry of the ith row of A, where i ∈ [n] and j ∈ [m]. Moreover, we denote by e_i^n the ith canonical n-dimensional column vector, a vector of size n with all entries equal to zero except the ith entry, which is one. We denote by I_n the n × n identity matrix. Analogously, we denote by 1_{n×m} an n × m matrix with all entries equal to 1, by 0_{n×m} an n × m matrix with all entries equal to 0, and when m = 1 we simply drop the m to denote a vector of size n (e.g., 1_n). Moreover, we denote the transpose of a matrix A by A⊺. If A ∈ R^{n×m} and I ⊂ [m], we denote by A(I) the matrix composed of the columns of A with indices in I.
Additionally, we denote by span(A) the linear span of A ∈ R^{n×n} and its spectrum (set of eigenvalues) by σ(A). A matrix A ∈ R^{n×n} is row-stochastic if the following hold: (a) A_ij ≥ 0, for all i, j = 1, ..., n, and (b) ∑_{j=1}^n A_ij = 1, for all i = 1, ..., n. Similarly, a matrix A ∈ R^{n×n} is column-stochastic if A⊺ is row-stochastic. If A is both row- and column-stochastic, then we say that A is doubly-stochastic. We denote the structure of a matrix A ∈ R^{n×m} by Ā, where Ā ∈ {0, ⋆}^{n×m}, with Ā_ij = ⋆ whenever A_ij ≠ 0 and Ā_ij = 0 otherwise.
A (directed) network of agents is a graph G = ⟨X, E_{X,X}⟩, where X = [n] are the nodes that denote the set of n agents, and E_{X,X} ⊂ X × X are the (directed) edges that correspond to pairs of agents (nodes). If (i, j) ∈ E_{X,X}, then agent i transmits to agent j. Given a matrix A ∈ R^{n×n}, we associate with it a directed network of agents via a digraph representation G(A) = ⟨X, E_{X,X}⟩, where X = [n] and (i, j) ∈ E_{X,X} if and only if A_ji ≠ 0.
For a network of agents (communication graph) G = ⟨X, E_{X,X}⟩ and an agent i ∈ X, we denote the in-neighborhood of agent i by N_i^in, where N_i^in = {j : (j, i) ∈ E_{X,X}}. Similarly, we denote the out-neighborhood of agent i by N_i^out, where N_i^out = {j : (i, j) ∈ E_{X,X}}. A network is strongly connected if there is a (directed) path between each ordered pair of nodes, i.e., if for each pair x, y ∈ X there is a sequence of nodes x, x_1, ..., x_k, y such that (x, x_1), (x_k, y) ∈ E_{X,X} and (x_i, x_{i+1}) ∈ E_{X,X} for all i = 1, ..., k − 1.
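Strong connectivity can be checked numerically from the sparsity pattern of the dynamics matrix. The following is a minimal sketch (not from the paper): since a digraph is strongly connected if and only if its reverse is, we may use the pattern of A directly, and every entry of (I + M)^(n−1) is positive exactly when every node reaches every node within n − 1 hops.

```python
import numpy as np

def is_strongly_connected(A):
    """Reachability test: with M the 0/1 pattern of A, (I + M)^(n-1) has
    no zero entry iff every node reaches every other node. Strong
    connectivity is invariant under reversing all edges, so the
    convention (i, j) in E iff A[j, i] != 0 does not matter here."""
    n = A.shape[0]
    M = (np.abs(A) > 0).astype(float)
    R = np.linalg.matrix_power(np.eye(n) + M, n - 1)
    return bool(np.all(R > 0))

# A directed 3-cycle is strongly connected; removing one edge breaks it.
cycle = np.array([[0., 1., 0.],
                  [0., 0., 1.],
                  [1., 0., 0.]])
print(is_strongly_connected(cycle))  # True
cycle[2, 0] = 0.0
print(is_strongly_connected(cycle))  # False
```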

Problem statement
Consider a discrete-time consensus method modeled as a linear time-invariant (LTI) system

x(k + 1) = A x(k),    (1)

where k ∈ N, x(k) ∈ R^n is a vector collecting the states of all the agents, with x_i(k) denoting the state of agent i at time k, A is a row-stochastic matrix, and x(0) = x_0 is the initial state.
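A minimal numerical sketch of the iteration (1): the matrix below is our own example (a 3-agent ring with self-loops, not taken from the paper); since it happens to be doubly-stochastic, the agents converge to the average of the initial states.

```python
import numpy as np

# Consensus iteration x(k+1) = A x(k) with a row-stochastic A
# (example matrix: a strongly connected 3-agent ring with self-loops;
# it is also doubly-stochastic, so the limit is the average of x0).
A = np.array([[0.5, 0.5, 0.0],
              [0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5]])
x = np.array([1.0, 2.0, 6.0])

for _ in range(200):
    x = A @ x

print(np.round(x, 6))  # -> [3. 3. 3.], the average of (1, 2, 6)
```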
Furthermore, we will work under the following commonly adopted assumption in the context of consensus.
A 1 The network of agents described by G(A) is strongly connected.
Now, suppose that a set of one or more agents, in coalition, seeks to determine the initial states of all the other agents, i.e., to observe the states of the system in (1) according to

y(k) = C x(k),    (2)

where C ∈ R^{m×n} is the output matrix. Under this setup, we say that the system in (1) is observable if and only if, given the values of y(k) for k = 0, ..., n − 1, we can uniquely determine x_0, under the additional assumption that the system (1)-(2) described by the pair (A, C) is known.
Remark 1. We can study observability in a generic sense, structural observability [38], by looking at Ā, which simply represents which entries of A are fixed zeros. A pair (Ā, C̄) is structurally observable if there is a pair (A, C) respecting the sparsity pattern in (Ā, C̄) that is observable [38]. Moreover, if a pair (Ā, C̄) is structurally observable, then almost all pairs (A, C) that respect the sparsity pattern are observable. Finally, if the pair (A, C) is observable, then the pair (Ā, C̄) is structurally observable. •

In other words, Remark 1 states that structural observability is a necessary condition for observability. Subsequently, given a dynamics matrix A, with G ≡ G(A), we denote by |G|_O the minimum number of state variables that we need to measure so that the system is structurally observable. This allows us to introduce the notion of privacy index as follows.
Definition 1. Given a system modeled as in (1) with network of agents G(A), we define the privacy index as the minimum |I|, with I ⊂ [n], such that the pair (A, C ≡ I_n(I)) is structurally observable. In other words, it is the minimum number of agents' states (each of which can comprise one or more state variables), uniquely measured by a sensor, required so that the network is structurally observable.
In a broadcast scenario, each agent sends its state multiplied by the corresponding dynamics matrix weight. Hence, an agent trying to recover other agents' initial states corresponds to placing an output at that agent. In this setup, the privacy index counts the minimum number of agents that should collude to recover all the agents' initial states. Note that, if we need to observe |G|_O agents' states to ensure structural observability, then with |G|_O − 1 measured states the system is neither structurally observable nor observable, by Remark 1.
Another important property of a consensus method is how fast it converges. Given the dynamics matrix A, the rate of convergence [39] is computed using the spectral gap of A as

R_A = 1 − ρ(A),    (3)

where ρ(A) = max{|λ| : λ ∈ σ(A) \ {1}}. In particular, the higher the spectral gap R_A, the faster the convergence of the consensus protocol.
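The spectral gap in (3) is straightforward to compute numerically; the sketch below (our own, with the same example ring matrix used above) drops one copy of the eigenvalue 1 that every row-stochastic matrix has and measures the largest remaining modulus.

```python
import numpy as np

def convergence_rate(A, tol=1e-9):
    """Spectral gap R_A = 1 - rho(A), where rho(A) is the largest modulus
    among the eigenvalues of A other than (one copy of) 1."""
    eig = sorted(np.linalg.eigvals(A), key=abs, reverse=True)
    for i, lam in enumerate(eig):
        if abs(lam - 1.0) < tol:   # remove a single eigenvalue equal to 1
            eig.pop(i)
            break
    return 1.0 - max(abs(lam) for lam in eig)

A = np.array([[0.5, 0.5, 0.0],
              [0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5]])
# For this ring the non-unit eigenvalues have modulus 0.5, so R_A = 0.5.
print(round(convergence_rate(A), 6))
```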
It is a common belief that there exists a trade-off between privacy and accuracy, which we measure here as the rate of convergence. Therefore, we aim to explore whether such a trade-off exists or whether we can increase privacy and still increase the rate of convergence. Hereafter, we will see that there are several cases where we do not need to compromise accuracy to increase the privacy level.
Subsequently, we devote the remainder of this work to answering the following problem.

P 1: Given N agents with a communication digraph G = (X, E_{X,X}), find a minimum-size augmented dynamics such that the following properties hold:

(4a) each agent's state x_i is part of the augmented state (i.e., the states remain locally available to the agents);
(4b) the augmented system reaches consensus with limit value p_∞⊺ x_0, where p_∞ is the limit distribution of A (the left-eigenvector of A associated with the eigenvalue 1, normalized to sum 1); moreover, we want to ensure this even when A has more than one eigenvalue with absolute value 1 (and cannot reach consensus);
(4c) the privacy index does not decrease (i.e., each output measures the augmented states);
(4d) the rate of convergence improves, R_Ã > R_A.

Notice that this is an idealized problem that we aim to address. Unfortunately, as we will see, there are some cases where the proposed solution cannot achieve all the conditions in P 1. Nonetheless, we identify several cases where the proposed solution is able to ensure all the conditions of P 1.
If we consider a simple averaging scheme modeled by a row-stochastic dynamics matrix with zero diagonal entries, then we end up with a plethora of networks that do not reach consensus. Such an approach would increase the privacy index but would fail, in several cases, to reach consensus. Hence, we propose a new scheme to overcome this limitation.

Designing communication networks for discrete-time consensus with privacy guarantees: can the past help?
In this section, we address problem P 1. We propose an augmentation of the system that encodes the idea of each agent using its previous state together with the received neighbors' states in the state update phase. We show that the proposed extended system reaches consensus in Theorem 1, and that the final consensus value is the same as that of the original dynamics in Theorem 2. In Corollary 1 and Remark 4, we show how the proposed method can be used to reach average consensus. Finally, we present a lower bound for the convergence rate of the augmented system in Theorem 3.
The following observation will be important to tackle the problem that we identify in this work.

Remark 2. If the original row-stochastic dynamics matrix A ∈ R^{N×N} has eigenvalues with magnitude 1 besides the eigenvalue 1, then the system in (1) does not reach consensus; instead, it exhibits a periodic behavior (Perron–Frobenius theorem [40]). •
First, we propose an augmentation network design aiming to improve the privacy of the overall network of agents. To this end, we propose that an agent shares with its neighbors not only its current state but also its previous state, as captured in the following update rule: let x(0) = 0, x(1) = (3/2) x_0, and

x(k + 2) = (1/2) A x(k + 1) + (1/2) A x(k),    (5)

where A is the result of normalizing the rows of the agents' network adjacency matrix. Notwithstanding, we may start from any A that is row-stochastic and generalize (5) as the following discrete LTI system:

x̃(k + 1) = Ã x̃(k),    (6)

where

x̃(k) = [x(k)⊺ x(k + 1)⊺]⊺ and Ã = [ 0_{n×n}  I_n ; (1/2)A  (1/2)A ].    (7)

We would like to note that, from the representation point of view, in both the case of self-loops and the augmented network scheme proposed above, the states are locally available to each agent. However, the dynamics generated by integrating these augmented states do not lead to the existence of self-loops. Hence, the overall dynamics matrix does not have non-zero elements on its diagonal (i.e., no self-loops).
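The augmentation can be sketched numerically. The block structure Ã = [[0, I], [A/2, A/2]] and the initialization x(0) = 0, x(1) = (3/2) x_0 below follow our reading of (5)-(7); the 3-agent directed ring is our own example, chosen because without augmentation it is periodic and never settles.

```python
import numpy as np

# Augmented consensus, as we read (6)-(7): each agent averages the
# neighbors' current and previous states with weight 1/2.
n = 3
A = np.array([[0.0, 1.0, 0.0],   # row-normalized directed ring WITHOUT
              [0.0, 0.0, 1.0],   # self-loops: A alone is periodic and
              [1.0, 0.0, 0.0]])  # never reaches consensus
A_tilde = np.block([[np.zeros((n, n)), np.eye(n)],
                    [A / 2.0,          A / 2.0]])

x0 = np.array([1.0, 2.0, 6.0])
z = np.concatenate([np.zeros(n), 1.5 * x0])   # [x(0); x(1)]

for _ in range(300):
    z = A_tilde @ z

# All 2n entries agree on p_inf^T x0; here p_inf is uniform, so the
# limit is the average of x0, i.e., 3.0.
print(np.round(z, 6))
```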
To illustrate how this augmentation changes the network of agents, consider the network represented by the black nodes and edges in Fig. 1. After the augmentation in (6)-(7), the network gains the additional red nodes (the augmented states) and red edges depicted in Fig. 1. Subsequently, we show that this augmented dynamics achieves consensus, i.e., the second part of P 1, (4b).
Theorem 1. The extended system in (6)-(7) reaches consensus. •

Proof. We start by verifying that 1 is an eigenvalue of Ã associated with the eigenvector 1_{2n}, and that the remaining eigenvalues all have magnitude strictly smaller than 1. Let λ be an eigenvalue of A associated with the eigenvector v. Then, it readily follows that ṽ_{1,2} = [v⊺ α_{1,2} v⊺]⊺ are eigenvectors of Ã associated with the eigenvalues α_1 and α_2. Specifically, Ã [v⊺ α v⊺]⊺ = α [v⊺ α v⊺]⊺ is equivalent to

(1/2) A v + (α/2) A v = α² v,

and, because Av = λv, it follows that α² − (λ/2) α − λ/2 = 0, from which we conclude that

α_{1,2} = (λ ± √(λ² + 8λ)) / 4.

Therefore, we just need to take α_1 = (λ + √(λ² + 8λ)) / 4. Finally, we need to ensure that there is only one eigenvalue of Ã equal to 1 and that the remaining ones have strictly smaller magnitude. Let λ = 1 be the eigenvalue of A associated with the eigenvector 1_n. We have that α_1 = 1 is an eigenvalue of Ã associated with the eigenvector [1_n⊺ 1_n⊺]⊺ = 1_{2n}. Moreover, for λ = 1 we have that α_2 = −1/2. Additionally, for every λ ∈ σ(A) with |λ| ≤ 1 and λ ≠ 1, both roots satisfy |α_{1,2}| < 1, since arg max_{λ ∈ C, |λ| ≤ 1} |(λ + √(λ² + 8λ))/4| = 1 is attained only at λ = 1, as illustrated in Fig. 2. □

Remark 3. Even if the original dynamics matrix A has eigenvalues with magnitude 1 besides the eigenvalue 1, the system in (6)-(7) reaches consensus, as asserted by Theorem 1, with limit value p_∞⊺ x_0, where p_∞ is the left-eigenvector of A associated with the eigenvalue 1 (normalized to sum 1), by Theorem 2.
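The eigenvalue mapping in the proof can be checked numerically. The sketch below (our own, assuming the block structure Ã = [[0, I], [A/2, A/2]] from our reading of (7)) verifies that, for a random row-stochastic A, every root of α² − (λ/2)α − λ/2 = 0 is indeed an eigenvalue of Ã.

```python
import numpy as np

n = 4
rng = np.random.default_rng(0)
M = rng.random((n, n))
A = M / M.sum(axis=1, keepdims=True)          # random row-stochastic matrix

A_tilde = np.block([[np.zeros((n, n)), np.eye(n)],
                    [A / 2.0,          A / 2.0]])

# Predicted spectrum: for each eigenvalue lam of A, the two roots of
#   alpha^2 - (lam/2) alpha - lam/2 = 0,
# i.e., alpha = (lam +/- sqrt(lam^2 + 8 lam)) / 4.
pred = []
for lam in np.linalg.eigvals(A):
    s = np.sqrt(lam * lam + 8.0 * lam + 0j)
    pred += [(lam + s) / 4.0, (lam - s) / 4.0]

actual = np.linalg.eigvals(A_tilde)
# Every predicted alpha should match some eigenvalue of A_tilde.
ok = all(np.min(np.abs(actual - a)) < 1e-6 for a in pred)
print(ok)  # True
```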

Notice that Remark 3 states that we no longer need to carefully select the network of agents so as to avoid networks that do not reach consensus; see the examples in Table 1. In other words, we have a more flexible choice concerning the consensus network, and P 1 (4b) holds.
The next result states that if the agents in the original discrete LTI system (1) reach the consensus value x_∞, then the agents using (6)-(7) not only reach consensus but also converge to x_∞.

Theorem 2. Consider the discrete LTI system in (1), with A a row-stochastic matrix. If the state of (1) is such that lim_{k→∞} x(k) = x_∞ 1_n, then the state of (6)-(7) is such that lim_{k→∞} x̃(k) = x_∞ 1_{2n}. •

Proof. If A is a row-stochastic matrix, then it corresponds to a Markov chain. Moreover, the limit distribution is given by the left-eigenvector associated with the eigenvalue 1, normalized to sum up to 1. The existence of this limit distribution for Ã is guaranteed by Theorem 1. We denote this limit distribution by [v_1⊺ v_2⊺]⊺, which satisfies

v_1⊺ = (1/2) v_2⊺ A and v_1⊺ + (1/2) v_2⊺ A = v_2⊺,

which is the same as v_2⊺ A = v_2⊺ and v_1 = (1/2) v_2. In fact, v_2 = p_∞ because it is the left-eigenvector of A associated with the eigenvalue 1. Hence, the left-eigenvector of Ã associated with the eigenvalue 1 is [(1/2) p_∞⊺ p_∞⊺]⊺, and, when normalized to sum up to 1, it is [(1/3) p_∞⊺ (2/3) p_∞⊺]⊺. Finally, we have that x_∞ = p_∞⊺ x_0, and, consequently,

(1/3) p_∞⊺ x(0) + (2/3) p_∞⊺ x(1) = (2/3) p_∞⊺ (3/2) x_0 = p_∞⊺ x_0 = x_∞.

Hence, the consensus value is as desired. □

It immediately follows from Theorem 2 that average consensus can be attained under the following setting.

Corollary 1. If the original dynamics matrix A in (1) is doubly-stochastic, then the system in (6) reaches average consensus. •

Nonetheless, when the objective is to design the system to reach average consensus and the conditions of Corollary 1 do not hold, we can do so considering the following observation.
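Corollary 1 can be illustrated numerically. In the sketch below (our own example, with the Ã structure assumed from our reading of (7)), A is a doubly-stochastic 4-cycle; note that A itself has the eigenvalue −1 and is periodic, yet the augmented system converges to the average of the initial states.

```python
import numpy as np

# Doubly-stochastic A (rows AND columns sum to 1): by Corollary 1, the
# augmented system reaches *average* consensus, even though this A is
# bipartite (eigenvalue -1) and by itself never converges.
n = 4
A = np.array([[0.0, 0.5, 0.0, 0.5],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 0.5],
              [0.5, 0.0, 0.5, 0.0]])
A_tilde = np.block([[np.zeros((n, n)), np.eye(n)],
                    [A / 2.0,          A / 2.0]])

x0 = np.array([0.1, 0.3, 0.6, 1.0])
z = np.concatenate([np.zeros(n), 1.5 * x0])   # [x(0); x(1)]

for _ in range(400):
    z = A_tilde @ z

print(np.round(z[:n], 6), round(float(np.mean(x0)), 6))  # both 0.5
```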
Remark 4. If we aim to achieve average consensus, then we just need to re-weight the initial agents' states according to the limit distribution p_∞, setting the new initial state of agent i to x_i(0) / (N [p_∞]_i). •

Lastly, we would like to see how the convergence rates of the original dynamics matrix and the augmented version relate.

Theorem 3. Let A be the dynamics matrix of (1) and Ã the augmented dynamics matrix of (6)-(7). Let σ(A) = {λ_1, ..., λ_{n−1}, 1} and λ = arg max {|μ| : μ ∈ σ(A) \ {1}}. Then, the following hold:

(i) σ(Ã) = {a_1^i, a_2^i : i = 1, ..., n − 1} ∪ {1, −1/2}, where a_1^i, a_2^i are the eigenvalues associated with λ_i, computed in the proof of Theorem 1;
(ii) R_Ã = 1 − max {1/2, |a_1^i|, |a_2^i| : i = 1, ..., n − 1};
(iii) R_Ã = 1 − max {1/2, |a_1|, |a_2|}, where a_1, a_2 are the eigenvalues associated with λ. •

Proof. We have that (i) follows directly from the proof of Theorem 1, and (ii) follows from the definition of the rate of convergence in (3). Concerning (iii), the proof follows from noticing that λ, the eigenvalue of A with the second highest absolute value, is transformed into α_1 and α_2, two eigenvalues of Ã. Therefore, R_Ã = 1 − max {1/2, |α_1|, |α_2|}, where α_1, α_2 are the eigenvalues obtained in the proof of Theorem 1. □

It is worth noticing that we could use the proposed augmentation with two additional sets of nodes or even more. However, the structure of the respective augmented matrix would have repeated blocks. Therefore, such a strategy may be adopted and explored, but it will not lead to a better privacy index.
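The rate comparison can be sketched numerically (our own example, with the Ã structure assumed from our reading of (7)): the bipartite 4-cycle below has R_A = 0 (it never converges), while its augmentation has a strictly positive spectral gap.

```python
import numpy as np

def rho_excluding_one(M, tol=1e-9):
    """Largest eigenvalue modulus after removing one eigenvalue equal to 1."""
    eig = sorted(np.linalg.eigvals(M), key=abs, reverse=True)
    for i, lam in enumerate(eig):
        if abs(lam - 1.0) < tol:
            eig.pop(i)
            break
    return max(abs(lam) for lam in eig)

n = 4
A = np.array([[0.0, 0.5, 0.0, 0.5],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 0.5],
              [0.5, 0.0, 0.5, 0.0]])
A_tilde = np.block([[np.zeros((n, n)), np.eye(n)],
                    [A / 2.0,          A / 2.0]])

R_A = 1.0 - rho_excluding_one(A)         # 0: A has the eigenvalue -1
R_At = 1.0 - rho_excluding_one(A_tilde)  # 1 - sqrt(8)/4, about 0.293
print(round(R_A, 6), round(R_At, 6))
```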

Theoretical guarantees
Given the dynamics matrix A ∈ R^{N×N}, consider its structure Ā ∈ {0, ⋆}^{N×N}, where Ā_ij = 0 if and only if A_ij = 0, and Ā_ij = ⋆ otherwise. Let I ⊂ [N] denote the agents that are measured. This corresponds to having an output matrix structure C̄ = [Ī_N(I)]⊺, i.e., C̄ ∈ {0, ⋆}^{|I|×N} is the structural matrix composed of the subset of rows of Ī_N indexed by I.
The system with dynamics matrix Ā and observed state variables indexed by I is structurally observable if and only if

grank([Ā⊺ C̄⊺]⊺) = N,

where the grank (generic rank) of a structural matrix M̄ ∈ {0, ⋆}^{N_1×N_2} is the maximum rank achievable by a matrix M′ ∈ R^{N_1×N_2} such that M̄′ = M̄ [38]. In other words, the structural output matrix C̄ compensates for the grank deficiency of Ā.
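The generic rank is easy to estimate numerically by instantiating the nonzero pattern with random values, which achieves the grank with probability 1. The sketch below is our own: it computes the grank of a pattern and, following the deficiency argument used in the proof of Theorem 4, reads off the privacy index as the grank deficiency (at least 1, since some agent must always be measured); the star network is the example from Fig. 3.

```python
import numpy as np

def grank(pattern, trials=20, seed=1):
    """Generic rank of a 0/1 structural pattern: max rank over random
    instantiations of the nonzero entries (a.s. exact; a few trials
    guard against numerical bad luck)."""
    rng = np.random.default_rng(seed)
    return max(np.linalg.matrix_rank(pattern * rng.random(pattern.shape))
               for _ in range(trials))

def privacy_index(pattern):
    """Privacy index computed as the grank deficiency of the pattern
    (following the deficiency argument in Theorem 4's proof), and at
    least 1 because at least one agent must be measured."""
    N = pattern.shape[0]
    return max(N - grank(pattern), 1)

# Star network on N = 5 nodes, no self-loops: hub 0 <-> leaves 1..4.
N = 5
A_bar = np.zeros((N, N))
A_bar[0, 1:] = 1
A_bar[1:, 0] = 1
print(grank(A_bar), privacy_index(A_bar))  # 2 and N - 2 = 3
```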
Subsequently, the following result relates the privacy index of a network without self-loops with its augmentation.Theorem 4. Consider a connected network of agents with adjacency matrix A ∈ R N×N without self-loops.If A has privacy index k then Ã, as described in (7), also has privacy index k.

Proof. Suppose that A ∈ R^{N×N} has privacy index k. It follows that grank(Ā) = N − k, where Ā ∈ {0, ⋆}^{N×N} is the structural matrix. In other words, there is a generic rank deficiency of k in Ā with respect to generic full rank (i.e., N) [38]. Moreover, the value k yields the number of agents that should be measured to attain structural observability. Now, consider the augmented matrix Ã as in (7), whose structural pattern is

[ 0_{N×N}  Ī_N ; Ā  Ā ].

In this case, it is easy to see that its grank equals N + grank(Ā) = 2N − k. Hence, there is the same rank deficiency, and the privacy index of Ã is also k. □

Next, we identify a class of networks, referred to as star networks (see Fig. 3, depicting star networks for N = 4, 5, 6), where the proposed approach always yields a higher privacy index.

Corollary 2. Consider a star network of N agents, let Ā be its structural adjacency matrix without self-loops, and let W̄ be the same structure with additionally non-zero diagonal entries (self-loops). Then, the privacy index of the network without self-loops (Ā) is N − 2, and the privacy index of the network with self-loops (W̄) is 1. •

Proof. We can easily see that grank(W̄) = N, by considering the diagonal parameters to be different from zero and setting the off-diagonal ones to zero. Therefore, with an output at any of the agents (i.e., setting C̄ = ē_i, where e_i is the ith canonical row vector in R^N, corresponding to measuring the state of agent i), we obtain a structurally observable system, since grank([W̄⊺ ē_i⊺]⊺) = N. That is, by observing a single agent, the system is structurally observable.
On the other hand, we have that grank(Ā) = 2, meaning that we need to observe N − 2 agents to ensure that the system is structurally observable, i.e., it follows by definition that the privacy index equals N − 2. By Theorem 4, it readily follows that the privacy index of Ã is also N − 2. □
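Theorem 4's deficiency-preservation argument can be checked numerically on the star network. In this sketch (our own; the augmented pattern [[0, I], [Ā, Ā]] follows our reading of (7)), the grank deficiency of the original pattern and of its augmentation coincide.

```python
import numpy as np

def grank(pattern, trials=20, seed=2):
    """Generic rank via random instantiation of the nonzero pattern."""
    rng = np.random.default_rng(seed)
    return max(np.linalg.matrix_rank(pattern * rng.random(pattern.shape))
               for _ in range(trials))

# Star network pattern (no self-loops) and its augmentation, with the
# structure [[0, I], [A_bar, A_bar]] assumed from (7).
N = 5
A_bar = np.zeros((N, N))
A_bar[0, 1:] = 1
A_bar[1:, 0] = 1
A_bar_tilde = np.block([[np.zeros((N, N)), np.eye(N)],
                        [A_bar,            A_bar]])

# Same generic-rank deficiency, hence the same privacy index (Theorem 4).
print(N - grank(A_bar), 2 * N - grank(A_bar_tilde))  # 3 3
```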

Illustrative examples
It is common for the agents to update their states using the information received from neighbors together with their current state, which corresponds to having non-zero diagonal elements in the dynamics matrix A. A well-known way of selecting the dynamics matrix entries (making use of non-zero diagonal entries) is via the so-called Metropolis weights [41], which are given as follows:

A_ij = 1/(1 + max{d_i, d_j}) if j ∈ N_i^in and j ≠ i,  A_ii = 1 − ∑_{j≠i} A_ij,  and A_ij = 0 otherwise,

where d_i denotes the degree of agent i. This self-loop dynamics makes it possible for an external entity, by observing any single agent's state evolution, to observe the entire system, leading to low privacy guarantees. In fact, under this dynamics, the privacy index is always 1. We use the Metropolis weights for comparison with the proposed approach.
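The Metropolis weights as commonly stated (we use the standard definition here; the edge-list helper is our own) can be built as follows; on an undirected graph the resulting matrix is symmetric and doubly-stochastic, with the self-loop weights on the diagonal.

```python
import numpy as np

def metropolis_weights(edges, n):
    """Standard Metropolis weights on an undirected graph:
    A[i, j] = 1 / (1 + max(d_i, d_j)) for each edge {i, j}, and
    A[i, i] = 1 - sum of the off-diagonal entries of row i."""
    deg = np.zeros(n, dtype=int)
    for i, j in edges:
        deg[i] += 1
        deg[j] += 1
    A = np.zeros((n, n))
    for i, j in edges:
        w = 1.0 / (1 + max(deg[i], deg[j]))
        A[i, j] = A[j, i] = w
    np.fill_diagonal(A, 1.0 - A.sum(axis=1))
    return A

# Path graph 0 - 1 - 2: symmetric, doubly-stochastic, nonzero diagonal.
A = metropolis_weights([(0, 1), (1, 2)], 3)
print(A)
print(A.sum(axis=0))  # columns also sum to 1
```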
In the examples that follow, in addition to the rate of convergence, we mark in the consensus evolution plots the point where the maximum absolute difference between the agents' states (error) starts to be less than a specified value. This further illustrates how fast the methods converge.
Consider the network of agents G depicted by the black nodes and edges in Fig. 4. If we use, for instance, the Metropolis weights to design the dynamics matrix used for consensus, then the network of agents becomes the one depicted by the black nodes together with the black and red edges in Fig. 4. In Fig. 5(a), we depict the agents' state evolution from the initial state x_0 = [0.1 0.3 0.6 1]⊺ when using the dynamics of (1) with A defined by the Metropolis weights. In this case, we have a privacy index of 1. In the case where we do not consider self-loop dynamics and use a row-stochastic matrix A in the dynamics of (1), we actually cannot reach consensus, as noticed in Remark 2; see Fig. 5(b).
Notwithstanding, when we use the proposed augmented consensus (6)-(7), we increase the privacy index to 2, and we can reach consensus. In Fig. 5(c), we portray this scenario, but we only show the second half of the agents' state evolution; we omit the first half, which corresponds to the past states and equals the presented trajectories started at 0 and shifted one time unit ahead.
To further illustrate the proposed consensus method and concept of privacy index, consider the network of agents G 1 , depicted in Fig. 6.
In Fig. 7, we show the agents' state evolution for the initial state x_0 = [0 1.5 −0.8 2.4 −1.7 3.9 0.6 4.7 −3.1 5.5 −4.3 6]⊺. Again, notice that when using the Metropolis weights we achieve a privacy index of 1, whereas with the proposed consensus method we get a privacy index of 3. Finally, in Table 1, we present some networks of agents and evaluate their privacy indices depending on the consensus protocol used. Notice that, in the majority of the reported cases, besides ensuring consensus, the proposed method reaches a higher privacy level and a higher rate of convergence.
It is worth noticing that star-like networks do not allow consensus to be reached when we do not consider self-loop dynamics. Notwithstanding, with the proposed augmentation we not only achieve consensus but also increase both the privacy index and the rate of convergence, as we may see in the first, third, and penultimate networks of Table 1.
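The star-network behavior can be reproduced in a short simulation (our own sketch, with the augmented structure assumed from our reading of (6)-(7)): without self-loops the star oscillates forever, while the augmented system settles at p_∞⊺ x_0.

```python
import numpy as np

# Star network, row-normalized, no self-loops: the hub averages the
# leaves, each leaf copies the hub -- the iteration is 2-periodic.
n = 4
A = np.zeros((n, n))
A[0, 1:] = 1.0 / (n - 1)   # hub listens to all leaves
A[1:, 0] = 1.0             # each leaf listens only to the hub

x0 = np.array([0.1, 0.3, 0.6, 1.0])
x = x0.copy()
for _ in range(101):
    x = A @ x
periodic = x.copy()        # still oscillating: hub != leaves

# Augmented dynamics converges on the same network.
At = np.block([[np.zeros((n, n)), np.eye(n)],
               [A / 2.0,          A / 2.0]])
z = np.concatenate([np.zeros(n), 1.5 * x0])
for _ in range(500):
    z = At @ z

print(np.round(periodic, 4))  # not a consensus vector
print(np.round(z[:n], 6))     # consensus at p_inf^T x0
```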

Conclusions
In this paper, we developed a discrete-time consensus method where each agent uses the previous iteration values together with the newly received ones. The proposed method consists of, at each time step, the agents computing the average of the neighbors' states received in the current and previous iterations. Furthermore, we do not consider the agents to have self-loop dynamics, i.e., they do not use their own states in the state update phase, as self-loops would prevent the network from attaining any meaningful privacy level. In other words, with self-loops, an external entity can recover all the agents' states by observing merely one agent. Notwithstanding, it is known that if an agent averages only the neighbors' states (without considering its own state), then the network may reach a periodic behavior (instead of reaching consensus).
We unveil that, with the proposed method, we can not only design networks with higher privacy levels but also ensure that the network always reaches consensus. Moreover, we unveil that we may do so, most of the time, without compromising the rate of convergence; in fact, in most cases, we actually increase the rate of convergence.
Additionally, if the initial dynamics matrix is doubly-stochastic, the proposed method reaches average consensus. We illustrate the proposed method with examples and present networks that lead to higher privacy levels and, in the majority of the cases, to faster consensus (a higher rate of convergence). Future work includes building upon this approach to address the design problem of determining, given a set of states that are required to be private, the network topology that guarantees such a requirement.

Fig. 1. Virtual network of agents representing the dynamics of (6) for the original network of agents depicted by the black nodes and edges (i.e., G(Ã), with Ã given as in (6)-(7)). (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)

Fig. 5. Consensus evolution for the network of agents depicted by the black nodes and edges in Fig. 4 (black nodes and black and red edges for the Metropolis weights). (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)

Fig. 6. Network of agents G_1.

Fig. 7. Consensus evolution for the network of agents G_1 depicted in Fig. 6. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)