Ultrafast Synchronization via Local Observation

The rapid expansion in size and frequent changes in topology of complex networks make them difficult to observe and analyze. We explore the properties of the Hankel matrix and propose an algorithm for calculating the final synchronization state that uses a local observation of a single node for a time period significantly shorter than the synchronization process. We find that synchronization can be achieved much more quickly than through the routine process. This finding refines our understanding of the abundant ultrafast synchronization phenomena observed in nature, and it enables the efficient design of self-aligned robots.

Many mechanisms have been proposed to explain synchronization phenomena [12]. The best known is the neighborhood coordination mechanism [13], in which the activity of each individual is affected by its nearest neighbors. The neighbors of an individual are defined to be (i) those inside a ball-shaped range of a fixed radius [13, 14], (ii) those directly connected in a network [15, 16], or (iii) those, limited in number, that are closest [17]. A "hierarchical leadership model" was proposed [18] to explain the flocking activity of pigeons. Each pigeon follows its leader and is in turn followed by other pigeons, resulting in a hierarchical leader-follower network without directed cycles.
Empirical studies have found that synchronization emerges quickly in real-world ecological and biological systems [19-21]. In contrast, the synchronization produced by the neighborhood coordination mechanism is gradual. Although many methods have been proposed to speed up the synchronizing process, e.g., adjusting the range of the neighborhood [22], introducing individual adaptive speeds [23], and optimizing the strength of interactions [24], the neighborhood coordination mechanism cannot achieve the extremely rapid synchronization observed in real-world systems. The hierarchical leadership model has also not been validated in large-scale systems and has been challenged by an in-depth analysis that finds no acyclic structure [25]. Two candidate mechanisms, information propagation [21, 26] and the predictive protocol [27-29], have been proposed to explain ultrafast synchronization. The former argues that direction-change information can propagate quickly throughout the flock without attenuation, and the latter that an individual, such as a bird or a fish, is able to predict the near-future moving trajectories of its neighbors, and is thus better able to anticipate collective motion. Understanding ultrafast synchronization is still an open challenge, however, because these two proposed mechanisms need further experimental validation. In addition, it is probable that the observed phenomena are the result of the integrated effects of multiple mechanisms. We here propose an alternative mechanism for ultrafast network synchronization. In connected networks with first-order linear dynamics [30], we find that the record of the past states of a single node can be used to achieve ultrafast synchronization. Monitoring additional nodes in the neighborhood of the initial node further accelerates the synchronization.

* zhutou@ustc.edu
† yye@hust.edu.cn
We demonstrate the ultrafast synchronizing speed of this mechanism using simulations of representative network models and of a variety of real networks.
In an N-node directed network, when there is an edge from node j to node i there is an entry a_ij = 1 in the adjacency matrix A; otherwise a_ij = 0. The state x_i of an arbitrary node i follows the discrete-time linear dynamics

x_i(t+1) = x_i(t) + ε Σ_j a_ij [x_j(t) − x_i(t)],   (1)

where ε is the sampling period, which is small enough (ε ≤ 1/d_max, d_max being the maximal out-degree) to guarantee convergence [30]. The dynamics of the entire network is then

x(t+1) = P x(t),   P = I − ε(D − A),   (2)

where x = (x_1, x_2, ⋯, x_N)^T, I is the unit matrix, and D = diag{1^T A}, where 1 is an N-dimensional all-1 vector. The state x(t) asymptotically converges to the final value x(∞) = μx(0)1 if the spectral radius of P is no greater than 1. Here μ is the left eigenvector of P corresponding to eigenvalue 1, which also satisfies the normalization condition μ1 = 1. In particular, for undirected networks or balanced directed networks (i.e., Σ_j a_ij = Σ_j a_ji for every node i), μ = 1^T/N, so the final value is the average of the initial states.

We designate node i to be the node from which we gather time-sequential information about itself and about ℓ neighboring nodes i_1, ⋯, i_ℓ ("monitored nodes") as

y_i(t) = (x_i(t), x_{i_1}(t), ⋯, x_{i_ℓ}(t))^T.   (3)

We define an (ℓ+1) × N matrix C_i in which the element in column i of the first row and the element in column i_j of the (j+1)th row are 1 (j = 1, 2, ⋯, ℓ). All other elements are 0. Thus

y_i(t) = C_i x(t) = C_i P^t x(0).   (4)

We designate D_i to be the smallest integer that satisfies the condition C_i q_i(P) = 0, where q_i is a monic polynomial of degree D_i + 1,

q_i(z) = z^{D_i+1} + α_{D_i} z^{D_i} + ⋯ + α_1 z + α_0,

where the α_k are real coefficients. This condition implies a linear recursion of the observed outputs,

y_i(t + D_i + 1) + α_{D_i} y_i(t + D_i) + ⋯ + α_0 y_i(t) = 0.   (5)

Focusing on the Z-transform Y_i(z) = Z(y_i(t)), from Eq. (5) and the time-shift property of the Z-transform we have

Y_i(z) = p_i(z) / q_i(z),   (6)

where p_i(z) is an (ℓ+1)-dimensional vector of polynomials determined by the initial outputs y_i(0), ⋯, y_i(D_i). Note that according to the definition of P in (2) [45] the only unstable root of q_i(z) is the one at 1. We then define

q̄_i(z) = q_i(z)/(z − 1) = z^{D_i} + β_{D_i−1} z^{D_i−1} + ⋯ + β_0.   (7)

From Eq. (7) we deduce that

Σ_{k=0}^{D_i} β_k ȳ_i(t + k) = 0,   β_{D_i} = 1,   (8)

where ȳ_i(t) = y_i(t+1) − y_i(t) denotes the successive differences of the outputs. Using the final value theorem in (7) and some simple algebra we find the consensus value φ1, where

φ1 = lim_{t→∞} y_i(t) = lim_{z→1} (z − 1) Y_i(z) = p_i(1) / q̄_i(1).   (9)

We denote the Hankel matrix

H^k_{i,ℓ} = [ ȳ_i(0)    ȳ_i(1)   ⋯  ȳ_i(k)
              ȳ_i(1)    ȳ_i(2)   ⋯  ȳ_i(k+1)
               ⋮          ⋮           ⋮
              ȳ_i(r−1)  ȳ_i(r)   ⋯  ȳ_i(r+k−1) ],   (10)

with r = ⌈(k+1)/(ℓ+1)⌉ block rows, where ⌈x⌉ is the nearest integer not less than x, so that H^k_{i,ℓ} always has no fewer rows than columns. Node i then stores y_i(t) (t = 0, 1, . . .) in memory and recursively builds the Hankel matrix H^k_{i,ℓ}.
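As a concrete illustration of the routine process, the following Python sketch simulates the dynamics (2) on a small undirected network; the example graph, initial states, and function name are illustrative choices, not taken from the paper.

```python
import numpy as np

def consensus_matrix(A, eps):
    """Build P = I - eps * (D - A), where D carries the degrees on its diagonal."""
    D = np.diag(A.sum(axis=0))
    return np.eye(A.shape[0]) - eps * (D - A)

# 4-node undirected path graph; the maximal degree is 2, so eps <= 1/2.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
P = consensus_matrix(A, eps=0.5)

x = np.array([1.0, 2.0, 3.0, 6.0])  # initial states; their average is 3.0
for _ in range(200):                # routine neighborhood coordination
    x = P @ x

# For an undirected network mu = 1^T/N, so every state converges to
# the average of the initial states.
print(np.round(x, 4))               # all four entries are ~3.0
```

The spectral-radius condition on P guarantees that all states approach μx(0) = 3.0, but only asymptotically; this gradual relaxation is exactly what the observation-based method below short-circuits.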
Node i then calculates the rank of H^k_{i,ℓ}, increases the dimension k until H^k_{i,ℓ} loses column rank, and stores the first defective Hankel matrix H^K_{i,ℓ}. Here K is a good estimate of D_i [45]. Node i then calculates the normalized kernel β = (β_0, ⋯, β_{K−1}, 1)^T of H^K_{i,ℓ}, i.e., H^K_{i,ℓ} β = 0, according to Eq. (8). Once β is obtained and combined with the previously memorized [y_i(0) y_i(1) ⋯ y_i(K)], node i can use the final value theorem in Eq. (9) and calculate the final global synchronized value

φ1 = [Σ_{k=0}^{K} β_k y_i(k)] / [Σ_{k=0}^{K} β_k],

where φ = μx(0). The proposed algorithm thus lets node i calculate the global synchronized value φ within N_{i,ℓ} = ⌈(K+1)/(ℓ+1)⌉ + K iteration steps. All nodes then propel themselves toward the calculated destination φ. Given the observed node i and its ℓ monitored nodes, the synchronizing time of the method can thus be quantified by N_{i,ℓ}. To quantify the synchronization speed of the routine process, we directly simulate the dynamics (2) and define the minimal convergence steps M as the time at which the summed state difference over all node pairs, Σ_{i>j} |x_i − x_j|, drops below a small threshold δ (here we set δ = 10^{−3}). The smaller the value of M, the more rapid the synchronization.

We consider three types of undirected network models: the Erdős-Rényi (ER) [32], the Barabási-Albert (BA) [33], and the Watts-Strogatz (WS) [34] models. In an ER network, node pairs are connected with probability ρ. A BA network starts as a small clique of m nodes, and at each time step a single node is added with m edges connecting it to existing nodes; the probability of selecting an existing node is proportional to its degree. A WS network starts as a one-dimensional lattice in which each node connects to z neighbors, and each edge then has a constant probability p of being rewired. The average degree of a BA network is approximately 2m, and the average degree of a WS network is z. We generate 100 networks of size N = 100 for each model.
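The algorithm above can be sketched in a few lines for the ℓ = 0 case (only the observed node's own record is used): grow the Hankel matrix of output differences until it loses rank, extract the normalized kernel β, and apply the final-value formula. The function name, example graph, rank tolerance, and record length are our own illustrative assumptions.

```python
import numpy as np

def final_value_from_node(y, rtol=1e-8):
    """Estimate the synchronized value from a single node's record y(0..T):
    grow the Hankel matrix of differences until it becomes rank-deficient,
    take the normalized kernel beta, and apply the final-value formula."""
    ybar = np.diff(y)                              # ybar(t) = y(t+1) - y(t)
    for k in range(1, (len(ybar) - 1) // 2 + 1):
        H = np.array([[ybar[s + t] for t in range(k + 1)]
                      for s in range(k + 1)])      # (k+1) x (k+1) Hankel matrix
        s = np.linalg.svd(H, compute_uv=False)
        if s[-1] < rtol * s[0]:                    # first defective size: K = k
            _, _, vt = np.linalg.svd(H)
            beta = vt[-1]                          # kernel vector of H
            beta = beta / beta[-1]                 # normalize so beta_K = 1
            return y[:k + 1] @ beta / beta.sum()   # phi = sum(beta_k y(k)) / sum(beta_k)
    raise ValueError("record too short to find a defective Hankel matrix")

# Generate a short record on a 4-node path graph with P = I - eps*(D - A).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
P = np.eye(4) - 0.5 * (np.diag(A.sum(axis=0)) - A)
x = np.array([1.0, 2.0, 3.0, 6.0])                 # initial average = 3.0
record = []
for _ in range(12):                                # far fewer steps than convergence
    record.append(x[0])                            # node 0 observes only itself
    x = P @ x

phi = final_value_from_node(np.array(record))
print(round(phi, 6))                               # ~3.0, the final synchronized value
```

For this path graph the first defective size is K = 3, so the short record y(0), …, y(7) already determines φ exactly, while the routine dynamics needs many more steps to bring all pairwise differences below δ.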
In each network, every node is chosen once as the observed node, and for each choice we independently sample its ℓ monitored neighbors 100 times. Both the average minimal memory length (average MML, or AMML, N_ℓ) and the average minimal convergence steps (average MCS, or AMCS, M) are obtained by averaging over all independent runs. As shown in Table 1, even when ℓ = 0, i.e., when only the record of the observed node itself is used and no neighboring nodes are monitored, the synchronization speed of this method is much faster than that of the routine process, as indicated by how much smaller N_0 is than M. In addition, N_ℓ decreases as ℓ increases, suggesting that the synchronization can be further accelerated by including monitored nodes.

Fig. 2. AMML as a function of the number of monitored neighbors ℓ for the 10 real networks. Given ℓ, we select only observable nodes with degree no less than ℓ to implement the simulation and then obtain the average MML over these nodes. Since the maximal degree of Power is 7, the corresponding maximal ℓ in the first plot is 7 as well.
The state of a node is directly affected by its neighbors, and the states of the neighbors are in turn affected by their neighbors; that is, the influence spreads over the edges, leading to interplay among all node pairs. According to the discrete dynamics (1), the average number of steps required for the influence from a randomly selected node to reach another randomly selected node equals the average distance d. Thus the synchronization time is strongly dependent on the average distance. Figure 1 shows, as the model parameters vary, the relationship between the synchronization time and the average distance for both our method and the routine process. The synchronization time M required by the routine method is much longer than N_ℓ even when ℓ = 0. Both the M-d and N_0-d relationships approximately fit a linear function, but the increasing rate of M is much larger than that of N_0. We thus expect that in networks with a larger d the advantage enjoyed by N_0 will become even more significant.

Table 2. Topological features and synchronization times for the 10 real networks under consideration. N, E, k and d represent the number of nodes, the number of edges, the average degree, and the average distance, respectively. The first 8 networks are undirected while the last two are directed. The average distance of a directed network is defined as d = (1/(N(N−1))) Σ_{i≠j} d(i, j), where d(i, j) is the distance from node i to node j. If the network is not strongly connected, d = ∞. As is clear from this table, N_0 is much smaller than M, indicating the remarkable advantage of the present method.
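The average distance d that both M and N_0 track can be computed by breadth-first search from every node. The helper below is an illustrative sketch for unweighted graphs given as adjacency lists, not code from the paper.

```python
from collections import deque

def average_distance(adj):
    """Average shortest-path distance over all ordered reachable node pairs,
    computed by a breadth-first search from every source node."""
    total, pairs = 0, 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:       # first visit gives the shortest distance
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())     # distances from s to all reachable nodes
        pairs += len(dist) - 1          # exclude the source itself
    return total / pairs

# 4-node path graph: ordered-pair distances sum to 20 over 12 pairs, d = 5/3.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(average_distance(adj))            # ~1.667
```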

Fig. 3. The synchronization times M and N_0 versus the average distance d for the 9 real networks other than Highschool, whose average distance is infinite. The red circles and blue squares denote the synchronization times of the routine procedure and the present method, respectively. The red and blue lines represent linear fits to the data points by least-squares estimation.
We next examine ten real-world networks: (i) "Karate": The friendship network among 34 members of a karate club at a US university [35].
(ii) "Power": Data on a US electric power network from the early 1960s, which we acquired from a 57-bus case downloaded from [36].
(iii) "Dolphin": An undirected social network of frequent contacts among 62 bottlenose dolphins in a community at Doubtful Sound, New Zealand [37].
(iv) "Lesmis": The network of fictional characters in Victor Hugo's novel Les Misérables, where an edge denotes the co-appearance of the two corresponding characters [38].
(v) "Polbooks": The network of books about recent US politics sold on Amazon.com, where the edges between books represent frequent co-purchasing of books by the same buyers [39].
(vi) "Football": The network of American football games between Division IA colleges during the regular season in Fall 2000, where each node represents a team and two teams are connected if they play regular seasonal games [40].
(vii) "FFHI": The face-to-face human interaction network, where each node denotes an individual in a school [41].
(viii) "Corporate": A European corporate community network in which nodes represent firms and two firms are connected if they share at least one manager or director [42].
These eight networks are undirected. To verify the generality of the results, we also examine two directed networks.
(ix) "Bison": The network of dominance relationships among American bison observed in 1972 on the National Bison Range in Moiese, Montana, where each node denotes a bison and a directed edge from node i to node j represents that i dominates j.
(x) "Highschool": The network of friendships among boys in a small high school in Illinois [44]. Highschool is directed because student i can identify j as a friend even when j does not identify i as a friend. Table 2 provides the structural statistics and synchronization times of the ten real-world networks. Comparing the last two columns, we see that our method produces much faster synchronization; in most cases it is ten times faster than the routine procedure even without using monitored nodes. Figure 2 shows that, despite some fluctuations, the synchronization time N_ℓ further decreases as ℓ increases, indicating additional benefit when monitored nodes are introduced. Similar to the behavior shown in Fig. 1, both N_0 and M increase with d. Figure 3 shows linear fits for visual guidance, which make clear that the increasing rate of M is much larger than that of N_0. Experimental analyses of disparate real-world networks thus again demonstrate the results obtained from the network models: (i) our method speeds up synchronization, (ii) monitoring more neighbors further accelerates synchronization, and (iii) the synchronization time is positively correlated with d, but it grows much more slowly under the present method.
We have found a mechanism that leads to the ultrafast synchronization of general networked discrete-time linear dynamics and that requires only the historical dynamical trajectory of one observable node. In a networked dynamical system, the state of a node is directly affected by its neighbors, which are in turn directly affected by their own neighbors, and so on. Thus the state of a node will affect, and be affected by, all other nodes after a sufficiently long period of time. Our major contribution here is successfully realizing this theoretical possibility by applying Hankel matrix analysis.
Compared to the information propagation [21, 26] and predictive protocol [27-29] mechanisms, our proposed mechanism requires little intelligence from most individuals but a higher level of intelligence from the observable node: both the memory to store the historical dynamical trajectory and the ability to analyze it. In a biological system, it is unlikely that a leader would use a Hankel-matrix-like process to determine in advance a travel direction in which to lead the flock. It is more likely that individuals with a short memory would not use the elaborately designed Hankel matrix but would instead use recent dynamical records to acquire approximate synchronization. Thus our proposed mechanism is also a candidate explanation for ultrafast synchronization.
We believe that this mechanism will have significant applications in engineering systems. A group with one super leader is unlikely in the biological world but easy to design and implement in the human-constructed world. A distributed sensor network in which each sensor communicates and interacts with its neighbors must be able to align and move together in such scenarios as field investigation or battleground detection. Our proposed mechanism does not require a large number of low-intelligence sensors but only one sensor with high intelligence. Modern information technology (in particular, the rapid development of intelligent hardware) allows us to produce a smart sensor with a sufficiently long memory and the ability to analyze a Hankel matrix. Such a smart sensor could predict the future global state of any kind of networked linear dynamics and shape the consensus of the entire sensor group.