Evolutionary dynamics under interactive diversity

As evidenced by many cases in human societies, individuals often make different behavioral decisions in different interactions and adaptively adjust their behavior in changing interactive scenarios. However, how such diverse interactive behavior affects cooperation dynamics has remained unknown. Here we develop a general framework of interactive diversity, which models individuals' separate behavior against distinct opponents and their adaptive adjustment in response to opponents' strategies, to explore the evolution of cooperation. We find that interactive diversity enables individuals to reciprocate every single opponent and thus sustains large-scale reciprocal interactions. Our work witnesses an impressive boost of cooperation over a notably extensive range of parameters and for all pairwise games. These results are robust in well-mixed and various networked populations, and under both degree-normalized and cumulative payoff patterns. From the perspective of network dynamics, and in contrast to most previous work in which individuals compete for nodes, here the system evolves in the form of behavior disseminating along edges. We propose a theoretical method based on the evolution of edges, which predicts well both the frequency of cooperation and the compact cooperation clusters. Our thorough investigation clarifies the positive role of interactive diversity in resolving social dilemmas and highlights the significance of understanding evolutionary dynamics from the viewpoint of edge dynamics.


Introduction
Cooperation is ubiquitous on many levels of biological organization, ranging from eukaryotic cells and multicellular organisms to human societies [1,2]. Understanding how cooperation emerges and persists has been an enduring conundrum in evolutionary biology since Darwin [3,4]. Resorting to the powerful mathematical framework of evolutionary game theory, especially by virtue of several classic metaphors like the Prisoner's Dilemma, the Snowdrift Game, and the Stag-hunt Game, researchers have made much effort to explain the prevalence of cooperative traits [5,6]. Traditional work has focused on the well-mixed setup in which all individuals encounter one another evenly, in stark contrast with many real-life situations where local interactions are more frequently observed [7]. In particular, recent advances in exploring the underlying topologies of human interactions reveal a few typical interacting patterns entailing networks, where nodes represent individuals and edges depict who interacts with whom [8,9]. This has aroused much interest in investigating cooperation problems on various network-structured populations [10][11][12][13][14][15][16][17], as reviewed in [18].
Cooperation dynamics in complex networks has been studied in much detail, including studies of punishment [19,20], multiple strategies [21,22], as well as individual heterogeneity and population diversity [23][24][25]. However, up to now, most research has concentrated on the competition of individuals over nodes, as if that forms the only basis of this evolutionary system [19][20][21][22][23][24][25]. The latest work on network dynamics has revealed that the dynamical properties of a process defined on edges may considerably differ from nodal dynamics [26,27]. Correspondingly, researchers have attempted to resolve cooperation problems from the perspective of edges.

Model
In a typical two-player game, each player has two strategies to choose from, cooperation (C) and defection (D). The general payoff matrix can be written as

        C   D
    C ( R   S )
    D ( T   P )

The value in the matrix corresponds to the payoff of a player who takes the strategy in the row against an opponent with the strategy in the column. We set R = 1, P = 0 and change two parameters, T and S, to cover three major social dilemmas: the Prisoner's Dilemma (−1 < S < 0, 1 < T < 2), the Snowdrift Game (0 < S < 1 < T < 2, T + S < 2), and the Stag-hunt Game (−1 < S < 0 < T < 1) [23]. Here we illustrate our model with figure 1. Players are located on the nodes of a network, and two players linked by an edge interact with each other. Initially, cooperation and defection are assigned randomly in all interactions, each making up 50%. In each generation, players play games with all neighbors with separate strategies (figure 1(a)) and obtain payoffs according to the above payoff matrix. Meanwhile, each player accumulates its payoffs from all interactions. Considering known issues with cumulative payoffs on strongly heterogeneous networks, especially the small possibility for a player to maintain a large number of social ties for free [49], we use degree-normalized payoffs in the main text (see figure S1 available online at stacks.iop.org/NJP/19/103023/mmedia in the supplementary data for results using cumulative payoffs). One's degree-normalized payoff (Π) is calculated by averaging cumulative payoffs over the number of one's neighbors.
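The parameterization above can be sketched in a few lines of code. The function names and the use of 'C'/'D' labels are illustrative choices, not part of the original model description; the payoff values and the (T, S) regions follow the text.

```python
def payoff(s_focal, s_opp, T, S, R=1.0, P=0.0):
    """Payoff of the focal player: rows C/D against columns C/D."""
    if s_focal == 'C':
        return R if s_opp == 'C' else S
    return T if s_opp == 'C' else P

def game_region(T, S):
    """Classify (T, S), with R = 1 and P = 0, into the three social dilemmas."""
    if -1 < S < 0 and 1 < T < 2:
        return "Prisoner's Dilemma"
    if 0 < S < 1 < T < 2 and T + S < 2:
        return "Snowdrift Game"
    if -1 < S < 0 < T < 1:
        return "Stag-hunt Game"
    return "other"
```

For example, (T, S) = (1.5, −0.5) falls in the Prisoner's Dilemma region, while (1.2, 0.3) is a Snowdrift Game.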
At the end of each generation, players update their strategies according to the asynchronous update rule [18,23]. First a random player x is selected with uniform probability, and then among x's neighbors a random player y is chosen uniformly, indicating that x gets the chance to adjust its strategy towards y (s_xy). As in classic rules [18], x needs a reference player to imitate. Here we model players' adaptive adjustment by capturing the principle that x's strategy towards y depends more on y than on others [48]. Thus x adaptively adjusts s_xy by choosing y as the reference player with larger likelihood. We introduce a parameter p to describe the simplest case. As shown in figure 1(b), x chooses y for reference with probability p and chooses another neighbor among the rest with probability (1 − p)/(k_x − 1), where k_x denotes the number of x's neighbors. If y is the reference player, x updates s_xy by imitating s_yx (y's strategy towards x) with a probability determined by the Fermi function [50]

    f(Π_x − Π_y) = 1/{1 + exp[β(Π_x − Π_y)]},    (1)

where Π_x (Π_y) is x's (y's) payoff and β characterizes the inverse noise introduced to permit irrational choices. If instead the reference player is some other neighbor z, then with probability f(Π_x − Π_z), x replaces s_xy with s_zx (z's strategy towards x). That is, x imitates z's strategy and applies it to the interaction with y, consistent with 'upstream reciprocity': after receiving help from Alice, Bob goes on to help Charlie [51]. Our model follows the conventional setup that each player only obtains information about how neighbors treat itself through personal involvement, and is not informed of neighbors' strategies against third parties [23,52,53].
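The imitation-driven adjustment of s_xy can be sketched as follows. This is a minimal sketch, not the authors' code: the `strategies[(a, b)]` dictionary layout (a's strategy towards b) and the function names are assumptions for illustration.

```python
import math
import random

def fermi(pi_x, pi_ref, beta):
    """Equation (1): probability that x imitates the reference player."""
    return 1.0 / (1.0 + math.exp(beta * (pi_x - pi_ref)))

def update_sxy(x, y, strategies, payoffs, neighbors, p, beta, rng=random):
    """One imitation-driven adjustment of s_xy, following the rule above."""
    if rng.random() < p or len(neighbors[x]) == 1:
        ref = y  # y is the reference player, chosen with probability p
    else:
        # each other neighbor is chosen with probability (1 - p)/(k_x - 1)
        ref = rng.choice([n for n in neighbors[x] if n != y])
    if rng.random() < fermi(payoffs[x], payoffs[ref], beta):
        strategies[(x, y)] = strategies[(ref, x)]  # imitate s_{ref,x}
```

Note that when the reference player is a third neighbor z, it is s_zx (z's strategy towards x) that is copied and applied towards y, matching the upstream-reciprocity interpretation in the text.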
A closer look helps to clarify the role of p in the strategy adjustment. p varies from 1/k_x to 1. When p = 1/k_x, all neighbors can be the reference player with the same likelihood; the dependence of x's strategy towards y on y is removed, and only the separate-behavior setting remains. We term this degenerated interactive diversity. Increasing p (p > 1/k_x) enlarges s_xy's reliance on y. If p increases to one, x's strategy towards y strictly relies on y, and y is bound to be the reference player in the process of adjusting s_xy [54]. In such a case, strategies taken in other interactions, like the interaction between x and z, are never accessible to the game between x and y. Thus all interactions terminate in enduring confrontations, namely defection versus defection and cooperation against cooperation. The system reaches a deadlock. This seemingly reasonable state, however, is unstable, since any tiny disturbance probably resolves a few confrontations and then drives the system to a new equilibrium. p < 1 favors imitating a strategy from a neighbor and applying it to interactions with others [1,51]. This helps the system escape from the deadlock and reach stabilization. We term such a driven pattern imitation-driven dynamics.
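The role of p can be made concrete by writing out the reference-selection probabilities. The function name and dictionary output below are illustrative assumptions.

```python
def reference_probs(p, k_x):
    """Probabilities of each neighbor acting as the reference player when x
    adjusts s_xy: the opponent y with probability p, every other neighbor
    with probability (1 - p)/(k_x - 1)."""
    return {'y': p, 'other': (1.0 - p) / (k_x - 1)}
```

With p = 1/k_x (e.g. p = 0.25 for k_x = 4), the opponent and every other neighbor are equally likely references, which is exactly the degenerated case; larger p tilts the choice towards the opponent y.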
In addition, recent behavioral experiments have observed that subjects often experiment with new behavior that has never appeared in the neighborhood before [55][56][57]. Random exploration (experimenting with new behavior) has already been demonstrated to play an important role in the emergence and maintenance of cooperation [55]. It also facilitates the escape of the system from the deadlock. We term it exploration-driven dynamics. The evolutionary details are illustrated in figure 1(c). For the adjustment of s_xy, with a probability p, x chooses y for reference and then adopts s_yx with probability f(Π_x − Π_y). With a probability 1 − p, s_xy is randomly updated to cooperation or defection.
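The exploration-driven rule differs from the imitation-driven one only in the 1 − p branch. Again a minimal sketch under the assumed `strategies[(a, b)]` layout; the function name is hypothetical.

```python
import math
import random

def update_sxy_exploration(x, y, strategies, payoffs, p, beta, rng=random):
    """One exploration-driven adjustment of s_xy: with probability p, imitate
    s_yx via the Fermi rule; with probability 1 - p, explore randomly."""
    if rng.random() < p:
        f = 1.0 / (1.0 + math.exp(beta * (payoffs[x] - payoffs[y])))
        if rng.random() < f:
            strategies[(x, y)] = strategies[(y, x)]
    else:
        # random exploration: cooperation or defection, each with prob. (1-p)/2
        strategies[(x, y)] = rng.choice(['C', 'D'])
```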

Results
In section 3.1, we run Monte Carlo simulations to investigate interactive diversity for three pairwise games on six social networks: the complete network representing a finite well-mixed population, the lattice with periodic boundary conditions, the regular network, the small-world network generated with rewiring probability 0.1 [8], the random network, and the scale-free network [9]. Population sizes are N = 100 for the complete network and N = 1600 for the other networks, with average connectivity k̄ = 4. In the main text, we consider three interactive settings: the traditional setting with the constraint of behavior identity, degenerated interactive diversity, and interactive diversity. We model degenerated interactive diversity by taking p = 0.25 (p equals 1/k̄ in the homogeneous networks with k̄ = 4). We take p = 0.99925 to capture the highly adaptive adjustment, a key element of interactive diversity (p is much larger than (1 − p)/(k̄ − 1)). Besides, we offer a detailed investigation of p in the supplementary data (see figure S1). We set β = 8.

[Figure 1 caption] Interactive diversity in social networks. Large green circles are players and small circles over edges are strategies (red means cooperation and blue means defection). Each gray sphere contains a player and all its strategies. Each player plays games with its neighbors with separate strategies and meanwhile accumulates payoffs from all interactions (a). After that, its degree-normalized payoff (Π) is calculated. Here we focus on a player x and its three neighbors w, y, and z. s_xy denotes x's strategy towards y, and other strategies take similar notation. Panel (b) illustrates how x updates s_xy under imitation-driven dynamics (thick gray line): with a probability p, x selects y as the reference player and then replaces s_xy with s_yx with probability f(Π_x − Π_y); with a probability (1 − p)/2, x chooses z (w) for reference and imitates s_zx (s_wx) with probability f(Π_x − Π_z) (f(Π_x − Π_w)). Panel (c) shows how x updates s_xy under exploration-driven dynamics (thick gray line): with a probability p, x selects y as the reference player and then replaces s_xy with s_yx with probability f(Π_x − Π_y); with a probability (1 − p)/2, x chooses cooperation (defection) to replace s_xy.

In section 3.2, we elaborate on the microscopic mechanisms responsible for the boost of cooperation by analyzing an array of characteristic spatial patterns. In section 3.3, we develop a theoretical method to predict both the frequencies of cooperation in different parameter settings and the stationary strategy distribution.
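The degree-normalized payoff Π used throughout the simulations can be sketched as follows. The `strategies[(a, b)]` layout and the function name are illustrative assumptions; the payoff lookup follows the matrix from the model section with R = 1, P = 0.

```python
def degree_normalized_payoffs(strategies, neighbors, T, S, R=1.0, P=0.0):
    """Pi for every player: cumulative payoff from all separate interactions,
    averaged over the number of neighbors."""
    table = {('C', 'C'): R, ('C', 'D'): S, ('D', 'C'): T, ('D', 'D'): P}
    return {x: sum(table[(strategies[(x, n)], strategies[(n, x)])]
                   for n in ns) / len(ns)
            for x, ns in neighbors.items()}
```

On a three-player triangle with mixed strategies this reproduces, for instance, Π = (R + S)/2 for a player cooperating towards one cooperator and one defector.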

Evolution on social networks
We begin with the evolution of cooperation on six typical networks. Figure 2(a) presents the stationary cooperation frequency in the traditional population setting, where each player is constrained to adopt a fixed strategy against all opponents within a generation. Evidently, in the well-mixed population, defection shows overwhelming advantages over cooperation in the entire PD domain. Even when individuals are distributed over spatial structures, cooperation dies out in most of the PD domain. Taking separate behavior against different neighbors, as shown in figure 2(b), indeed enlarges the parameter region in which cooperation survives. Especially, in the well-mixed population, cooperation thrives under the Prisoner's Dilemma. When focusing on the SG domain, we find that cooperation shrinks instead. Actually, since the separate-strategy setting brings about more strategy pairs while the payoff parameters of the Snowdrift Game facilitate the matching of opposite strategies, the formation of compact cooperation clusters is hindered [58].
On the basis of the separate-strategy setting, we introduce individuals' adaptive adjustment and thereby achieve interactive diversity. As shown in figure 2(c), interactive diversity elevates cooperation to a significantly high level for a notably extensive range of payoff values, regardless of population structure. In particular, cooperation outperforms defection in the entire zones of the Snowdrift Game and the Stag-hunt Game, which totally wipes out the nonuniform effects caused by the variance of game metaphors [58]. Moreover, even in the most adverse conditions, these facilitative effects of interactive diversity on the evolution of cooperation are robust under the cumulative payoff pattern and under exploration-driven dynamics (see figures S2 and S3 in the supplementary data). For the first time, we find that a simple setting can make defection maladaptive while elevating cooperation so pronouncedly. A subsequent look at the microscopic level reveals the underlying mechanisms.

Underlying mechanisms for cooperation's success
For a clear interpretation, we implement the subsequent studies in the most-used Prisoner's Dilemma and take the payoff transformation T = 1 + r, S = −r, where r denotes the ratio of the cost to the net benefit of cooperation. In this paper, we call an interaction a reciprocal interaction if the two participants take the same strategy, namely cooperation against cooperation (mutual cooperation, CC) or defection versus defection (mutual defection, DD). Figure 3(a) provides an overview of an evolutionary process that starts from a random strategy distribution. As the blue line shows, the fraction of reciprocal interactions, f_CC + f_DD, rapidly rises to 1, and almost all interactions of unilateral cooperation or unilateral defection are terminated. Although all interactions are reciprocal, as the red line shows, f_C still experiences an initial decrease and then a dramatic rise, indicating that the system has not reached equilibrium. How does the system evolve? The inset of figure 3(a) provides instrumental clues: f_CC + f_DD decreases below 1 from time to time, which means that a few reciprocal interactions like CC or DD are resolved into CD types. However, these resolved interactions exist only transiently and then transform back to DD or CC. The conversion of DD to CC or CC to DD drives the system towards a stable equilibrium. A detailed graphical explanation is provided in figure S4 in the supplementary data.
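The two quantities tracked in figure 3(a), f_C and f_CC + f_DD, can be computed from a strategy map in a few lines. The `strategies[(a, b)]` layout and the function name are illustrative assumptions.

```python
def interaction_stats(strategies):
    """f_C: fraction of cooperative strategies; f_CC + f_DD: fraction of
    reciprocal interactions, i.e. edges on which both sides play the same
    strategy."""
    f_c = sum(s == 'C' for s in strategies.values()) / len(strategies)
    edges = {tuple(sorted(k)) for k in strategies}
    f_recip = sum(strategies[(a, b)] == strategies[(b, a)]
                  for a, b in edges) / len(edges)
    return f_c, f_recip
```

For example, a population with one CC edge and one CD edge yields f_C = 3/4 but f_CC + f_DD = 1/2, showing that the two measures are independent.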
To deepen the understanding of the underlying mechanisms, we illustrate a series of characteristic strategy snapshots which depict the time evolution of a system from a designated initial state (see figure 3(c)). We first discuss how interactive diversity helps to maintain cooperation in harsh situations. Initially, each cooperator obtains the effective payoff 0.5 − 0.5r, and its neighboring defector obtains 0.25 + 0.25r (figure 3(c)). Taking x_(4,4) and x_(3,4) (see figure 3 for notation) as an example, x_(3,4) easily defeats x_(4,4) for r = 0.4. In the traditional setup, x_(4,4) probably switches to a defector to retaliate against x_(3,4)'s exploitation. However, its switching inevitably hurts adjacent cooperators and finally leads to the collapse of cooperation. Allowing separate behavior alone does not change the fate of cooperation remarkably either, since random selection of the reference player probably imposes the sanction against defection on neighboring cooperators. When adaptive adjustment is introduced, responding with defection to successful defectors weakens defectors' advantages, and reacting with cooperation to successful cooperators consolidates cooperators' advantages. As previously observed in group interactions [59], mutual cooperation provides long-term well-being for the neighborhood, while the temporary advantages of defection cannot be maintained. Here cooperation clusters form strictly on the basis of pairwise interactions (see figure 3(e)) and are much more stable than the clusters based on individuals in most previous work. Thus the former resist the invasion of defection and maintain cooperation even in the most testing conditions.
Next we elucidate the dissemination of cooperation over the entire network under interactive diversity. Figures 3(e)-(g) portray how the interaction scenario between x_(4,2) and x_(4,3) changes from mutual defection to mutual cooperation. First, as figure 3(f) shows, x_(4,3) imitates cooperation from x_(4,4) and applies it to the interaction with x_(4,2). Since x_(4,4) is located in the center of the cooperation cluster, it obtains more payoffs from more cooperative neighbors and has the advantage in spreading its strategies. Subsequently, in figure 3(g), x_(4,3) spreads cooperation to x_(4,2). Here we have to point out that switching strategies from defection to cooperation is certain to incur damage and weaken one's own advantage, as for x_(4,3). Especially in traditional settings, converting from a defector to a cooperator means losses from all interactions, which impedes the further spreading of cooperation. Here interactive diversity enables an individual to adjust strategies in a few interactions while leaving the rest unchanged. This significantly reduces the impairment when strategies are switched to cooperation. Therefore, interactive diversity facilitates the percolation of cooperation along edges into the entire network even in a severe situation.

Theoretical predictions based on evolution of edges
Here we elucidate the main idea of the theoretical method and leave the derivation details to the appendix and the supplementary data. As shown in figure 4, we embed interactive diversity into a networked system, where the four interaction scenarios (mutual cooperation, mutual defection, unilateral cooperation, and unilateral defection) are respectively captured by four edge types (CC, DD, CD, DC). All edges are endowed with direction, by which interaction and strategy update in the original model are mapped into the evolution of directed edges, where the direction of an edge decides who updates strategies towards whom. From figure 3, we have concluded that for sufficiently large p, almost all edges exhibit either CC or DD; CD (DC) acts only as a transitional type for the conversion between DD and CC (see figures S5 and S6 in the supplementary data for abundant evidence). In other words, before the appearance of the next CD edge, previous CD edges have already transited to either CC or DD, in line with the weak-mutation setting in much seminal work, in which the whole population reaches a homogeneous state before the next individual mutates [60].
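The mapping from a pair of strategies on an edge to one of the four edge types can be sketched directly. The convention below (first letter is the tail player's strategy towards the head, second letter the reverse) and the `strategies[(i, j)]` layout are assumptions for illustration.

```python
from collections import Counter

def edge_type_counts(strategies):
    """Tally the four directed edge types (CC, DD, CD, DC) from a strategy
    map, where strategies[(i, j)] is i's strategy towards j."""
    return Counter(strategies[(i, j)] + strategies[(j, i)]
                   for (i, j) in strategies)
```

A single undirected interaction yields two directed edges: a CD interaction, for instance, appears once as 'CD' and once as 'DC', depending on which endpoint is the tail.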
Based on the above analysis, figure 4 illustrates how a DD edge switches to a CC edge under imitation-driven dynamics. Generally, each conversion of DD to CC experiences two fundamental steps: DD is resolved to CD, and the resulting DC edge turns to CC. We denote by p_ij^DD and p_ij^CC the proportions of E_ij^DD and E_ij^CC edges (see figure 4 for notation). Using p_ij^DD and p_ij^CC, we obtain the fractions of nodes, the probability of an E_ij^DD edge converting to E_(i+1)(j+1)^CC, and the change in the number of various edges within a conversion. Therefore, in a homogeneous network with connectivity k̄, the whole dynamical system can be captured by only 2(k̄ + 1)^2 variables relevant to edges (see the appendix and the supplementary data). Furthermore, if we ignore the computational complexity, this method can be extended to the general case with other adjustment rules.
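The bookkeeping behind the variable count can be made explicit: one proportion variable per edge type (DD or CC) and per pair (i, j) with i, j = 0, ..., k̄. The function name is an illustrative assumption.

```python
def edge_variables(k_bar):
    """Enumerate the edge-proportion variables p_ij^DD and p_ij^CC for
    i, j = 0, ..., k_bar; there are 2*(k_bar + 1)**2 of them in total."""
    return [(t, i, j) for t in ('DD', 'CC')
            for i in range(k_bar + 1) for j in range(k_bar + 1)]
```

For k̄ = 4 this gives 2 × 5² = 50 variables, which is the state-space size of the edge-based dynamical system described in the text.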
In figure 5, we verify the validity of this theoretical method against different selection intensities, connectivities, and driven dynamics. In comparison with the analytic results in most previous studies with large selection intensities [31,58], our predictions are much more precise. Our model shows strong robustness against the initial strategy configuration. Thus, although all Monte Carlo simulations start from a random state, deviating from the theoretical assumption that every interaction presents either CC or DD, the method still predicts the stable frequency of cooperation well. When evolution is driven by imitating successful neighbors, a moderate selection intensity (β) benefits cooperation most (see figure S7 in the supplementary data). Decreasing the selection intensity introduces more noise and inhibits reciprocation towards defectors (figure 5(a)(i)), whereas increasing the selection intensity probably impedes the spreading of cooperation. Under exploration-driven dynamics, cooperation levels decrease monotonically with increasing selection intensity, as shown in figure 5(b)(i). As random exploration probably introduces defection inside cooperation clusters, strong selection enhances the advantage of defection over cooperation. Interestingly, regardless of the driven dynamics, increasing edge density consistently elevates cooperation (see figures 5(ii) and S7 in the supplementary data), coinciding with the boom of cooperation in the complete network (see figure 2(c)).
We proceed with a brief comparison between these dynamics by subtracting (b) from (a). As shown in figure 5(c), for a wide parameter range of cost-to-benefit ratios, selection intensities, and edge densities, imitating strategies from successful neighbors is more conducive to the long-term development of the population. However, random exploration is not always inferior to imitation in terms of stabilizing cooperation. For example, for large r and small β, random exploration fosters more cooperation. The results are robust for all pairwise games (see figure S8 in the supplementary data). Our work here provides a new insight into understanding the relation between these two classical learning dynamics [61].

[Figure 4 caption] Green circles are players and small circles are strategies (red means cooperation and blue means defection). V_i denotes a player adopting cooperation in i interactions and defection in the rest. E_ij^DD (E_ij^CC) is a directed DD (CC) edge directed from V_i to V_j. The direction decides who updates strategy towards whom; for example, if E_ij^DD is selected to update, V_i adjusts its strategy towards V_j. The conversion includes two fundamental steps: a DD edge is resolved to a CD edge when E_ij^DD is selected (a)(b); this DC edge transits to a CC edge when V_j is selected to update its strategy towards V_i. The former is reached by V_i learning from a neighbor V_k (red arrow ki); the latter is achieved through V_j imitating V_i.
Finally, we study the stationary strategy distribution. To highlight the effects of evolution on the strategy distribution, we provide a static system for comparison, in which CC and DD each constitute 50% and are scattered disorderly. As shown in figure 6(i), in the homogeneous network with average connectivity k̄ = 4, V_2 (see figure 4 for notation) makes up the largest part, and both CC and DD are more frequently observed around V_2. Figure 6(ii) shows the stable equilibrium of a system evolving from a random initial strategy configuration. Although CC and DD still constitute the same fractions, 50%, we find that the number of V_4 players increases. Besides, more CC edges exist around V_4 (players cooperating with all neighbors), and more DD edges appear near V_0 (players defecting against all neighbors), validated by the theoretical equilibrium shown in figure 6(iii). A further look at the stationary strategy distribution in figure 6(ii) reveals a hierarchical structure. That is, in the process of evolution, CC gathers together to form cooperation clusters, in the center of which are highly cooperative players like full cooperators (V_4), while approaching the boundary, players' preference for cooperation decreases gradually down to defectors (V_0). This structure is different from previous cooperation clusters, at the boundary of which players hold totally opposite attitudes towards cooperation, being either cooperators or defectors [7].

Discussion and conclusions
In this paper, we propose a general framework of interactive diversity in which each player adopts separate behavior against different neighbors and makes adaptive adjustments in response to opponents' strategies. We make a thorough investigation into the role of interactive diversity in the resolution of social dilemmas. We find that interactive diversity elevates cooperation to an impressively high level, which is strongly robust against variations of game metaphors (Prisoner's Dilemma, Snowdrift Game, Stag-hunt Game), population types (well-mixed, homogeneous, and heterogeneous structured populations), payoff patterns (degree-normalized or cumulative payoffs), and learning manners (imitation-driven or exploration-driven dynamics). The microscopic evolutionary process reveals that the system evolves in the form of behavior diffusing along edges, in stark contrast with traditional evolutionary dynamics in which individuals compete for nodes. The former is shown to be more beneficial to cooperation, validated by both Monte Carlo simulations and theoretical analysis based on the evolution of edges.
A recent behavioral experiment has provided powerful support for our findings [62]. Unlike previous experiments with the constraint of action identity [52,56,[63][64][65]], it allows human subjects to take cooperation and defection simultaneously. Eventually, it observes persistent cooperation although the condition for cooperation to succeed in the traditional setup is unsatisfied [57,62,66]. Our work provides a systematic investigation of behavior diversity at the theoretical level and paves the way for subsequent behavioral studies in which human subjects can behave in a manner close to their daily life. Actually, the feasibility of reciprocity is responsible for the predominance of cooperation. Traditionally, once a player decides to retaliate against neighboring defectors by switching its strategy to defection, it inevitably causes damage to cooperator neighbors, which places cooperators in a vulnerable situation. Interactive diversity endows players with the ability to reciprocate a single neighbor. Especially, they can make adaptive adjustments in response to opponents' strategies and protect the innocent from unintentional hurt. Such a mechanism can also be extended to group interactions where players reciprocate a single group rather than all involved groups.

[Figure 6 caption] In column (i), CC and DD each represent 50% and are randomly distributed; this system does not experience any evolution and serves as a comparison group. Column (ii) corresponds to a simulated equilibrium after long-time evolution. The Monte Carlo simulation starts from a random strategy distribution and ends up with 50% CC and 50% DD. We choose parameters from figure 5(a)(i) to ensure that the stable f_C is near 0.5: N = 1600, k̄ = 4, r = 0.525, β = 2.0. The theoretical equilibrium is presented in column (iii), with the same parameters as column (ii). In row (a), the square with label V_i (see figure 4 for notation) in the row and label V_j in the column corresponds to the fraction of edges connecting V_i and V_j.
As observed in a previous study of conditional strategies, conditional cooperators (players who decide to cooperate or defect based on characteristics of other participants from the same group) effectively weaken the advantages of defectors and thus promote the evolution of cooperation [47]. From a macroscopic view, such reciprocity is achievable by reactive strategies [54,67] and neighborhood adjustment [31-33, 53, 68]. However, both of them are fundamentally different from interactive diversity. Players adopting reactive strategies like TFT [54] realize direct reciprocity depending fully on opponents' strategies in the last round, while ignoring opponents' payoffs (fitness) accumulated from multiple interactions, the most important determinant of success in the realm of evolutionary games [2]. Thus, in the repeated two-player game, two TFT users are easily caught in long runs of mutual backbiting [67]. Interactive diversity can effectively avoid such a predicament by adaptively assessing both strategies and payoffs. Taking an unsuccessful cooperator and a successful defector as an example, the defector's success incurs revenge from the cooperator, which weakens the advantage of defection. Considering a successful cooperator and an unsuccessful defector, the cooperator can impose cooperation on the defector and meanwhile resist defection because of its success, which further consolidates the advantage of cooperation. Altogether, interactive diversity provides more advantages for cooperation. When it comes to neighborhood adjustment, such as strategic tie formation and dissolution, cooperators sever connections to defectors as a way of punishment [29,[31][32][33]]. Nevertheless, this 'link reciprocity' relies heavily on frequent modification of interaction patterns and on players' abilities to control whom they interact with [53,68].
Besides, given that it is costly to create and break social ties, interactive diversity may be the simplest and most effective way to sustain such large-scale reciprocal interactions.
Few theoretical studies to date have found cooperation thriving in highly connected groups [66]. Even though synergy effects have been widely accepted as motivating cooperation, their introduction into structured populations still fails to stabilize cooperation as the density of social ties increases [14]. Without invoking any additional mechanisms like link adjustment [29,[31][32][33]] and reputation [69], we find prevailing cooperation on densely and even completely connected networks, demonstrating the strong reciprocity stimulated by interactive diversity. Thereby, our work provides a possible explanation for the sustained cooperation in some real social networks with sizable connectivity [70,71]. Here we point out that acting differently towards distinct opponents generally requires more strategy information and may induce additional costs for players, especially those interacting with many neighbors. Costs for additional information or skills, like the ability to punish free-riders, have shown considerable effects on the evolution of cooperation [72]. Therefore, introducing a cost associated with interactive diversity will probably bring new insights into the evolutionary dynamics, which is worth further investigation.
Behaving identically across all interactions has been a canonical setting, which depicts various scenarios and helps researchers focus on key issues in their studies [19,[23][24][25]]. However, on the topic of interacting patterns [26,27], we should not neglect that interactive diversity abounds [35][36][37][38][39][40][41] and constitutes a realistic representation of many living systems [43]. From the perspective of network dynamics, our work reveals that evolutionary dynamics defined on diverse edges significantly differs from evolutionary processes based on nodes [29,[31][32][33][44][45][46]]. Furthermore, associating interactive diversity with interactions on multi-layer networks helps to deepen our understanding of its implications. Interactive diversity allows independence of strategies and irrelevance of interactions: for a focal individual, its multiple interactions can be deemed to happen in different layers, and its cumulative payoff couples its strategies across all layers [73]. More importantly, interactive diversity offers us a new view on the collective behavior emerging in complex systems and can be further extended to other disciplines [74]. We believe that research along this direction will stimulate ample novel insights into this competitive world.

Appendix. V_i denotes a player adopting cooperation in i interactions and defection in the rest, with i varying from 0 to k̄. Figure 4 illustrates how an E_ij^DD edge switches to an E_(i+1)(j+1)^CC edge. First, the DD edge is resolved to a CD edge through V_i learning from a neighbor V_k (figure 4(a)). Varying k from 0 to k̄ to cover all cases, we get the probability for DD switching to CD. In the next step, this CD edge transits to a CC edge when V_j updates its strategy towards V_i (figure 4). Altogether, before the appearance of the next CD edge, this CD edge transits to E_(i+1)(j+1)^CC with a probability determined by the Fermi function f(Π_j − Π_i), where Π_j − Π_i depends on the neighborhood configuration. Similarly, under the same neighborhood configuration, one obtains the probability for an E_ij^CC edge evolving to an E_(i−1)(j−1)^DD edge.