Evolution of quantum and classical strategies on networks by group interactions

In this paper, quantum strategies are introduced into evolutionary games in order to investigate the evolution of quantum and classical strategies on networks in the public goods game. Comparing the results of evolution on a scale-free network and on a square lattice, we find that a quantum strategy outperforms the classical strategies regardless of the network. Moreover, a quantum strategy dominates the population earlier under group interactions than under pairwise interactions. In particular, if the hub node of a scale-free network is initially occupied by a cooperator, the strategy of cooperation will prevail in the population. In all other situations, however, a quantum strategy defeats the classical ones and finally becomes the dominant strategy in the population.


Introduction
The evolution of individual behavior in a population has attracted much interdisciplinary attention. In recent years, in the framework of evolutionary games on networks, many interesting results have been reported, such as spatial structure promoting the level of cooperation [1][2][3][4][5]. In these studies, pairwise interactions or two-person games, such as the Prisoner's Dilemma (PD) and Snowdrift (SD) games, are often used to describe the interactions between agents [6][7][8][9][10][11]. In reality, however, the collective actions of groups of individuals cannot be described appropriately by two-person games. In this situation, it is usual to resort to another particularly important paradigm, the public goods game (PGG) [12,13], an n-person game. In a PGG, each agent in a group of n agents must independently and simultaneously decide whether or not to contribute a unit, say one dollar, to a public goods pool. A cooperative agent (Cooperator, C) contributes a unit to the public goods pool, while a non-cooperative agent (Defector, D) does nothing. After each agent has made a decision, the total contribution in the pool is multiplied by an enhancement factor r < n and the result is distributed equally among the n agents, irrespective of their strategies. Thus, the payoffs of a cooperator and a defector have the forms

P_C = r n_c/n − 1,    P_D = r n_c/n,    (1)

where n_c represents the number of cooperators in the group of n agents. As each of the n_c cooperators contributes a unit, n_c is also the total contribution. Obviously, defectors receive payoffs without making any contribution, but their behavior disadvantages the cooperators in the group, so in the next round more agents will choose not to contribute. Finally, no one receives any benefit from the game, and thus the dilemma arises. More recently, many researchers have focused on the emergence of cooperation in populations on complex networks, especially on scale-free (SF) networks, based on the PGG.
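The payoff structure just described can be sketched in a few lines of Python; `pgg_payoffs` is an illustrative helper, not code from the paper:

```python
def pgg_payoffs(n, n_c, r):
    """Classical public goods game payoffs for one group of n agents:
    the n_c one-unit contributions are multiplied by the enhancement
    factor r and split equally among all n agents."""
    share = r * n_c / n      # every agent's share of the common pool
    payoff_c = share - 1.0   # a cooperator paid one unit to contribute
    payoff_d = share         # a defector contributed nothing
    return payoff_c, payoff_d
```

For example, with n = 5, r = 3 and three cooperators, every agent's share is 3 × 3/5 = 1.8, so each cooperator earns 0.8 while each defector earns 1.8, which illustrates the dilemma.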
When agents interact with each other through social ties defined by a heterogeneous graph, Santos et al [14] found that the level of cooperation in the PGG was promoted by social diversity. Gómez-Gardeñes et al [15] noted that there are many interacting subgroups of agents (also called modules) in networks, and that interactions take place within these subgroups. Hence, the neighbors of a node were divided into several groups with fixed sizes that overlapped only slightly. In this setting, Gómez-Gardeñes et al reported that cooperation was not fostered by social heterogeneity under the PGG and their updating rule. Szolnoki et al [16] studied how group sizes in the PGG influence the evolution of cooperation and reported that cooperation is not necessarily promoted by a large group size. In addition, some researchers [17,18] introduced new methods of distributing payoffs among agents, which led to a heterogeneity of payoffs when the PGG was adopted. Other related studies have also been carried out [19][20][21][22].
While classical game theory has been studied and applied across a large range of disciplines, the new field of quantum game theory has emerged as a generalization of classical game theory. Quantum game theory is based on quantum mechanics, so it exhibits new features that have no classical counterparts and opens up new lines of research. For example, when an agent implements a quantum strategy against an opponent's classical strategy, she/he can always win and increase her/his expected payoff in the quantized PQ penny flip game [23]. Moreover, the dilemma in the classical PD is removed when both agents resort to quantum strategies in a restricted space in which quantum entanglement, a particular quantum state, is involved [24]. Marinatto et al [25] found a unique Nash equilibrium for the Battle of the Sexes game when entangled strategies are allowed. Later, Iqbal et al [26] studied evolutionarily stable strategies in quantum games and Kay et al [27] presented an evolutionary quantum game. Other related studies have also been performed [28][29][30][31][32]. For further background on quantum games, see [33,34].
If the players in evolutionary games are replaced by agents who can use both quantum and classical strategies, how will the agents' behavior in a population evolve on networks? In this paper, we investigate the evolution of quantum and classical strategies on networks when the PGG with group interactions is employed. Our previous research [35] showed that quantum strategies win in a defector-dominated population, but only two specific quantum strategies were studied in that paper. In the present work, by contrast, quantum strategies are selected randomly from the full quantum strategy space Ŝ. The full quantum strategy space is much larger than the classical one, i.e. the classical strategy space is only a subset of the quantum one, so it can represent the diversity of agents' behavior in a population and allows the performance of quantum strategies to be measured. In addition, as mentioned above, the structure of a network plays a key role in the evolution of strategies, so quantum and classical strategies will evolve on a strongly heterogeneous network and on a homogeneous network, namely an SF network and a square lattice (SL). Agents occupy all the nodes of the network and resort to quantum games to interact with one another, where both pairwise and group interactions are applied. In an SF network, there exists a particular type of node whose degree is much larger than that of most nodes in the network. If one of these nodes is initially occupied by a quantum agent (Q), a cooperator or a defector, then the way the evolution of strategies is influenced can be studied. Further, the number of quantum strategies is increased and two updating rules are introduced; based on them, the evolution of the two types of strategy is discussed in detail. Finally, it is worth noting that a quantum strategy is not a probabilistic sum of pure classical strategies (except under special conditions) and cannot be reduced to pure classical strategies [26].
The rest of this paper is organized as follows. First, the physical model of an n-person quantum game is introduced briefly. Second, the model for the evolution is described. In the next sections, the results of the evolution on an SF network and on an SL are compared, and then pairwise and group interactions of agents are discussed when two updating rules are introduced. Later, the hub node of the SF network is assigned an agent with a given strategy at the start and the effects on the evolution are analyzed. Finally, the relationship between the evolution and the number of quantum strategies is discussed. The conclusion is given in section 7.


n-Person quantum games

Figure 1 shows the physical model of an n-person quantum game [36], which is a generalization of the two-person quantum game introduced by Eisert et al [24]. In the model, the possible outcomes of the classical strategies, C = 0 and D = 1, are assigned to the two basis vectors {|C⟩ = |0⟩, |D⟩ = |1⟩} of the Hilbert space, respectively.

Assume that an n-person game starts at the initial state |ψ_0⟩ = Ĵ|00···0⟩ (n qubits), where Ĵ is an entangling operator that is known to all n agents. It can be written as [36]

Ĵ = exp(iγ σ_x ⊗ σ_x ⊗ ··· ⊗ σ_x / 2),    (2)

where σ_x is the Pauli-X operator and the parameter γ ∈ [0, π/2] is a measure of the entanglement of the quantum game. If γ = 0, the quantum game recovers the classical game. To best exhibit the features of a quantum game, a maximally entangled quantum game (γ = π/2) is adopted. In the following, each agent chooses a unitary operator Ŷ as a strategy from the full quantum strategy space Ŝ [33],

Ŷ(θ, α, β) = [ e^{iα} cos(θ/2)       i e^{iβ} sin(θ/2)
               i e^{−iβ} sin(θ/2)    e^{−iα} cos(θ/2) ],    (3)

where α, β ∈ [−π, π] and θ ∈ [0, π], and operates it on the qubit that belongs to her/him. Before a projective measurement in the basis {|0⟩, |1⟩} is carried out, the final state is

|ψ_f⟩ = Ĵ†(Ŷ_1 ⊗ Ŷ_2 ⊗ ··· ⊗ Ŷ_n)Ĵ|00···0⟩.    (4)

Thus, the first agent's expected payoff has the form

$_1(Ŷ_1, ..., Ŷ_n) = Σ_s P(|s⟩)|⟨s|ψ_f⟩|²,    (5)

where the sum runs over the 2^n basis states |s⟩ and P(|·⟩) denotes the first agent's payoff under the strategy profile corresponding to the state |·⟩. For example, the state |00···0⟩ means that each agent in the game plays the strategy C.
Correspondingly, the agent's payoff in this state is P(|00···0⟩) = r n/n − 1 = r − 1.
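Under the parametrization above, the expected payoffs of a maximally entangled n-person game can be sketched with numpy. This is a minimal illustration under our reading of the model, not the authors' code; the names `strategy` and `expected_payoffs` are ours, and the state payoff P is taken to be the classical PGG payoff.

```python
import numpy as np

def strategy(theta, alpha, beta):
    """A unitary strategy Y(theta, alpha, beta) from the full space S."""
    return np.array([
        [np.exp(1j * alpha) * np.cos(theta / 2),
         1j * np.exp(1j * beta) * np.sin(theta / 2)],
        [1j * np.exp(-1j * beta) * np.sin(theta / 2),
         np.exp(-1j * alpha) * np.cos(theta / 2)],
    ])

C = strategy(0.0, 0.0, 0.0)      # identity: classical cooperation
D = strategy(np.pi, 0.0, 0.0)    # classical defection

def expected_payoffs(ops, r, gamma=np.pi / 2):
    """Expected PGG payoffs of all n agents in the entangled game.

    ops is the list of n unitary strategies; measurement outcome 0
    counts as C (one unit contributed), outcome 1 as D.
    """
    n = len(ops)
    sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]])
    SX = sigma_x
    Y = ops[0]
    for op in ops[1:]:
        Y = np.kron(Y, op)
    for _ in range(n - 1):
        SX = np.kron(SX, sigma_x)
    dim = 2 ** n
    # J = exp(i*gamma/2 * sigma_x^(tensor n)); since that operator squares
    # to the identity, the exponential reduces to cos + i*sin
    J = np.cos(gamma / 2) * np.eye(dim) + 1j * np.sin(gamma / 2) * SX
    psi0 = np.zeros(dim, dtype=complex)
    psi0[0] = 1.0                                  # |00...0>
    psi_f = J.conj().T @ Y @ J @ psi0              # state before measurement
    probs = np.abs(psi_f) ** 2
    payoffs = np.zeros(n)
    for s in range(dim):                           # each measurement outcome
        bits = [(s >> (n - 1 - k)) & 1 for k in range(n)]
        n_c = bits.count(0)                        # number of cooperators
        for k in range(n):
            payoffs[k] += probs[s] * (r * n_c / n - (1 - bits[k]))
    return payoffs
```

For classical inputs the quantum game reproduces the classical payoffs: with n = 2 and all agents playing C, each receives r·2/2 − 1.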

The model
Assume that there is an undirected network G(V, E) with N nodes, where V is the set of nodes and E is the set of links. In this paper, two networks are considered: one is an SL with periodic boundary conditions and the other is a Barabási-Albert SF network [37,38]. The periodic boundary conditions guarantee that each node in the SL has four neighbors, and the SF network is established according to the Barabási-Albert model [37]. It starts from a small network of d_0 ≪ N fully connected nodes, and then a new node with d ≤ d_0 links is added to the network. Its d links are connected to d different nodes, each of which is chosen with probability p_sf(i) = k_i / Σ_j k_j, where k is the degree of a node. This procedure is repeated until the number of nodes in the network is N.
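The Barabási-Albert construction just described can be sketched as follows; this is a minimal Python version (the name `barabasi_albert` is ours), which realizes the degree-proportional choice by sampling uniformly from a list holding one entry per link end:

```python
import random

def barabasi_albert(N, d=4, d0=4, seed=None):
    """Grow a Barabási-Albert scale-free network as an adjacency dict:
    start from d0 fully connected nodes, then attach every new node to
    d distinct existing nodes chosen with probability k_i / sum_j k_j."""
    rng = random.Random(seed)
    adj = {i: set(range(d0)) - {i} for i in range(d0)}
    # one entry per link end: uniform sampling from this list is
    # preferential attachment (probability proportional to degree)
    stubs = [i for i in range(d0) for _ in range(d0 - 1)]
    for new in range(d0, N):
        chosen = set()
        while len(chosen) < d:          # d distinct target nodes
            chosen.add(rng.choice(stubs))
        adj[new] = chosen
        for t in chosen:
            adj[t].add(new)
            stubs.extend([new, t])      # both ends gain one degree
    return adj
```

With d = d_0 = 4, as in the simulations below, every node added after the initial core has at least four neighbors.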
Also, each node i ∈ V is occupied by an agent and its neighbor j is any other agent such that there is a link e_ij ∈ E between them, so the set of neighbors of an agent i can be defined as

Ω_i = { j ∈ V\i : e_ij ∈ E },

where V\i means the set of nodes V not including the ith node (the complement of {i} in V) and |·| is the cardinality of a set. Initially, each agent on the network is randomly assigned one of m quantum strategies or two classical strategies (C and D) with equal probability, all of which are taken from the full quantum strategy space Ŝ. In particular, the strategies C and D have the forms

Ĉ = Ŷ(0, 0, 0) = [ 1  0        D̂ = Ŷ(π, 0, 0) = [ 0  i
                   0  1 ],                         i  0 ],

while the m quantum strategies are produced at random by choosing the parameters α, β and θ in equation (3) randomly before each simulation starts. Next, a randomly selected agent i will play n-person (n ≥ 2) maximally entangled quantum games with its neighbors in terms of the physical model of a quantum game (figure 1). When group interactions are considered, we agree with Gómez-Gardeñes et al [15] that there exist some interacting subgroups of agents (modules) in an agent's neighborhood. Therefore, before quantum games are played, the neighbors of the agent i are randomly divided into q_i = ⌊k_i/(n − 1)⌋ groups with no overlap between the agents of each group, where q_i ≥ 1 must be satisfied and ⌊·⌋ indicates the integral part of a number. The integral part is needed because the number of neighbors k_i is not necessarily an integer multiple of n − 1; the k_i − q_i(n − 1) remaining neighbors are omitted from the current round. Later, the agent i plays games with the n − 1 neighbors in each of the q_i groups in turn according to the model of an n-person quantum game, and its payoff $(Ŷ_1, ..., Ŷ_n) from each game can be calculated by equation (5). The agent's total payoff F_i is obtained by accumulating all it receives over the q_i games. Once a round is finished, the agent i chooses a neighbor j ∈ Ω_i at random from its neighborhood, and then the neighbor j acquires its total payoff F_j according to the above-mentioned method.
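The grouping and payoff-accumulation step can be sketched as follows. For self-containedness, `game_payoff` is a classical stand-in for the n-person quantum game of equation (5), and both function names are ours:

```python
import random

def game_payoff(labels, r):
    """Stand-in for the n-person quantum game: classical PGG payoffs of a
    group whose members play the labels 'C' or 'D'."""
    n = len(labels)
    n_c = labels.count('C')
    return [r * n_c / n - (1 if s == 'C' else 0) for s in labels]

def play_round(i, adj, strategies, r, n, rng):
    """Total payoff F_i of agent i under group interactions: shuffle the
    neighborhood, split it into q_i = floor(k_i/(n-1)) non-overlapping
    groups of size n-1, drop the remainder, and accumulate the focal
    agent's payoff from each group's game."""
    neigh = list(adj[i])
    rng.shuffle(neigh)
    q_i = len(neigh) // (n - 1)
    total = 0.0
    for g in range(q_i):
        group = [i] + neigh[g * (n - 1):(g + 1) * (n - 1)]
        total += game_payoff([strategies[a] for a in group], r)[0]
    return total
```

For instance, an agent with four neighbors in a five-person game forms q_i = 1 group containing its whole neighborhood.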
In the framework of the replicator dynamics, the agent i will imitate the neighbor's strategy with probability p_i. In this paper, two updating rules are introduced to calculate the probability p_i.

Updating Rule 1:

p_i = (F_j − F_i) / (q_j max_j − q_i min_i),    if F_j > F_i (and p_i = 0 otherwise).

Updating Rule 2:

p_i = 1 / (1 + exp[−λ (F_j/q_j − F_i/q_i)]).

In Updating Rule 1, the variable max_j represents the maximum possible payoff of agent j when it plays a game as a focal player [15], while min_i represents the corresponding minimum possible payoff of agent i. Here, q_i and q_j are the numbers of groups into which the neighbors of agents i and j are divided, respectively. Updating Rule 2 is also called the Fermi rule, in which the parameter λ is the intensity of selection. If agent i decides to imitate the neighbor's strategy, it updates its strategy and applies the new one in the next round. This whole process is iterated for a maximum of 5 × 10^4 generations, and the fractions of agents with different strategies are calculated by averaging over a further 1000 generations after the maximum, which produces the result of the evolution of strategies for one simulation.
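Updating Rule 2 can be sketched directly; the function names are ours, and Rule 1 is omitted here because its normalization depends on the payoff bounds of [15]:

```python
import math
import random

def fermi_prob(F_i, q_i, F_j, q_j, lam=0.05):
    """Updating Rule 2 (Fermi rule): probability that agent i imitates
    neighbor j, driven by the difference of average-per-group payoffs;
    lam is the intensity of selection (0.05 in the simulations here)."""
    return 1.0 / (1.0 + math.exp(-lam * (F_j / q_j - F_i / q_i)))

def update(i, j, F, q, strategies, rng, lam=0.05):
    """One imitation step: agent i copies the strategy of the randomly
    chosen neighbor j with the Fermi probability, else keeps its own."""
    if rng.random() < fermi_prob(F[i], q[i], F[j], q[j], lam):
        strategies[i] = strategies[j]
```

With equal average payoffs the imitation probability is exactly 1/2, and it approaches 1 (or 0) as the neighbor's average payoff advantage (or disadvantage) grows.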

Homogeneous versus heterogeneous networks
In recent years, it has been shown [14,17,18] that strong heterogeneity in a network facilitates the evolution of cooperation. When group interactions are used in the PGG, cooperation is promoted further [14]. In this section, we focus on the evolution of quantum and classical strategies on a homogeneous network (SL) and a heterogeneous network (SF) in the PGG. Moreover, pairwise interactions and group interactions are adopted in simulations, respectively, in order to investigate the differences between the results of strategy evolution. Finally, the results obtained from two updating rules are shown in figures 2 and 3.
First, assume that there is a population of N = 2500 agents who occupy all the nodes of an SL or an SF network. Throughout all simulations, the network topology remains static and the intensity of selection is λ = 0.05. To guarantee that each agent has at least four neighbors in five-person games, both the number of links of a new node and the number of nodes in the initial core are set to d = d_0 = 4 when the SF network is established. In the SF network, the hub node is occupied by an agent with a randomly chosen strategy, if not otherwise explicitly stated. Also, because the quantum strategies are taken at random from the very large space Ŝ before each simulation starts, the final result is obtained statistically, by averaging over many independent simulations, in order to reduce randomness. In the result of each simulation, among the curves corresponding to the quantum strategies, the quantum strategy that produces the topmost curve is defined as Q_1, the second as Q_2 and so on. In all figures, there exists a curve that is higher than the others when the evolution comes to an end, which means that the corresponding strategy is played by most of the agents in the population and is the dominant strategy. It can be found that a quantum strategy is the dominant strategy over all combinations of conditions when r > 1, although its fraction varies under different conditions. Compared with pairwise interactions, the fraction of agents using a quantum strategy drops a little when group interactions are applied. In the case of group interactions, the neighbors of each agent are first divided into q groups. The number of groups q in the group interactions is clearly less than the number of pairs k in the pairwise interactions, which causes the payoffs of agents to be smaller than those under pairwise interactions, and the differences between payoffs are also reduced.
That is why the fraction of agents using the dominant strategy in the group interactions drops.
It is worth noting that the value of r/n at which a quantum strategy becomes the dominant strategy (the critical value of r/n) is decreased to about 0.2 under group interactions, in contrast with about 0.4 under pairwise interactions. This means that, under the same conditions, the dominant strategy will appear in the population much earlier if group interactions are adopted.
More interestingly, when r/n = 0, not only can cooperators survive in the population, but their fraction even reaches 20% when both group interactions and Updating Rule 1 are used. However, this is not the case when only the strategies C and D evolve on the network: there, cooperators die out in the population before the critical value of r/n is reached, let alone at r/n = 0 [15]. Comparing the homogeneous network (SL) with the heterogeneous network (SF), we find that the fraction of cooperators at r/n = 0 is higher on the SF network than on the SL, and that this fraction rises with an increase in the number of agents in the PGG.
As for the two updating rules, when Rule 2 is applied the fraction of agents using the dominant strategy is a little higher than under Rule 1. In Rule 2, the probability is a function of the differences of average payoffs. When average payoffs are used, an agent who wants to acquire a high average payoff needs a strategy that brings high payoffs from most neighbors; otherwise, the agent cannot receive a high average payoff, even if it occupies the node with the largest degree. Clearly, the strategy that brings agents higher average payoffs is adopted more often by others. In contrast, Rule 1 emphasizes the total payoffs. Generally speaking, agents with more neighbors can accumulate more and acquire higher total payoffs. Since myopic agents always focus on the total payoffs, the strategy of the agent occupying a node with a large degree easily spreads throughout the population.

The hub node versus strategies evolution
It is well known that one of the features of an SF network is the power-law distribution of node degrees. In the network, there exist certain nodes whose degrees are much larger than those of the majority; they are generally called hub nodes. In our case, the term hub node denotes the node with the largest degree in the SF network. The degree of a node is the number of its links (or neighbors), which also indicates its importance to the network. For example, even when many nodes with small degrees are removed from the network, the network remains connected; however, once the hub node is removed, the network fragments into several independent parts. In our setting, when an agent with a certain strategy initially occupies the hub node, that agent becomes more powerful because it possesses more resources, and it even has an opportunity to change the evolution of strategies on the network.
In the previous section, the hub node is initially occupied by an agent with a randomly chosen strategy. In this part, however, the hub node will be designated as a quantum agent using a quantum strategy (Q), a cooperator or a defector before the evolution starts, and its impact on the evolution of strategies is then investigated. It is worth noting that the designation takes place only once, at the beginning of the evolution. The results of the evolution of strategies when agents with different strategies occupy the hub node are shown in figures 4 and 5, respectively.

Figure 4. The evolution of strategies on SF networks using Updating Rule 1 when the hub node is occupied initially by (a) a quantum agent (Q), (b) a cooperator or (c) a defector, respectively. The upper and lower subfigures of (a)-(c) exhibit the fractions of two quantum strategies (m = 2) and two classical strategies (C and D) in the population after the evolution when pairwise (n = 2) and group (n = 5) interactions are adopted, respectively.

From the figures, it can be observed that a quantum strategy becomes the dominant strategy from r = 1, independent of the updating rules and the number of agents n in a game, when the hub node is occupied by a quantum agent initially. Moreover, the fraction of agents with a quantum strategy in the population is a little higher than that of quantum agents in the other cases. If a cooperator occupies the hub node, the strategy C will eventually be the dominant strategy from around r/n = 0.4 when group interactions and Updating Rule 1 are involved. In this case, the fraction of agents with a quantum strategy is slightly lower than that of cooperators. On the other hand, if pairwise games are adopted, a quantum strategy is played by most of the agents in the population; the exception is that, when a cooperator initially occupies the hub node, the strategy C outperforms the quantum strategy (Q_1) temporarily, but only after r/n > 0.875. Interestingly, even if the hub node is assigned a defector, the strategy D cannot become the dominant strategy in the population when r > 1, although its fraction is slightly higher than in the cases when a quantum agent or a cooperator occupies the hub node and five-person games are played. In the case of a defector occupying the hub node, a quantum strategy still finally becomes the dominant strategy after r = 1. When Updating Rule 2 is applied, the superior performance of quantum strategies is exhibited further. As shown in figure 5, in all cases the dominant strategy is a quantum strategy when r > 1 after the evolution comes to an end. Although the strategy C becomes the dominant strategy under Rule 1 when a cooperator initially occupies the hub node, it cannot prevent a quantum strategy from prevailing in the population under Rule 2. In contrast to Updating Rule 1, the fraction of agents using a quantum strategy rises significantly under Rule 2. Furthermore, when five-person games are employed, the fraction increases by about 10%.
In summary, the fraction of cooperators is improved considerably by group interactions, and the strategy of cooperation finally becomes the dominant strategy in the population when a cooperator initially occupies the hub node and Updating Rule 1 is used. However, a quantum strategy appears more powerful and outperforms the classical strategies in most cases.

The number of quantum strategies versus strategy evolution
As mentioned above, the full quantum strategy space is very large. If the number of quantum strategies m is increased, more quantum strategies will be taken from the space and played by agents within the same generation, so more complex behavior is expected. In this section, the behavior of agents is discussed under different conditions, such as the two updating rules and pairwise or group interactions, when the number of quantum strategies is increased from m = 2 to m = 4 and 6. Figures 6 and 7 are obtained for Updating Rule 1 and Updating Rule 2 when two- and five-person games are used, respectively. From them, it can be seen that the fraction of agents using a quantum strategy is the highest in the population from r = 1 in all cases. Moreover, as the number of quantum strategies rises, the critical value of r/n at which a quantum strategy becomes the dominant strategy becomes lower, which means that a quantum strategy can dominate the population at a smaller enhancement factor r. The critical value of r/n is reduced by about 7.5% (m = 4) and 12.5% (m = 6) compared with that at m = 2 when Updating Rule 1 and pairwise interactions are used, and similar results are obtained in the other cases.
When group interactions are involved, more complex results are produced. Under Rule 1, a quantum strategy is still the dominant strategy in the population when r > 1, but its fraction is reduced significantly compared with the case of m = 2. With an increase in the number of quantum strategies, the initial fraction of agents with each strategy drops considerably after the assignment of initial strategies; for instance, when m = 6, the initial fraction of agents using a certain strategy is only half of that when m = 2. Furthermore, Rule 1 accentuates the heterogeneity of the SF network, i.e. nodes with large degrees often bring high total payoffs. Consequently, these conditions cause the reduction of all fractions at the end of the evolution. On the other hand, average payoffs are used in Updating Rule 2, which slightly weakens the effect of the heterogeneity of the SF network. In other words, an agent must count on more than just a large-degree node in order to acquire a high average payoff. Hence, the fraction of agents using the dominant strategy, a quantum strategy, is higher under Updating Rule 2 than under Rule 1.

Conclusions
In summary, the evolution of quantum and classical strategies on SF and SL networks has been investigated in the PGG. Based on the model of a quantum game, agents on a network interact with their neighbors in pairs or in groups and then update their strategies according to two updating rules, respectively. Because the evolution of strategies is largely influenced by the structure of a network, we compare the results of the evolution on a strongly heterogeneous network (SF) and a homogeneous one (SL). It is found that a quantum strategy outperforms the classical ones and finally becomes the dominant strategy in the population, regardless of the network, when r > 1. When group interactions are involved, the critical value of r/n is decreased considerably, which means that a quantum strategy can prevail in the population even when the enhancement factor r is smaller. Interestingly, cooperators can survive in the population when r/n = 0 in all cases, and the fraction of cooperators even reaches 20% at r/n = 0 under group interactions with Updating Rule 1, whereas cooperators die out before a certain value of r/n when only the strategies C and D evolve on a network. Moreover, the fraction of cooperators at r/n = 0 on an SF network is higher than that on an SL network.
Further, if the hub node in the SF network is initially occupied by a cooperator, the fraction of cooperators will be the highest under Updating Rule 1 after the evolution, although a quantum strategy follows closely, i.e. the fraction of agents using a quantum strategy is only a little lower. Conversely, even if a defector occupies the hub node, the strategy D is still defeated by a quantum strategy and cannot prevail in the population after r = 1. On the other hand, when Updating Rule 2 is applied, a quantum strategy is always dominant in the population when r > 1, irrespective of whether pairwise or group interactions are used and of who occupies the hub node.
In addition, the critical value of r/n at which a quantum strategy becomes the dominant strategy is decreased further with an increase in the number of quantum strategies, although the fraction of agents using the dominant strategy drops. In the case of group interactions and Updating Rule 1, the fraction of agents using the dominant strategy drops significantly and is only slightly higher than the second-largest fraction. Under Updating Rule 2, however, the situation changes: a quantum strategy is once again completely dominant in the population when r > 1.