
Learning and Innovative Elements of Strategy Adoption Rules Expand Cooperative Network Topologies

Figure 3

Long-term learning and innovative elements of strategy adoption rules, when applied together, allow cooperation in a large number of model networks.

(Top middle panel) The small-world (spheres) and scale-free (cones) model networks were built as described in the legends of Figures 1 and 2. The rewiring probability p of the links of the original regular lattices giving small-world networks was increased from 0 to 1 in 0.05 increments, the number of edges linking each new node to former nodes in scale-free networks was varied from 1 to 7, and the mean shortest path length and clustering coefficient were calculated for each network. Cubes and cylinders denote the regular (p = 0) and random (p = 1.0) extremes of the small-world networks, respectively. For the description of the canonical repeated Prisoner's Dilemma game, as well as the best-takes-over (green symbols), long-term learning best-takes-over (blue symbols), long-term learning innovative best-takes-over (magenta symbols) and Q-learning (red symbols) strategy adoption rules used, see Methods and the ESM1. For each network, 100 random runs of 5,000 time steps were executed at a fixed T value of 3.5. (Left and right panels) 2D side views of the 3D top middle panel showing the proportion of cooperators as a function of the mean length of shortest paths or the mean clustering coefficient, respectively. (Bottom middle panel) Color-coded illustration of the various network topologies used in the top middle panel. Here the same simulations are shown as in the top middle panel with a different color code emphasizing the different network topologies. The various networks are represented by the following colors: regular networks – blue; small-world networks – green; scale-free networks – yellow; random networks – red (from the viewing angle of the figure the random networks lie behind some of the small-world networks and are therefore highlighted with a red arrow to make their identification easier). The top middle panel and its side views show that the best-takes-over strategy adoption rule (green symbols) at this high temptation level results in zero (or close-to-zero) cooperation. In contrast, the long-term learning best-takes-over strategy adoption rule (blue symbols) raises the level of cooperation significantly above zero, but the individual values vary greatly across the different network topologies. When the long-term learning strategy adoption rule is combined with a low level of randomness (magenta symbols), the cooperation level stays uniformly high in most cases and its variation is greatly diminished. Q-learning stabilizes cooperation further, even on regular networks, which otherwise give an extremely variable outcome.
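To make the network-construction step described above concrete, the sketch below shows one possible way (not the authors' code) to generate the two model network families and compute the two topology measures plotted on the axes of Figure 3. It assumes the networkx library; the node count and lattice degree are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch (assumed parameters): build the model networks of Figure 3
# and compute the two topology measures shown on its axes.
import networkx as nx

N = 100   # hypothetical network size
K = 4     # hypothetical lattice degree for the Watts-Strogatz model

def topology_measures(graph):
    """Mean shortest path length and mean clustering coefficient."""
    return (nx.average_shortest_path_length(graph),
            nx.average_clustering(graph))

# Small-world family: rewiring probability p swept from 0 to 1 in 0.05 steps
# (p = 0 gives the regular lattice, p = 1 the random extreme).
small_world = {}
for i in range(21):
    p = i * 0.05
    g = nx.connected_watts_strogatz_graph(N, K, p, tries=100)
    small_world[round(p, 2)] = topology_measures(g)

# Scale-free family: each new node attaches to m = 1..7 existing nodes
# (Barabasi-Albert preferential attachment).
scale_free = {}
for m in range(1, 8):
    g = nx.barabasi_albert_graph(N, m)
    scale_free[m] = topology_measures(g)

print(small_world[0.0], small_world[1.0], scale_free[1])
```

Each stored pair (mean shortest path length, mean clustering coefficient) corresponds to one point on the horizontal plane of the 3D top middle panel; the Prisoner's Dilemma simulations themselves are described in Methods and ESM1.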


doi: https://doi.org/10.1371/journal.pone.0001917.g003