
Engineering Optimal Cooperation Levels with Prosocial Autonomous Agents in Hybrid Human-Agent Populations: An Agent-Based Modeling Approach

Computational Economics

Abstract

The evolution of cooperation in social interactions remains a central topic in interdisciplinary research, often framed as a debate between altruistic and self-regarding preferences. Moving beyond this debate, this study investigates how autonomous agents (AAs) with a range of social preferences interact with human players in one-shot, anonymous prisoner’s dilemma games. We explore whether AAs, programmed with preferences ranging from pure self-interest to other-regarding behavior, can foster increased cooperation among humans. To do this, we refine the traditional Bush–Mosteller reinforcement learning algorithm to integrate these diverse social preferences, thereby shaping the AAs’ strategic behavior. Our results indicate that even a minority of AAs, programmed with a moderate aspiration level, can significantly elevate cooperation among human participants in well-mixed populations. The structure of the population, however, is a critical factor: well-mixed populations show increased cooperation only when imitation strength is weak, whereas networked populations maintain enhanced cooperation irrespective of imitation strength. Interestingly, the degree to which AAs promote cooperation is modulated by their social preferences. AAs with pro-social preferences, which balance their own payoffs against those of their opponents, foster the highest levels of cooperation, whereas AAs with either extremely altruistic or purely individualistic preferences tend to hinder cooperative dynamics. This research provides insight into the potential of advanced AAs to steer social dilemmas toward more cooperative outcomes. It presents a computational perspective on the complex interplay between social preferences and cooperation, potentially guiding the development of AAs that improve cooperative efforts in human societies.
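The mechanics the abstract describes can be sketched in a few lines. This is a minimal illustration, not the authors' exact implementation: the clipped stimulus, the theta-weighted subjective payoff, and the Fermi imitation rule are assumed encodings of the "aspiration level", "social preference", and "imitation strength" the abstract refers to.

```python
import math

def subjective_payoff(own, opponent, theta):
    """Blend own and opponent payoffs; theta = 0 is purely individualistic,
    theta = 0.5 pro-social, theta = 1 fully altruistic (illustrative encoding)."""
    return (1 - theta) * own + theta * opponent

def bm_update(p_coop, last_action, payoff, aspiration, learning_rate=0.5):
    """One Bush-Mosteller step: reinforce the last action when the (subjective)
    payoff beats the aspiration level, weaken it otherwise. The stimulus is
    clipped to [-1, 1] here in place of a normalisation constant."""
    s = max(-1.0, min(1.0, payoff - aspiration))
    if last_action == "C":
        p_new = p_coop + (1 - p_coop) * learning_rate * s if s >= 0 else p_coop + p_coop * learning_rate * s
    else:  # last action was defection
        p_new = p_coop - p_coop * learning_rate * s if s >= 0 else p_coop - (1 - p_coop) * learning_rate * s
    return min(1.0, max(0.0, p_new))

def imitation_prob(own_payoff, neighbor_payoff, strength):
    """Fermi rule: probability that a human player copies a neighbor's strategy;
    'strength' plays the role of the imitation strength varied in the study."""
    return 1.0 / (1.0 + math.exp(-strength * (neighbor_payoff - own_payoff)))
```

For example, a pro-social AA (theta = 0.5) that cooperated and earned a payoff above its aspiration becomes more likely to cooperate again, while a small `strength` makes human imitation nearly random, which is the weak-imitation regime discussed in the results.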


Figures 1–5 are available in the full article.


Funding

We acknowledge the support provided by (i) the Major Program of the National Fund of Philosophy and Social Science of China (Grant Nos. 22&ZD158 and 22VRCO49) to LS, (ii) the China Scholarship Council (No. 202308530309) and the Yunnan Provincial Department of Education Science Research Fund Project (Project No. 2023Y0619) to ZH, (iii) a JSPS Postdoctoral Fellowship Program for Foreign Researchers (Grant No. P21374) and an accompanying Grant-in-Aid for Scientific Research from JSPS KAKENHI (Grant No. JP22F31374) to CS, and (iv) Grants-in-Aid for Scientific Research from JSPS KAKENHI, Japan (Grant Nos. JP20H02314 and JP23H03499) awarded to JT.

Author information

Authors and Affiliations

Authors

Contributions

TG, ZH, and CS conceptualized and designed the study; TG, ZH, and CS performed the simulations and wrote the initial draft; LS and JT provided overall project supervision; all authors read and approved the final manuscript.

Corresponding authors

Correspondence to Chen Shen or Lei Shi.

Ethics declarations

Conflict of Interest

The authors declare no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Guo, T., He, Z., Shen, C. et al. Engineering Optimal Cooperation Levels with Prosocial Autonomous Agents in Hybrid Human-Agent Populations: An Agent-Based Modeling Approach. Comput Econ (2024). https://doi.org/10.1007/s10614-024-10559-8


  • DOI: https://doi.org/10.1007/s10614-024-10559-8
