
OptimizingMARL: Developing Cooperative Game Environments Based on Multi-agent Reinforcement Learning

  • Conference paper
  • First Online:
Entertainment Computing – ICEC 2022 (ICEC 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13477)


Abstract

Intelligent agents are critical components of the current state of the art in game development. With advances in hardware, many games can simulate cities and ecosystems populated by agents; such settings are known as multi-agent environments. In this domain, reinforcement learning has been explored to develop artificial agents for games. In reinforcement learning, an agent must discover which actions lead to greater rewards by trying those actions out in a trial-and-error search. Specifying when to reward agents is not a simple task and requires knowledge of the environment and of the problem to be solved. Furthermore, defining the elements of multi-agent reinforcement learning required for the learning environment can be challenging for developers who are not domain experts. This paper proposes a framework for developing cooperative multi-agent game environments that facilitates this process and improves agent performance during reinforcement learning. The framework consists of steps for modeling the learning environment and designing the reward scheme and the knowledge distribution, aiming at the best environment configuration for training. The framework was applied to the development of three multi-agent environments, and tests were conducted to analyze the reward-design techniques used. The results show that frequent rewards favor the emergence of essential behaviors (those necessary to solve the tasks), improving agent learning. Although knowledge distribution can reduce task complexity, the dependency between groups is a decisive factor in its implementation.

This work is supported by CAPES and FAPERJ.
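
The frequent-versus-sparse reward comparison described in the abstract can be illustrated with a minimal, self-contained sketch. The code below is not taken from the paper: the CrateWorld environment and the sparse_reward and frequent_reward functions are hypothetical names assumed here only to show how a dense, frequently issued reward signal differs from a purely terminal one in a cooperative task.

```python
# Minimal sketch (assumption, not the authors' implementation): two reward-design
# strategies for a toy cooperative "push the crate to the goal" task.
from dataclasses import dataclass


@dataclass
class CrateWorld:
    """Toy 1-D cooperative task: the agents must jointly push a crate to `goal`."""
    goal: float = 10.0
    crate_pos: float = 0.0

    def step(self, actions: dict) -> float:
        """Apply each agent's push (a value in [-1, 1]) and return the crate displacement."""
        old_pos = self.crate_pos
        self.crate_pos += sum(actions.values())  # the crate only moves with joint effort
        return self.crate_pos - old_pos

    def done(self) -> bool:
        return self.crate_pos >= self.goal


def sparse_reward(env: CrateWorld, displacement: float) -> float:
    """Reward given only on task completion: unbiased, but a weak learning signal."""
    return 1.0 if env.done() else 0.0


def frequent_reward(env: CrateWorld, displacement: float) -> float:
    """Frequent (shaped) reward: a small signal for every step of progress plus the
    terminal bonus, illustrating the kind of reward design the abstract reports as
    favoring the emergence of the behaviors needed to solve the task."""
    progress = 0.1 * displacement      # dense feedback for moving the crate toward the goal
    time_penalty = -0.01               # discourages idling
    terminal = 1.0 if env.done() else 0.0
    return progress + time_penalty + terminal


if __name__ == "__main__":
    env = CrateWorld()
    delta = env.step({"agent_0": 0.8, "agent_1": 0.7})  # both agents push toward the goal
    print("sparse reward  :", sparse_reward(env, delta))
    print("frequent reward:", round(frequent_reward(env, delta), 3))
```

In this toy setting the sparse variant returns 0 until the crate reaches the goal, whereas the frequent variant gives feedback at every step, which is the property the paper's reward-design experiments examine.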



Acknowledgments

The authors would like to thank NVIDIA, CAPES and FAPERJ for the financial support.

Author information

Corresponding author

Correspondence to Thaís Ferreira.



Copyright information

© 2022 IFIP International Federation for Information Processing

About this paper


Cite this paper

Ferreira, T., Clua, E., Kohwalter, T.C., Santos, R. (2022). OptimizingMARL: Developing Cooperative Game Environments Based on Multi-agent Reinforcement Learning. In: Göbl, B., van der Spek, E., Baalsrud Hauge, J., McCall, R. (eds) Entertainment Computing – ICEC 2022. ICEC 2022. Lecture Notes in Computer Science, vol 13477. Springer, Cham. https://doi.org/10.1007/978-3-031-20212-4_7


  • DOI: https://doi.org/10.1007/978-3-031-20212-4_7

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-20211-7

  • Online ISBN: 978-3-031-20212-4

  • eBook Packages: Computer Science, Computer Science (R0)
