Abstract
Poor scalability is a major obstacle in multi-agent coordination. Multi-agent reinforcement learning (MARL) algorithms designed for small-scale multi-agent systems are hard to extend to large-scale ones, because large systems are far more dynamic and the number of interactions grows exponentially with the number of agents. Some swarm intelligence algorithms simulate pheromone mechanisms to coordinate large numbers of agents. Inspired by such algorithms, we propose PooL, a pheromone-inspired indirect communication framework for large-scale multi-agent reinforcement learning, to address the large-scale coordination problem. In PooL, the pheromones released by agents are defined as the outputs of value-based reinforcement learning algorithms and reflect each agent's view of the current environment. The pheromone update mechanism efficiently organizes the information of all agents and compresses their complex interactions into low-dimensional representations. The pheromones an agent perceives can be regarded as a summary of the views of nearby agents, which better reflects the true state of the environment. We take Q-learning as the base model to implement PooL and evaluate it in various large-scale cooperative environments. Experiments show that agents capture useful information through PooL and achieve higher rewards than state-of-the-art methods at lower communication cost.
Supported by the National Key Research and Development Program of China: Science and Technology Innovation 2030-“New Generation Artificial Intelligence” Major Project under Grant 2018AAA0102301.
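The abstract describes a release–update–perceive cycle: agents deposit pheromone derived from their value estimates, the field aggregates and evaporates it, and each agent reads the local field as a compact summary of its neighbours' views. The sketch below illustrates that cycle on a grid world; it is not the paper's implementation, and all names (PheromoneField, TabularQAgent, the decay and diffusion rates, releasing max_a Q(s, a) as the pheromone signal) are illustrative assumptions.

```python
# Minimal sketch of pheromone-style indirect communication with a value-based
# learner. Hypothetical names and parameters; not the PooL reference code.
import numpy as np

class PheromoneField:
    def __init__(self, height, width, decay=0.9, diffuse=0.1):
        self.grid = np.zeros((height, width))
        self.decay = decay      # evaporation factor per step (assumed)
        self.diffuse = diffuse  # fraction spread to neighbours (assumed)

    def deposit(self, pos, amount):
        # An agent at `pos` releases pheromone proportional to its value estimate.
        self.grid[pos] += amount

    def step(self):
        # Evaporate, then spread a fraction of each cell to its 4-neighbours
        # (toroidal wrap-around for simplicity).
        g = self.grid * self.decay
        spread = self.diffuse * g
        g = g - spread
        g += 0.25 * (np.roll(spread, 1, 0) + np.roll(spread, -1, 0)
                     + np.roll(spread, 1, 1) + np.roll(spread, -1, 1))
        self.grid = g

    def perceive(self, pos, radius=1):
        # Mean pheromone in a small window around the agent: a low-dimensional
        # summary of nearby agents' views.
        r, c = pos
        h, w = self.grid.shape
        window = self.grid[max(0, r - radius):min(h, r + radius + 1),
                           max(0, c - radius):min(w, c + radius + 1)]
        return float(window.mean())

class TabularQAgent:
    def __init__(self, n_states, n_actions, lr=0.1, gamma=0.95, eps=0.1):
        self.q = np.zeros((n_states, n_actions))
        self.lr, self.gamma, self.eps = lr, gamma, eps

    def act(self, s):
        if np.random.rand() < self.eps:
            return np.random.randint(self.q.shape[1])
        return int(np.argmax(self.q[s]))

    def update(self, s, a, r, s_next):
        target = r + self.gamma * self.q[s_next].max()
        self.q[s, a] += self.lr * (target - self.q[s, a])

    def pheromone(self, s):
        # Release the state value max_a Q(s, a) as the pheromone signal; one
        # plausible reading of "outputs of value-based RL algorithms".
        return float(self.q[s].max())
```

In use, each environment step would call `deposit(agent_pos, agent.pheromone(state))` for every agent, then `field.step()`, and finally append `field.perceive(agent_pos)` to each agent's observation before it selects an action, so communication happens only through the shared field rather than through pairwise messages.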
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Cao, Z., Ma, X., Shi, M., Zhao, Z. (2022). Pheromone-inspired Communication Framework for Large-scale Multi-agent Reinforcement Learning. In: Pimenidis, E., Angelov, P., Jayne, C., Papaleonidas, A., Aydin, M. (eds) Artificial Neural Networks and Machine Learning – ICANN 2022. ICANN 2022. Lecture Notes in Computer Science, vol 13530. Springer, Cham. https://doi.org/10.1007/978-3-031-15931-2_7
DOI: https://doi.org/10.1007/978-3-031-15931-2_7
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-15930-5
Online ISBN: 978-3-031-15931-2