Decentralized Policy Gradient Descent Ascent for Safe Multi-Agent Reinforcement Learning

Authors

  • Songtao Lu IBM Research
  • Kaiqing Zhang University of Illinois at Urbana-Champaign
  • Tianyi Chen Rensselaer Polytechnic Institute
  • Tamer Başar University of Illinois at Urbana-Champaign
  • Lior Horesh IBM Research

DOI:

https://doi.org/10.1609/aaai.v35i10.17062

Keywords:

Optimization, Reinforcement Learning, Distributed Machine Learning & Federated Learning

Abstract

This paper deals with distributed reinforcement learning problems with safety constraints. In particular, we consider a team of agents that cooperate in a shared environment, where each agent has its own reward function and safety constraints that involve the joint actions of all agents. As such, the agents aim to maximize the team-average long-term return, subject to all the safety constraints. More intriguingly, no central controller is assumed to coordinate the agents, and both the rewards and constraints are known to each agent only locally/privately. Instead, the agents are connected by a peer-to-peer communication network and share information with their neighbors. In this work, we first formulate this problem as a distributed constrained Markov decision process (D-CMDP) with networked agents. Then, we propose a decentralized policy gradient (PG) method, Safe Dec-PG, to perform policy optimization based on this D-CMDP model over a network. Convergence guarantees, together with numerical results, showcase the superiority of the proposed algorithm. To the best of our knowledge, this is the first decentralized PG algorithm that accounts for coupled safety constraints with a quantifiable convergence rate in multi-agent reinforcement learning. Finally, we emphasize that our algorithm is also novel in solving a class of decentralized stochastic nonconvex-concave minimax optimization problems, where both the algorithm design and the corresponding theoretical analysis are of independent interest.
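As a rough illustration of the problem class described above (using our own schematic notation, not necessarily the paper's exact formulation), the safe multi-agent problem can be written as a team-average return maximization under a coupled safety constraint,

\[
\max_{\theta}\; \frac{1}{N}\sum_{i=1}^{N} J_i(\theta)
\qquad \text{s.t.} \qquad \frac{1}{N}\sum_{i=1}^{N} D_i(\theta) \le c,
\]

where \(\theta\) parametrizes the joint policy, \(J_i(\theta)\) is agent \(i\)'s local long-term return, \(D_i(\theta)\) is a local long-term safety cost known only to agent \(i\), and \(c\) is the constraint threshold. A Lagrangian relaxation with multiplier \(\lambda \ge 0\) then yields a minimax problem that is nonconvex in \(\theta\) and concave (linear) in \(\lambda\),

\[
\min_{\theta}\;\max_{\lambda \ge 0}\;
\frac{1}{N}\sum_{i=1}^{N}\Big( -J_i(\theta) + \lambda^{\top}\big(D_i(\theta) - c\big) \Big),
\]

which is the type of decentralized stochastic nonconvex-concave minimax problem the abstract refers to, amenable to alternating gradient descent on \(\theta\) and ascent on \(\lambda\) carried out over the communication network.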

Published

2021-05-18

How to Cite

Lu, S., Zhang, K., Chen, T., Başar, T., & Horesh, L. (2021). Decentralized Policy Gradient Descent Ascent for Safe Multi-Agent Reinforcement Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 35(10), 8767-8775. https://doi.org/10.1609/aaai.v35i10.17062

Issue

Vol. 35 No. 10 (2021)

Section

AAAI Technical Track on Machine Learning III