Relative Policy-Transition Optimization for Fast Policy Transfer

Authors

  • Jiawei Xu, Tencent Robotics X and The Chinese University of Hong Kong, Shenzhen
  • Cheng Zhou, Tencent Robotics X
  • Yizheng Zhang, Tencent Robotics X
  • Baoxiang Wang, The Chinese University of Hong Kong, Shenzhen
  • Lei Han, Tencent Robotics X

DOI:

https://doi.org/10.1609/aaai.v38i14.29550

Keywords:

ML: Reinforcement Learning

Abstract

We consider the problem of policy transfer between two Markov Decision Processes (MDPs). We introduce a lemma, based on existing theoretical results in reinforcement learning, that measures the relativity gap between two arbitrary MDPs, that is, the difference between any two cumulative expected returns defined on different policies and environment dynamics. Based on this lemma, we propose two new algorithms, referred to as Relative Policy Optimization (RPO) and Relative Transition Optimization (RTO), which offer fast policy transfer and dynamics modelling, respectively. RPO transfers the policy evaluated in one environment to maximize the return in another, while RTO updates the parameterized dynamics model to reduce the gap between the dynamics of the two environments. Integrating the two algorithms yields the complete Relative Policy-Transition Optimization (RPTO) algorithm, in which the policy interacts with the two environments simultaneously, so that data collection from the two environments and the policy and transition updates are completed in one closed loop, forming a principled learning framework for policy transfer. We demonstrate the effectiveness of RPTO on a set of MuJoCo continuous control tasks by creating policy transfer problems through variations of the dynamics.
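As an illustrative note on the relativity gap mentioned above (a minimal sketch of one standard decomposition, not a reproduction of the paper's lemma): writing J_M(\pi) for the cumulative expected return of policy \pi under the dynamics of MDP M, the gap between returns in two MDPs M_1 and M_2 can be split by adding and subtracting J_{M_2}(\pi_1):

    J_{M_2}(\pi_2) - J_{M_1}(\pi_1)
      = \underbrace{J_{M_2}(\pi_2) - J_{M_2}(\pi_1)}_{\text{policy gap}}
      + \underbrace{J_{M_2}(\pi_1) - J_{M_1}(\pi_1)}_{\text{dynamics gap}}

The identity itself is a simple telescoping argument; reading the first term as the target of RPO-style policy updates and the second as the target of RTO-style dynamics-model updates is our interpretation of the abstract, with the precise form of the lemma given in the paper.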

Published

2024-03-24

How to Cite

Xu, J., Zhou, C., Zhang, Y., Wang, B., & Han, L. (2024). Relative Policy-Transition Optimization for Fast Policy Transfer. Proceedings of the AAAI Conference on Artificial Intelligence, 38(14), 16164-16172. https://doi.org/10.1609/aaai.v38i14.29550

Issue

Vol. 38 No. 14

Section

AAAI Technical Track on Machine Learning V