A Perspective of Q-value Estimation on Offline-to-Online Reinforcement Learning

Authors

  • Yinmin Zhang, University of Sydney; Shanghai Artificial Intelligence Laboratory
  • Jie Liu, The Chinese University of Hong Kong; Shanghai Artificial Intelligence Laboratory
  • Chuming Li, University of Sydney; Shanghai Artificial Intelligence Laboratory
  • Yazhe Niu, Shanghai Artificial Intelligence Laboratory
  • Yaodong Yang, Peking University
  • Yu Liu, Shanghai Artificial Intelligence Laboratory
  • Wanli Ouyang, Shanghai Artificial Intelligence Laboratory

DOI:

https://doi.org/10.1609/aaai.v38i15.29633

Keywords:

ML: Reinforcement Learning

Abstract

Offline-to-online Reinforcement Learning (O2O RL) aims to improve the performance of an offline pretrained policy using only a few online samples. Built on offline RL algorithms, most O2O methods focus on the balance between the RL objective and pessimism, or on the utilization of offline and online samples. In this paper, from a novel perspective, we systematically study the challenges that remain in O2O RL and identify that the slow performance improvement and the instability of online finetuning stem from the inaccurate Q-value estimation inherited from offline pretraining. Specifically, we demonstrate that the estimation bias and the inaccurate rank of Q-values provide a misleading signal for the policy update, making standard offline RL algorithms, such as CQL and TD3-BC, ineffective during online finetuning. Based on this observation, we address the problem of Q-value estimation with two techniques: (1) perturbed value update and (2) increased frequency of Q-value updates. The first technique smooths out biased Q-value estimates with sharp peaks, preventing early-stage policy exploitation of sub-optimal actions. The second alleviates the estimation bias inherited from offline pretraining by accelerating learning. Extensive experiments on the MuJoCo and Adroit environments demonstrate that the proposed method, named SO2, significantly alleviates Q-value estimation issues and consistently improves performance over state-of-the-art methods by up to 83.1%.
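
The sketch below is a minimal, hedged illustration of the two techniques described in the abstract, not the authors' implementation. It uses standard PyTorch pieces; the network sizes, noise scale, update-to-data ratio, and the `replay` and `policy` objects are illustrative assumptions.

```python
# Illustrative sketch (not the official SO2 code): perturbed value update
# plus a higher number of Q-value updates per environment step.
import torch
import torch.nn as nn

class QNet(nn.Module):
    """A simple state-action value network (architecture is an assumption)."""
    def __init__(self, obs_dim, act_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs, act):
        return self.net(torch.cat([obs, act], dim=-1))

def perturbed_target(q_target, policy, next_obs, noise_std=0.2, noise_clip=0.5):
    """Technique (1), perturbed value update: evaluate the target Q-value at a
    noise-perturbed next action so that sharp, biased peaks in the pretrained
    Q-function are smoothed out rather than exploited."""
    with torch.no_grad():
        next_act = policy(next_obs)
        noise = (torch.randn_like(next_act) * noise_std).clamp(-noise_clip, noise_clip)
        next_act = (next_act + noise).clamp(-1.0, 1.0)
        return q_target(next_obs, next_act)

def finetune_step(replay, q, q_target, policy, q_opt, gamma=0.99, num_q_updates=10):
    """Technique (2), increased Q-update frequency: run several Q-value updates
    per environment step to wash out the estimation bias inherited from
    offline pretraining. `replay` and `policy` are hypothetical objects."""
    for _ in range(num_q_updates):
        obs, act, rew, next_obs, done = replay.sample(batch_size=256)
        target = rew + gamma * (1.0 - done) * perturbed_target(q_target, policy, next_obs)
        loss = ((q(obs, act) - target) ** 2).mean()
        q_opt.zero_grad()
        loss.backward()
        q_opt.step()
```

The noise-perturbed target action mirrors the smoothing idea described in the abstract, and raising `num_q_updates` corresponds to the higher update-to-data ratio; the specific values shown here are placeholders rather than the paper's reported hyperparameters.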

Published

2024-03-24

How to Cite

Zhang, Y., Liu, J., Li, C., Niu, Y., Yang, Y., Liu, Y., & Ouyang, W. (2024). A Perspective of Q-value Estimation on Offline-to-Online Reinforcement Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 38(15), 16908-16916. https://doi.org/10.1609/aaai.v38i15.29633

Issue

Vol. 38 No. 15 (2024)

Section

AAAI Technical Track on Machine Learning VI