Efficient Exploration in Resource-Restricted Reinforcement Learning

Authors

  • Zhihai Wang, University of Science and Technology of China
  • Taoxing Pan, University of Science and Technology of China
  • Qi Zhou, University of Science and Technology of China
  • Jie Wang, University of Science and Technology of China; Hefei Comprehensive National Science Center

DOI:

https://doi.org/10.1609/aaai.v37i8.26224

Keywords:

ML: Reinforcement Learning Algorithms

Abstract

In many real-world applications of reinforcement learning (RL), performing actions consumes certain types of resources that cannot be replenished within an episode. Typical applications include robotic control with limited energy and video games with consumable items. In tasks with non-replenishable resources, we observe that popular RL methods such as Soft Actor-Critic suffer from poor sample efficiency. The major reason is that they tend to exhaust resources quickly, so subsequent exploration is severely restricted by the absence of resources. To address this challenge, we first formalize the aforementioned problem as resource-restricted reinforcement learning, and then propose a novel resource-aware exploration bonus (RAEB) to make reasonable use of resources. An appealing feature of RAEB is that it can significantly reduce unnecessary resource-consuming trials while effectively encouraging the agent to explore unvisited states. Experiments demonstrate that the proposed RAEB significantly outperforms state-of-the-art exploration strategies in resource-restricted reinforcement learning environments, improving sample efficiency by up to an order of magnitude.
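The page does not reproduce the exact form of RAEB, but the idea described in the abstract can be sketched as follows: modulate a standard novelty bonus by an action's resource cost and the remaining budget, so that resource-consuming trials are rewarded only when they plausibly lead to unvisited states. The sketch below is a hypothetical, simplified illustration; the count-based novelty term, the hashing scheme, the resource modulation, and the helper class ResourceAwareBonus are all assumptions, not the authors' definition.

import numpy as np

class ResourceAwareBonus:
    """Hypothetical sketch of a resource-aware exploration bonus.

    The hashing scheme, the 1/sqrt(n) novelty term, and the resource
    modulation below are illustrative assumptions; the paper defines
    RAEB in full and may differ substantially.
    """

    def __init__(self, beta=0.1, n_bins=1000):
        self.beta = beta          # bonus scale (hyperparameter)
        self.n_bins = n_bins      # coarse state discretization
        self.counts = np.zeros(n_bins, dtype=np.int64)

    def _bin(self, state):
        # Hash a (possibly continuous) state into a visit-count bin.
        rounded = tuple(np.round(np.asarray(state, dtype=float), 1))
        return hash(rounded) % self.n_bins

    def bonus(self, state, action_cost, resource_left):
        # Classic count-based novelty: rarely visited states earn more.
        idx = self._bin(state)
        self.counts[idx] += 1
        novelty = 1.0 / np.sqrt(self.counts[idx])
        if action_cost > 0:
            # Resource-consuming action: shrink the bonus as the cost
            # grows relative to the remaining budget, discouraging
            # wasteful trials in already well-visited regions.
            novelty *= resource_left / (resource_left + action_cost)
        return self.beta * novelty

In training, such a bonus would be added to the environment reward before the policy update, e.g. r_total = r_env + rab.bonus(next_state, cost, budget) with a Soft Actor-Critic learner; how the actual RAEB combines novelty and resource information is specified in the paper itself.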

Published

2023-06-26

How to Cite

Wang, Z., Pan, T., Zhou, Q., & Wang, J. (2023). Efficient Exploration in Resource-Restricted Reinforcement Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 37(8), 10279-10287. https://doi.org/10.1609/aaai.v37i8.26224

Issue

Vol. 37 No. 8 (2023)

Section

AAAI Technical Track on Machine Learning III