Reward-Respecting Subtasks for Model-Based Reinforcement Learning (Abstract Reprint)

Authors

  • Richard S. Sutton: DeepMind, Edmonton, Alberta, Canada; University of Alberta, Edmonton, Alberta, Canada; Alberta Machine Intelligence Institute (Amii), Edmonton, Alberta, Canada; Canada CIFAR AI Chair, Canada
  • Marlos C. Machado: DeepMind, Edmonton, Alberta, Canada; University of Alberta, Edmonton, Alberta, Canada; Alberta Machine Intelligence Institute (Amii), Edmonton, Alberta, Canada; Canada CIFAR AI Chair, Canada
  • G. Zacharias Holland: DeepMind, Edmonton, Alberta, Canada
  • David Szepesvari: DeepMind, Edmonton, Alberta, Canada
  • Finbarr Timbers: DeepMind, Edmonton, Alberta, Canada
  • Brian Tanner: DeepMind, Edmonton, Alberta, Canada
  • Adam White: DeepMind, Edmonton, Alberta, Canada; University of Alberta, Edmonton, Alberta, Canada; Alberta Machine Intelligence Institute (Amii), Edmonton, Alberta, Canada; Canada CIFAR AI Chair, Canada

DOI:

https://doi.org/10.1609/aaai.v38i20.30613

Keywords:

Journal Track

Abstract

To achieve the ambitious goals of artificial intelligence, reinforcement learning must include planning with a model of the world that is abstract in state and time. Deep learning has made progress with state abstraction, but temporal abstraction has rarely been used, despite extensively developed theory based on the options framework. One reason for this is that the space of possible options is immense, and the methods previously proposed for option discovery do not take into account how the option models will be used in planning. Options are typically discovered by posing subsidiary tasks, such as reaching a bottleneck state or maximizing the cumulative sum of a sensory signal other than reward. Each subtask is solved to produce an option, and then a model of the option is learned and made available to the planning process. In most previous work, the subtasks ignore the reward on the original problem, whereas we propose subtasks that use the original reward plus a bonus based on a feature of the state at the time the option terminates. We show that option models obtained from such reward-respecting subtasks are much more useful in planning than eigenoptions, shortest path options based on bottleneck states, or reward-respecting options generated by the option-critic. Reward-respecting subtasks strongly constrain the space of options and thereby also provide a partial solution to the problem of option discovery. Finally, we show how values, policies, options, and models can all be learned online and off-policy using standard algorithms and general value functions.
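
The following display is an editorial sketch of the sentence above about reward-respecting subtasks; the notation (stopping time \tau, feature x^{i}, bonus weight c) is introduced here purely for illustration and is not defined in this reprint. The idea is that the option accumulates the original discounted reward until it stops, and then receives a bonus proportional to a chosen feature of the stopping state:

\[
G^{i}_{t} \;=\; \sum_{k=t+1}^{\tau} \gamma^{\,k-t-1} R_{k} \;+\; \gamma^{\,\tau-t}\, c \, x^{i}(S_{\tau}),
\]

where \tau is the time at which the option terminates, R_{k} is the reward on the original problem, x^{i}(S_{\tau}) is the chosen feature of the state at termination, and c weights the terminal bonus.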

Published

2024-03-24

How to Cite

Sutton, R. S., Machado, M. C., Holland, G. Z., Szepesvari, D., Timbers, F., Tanner, B., & White, A. (2024). Reward-Respecting Subtasks for Model-Based Reinforcement Learning (Abstract Reprint). Proceedings of the AAAI Conference on Artificial Intelligence, 38(20), 22713-22713. https://doi.org/10.1609/aaai.v38i20.30613