From Past to Future: Rethinking Eligibility Traces

Authors

  • Dhawal Gupta, University of Massachusetts
  • Scott M. Jordan, University of Alberta
  • Shreyas Chaudhari, University of Massachusetts
  • Bo Liu, Amazon
  • Philip S. Thomas, University of Massachusetts
  • Bruno Castro da Silva, University of Massachusetts

DOI:

https://doi.org/10.1609/aaai.v38i11.29115

Keywords:

ML: Reinforcement Learning

Abstract

In this paper, we introduce a fresh perspective on the challenges of credit assignment and policy evaluation. First, we delve into the nuances of eligibility traces and explore instances where their updates may result in unexpected credit assignment to preceding states. From this investigation emerges the concept of a novel value function, which we refer to as the bidirectional value function. Unlike traditional state value functions, bidirectional value functions account for both future expected returns (rewards anticipated from the current state onward) and past expected returns (cumulative rewards from the episode's start to the present). We derive principled update equations to learn this value function and, through experimentation, demonstrate its efficacy in enhancing the process of policy evaluation. In particular, our results indicate that the proposed learning approach can, in certain challenging contexts, perform policy evaluation more rapidly than TD(λ), a method that learns forward value functions, v^π, directly. Overall, our findings present a new perspective on eligibility traces and the potential advantages associated with the novel value function they inspire, especially for policy evaluation.
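To make the abstract's central object concrete, the following is a minimal sketch of one natural formalization, assuming standard episodic MDP notation (states S_t, rewards R_t, discount γ, policy π); the symbols \overleftarrow{v} and \overleftrightarrow{v} below are our illustrative notation for the backward and bidirectional terms, not necessarily the paper's:

\[
v^{\pi}(s) = \mathbb{E}_{\pi}\!\left[\sum_{k=0}^{\infty} \gamma^{k} R_{t+k+1} \,\middle|\, S_t = s\right]
\quad \text{(forward: expected return from } s \text{ onward)}
\]
\[
\overleftarrow{v}^{\pi}(s) = \mathbb{E}_{\pi}\!\left[\sum_{k=1}^{t} R_{k} \,\middle|\, S_t = s\right]
\quad \text{(backward: expected reward accumulated from the episode's start to } s\text{)}
\]
\[
\overleftrightarrow{v}^{\pi}(s) = \overleftarrow{v}^{\pi}(s) + v^{\pi}(s)
\quad \text{(bidirectional: past plus future expected returns)}
\]

Under this reading, TD(λ) estimates only the forward term v^π, whereas the update equations derived in the paper target the combined quantity.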

Published

2024-03-24

How to Cite

Gupta, D., Jordan, S. M., Chaudhari, S., Liu, B., Thomas, P. S., & Castro da Silva, B. (2024). From Past to Future: Rethinking Eligibility Traces. Proceedings of the AAAI Conference on Artificial Intelligence, 38(11), 12253-12260. https://doi.org/10.1609/aaai.v38i11.29115

Section

AAAI Technical Track on Machine Learning II