Bellman Meets Hawkes: Model-Based Reinforcement Learning via Temporal Point Processes

Authors

  • Chao Qu, Ant Group, Hangzhou, China
  • Xiaoyu Tan, Ant Group, Hangzhou, China
  • Siqiao Xue, Ant Group, Hangzhou, China
  • Xiaoming Shi, Ant Group, Hangzhou, China
  • James Zhang, Ant Group, Hangzhou, China
  • Hongyuan Mei, Toyota Technological Institute at Chicago, Chicago, IL, United States

DOI:

https://doi.org/10.1609/aaai.v37i8.26142

Keywords:

ML: Reinforcement Learning Algorithms

Abstract

We consider a sequential decision-making problem in which the agent faces an environment characterized by stochastic discrete events and seeks an intervention policy that maximizes its long-term reward. Such problems arise ubiquitously in social media, finance, and health informatics, yet they are rarely investigated in conventional reinforcement learning research. To this end, we present a novel model-based reinforcement learning framework in which the agent's actions and observations are asynchronous stochastic discrete events occurring in continuous time. We model the dynamics of the environment with a Hawkes process augmented by an external intervention control term, and we develop an algorithm that embeds this process in the Bellman equation, which guides the direction of the value gradient. We demonstrate the superiority of our method in both synthetic-simulator and real-data experiments.
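For concreteness, a minimal sketch of the kind of controlled intensity the abstract describes: a standard Hawkes process with an exponential self-excitation kernel, plus an additive intervention term. The exact parameterization is not given in this abstract; the control term g and the action a_t below are illustrative assumptions, not notation from the paper.

\lambda(t) \;=\; \mu \;+\; \sum_{t_i < t} \alpha\, e^{-\beta (t - t_i)} \;+\; g(a_t)

Here \mu is the base event rate, the sum captures self-excitation from past events at times t_i < t (with magnitude \alpha and decay rate \beta), and g(a_t) is a hypothetical control term through which the agent's action a_t raises or lowers the instantaneous event intensity.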

Published

2023-06-26

How to Cite

Qu, C., Tan, X., Xue, S., Shi, X., Zhang, J., & Mei, H. (2023). Bellman Meets Hawkes: Model-Based Reinforcement Learning via Temporal Point Processes. Proceedings of the AAAI Conference on Artificial Intelligence, 37(8), 9543-9551. https://doi.org/10.1609/aaai.v37i8.26142

Section

AAAI Technical Track on Machine Learning III