Probabilistic Planning with Reduced Models

Authors

  • Luis Pineda, University of Massachusetts Amherst

DOI:

https://doi.org/10.1609/aaai.v28i1.8775

Keywords:

Markov Decision Processes, Determinization, Continual Planning, Planning Under Uncertainty

Abstract

Markov decision processes (MDPs) offer a rich model that has been extensively used by the AI community for planning and learning under uncertainty. However, solving MDPs is often intractable, which has led to the development of many approximate algorithms. In my dissertation work I introduce a new paradigm to handle this complexity by defining a family of MDP reduced models characterized by two parameters: the maximum number of primary outcomes per action that are fully accounted for and the maximum number of occurrences of the remaining exceptional outcomes that are planned for in advance. Reduced models can be solved much faster using heuristic search algorithms, benefiting from the dramatic reduction in the number of reachable states. This framework places recent work on MDP determinization in a broader context and lays the foundation for efficient and systematic exploration of the space of MDP model reductions. Progress so far includes a formal definition of this family of MDP reductions, a continual planning paradigm to handle the case when the number of exceptions reaches the maximum allowed, a simple greedy approach to generate good reductions for a given planning domain, and a compilation scheme that generates MDP reductions from a PPDDL description of a planning problem.
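The two parameters described above can be illustrated with a minimal sketch. The function below is a hypothetical illustration, not the dissertation's actual code: it takes a transition function (here assumed to be a dict mapping state-action pairs to outcome distributions) and keeps only the k most likely outcomes of each action as primary, renormalizing their probabilities and tagging the rest as exceptional; the second parameter, the bound on exception occurrences, would then be tracked by augmenting each state with an exception counter, triggering replanning when the bound is reached.

```python
def reduce_outcomes(transitions, k):
    """Sketch of a k-primary-outcome reduction (illustrative, assumed API).

    transitions: dict mapping (state, action) -> {next_state: probability}.
    Returns a reduced model where each action keeps its k most likely
    outcomes (renormalized) as 'primary' and lists the rest as
    'exceptional'. Exceptional outcomes would only be planned for up to
    some bound l, tracked via an exception counter in the state.
    """
    reduced = {}
    for (s, a), outcomes in transitions.items():
        # Rank outcomes by probability, highest first.
        ranked = sorted(outcomes.items(), key=lambda x: -x[1])
        primary = ranked[:k]
        total = sum(p for _, p in primary)
        reduced[(s, a)] = {
            # Renormalize so the primary outcomes form a distribution.
            "primary": {s2: p / total for s2, p in primary},
            "exceptional": [s2 for s2, _ in ranked[k:]],
        }
    return reduced


# Toy example: one action with three possible outcomes.
transitions = {("s0", "a"): {"s1": 0.7, "s2": 0.2, "s3": 0.1}}
model = reduce_outcomes(transitions, k=1)
```

With k=1 this collapses to a single-outcome (determinized) model, which is how the framework subsumes recent determinization approaches as a special case of the reduction family.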

Published

2014-06-21

How to Cite

Pineda, L. (2014). Probabilistic Planning with Reduced Models. Proceedings of the AAAI Conference on Artificial Intelligence, 28(1). https://doi.org/10.1609/aaai.v28i1.8775