Abstract
In a typical reinforcement learning (RL) setting, details of the environment are not given explicitly but have to be estimated from observations. Most RL approaches optimize only the expected value. However, if the number of observations is limited, considering expected values alone can lead to false conclusions. Instead, it is crucial to also account for the estimator's uncertainties. In this paper, we present a method to incorporate those uncertainties and propagate them to the conclusions. Because the method is only approximate, it remains computationally feasible. Furthermore, we describe a Bayesian approach to design the estimators. Our experiments show that the method considerably increases the robustness of the derived policies compared to the standard approach.
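The abstract condenses two technical ingredients: a Bayesian estimator for the environment model and an approximate propagation of the estimator's uncertainty through the Bellman iteration. The sketch below illustrates the general idea in Python under assumptions of ours, not the authors' exact algorithm: a discrete tabular MDP, a symmetric Dirichlet prior on transition probabilities, and first-order (Gaussian) error propagation that keeps only variances and drops covariances. All names (`dirichlet_posterior`, `q_iteration_with_uncertainty`, `xi`) are hypothetical.

```python
import numpy as np

# Hypothetical sketch (our assumptions): tabular MDP, symmetric Dirichlet prior.

def dirichlet_posterior(counts, alpha=1.0):
    """Bayesian estimator for transition probabilities.

    counts[s, a, s'] holds observed transition counts; a symmetric
    Dirichlet(alpha) prior yields the posterior mean and the marginal
    (Beta) variance of each probability, i.e. the estimator's uncertainty.
    """
    a = counts + alpha
    a0 = a.sum(axis=-1, keepdims=True)
    mean = a / a0
    var = a * (a0 - a) / (a0 ** 2 * (a0 + 1.0))
    return mean, var

def q_iteration_with_uncertainty(P, var_P, R, gamma=0.9, iters=200):
    """Approximate uncertainty propagation through the Bellman iteration.

    First-order error propagation keeping only variances (no covariances),
    which is the approximation that keeps the computation feasible.
    P, var_P, and R all have shape (S, A, S).
    """
    S, A, _ = P.shape
    Q = np.zeros((S, A))
    var_Q = np.zeros((S, A))
    for _ in range(iters):
        greedy = Q.argmax(axis=-1)
        V = Q[np.arange(S), greedy]
        var_V = var_Q[np.arange(S), greedy]
        target = R + gamma * V                # Bellman target per successor state
        Q = (P * target).sum(axis=-1)
        # var(Q) ~ sum_s' (dQ/dP)^2 var(P) + sum_s' (dQ/dV)^2 var(V)
        var_Q = (target ** 2 * var_P).sum(axis=-1) + ((gamma * P) ** 2) @ var_V
    return Q, var_Q
```

A robust policy can then rank actions by a pessimistic criterion such as `Q - xi * np.sqrt(var_Q)` for some confidence parameter `xi`, so that actions whose value estimates rest on few observations are penalized; this is one common way to turn propagated uncertainties into more robust policies.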
Cite this paper
Hans, A., Udluft, S. (2009). Efficient Uncertainty Propagation for Reinforcement Learning with Limited Data. In: Alippi, C., Polycarpou, M., Panayiotou, C., Ellinas, G. (eds.) Artificial Neural Networks – ICANN 2009. Lecture Notes in Computer Science, vol. 5768. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-04274-4_8