Abstract
In this contribution, Markov games are considered in which the first player knows the current state, while the second player knows both the current state and the current action of the first player. Such Markov games are called Markov games with complete information, or minimax decision models. By means of a Bellman equation, a sufficient condition for the average optimality of a stationary deterministic strategy is given. Furthermore, Howard's strategy improvement, known from Markov decision models, is generalized to Markov games.
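As background for readers unfamiliar with the procedure the paper generalizes, the following is a minimal sketch of Howard's strategy (policy) improvement for the classical average-reward Markov decision model, i.e. the single-player case without the second player. It is not the paper's game-theoretic algorithm; the machine-maintenance data in the comments are hypothetical, and a unichain assumption is made so that policy evaluation reduces to one linear system in the gain and bias.

```python
def solve(A, b):
    # Gaussian elimination with partial pivoting for a small dense system A x = b.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda k: abs(M[k][col]))
        M[col], M[piv] = M[piv], M[col]
        for row in range(col + 1, n):
            f = M[row][col] / M[col][col]
            for c in range(col, n + 1):
                M[row][c] -= f * M[col][c]
    x = [0.0] * n
    for row in range(n - 1, -1, -1):
        x[row] = (M[row][n] - sum(M[row][c] * x[c]
                                  for c in range(row + 1, n))) / M[row][row]
    return x


def evaluate(policy, r, P):
    # Policy evaluation: solve  g + h(s) = r(s, pi(s)) + sum_t P(t | s, pi(s)) h(t)
    # for the gain g and bias h, fixing h(0) = 0 (unichain assumption).
    n = len(P)
    A = [[0.0] * n for _ in range(n)]
    b = [0.0] * n
    for s in range(n):
        a = policy[s]
        A[s][0] = 1.0                      # coefficient of the gain g
        for t in range(1, n):              # unknowns h(1), ..., h(n-1)
            A[s][t] = (1.0 if t == s else 0.0) - P[s][a][t]
        b[s] = r[s][a]
    x = solve(A, b)
    return x[0], [0.0] + x[1:]             # gain, bias vector


def howard(r, P):
    # Howard's strategy improvement for an average-reward MDP:
    # alternate policy evaluation and greedy improvement until the policy is stable.
    n = len(P)
    policy = [0] * n
    while True:
        g, h = evaluate(policy, r, P)
        new_policy = []
        for s in range(n):
            vals = [r[s][a] + sum(P[s][a][t] * h[t] for t in range(n))
                    for a in range(len(r[s]))]
            best = max(vals)
            # Keep the current action on (near-)ties so the iteration terminates.
            a = policy[s] if vals[policy[s]] >= best - 1e-9 else vals.index(best)
            new_policy.append(a)
        if new_policy == policy:
            return policy, g


        policy = new_policy
```

For a hypothetical two-state machine example with rewards `r = [[1.0, 2.0], [0.0, 1.0]]` and transitions `P = [[[0.95, 0.05], [0.7, 0.3]], [[1.0, 0.0], [0.0, 1.0]]]` (state 0 "working", state 1 "broken"; in state 0, action 1 produces more but wears the machine faster, and in state 1, action 0 repairs), the iteration passes through the policies [0, 0], [1, 1], [1, 0] and stops at [1, 0] with gain 20/13: produce aggressively, and always repair.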
Copyright information
© 1993 Springer-Verlag Heidelberg
About this paper
Cite this paper
Küenle, HU. (1993). Average optimal strategies in stochastic games with complete information. In: Hansmann, KW., Bachem, A., Jarke, M., Katzenberger, W.E., Marusev, A. (eds) DGOR / ÖGOR. Operations Research Proceedings 1992, vol 1992. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-78196-4_105
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-56642-7
Online ISBN: 978-3-642-78196-4