Abstract
This paper illustrates how value iteration can be used in a zero-sum game to obtain structural results on the optimal (equilibrium) value and policy, through the following example. We consider dynamic flow control of arriving customers into a finite buffer. The service rate may depend on the state of the system, may change over time, and is unknown to the controller. The controller seeks a policy that guarantees the best performance under worst-case service conditions. The cost is composed of a holding cost, a cost for rejecting customers, and a cost that depends on the quality of service. We consider both discounted and expected average cost criteria. The problem is studied in the framework of zero-sum Markov games, where the server, called player 1, is assumed to play against the flow controller, called player 2. Each player is assumed to know all previous actions of both players as well as the current and past states of the system. We show that both players have optimal policies that are stationary (i.e., do not depend on time). A value iteration algorithm is used to obtain monotonicity properties of the optimal policies. For the case in which only two actions are available to one of the players, we show that this player's optimal policy is of threshold type, and that optimal policies exist for both players that require randomization in at most one state.
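The value-iteration scheme described in the abstract can be sketched on a toy version of the model: a uniformized finite-buffer queue in which, at every state, the server (player 1) picks one of two service rates and the controller (player 2) admits or rejects an arrival, so each iteration solves a 2×2 matrix game per state. This is a minimal sketch under assumed dynamics; all parameter values (`B`, `lam`, `mus`, `q`, `h`, `r`, `beta`) are illustrative and not taken from the paper.

```python
def game_value_2x2(M):
    """Value of a 2x2 zero-sum matrix game; the row player maximizes,
    the column player minimizes."""
    (a, b), (c, d) = M
    maximin = max(min(a, b), min(c, d))
    minimax = min(max(a, c), max(b, d))
    if maximin == minimax:                     # pure saddle point exists
        return maximin
    return (a * d - b * c) / (a + d - b - c)   # mixed-strategy value

# Illustrative parameters (not from the paper); the chain is uniformized
# so that lam + max(mus) <= 1 and the leftover probability is a self-loop.
B = 5                  # buffer size
lam = 0.4              # arrival probability per slot
mus = [0.2, 0.6]       # service rates available to the server (player 1)
q = [2.0, 0.0]         # service-quality cost when the slow / fast rate is used
h, r = 1.0, 5.0        # holding cost per customer, rejection cost
beta = 0.9             # discount factor

V = [0.0] * (B + 1)
for _ in range(300):
    V_new = []
    for x in range(B + 1):
        # One-stage matrix game at state x:
        # rows = server's rate choice, columns = controller's admit/reject.
        M = [[0.0, 0.0], [0.0, 0.0]]
        for i, mu in enumerate(mus):
            for a in (0, 1):                   # 0 = reject, 1 = admit
                rejected = (a == 0) or (x == B)   # a full buffer forces rejection
                cost = h * x + q[i] + (r * lam if rejected else 0.0)
                up = V[min(x + 1, B)] if a == 1 else V[x]
                ev = lam * up + mu * V[max(x - 1, 0)] + (1 - lam - mu) * V[x]
                M[i][a] = cost + beta * ev
        V_new.append(game_value_2x2(M))
    V = V_new

print([round(v, 2) for v in V])   # the value is nondecreasing in the queue length
```

Running the sketch shows the kind of structural result the paper establishes: the equilibrium value is monotone in the buffer occupancy, which is the property that in turn yields threshold-type optimal policies.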
Cite this paper
Altman, E. (1994). Monotonicity of Optimal Policies in a Zero Sum Game: A Flow Control Model. In: Başar, T., Haurie, A. (eds) Advances in Dynamic Games and Applications. Annals of the International Society of Dynamic Games, vol 1. Birkhäuser, Boston, MA. https://doi.org/10.1007/978-1-4612-0245-5_15
© 1994 Springer Science+Business Media New York
Print ISBN: 978-1-4612-6679-2. Online ISBN: 978-1-4612-0245-5