
Asymptotic Normality of Discrete-Time Markov Control Processes

Published online by Cambridge University Press: 14 July 2016

Armando F. Mendoza-Pérez*
Affiliation: Universidad Politécnica de Chiapas

Onésimo Hernández-Lerma**
Affiliation: CINVESTAV

* Postal address: Universidad Politécnica de Chiapas, Calle Eduardo J. Selvas S/N, Tuxtla Gutiérrez, Chiapas, Mexico. Email address: mepa680127@hotmail.com
** Postal address: Department of Mathematics, CINVESTAV-IPN, A. Postal 14-740, Mexico DF 07000, Mexico. Email address: ohernand@math.cinvestav.mx

Abstract


In this paper we study the asymptotic normality of discrete-time Markov control processes in Borel spaces, with possibly unbounded cost. Under suitable hypotheses, we show that the cost sequence is asymptotically normal. As a special case, we obtain a central limit theorem for (noncontrolled) Markov chains.
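For orientation, a minimal sketch of the form such a result typically takes is given below; the notation c, J(f), and σ²(f) is illustrative and not taken from the paper, and the precise hypotheses are those stated in the article. If x_0, x_1, … is the state process generated by a stationary policy f, c is the one-stage cost, and J(f) is the long-run expected average cost under f, then

\[
\frac{1}{\sqrt{n}}\left(\sum_{t=0}^{n-1} c\bigl(x_t, f(x_t)\bigr) - n\, J(f)\right)
\;\xrightarrow{\ d\ }\; \mathcal{N}\bigl(0, \sigma^2(f)\bigr)
\quad \text{as } n \to \infty,
\]

where σ²(f) ≥ 0 is an asymptotic variance associated with f. In the noncontrolled special case the policy plays no role, and the display reduces to the classical central limit theorem for additive functionals of a Markov chain.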

Type
Research Article
Copyright
Copyright © Applied Probability Trust 2010 

Footnotes

Research partially supported by CONACyT grant 104001.
