Abstract.
We give a simple proof of a theorem concerning optimality in a one-dimensional ergodic control problem, characterizing the optimal control within the class of all Markov controls. Our proof is probabilistic and does not require solving the corresponding Bellman equation, which simplifies the argument.
Accepted 24 March 1998
Fujita, Y. A Simple Proof of the Theorem Concerning Optimality in a One-Dimensional Ergodic Control Problem. Appl Math Optim 41, 1–7 (2000). https://doi.org/10.1007/s002459911001