
Fully discrete schemes for monotone optimal control problems

Published in Computational and Applied Mathematics

Abstract

In this article, we study an infinite horizon optimal control problem with monotone controls. We analyze the associated Hamilton–Jacobi–Bellman (HJB) variational inequality, which characterizes the value function, and consider the fully discretized problem, using Lagrange finite elements to discretize the state space \(\Omega \). We prove convergence orders for these approximations; in general the order is \((h+\frac{k}{\sqrt{h}})^\gamma \), where \(\gamma \) is the Hölder exponent of the value function \(u\), and \(h\) and \(k\) are the time and space discretization parameters, respectively. A suitable choice of the relation between \(h\) and \(k\) yields convergence of order \(k^{\frac{2}{3}\gamma }\), which holds without semiconcavity hypotheses on the problem's data.
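The rate \(k^{\frac{2}{3}\gamma }\) can be read off from the general estimate by balancing its two terms. As an illustrative sketch (the specific coupling \(h = k^{2/3}\) is inferred from the stated orders rather than quoted from the paper), taking
\[
h = k^{2/3} \quad\Longrightarrow\quad h + \frac{k}{\sqrt{h}} = k^{2/3} + \frac{k}{k^{1/3}} = 2\, k^{2/3},
\qquad
\Bigl(h + \frac{k}{\sqrt{h}}\Bigr)^{\gamma } = 2^{\gamma }\, k^{\frac{2}{3}\gamma },
\]
so that both error contributions are of the same order and the combined bound behaves, up to a constant factor, like \(k^{\frac{2}{3}\gamma }\).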



Acknowledgments

We thank the anonymous referee for thoughtful comments that led to substantial improvement of the article.

Author information

Corresponding author

Correspondence to Eduardo A. Philipp.

Additional information

Communicated by Domingo Alberto Tarzia.

This work was partially supported by Grants PIP CONICET 286/2012 and PICT ANPCYT 2212/2012.


About this article


Cite this article

Aragone, L.S., Parente, L.A. & Philipp, E.A. Fully discrete schemes for monotone optimal control problems. Comp. Appl. Math. 37, 1047–1065 (2018). https://doi.org/10.1007/s40314-016-0384-y

