
Approximation of optimal feedback control: a dynamic programming approach

Journal of Global Optimization

Abstract

We consider a general continuous-time, finite-dimensional deterministic system with a finite-horizon cost functional. Our aim is to compute approximate solutions to the optimal feedback control. First, we apply the dynamic programming principle to obtain the evolutive Hamilton–Jacobi–Bellman (HJB) equation satisfied by the value function of the optimal control problem. We then propose two schemes for solving the equation numerically: one based on a time-difference approximation and the other on a time-space approximation. For each scheme, we prove that (a) the algorithm is convergent, that is, the solution of the discrete scheme converges to the viscosity solution of the HJB equation, and (b) the optimal control of the discrete system determined by the corresponding dynamic programming principle forms a minimizing sequence for the optimal feedback control of the continuous counterpart. An example is presented for the time-space algorithm; the results illustrate that the scheme is effective.
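The backward-in-time discretization described in the abstract can be illustrated with a minimal semi-Lagrangian sketch: discretize time and space, impose the terminal cost, and march backward, minimizing over a control grid at each step while interpolating the value function at the foot of each Euler step of the dynamics. The problem data below (dynamics f, running cost L, terminal cost g, grid sizes) are illustrative choices, not taken from the paper.

```python
import numpy as np

# Illustrative scalar problem (not from the paper):
#   dx/dt = u,  cost = \int_0^T (x^2 + u^2) dt + x(T)^2.
def f(x, u): return u                 # dynamics
def L(x, u): return x**2 + u**2       # running cost
def g(x):    return x**2              # terminal cost

T, nt = 1.0, 100                      # horizon and number of time steps
h = T / nt                            # time step
xs = np.linspace(-2.0, 2.0, 201)      # state grid
us = np.linspace(-2.0, 2.0, 41)       # control grid

V = g(xs)                             # terminal condition V(T, x) = g(x)
policy = []                           # approximate feedback law per time level

for k in range(nt):                   # march backward from t = T to t = 0
    # One explicit Euler step of the dynamics for every (state, control) pair,
    # then linear interpolation of V at the resulting points (clamped at the
    # grid ends by np.interp).
    X = xs[:, None] + h * f(xs[:, None], us[None, :])
    Vnext = np.interp(X, xs, V)
    Q = h * L(xs[:, None], us[None, :]) + Vnext
    j = np.argmin(Q, axis=1)          # minimizing control index at each state
    policy.append(us[j])
    V = Q[np.arange(len(xs)), j]

# V now approximates the value function at t = 0, and policy[-1] the
# approximate optimal feedback u*(0, x) on the state grid.
```

For this linear-quadratic example the exact value function is V(0, x) = x² with feedback u*(x) = −x, so the grid values of V and policy[-1] can be checked against a known answer; convergence of such schemes to the viscosity solution as the grid is refined is exactly the kind of result the paper establishes.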




Corresponding author

Correspondence to Bao-Zhu Guo.

Additional information

This work was supported by the National Natural Science Foundation of China and the National Research Foundation of South Africa.

Rights and permissions

Reprints and permissions

About this article

Cite this article

Guo, BZ., Wu, TT. Approximation of optimal feedback control: a dynamic programming approach. J Glob Optim 46, 395–422 (2010). https://doi.org/10.1007/s10898-009-9432-0

