
Hamilton-Jacobi-Bellman Equation

  • Reference work entry
Encyclopedia of Optimization

Although dynamic programming [2] was originally developed to solve problems involving discrete decisions, it has also been applied to continuous formulations. This article discusses the application of dynamic programming to the solution of continuous-time optimal control problems. By discretizing the problem, applying the dynamic programming equations, and then returning to the continuous domain, a partial differential equation results: the Hamilton-Jacobi-Bellman (HJB) equation. This equation is often referred to as the continuous-time equivalent of the dynamic programming algorithm. In this article, the HJB equation will first be derived. A simple application will be presented, in addition to its use in solving the linear quadratic control problem. Finally, a brief overview of some solution methods and applications in the literature will be given.
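The equation referred to above can be stated compactly. As a sketch in standard notation (the symbols here follow common optimal control texts such as [3] and [4], since the derivation in this entry is truncated): for dynamics $\dot{x}(t) = f(x(t),u(t),t)$ and cost $J = h(x(T)) + \int_0^T g(x(t),u(t),t)\,dt$, the optimal cost-to-go $J^*(x,t)$ satisfies

\[
0 = \min_{u}\left[\, g(x,u,t) + \frac{\partial J^*}{\partial t}(x,t) + \nabla_x J^*(x,t)^{\top} f(x,u,t) \,\right],
\qquad J^*(x,T) = h(x),
\]

a partial differential equation solved backward in time from the terminal condition, with the minimizing $u$ at each $(x,t)$ giving the optimal feedback control.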

Problem Formulation

The dynamic programming approach will be applied to a system of the...
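The preview cuts off before the full formulation, but the linear quadratic control problem mentioned above illustrates how the HJB equation reduces to an ordinary differential equation. For a scalar system $\dot{x} = ax + bu$ with cost $\int_0^T (qx^2 + ru^2)\,dt + p_T x(T)^2$, substituting the quadratic ansatz $J^*(x,t) = p(t)x^2$ into the HJB equation yields the Riccati ODE $-\dot{p} = 2ap - (b^2/r)p^2 + q$ with $p(T) = p_T$, and the optimal control $u = -(b/r)\,p\,x$. The following minimal sketch (the system, weights, and the function name `riccati_backward` are illustrative assumptions, not taken from the entry) integrates this ODE backward in time with an Euler scheme:

```python
# Scalar finite-horizon LQR via backward integration of the Riccati ODE.
# System and cost weights below are hypothetical example values.

def riccati_backward(a, b, q, r, p_T, T, n=10000):
    """Integrate -dp/dt = 2*a*p - (b**2/r)*p**2 + q backward from t = T to t = 0."""
    dt = T / n
    p = p_T
    for _ in range(n):
        dp = 2 * a * p - (b * b / r) * p * p + q  # Riccati right-hand side
        p += dp * dt                              # Euler step backward in time
    return p

# Example: x' = u (a = 0, b = 1), cost integrand x**2 + u**2 (q = r = 1).
# For a long horizon, p(0) approaches the algebraic Riccati solution p = 1,
# recovering the well-known infinite-horizon feedback u = -x.
p0 = riccati_backward(a=0.0, b=1.0, q=1.0, r=1.0, p_T=0.0, T=20.0)
print(round(p0, 3))  # close to 1.0 for this horizon
```

For long horizons $p(0)$ settles to the algebraic Riccati solution, which is why infinite-horizon LQR yields a constant feedback gain $K = (b/r)p$.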


References

  • Beard, R.W., Saridis, G.N., and Wen, J.T.: 'Approximate solutions to the time-invariant Hamilton-Jacobi-Bellman equation', J. Optim. Th. Appl. 96, no. 3 (1998), 589–626.

  • Bellman, R.: Dynamic programming, Princeton Univ. Press, 1957.

  • Bertsekas, D.P.: Dynamic programming and optimal control, Athena Sci., 1995.

  • Bryson, A.E., and Ho, Y.: Applied optimal control, Hemisphere, 1975.

  • Cahill, A.J., James, M.R., Kieffer, J.C., and Williamson, D.: 'Remarks on the application of dynamic programming to the optimal path timing of robot manipulators', Internat. J. Robust and Nonlinear Control 8 (1998), 463–482.

  • Evans, L.C., and James, M.R.: 'The Hamilton-Jacobi-Bellman equation for time-optimal control', SIAM J. Control Optim. 27, no. 6 (1989), 1477–1489.

  • Sun, M.: 'Alternating direction algorithms for solving Hamilton-Jacobi-Bellman equations', Applied Math. and Optim. 34 (1996), 267–277.


Copyright information

© 2001 Kluwer Academic Publishers

About this entry

Cite this entry

Esposito, W.R. (2001). Hamilton-Jacobi-Bellman Equation. In: Encyclopedia of Optimization. Springer, Boston, MA. https://doi.org/10.1007/0-306-48332-7_190


  • DOI: https://doi.org/10.1007/0-306-48332-7_190

  • Publisher Name: Springer, Boston, MA

  • Print ISBN: 978-0-7923-6932-5

  • Online ISBN: 978-0-306-48332-5

