
A globally and superlinearly convergent trust region method for LC¹ optimization problems

  • Published in: Applied Mathematics-A Journal of Chinese Universities

Abstract

A new trust region algorithm for solving convex LC¹ optimization problems is presented. It is proved that the algorithm is globally convergent and that its rate of convergence is superlinear under reasonable assumptions.
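The full article is behind a paywall, so the authors' specific algorithm is not reproduced here. As a rough illustration of the general trust-region framework the abstract refers to (not the paper's method), here is a minimal smooth-case sketch using Cauchy-point trial steps and standard radius-update rules; all function names and parameter values are our own assumptions:

```python
import numpy as np

def trust_region_minimize(f, grad, hess, x0, delta0=1.0, delta_max=10.0,
                          eta=0.1, tol=1e-8, max_iter=200):
    """Generic trust-region loop with the Cauchy point as the trial step.

    Illustrative only: assumes f is twice differentiable. In the LC^1
    setting of the paper (once differentiable with locally Lipschitz
    gradient), hess(x) would be replaced by an element of a generalized
    Hessian.
    """
    x = np.asarray(x0, dtype=float)
    delta = delta0
    for _ in range(max_iter):
        g = grad(x)
        gnorm = np.linalg.norm(g)
        if gnorm < tol:                      # first-order stationarity
            break
        B = hess(x)
        # Cauchy point: minimize the model m(p) = g'p + p'Bp/2 along -g,
        # subject to the trust-region constraint ||p|| <= delta.
        gBg = g @ B @ g
        tau = 1.0 if gBg <= 0 else min(1.0, gnorm**3 / (delta * gBg))
        p = -(tau * delta / gnorm) * g
        predicted = -(g @ p + 0.5 * p @ B @ p)   # decrease the model promises
        actual = f(x) - f(x + p)                 # decrease actually achieved
        rho = actual / predicted if predicted > 0 else 0.0
        if rho < 0.25:
            delta *= 0.25                        # poor model fit: shrink radius
        elif rho > 0.75 and np.isclose(np.linalg.norm(p), delta):
            delta = min(2.0 * delta, delta_max)  # good fit at boundary: grow
        if rho > eta:
            x = x + p                            # accept the trial step
    return x
```

On a strongly convex quadratic such as f(x) = xᵀAx/2 - bᵀx this loop drives the gradient to zero; the paper's contribution is proving global convergence plus a superlinear rate for the harder convex LC¹ class, where the Hessian need not exist everywhere.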



Additional information

Supported by the National Natural Science Foundation of P. R. China (19971002) and the Subject of Beijing Educational Committee.


Cite this article

Liping, Z., Yanlian, L. A globally and superlinearly convergent trust region method for LC¹ optimization problems. Appl. Math. J. Chin. Univ. 16, 72–80 (2001). https://doi.org/10.1007/s11766-001-0039-6

