
An adaptive augmented Lagrangian method for large-scale constrained optimization

  • Full Length Paper
  • Series A

Mathematical Programming

Abstract

We propose an augmented Lagrangian algorithm for solving large-scale constrained optimization problems. The novel feature of the algorithm is an adaptive update for the penalty parameter, motivated by recently proposed techniques for exact penalty methods. This adaptive updating scheme greatly improves the overall performance of the algorithm without sacrificing the strengths of the core augmented Lagrangian framework, such as its ability to be implemented matrix-free. This is important, as the matrix-free capability of augmented Lagrangian methods is largely responsible for the renewed interest in employing them to solve large-scale problems. We provide convergence results from remote starting points and illustrate through a set of numerical experiments that our method outperforms traditional augmented Lagrangian methods in terms of critical performance measures.
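The abstract's central idea, adapting the penalty parameter inside the augmented Lagrangian loop rather than only between outer iterations, can be sketched schematically. The following is a minimal, hypothetical illustration on a toy equality-constrained problem: the problem, the factor-of-0.25 infeasibility test, and all tolerances are assumptions made for this sketch, and the steering-style update only loosely imitates the paper's actual rule (which is embedded in a matrix-free trust-region framework).

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical toy problem (not from the paper):
#   minimize  f(x) = x1^2 + x2^2   subject to   c(x) = x1 + x2 - 1 = 0,
# whose solution is x* = (0.5, 0.5) with multiplier y* = 1.
f = lambda x: x[0] ** 2 + x[1] ** 2
c = lambda x: np.array([x[0] + x[1] - 1.0])

def augmented_lagrangian(x, y, mu):
    # Classical augmented Lagrangian: f(x) - y^T c(x) + (mu/2) ||c(x)||^2
    cx = c(x)
    return f(x) - y @ cx + 0.5 * mu * (cx @ cx)

x = np.zeros(2)   # primal iterate
y = np.zeros(1)   # multiplier estimate
mu = 1.0          # penalty parameter
for _ in range(30):
    viol_before = np.linalg.norm(c(x))
    # Approximately minimize the augmented Lagrangian in x (the subproblem).
    x = minimize(augmented_lagrangian, x, args=(y, mu),
                 method="BFGS", options={"gtol": 1e-12}).x
    viol_after = np.linalg.norm(c(x))
    if viol_after < 1e-8:
        break
    if viol_after > 0.25 * viol_before:
        # Steering-style adaptive update: the subproblem step did not
        # reduce infeasibility enough, so increase the penalty parameter
        # immediately instead of waiting for an outer iteration to fail.
        mu *= 10.0
    else:
        # Otherwise accept progress and apply the first-order
        # multiplier update  y <- y - mu * c(x).
        y = y - mu * c(x)
```

On this toy instance the infeasibility test fires once early on, bumping the penalty parameter, after which ordinary multiplier updates drive the constraint violation to zero; a traditional method would instead keep the initial penalty parameter until an entire outer iteration had failed.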



Acknowledgments

The authors are grateful to the two referees and Associate Editor whose comments and suggestions helped to significantly improve the paper. They are also indebted to Nick Gould and Andrew Conn whose insights were extremely valuable in the preparation of this article.

Author information

Correspondence to Frank E. Curtis.

Additional information

F. E. Curtis—Research supported by National Science Foundation Grant DMS-1016291 and Department of Energy Grant DE-SC0010615.

H. Jiang, D.P. Robinson—Research supported by National Science Foundation Grant DMS-1217153.


Cite this article

Curtis, F.E., Jiang, H. & Robinson, D.P. An adaptive augmented Lagrangian method for large-scale constrained optimization. Math. Program. 152, 201–245 (2015). https://doi.org/10.1007/s10107-014-0784-y
