
\(O(1/k^2)\) convergence rates of (dual-primal) balanced augmented Lagrangian methods for linearly constrained convex programming

  • Original Paper
  • Published:
Numerical Algorithms

Abstract

The recent balanced augmented Lagrangian method (ALM) and its dual-primal version are effective for solving linearly constrained convex programming problems. We present accelerated (dual-primal) balanced ALM methods and establish \(O(1/k^2)\) convergence rates, where \(k\) is the iteration counter, when the objective function to be minimized is strongly convex. Numerical results demonstrate the efficiency of the new accelerated algorithms.
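For context, the problem class considered here is the linearly constrained convex program (written in generic notation, since the model is not reproduced in this preview)

\[ \min_{x} \; f(x) \quad \text{s.t.} \quad Ax = b, \]

with \(f\) convex, and strongly convex for the \(O(1/k^2)\) rates. As a point of reference only, and not the balanced or dual-primal updates studied in the paper, the classical ALM with multiplier \(\lambda\) and penalty parameter \(r > 0\) iterates

\[ x^{k+1} = \arg\min_{x} \Big\{ f(x) - (\lambda^{k})^{\top}(Ax - b) + \tfrac{r}{2}\,\|Ax - b\|^{2} \Big\}, \qquad \lambda^{k+1} = \lambda^{k} - r\,(Ax^{k+1} - b). \]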


Availability of supporting data

No supporting data are available; all numerical test data are stated in Section 3.


Acknowledgements

The authors would like to express their sincere appreciation to the editor and the reviewers.

Funding

This research was supported by the National Natural Science Foundation of China under grant 12171021, and the Fundamental Research Funds for the Central Universities.

Author information

Authors and Affiliations

Authors

Contributions

All authors contributed equally to this work and reviewed the manuscript.

Corresponding author

Correspondence to Shiru Li.

Ethics declarations

Ethical approval

Not applicable

Conflict of interest

The authors declare no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Zhang, T., Xia, Y. & Li, S. \(O(1/k^2)\) convergence rates of (dual-primal) balanced augmented Lagrangian methods for linearly constrained convex programming. Numer Algor (2024). https://doi.org/10.1007/s11075-024-01796-x


  • Received:

  • Accepted:

  • Published:

  • DOI: https://doi.org/10.1007/s11075-024-01796-x

Keywords

Mathematics Subject Classification (2010)
