
A Second Order Primal–Dual Dynamical System for a Convex–Concave Bilinear Saddle Point Problem

Applied Mathematics & Optimization

Abstract

The class of convex–concave bilinear saddle point problems encompasses many important convex optimization models arising in a wide array of applications. Most existing primal–dual dynamical systems for saddle point problems are based on first-order ordinary differential equations (ODEs), which achieve only an \({\mathcal {O}}(1/t)\) convergence rate in the convex case, and faster rates typically require additional assumptions such as strong convexity. In this paper, based on second-order ODEs, we consider a general inertial primal–dual dynamical system, with damping, scaling and extrapolation coefficients, for a convex–concave bilinear saddle point problem. Using a Lyapunov analysis, under appropriate assumptions, we establish convergence rates for the primal–dual gap and the velocities, as well as boundedness of the trajectories of the proposed dynamical system. With special choices of the parameters, our results recover Polyak’s heavy ball acceleration scheme and Nesterov’s acceleration scheme. We also provide numerical examples to support our theoretical claims.
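For illustration only, and not the exact system analyzed in the paper (which is not reproduced on this page), a second-order primal–dual dynamical system of the kind described above, for the bilinear saddle point problem \(\min _{x}\max _{y}\, f(x)+\langle Kx,y\rangle -g(y)\) with smooth convex \(f\), \(g\) and a linear operator \(K\), can be sketched as follows; the symbols \(\gamma (t)\), \(\beta (t)\) and \(\theta (t)\) are assumed placeholders for the damping, scaling and extrapolation coefficients mentioned in the abstract:

$$\begin{aligned} \ddot{x}(t)+\gamma (t)\,\dot{x}(t)&=-\beta (t)\Bigl (\nabla f\bigl (x(t)\bigr )+K^{\top }\bigl (y(t)+\theta (t)\,\dot{y}(t)\bigr )\Bigr ),\\ \ddot{y}(t)+\gamma (t)\,\dot{y}(t)&=-\beta (t)\Bigl (\nabla g\bigl (y(t)\bigr )-K\bigl (x(t)+\theta (t)\,\dot{x}(t)\bigr )\Bigr ). \end{aligned}$$

In such a sketch, a constant damping \(\gamma (t)\equiv \gamma >0\) corresponds to the heavy-ball regime, while a vanishing damping of the form \(\gamma (t)=\alpha /t\) corresponds to the Nesterov regime, consistent with the special cases mentioned in the abstract.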



Acknowledgements

The authors are thankful to two anonymous reviewers for their remarks and suggestions which have improved the quality of the paper.

Author information

Corresponding author

Correspondence to Yaping Fang.

Ethics declarations

Conflict of interest

No potential conflict of interest was reported by the authors.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

He, X., Hu, R. & Fang, Y. A Second Order Primal–Dual Dynamical System for a Convex–Concave Bilinear Saddle Point Problem. Appl Math Optim 89, 30 (2024). https://doi.org/10.1007/s00245-023-10102-5
