Abstract
One of the most popular algorithms for saddle point problems is the so-called primal-dual hybrid gradient (PDHG) method, which has received considerable attention in the literature. Generally speaking, solving the primal and dual subproblems dominates the computational cost of such primal-dual type methods. In this paper, we propose a partially inexact generalized primal-dual hybrid gradient method for saddle point problems with bilinear couplings, in which the dual subproblem is solved approximately under a relative error criterion. The proposed algorithm consists of two stages: the first stage yields a predictor by solving the primal and dual subproblems, and the second stage corrects the predictor via a simple scheme. It is noteworthy that the underlying extrapolation parameter can be relaxed to a larger range, which allows more flexibility than a fixed setting. Theoretically, we establish several convergence properties of the proposed algorithm, including global convergence, a sub-linear convergence rate, and a Q-linear convergence rate. Finally, preliminary computational results demonstrate that the proposed algorithm performs well on the fused Lasso problem with synthetic datasets and on a pixel-constrained image restoration model.
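To illustrate the class of methods the abstract describes, the following is a minimal generic PDHG-style sketch (in the spirit of Chambolle and Pock [7]) applied to the saddle point formulation of 1-D fused-Lasso denoising, min_x 0.5||x − b||² + μ||Dx||₁, rewritten as min_x max_{||y||∞ ≤ μ} 0.5||x − b||² + ⟨Dx, y⟩. The step sizes tau, sigma, the fixed extrapolation theta, and both subproblems solved exactly are illustrative assumptions; the paper's method additionally allows an inexact dual solve with a relative error criterion and a correction step, which this sketch does not implement.

```python
import numpy as np

def pdhg_fused_lasso(b, mu, tau=0.25, sigma=0.9, theta=1.0, iters=5000):
    """Generic PDHG iteration for min_x max_{||y||_inf <= mu}
    0.5*||x - b||^2 + <D x, y>, with D the first-difference operator.
    Step sizes satisfy tau*sigma*||D||^2 <= 0.25*0.9*4 = 0.9 < 1."""
    n = b.size
    D = np.diff(np.eye(n), axis=0)   # first-difference matrix, ||D||^2 <= 4
    x = b.copy()
    y = np.zeros(n - 1)
    x_bar = x.copy()
    for _ in range(iters):
        # dual ascent step, then projection onto the l_inf ball of radius mu
        y = np.clip(y + sigma * (D @ x_bar), -mu, mu)
        # primal descent step: exact prox of 0.5*||x - b||^2
        x_new = (x - tau * (D.T @ y) + tau * b) / (1.0 + tau)
        # extrapolation ("predictor"); generalized schemes vary theta
        x_bar = x_new + theta * (x_new - x)
        x = x_new
    return x, y
```

At a fixed point, the primal update forces x + Dᵀy = b exactly, so the size of x + Dᵀy − b serves as a cheap optimality residual for this toy instance.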
References
Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183–202 (2009)
Cai, X., Han, D., Xu, L.: An improved first-order primal-dual algorithm with a new correction step. J. Global Optim. 57(4), 1419–1428 (2013)
Chambolle, A., Pock, T.: A first-order primal-dual algorithm for convex problems with applications to imaging. J. Math. Imaging Vis. 40, 120–145 (2011)
Chambolle, A., Pock, T.: On the ergodic convergence rates of a first-order primal-dual algorithm. Math. Program. Ser. A 159, 253–287 (2016)
Chang, X., Yang, J.: A golden ratio primal-dual algorithm for structured convex optimization. J. Sci. Comput. 87(47), 1–26 (2021)
Chen, P., Huang, J., Zhang, X.: A primal-dual fixed point algorithm for convex separable minimization with applications to image restoration. Inverse Probl. 29, 025011 (2013)
Condat, L.: A primal-dual splitting method for convex optimization involving Lipschitzian, proximable and linear composite terms. J. Optim. Theory Appl. 158, 1015–1046 (2013)
Esser, E., Zhang, X., Chan, T.: A general framework for a class of first-order primal-dual algorithms for convex optimization in imaging sciences. SIAM J. Imaging Sci. 3, 1015–1046 (2010)
Facchinei, F., Pang, J.: Finite-Dimensional Variational Inequalities and Complementarity Problems, vol. I and II. Springer Verlag, New York (2003)
Gu, G., He, B., Yuan, X.: Customized proximal point algorithms for linearly constrained convex minimization and saddle-point problems: a uniform approach. Comput. Optim. Appl. 59, 135–161 (2014)
Han, D., He, H., Yang, H., Yuan, X.: A customized Douglas–Rachford splitting algorithm for separable convex minimization with linear constraints. Numer. Math. 127, 167–200 (2014)
Han, D., Sun, D., Zhang, L.: Linear rate convergence of the alternating direction method of multipliers for convex composite programming. Math. Oper. Res. 43(2), 622–637 (2017)
He, B., Ma, F., Yuan, X.: An algorithmic framework of generalized primal-dual hybrid gradient methods for saddle point problems. J. Math. Imaging Vis. 58(2), 279–293 (2017)
He, B., You, Y., Yuan, X.: On the convergence of primal-dual hybrid gradient algorithm. SIAM J. Imaging Sci. 7(4), 2526–2537 (2014)
He, B., Yuan, X.: Convergence analysis of primal-dual algorithms for a saddle-point problem: From contraction perspective. SIAM J. Imaging Sci. 5, 119–149 (2012)
He, H., Desai, J., Wang, K.: A primal-dual prediction-correction algorithm for saddle point optimization. J. Global Optim. 66(3), 573–583 (2016)
He, H., Han, D., Li, Z.: Some projection methods with the BB step sizes for variational inequalities. J. Comput. Appl. Math. 236, 2590–2604 (2012)
Jiang, F., Cai, X., Wu, Z., Han, D.: Approximate first-order primal-dual algorithms for saddle point problems. Math. Comput. 90(329), 1227–1262 (2021)
Jiang, F., Cai, X., Wu, Z., Zhang, H.: A first-order inexact primal-dual algorithm for a class of convex-concave saddle point problems. Numer. Algor. 88, 1109–1136 (2021)
Malitsky, Y., Pock, T.: A first-order primal-dual algorithm with linesearch. SIAM J. Optim. 28(1), 411–432 (2018)
Nesterov, Y.: Introductory Lectures on Convex Optimization: A Basic Course. Kluwer, Boston (2003)
Rasch, J., Chambolle, A.: Inexact first-order primal-dual algorithms. Comput. Optim. Appl. 76, 381–430 (2020)
Tian, W., Yuan, X.: Linearized primal-dual methods for linear inverse problems with total variation regularization and finite element discretization. Inverse Probl. 32, 115011 (2016)
Valkonen, T.: First-order primal-dual methods for nonsmooth non-convex optimisation. In: Chen, K., Schönlieb, C.B., Tai, X.C., Younes, L. (eds.) Handbook of Mathematical Models and Algorithms in Computer Vision and Imaging: Mathematical Imaging and Vision, pp. 1–42. Springer, Cham (2021)
Wang, K., He, H.: A double extrapolation primal-dual algorithm for saddle point problems. J. Sci. Comput. 85(30), 1–30 (2020)
Wu, Z., Li, M.: General inexact primal-dual hybrid gradient methods for saddle-point problems and convergence analysis. Asia Pac. J. Oper. Res. 39, 2150044 (2021)
Xie, J., Liao, A., Yang, X.: An inexact alternating direction method of multipliers with relative error criteria. Optim. Lett. 11, 583–596 (2017)
Yan, M.: A new primal-dual algorithm for minimizing the sum of three functions with a linear operator. J. Sci. Comput. 76, 1698–1717 (2018)
Yang, W., Han, D.: Linear convergence of the alternating direction method of multipliers for a class of convex optimization problems. SIAM J. Numer. Anal. 54(2), 625–640 (2016)
Zheng, X., Ng, K.: Metric subregularity of piecewise linear multifunctions and applications to piecewise linear multiobjective optimization. SIAM J. Optim. 24(1), 154–174 (2014)
Zhu, M., Chan, T.: An efficient primal-dual hybrid gradient algorithm for total variation image restoration. CAM Reports 08-34, UCLA, Los Angeles, CA (2008)
Acknowledgements
The authors are grateful to the three anonymous referees for their valuable comments on the earlier versions of this paper. Also, the authors would like to thank Dr. Fan Jiang for kindly sharing their Matlab code of [18, 19], and thank Dr. Zhou Wei for his discussion on Theorem 2.1. K. Wang was supported by National Natural Science Foundation of China (NSFC) at Grant No. 11901294 and Natural Science Foundation of Jiangsu Province at Grant No. BK20190429. H. He was supported in part by NSFC at Grant No. 11771113 and Ningbo Natural Science Foundation (Project ID: 2023J014).
Author information
Authors and Affiliations
Corresponding author
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Wang, K., Yu, J. & He, H. A partially inexact generalized primal-dual hybrid gradient method for saddle point problems with bilinear couplings. J. Appl. Math. Comput. 69, 3693–3719 (2023). https://doi.org/10.1007/s12190-023-01899-z
Received:
Revised:
Accepted:
Published:
Issue Date:
DOI: https://doi.org/10.1007/s12190-023-01899-z
Keywords
- Saddle point problem
- Primal-dual hybrid gradient algorithm
- Prediction-Correction
- Linear convergence
- Error bound