Variance reduced moving balls approximation method for smooth constrained minimization problems

  • Original Paper
  • Optimization Letters

Abstract

In this paper, we consider the problem of minimizing the sum of a large number of smooth convex functions subject to a complicated constraint set defined by a smooth convex function. Such problems arise widely in areas such as machine learning and signal processing. By combining variance reduction with the moving balls approximation technique, we propose a new variance reduced moving balls approximation method. In contrast to existing convergence-rate results for moving balls approximation-type methods, which require strong convexity of the objective function, the proposed method is guaranteed a linear convergence rate under the quadratic gradient growth property and a sublinear rate under plain convexity. To demonstrate its effectiveness, we present numerical experiments on the smooth regularized logistic regression problem and the Neyman-Pearson classification problem.

Data availability

The datasets generated and/or analyzed during the current study are available at https://www.csie.ntu.edu.tw/~cjlin/libsvm/ and in [28, 29].

References

  1. Wu, S.X., Yue, M.-C., So, A.M.-C., Ma, W.-K.: SDR approximation bounds for the robust multicast beamforming problem with interference temperature constraints. In: Proc. IEEE ICASSP, pp. 4054–4058 (2017)

  2. Gaines, B.R., Kim, J., Zhou, H.: Algorithms for fitting the constrained Lasso. J. Comput. Graph. Stat. 27(4), 861–871 (2018)

  3. Zhang, L.W.: A stochastic moving balls approximation method over a smooth inequality constraint. J. Comput. Math. 38(3), 528–546 (2020)

  4. Rigollet, P., Tong, X.: Neyman-Pearson classification, convexity and stochastic constraints. J. Mach. Learn. Res. 12, 2831–2855 (2011)

  5. Beck, A.: First-Order Methods in Optimization. SIAM, Philadelphia (2017)

  6. Nesterov, Y.: Lectures on Convex Optimization. Springer, New York (2018)

  7. Necoara, I., Nesterov, Y., Glineur, F.: Linear convergence of first order methods for non-strongly convex optimization. Math. Program. 175, 69–107 (2019)

  8. Liu, Y., Wang, X., Guo, T.: A linearly convergent stochastic recursive gradient method for convex optimization. Optim. Lett. 14, 2265–2283 (2020)

  9. Park, Y., Ryu, E.K.: Linear convergence of cyclic SAGA. Optim. Lett. 14, 1583–1598 (2020)

  10. Adachi, S., Nakatsukasa, Y.: Eigenvalue-based algorithm and analysis for nonconvex QCQP with one constraint. Math. Program. 173, 79–116 (2019)

  11. Robbins, H., Monro, S.: A stochastic approximation method. Ann. Math. Stat. 22, 400–407 (1951)

  12. Bottou, L., Curtis, F.E., Nocedal, J.: Optimization methods for large-scale machine learning. SIAM Rev. 60(2), 223–311 (2018)

  13. Nemirovski, A., Juditsky, A., Lan, G., Shapiro, A.: Robust stochastic approximation approach to stochastic programming. SIAM J. Optim. 19(4), 1574–1609 (2009)

  14. Le Roux, N., Schmidt, M.W., Bach, F.R.: A stochastic gradient method with an exponential convergence rate for finite training sets. Adv. Neural Inf. Process. Syst. 25, 2663–2671 (2012)

  15. Defazio, A., Bach, F., Lacoste-Julien, S.: SAGA: a fast incremental gradient method with support for non-strongly convex composite objectives. Adv. Neural Inf. Process. Syst. 27, 1646–1654 (2014)

  16. Johnson, R., Zhang, T.: Accelerating stochastic gradient descent using predictive variance reduction. Adv. Neural Inf. Process. Syst. 26, 315–323 (2013)

  17. Nguyen, L.M., Liu, J., Scheinberg, K., Takáč, M.: SARAH: a novel method for machine learning problems using stochastic recursive gradient. In: Proceedings of the 34th ICML, pp. 2613–2621 (2017)

  18. Fang, C., Li, C.J., Lin, Z., Zhang, T.: SPIDER: Near-optimal non-convex optimization via stochastic path-integrated differential estimator. Adv. Neural Inf. Process. Syst. 31, 687–697 (2018)

  19. Auslender, A., Shefi, R., Teboulle, M.: A moving balls approximation method for a class of smooth constrained minimization problems. SIAM J. Optim. 20(6), 3232–3259 (2010)

  20. Bolte, J., Pauwels, E.: Majorization-minimization procedures and convergence of SQP methods for semi-algebraic and tame programs. Math. Oper. Res. 41(2), 442–465 (2016)

  21. Yu, P., Pong, T.K., Lu, Z.: Convergence rate analysis of a sequential convex programming method with line search for a class of constrained difference-of-convex optimization problems. SIAM J. Optim. 31(3), 2024–2054 (2021)

  22. Zhang, H., Cheng, L.: Restricted strong convexity and its applications to convergence analysis of gradient-type methods in convex optimization. Optim. Lett. 9, 961–979 (2015)

  23. Shreve, S.E.: Stochastic Calculus for Finance II: Continuous-Time Models. Springer, New York (2004)

  24. Lee, S.-I., Lee, H., Abbeel, P., Ng, A.: Efficient L1 regularized logistic regression. In: Proc. AAAI, pp. 401–408 (2006)

  25. Nesterov, Y.: Smooth minimization of non-smooth functions. Math. Program. 103, 127–152 (2005)

  26. Grant, M., Boyd, S., Ye, Y.: CVX: Matlab software for disciplined convex programming (2008)

  27. Tong, X., Feng, Y., Zhao, A.: A survey on Neyman-Pearson classification and suggestions for future research. Wiley Interdiscip. Rev. Comput. Stat. 8, 64–81 (2016)

  28. Dua, D., Graff, C.: UCI Machine Learning Repository (2017). http://archive.ics.uci.edu/ml

  29. Guyon, I., Gunn, S.R., Ben-Hur, A., Dror, G.: Result analysis of the NIPS 2003 feature selection challenge. Adv. Neural Inf. Process. Syst. 17, 545–552 (2005)

  30. Yan, Y., Xu, Y.: Adaptive primal-dual stochastic gradient method for expectation-constrained convex stochastic programs. Math. Program. Comput. 14, 319–363 (2022)

  31. Xu, Y.: Iteration complexity of inexact augmented Lagrangian methods for constrained convex programming. Math. Program. 185, 199–244 (2021)

Acknowledgements

This work is supported by the National Natural Science Foundation of China (No. 12101436).

Author information

Corresponding author

Correspondence to Kai Tu.

Ethics declarations

Conflict of interest

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Yang, Z., Xia, F.-Q. & Tu, K. Variance reduced moving balls approximation method for smooth constrained minimization problems. Optim Lett 18, 1253–1271 (2024). https://doi.org/10.1007/s11590-023-02049-x
