
Adaptive Sampling Stochastic Multigradient Algorithm for Stochastic Multiobjective Optimization

Published in: Journal of Optimization Theory and Applications

Abstract

In this paper, we propose an adaptive sampling stochastic multigradient algorithm for solving stochastic multiobjective optimization problems. Instead of requiring additional storage or the computation of full gradients, the proposed method reduces variance by adaptively controlling the sample size. Without any convexity assumption on the objective functions, we show that the proposed algorithm converges almost surely to Pareto stationary points. We then analyze the convergence rates of the algorithm. Numerical experiments demonstrate its effectiveness.
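The abstract's central idea, taking a common-descent (multigradient) step and enlarging the mini-sample whenever its variance dominates the computed direction, can be sketched roughly as follows. This is an illustrative toy only, not the authors' algorithm: the two noisy quadratic objectives, the norm-test constant `theta`, the sample-doubling rule, and the step size `alpha` are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def stoch_grads(x, n):
    """n per-sample stochastic gradients of two toy objectives (assumed here):
    f1 = 0.5*||x - 1||^2 and f2 = 0.5*||x + 1||^2, each observed with noise."""
    noise = rng.normal(0.0, 0.1, size=(n, 2, x.size))
    g1 = x - 1.0 + noise[:, 0]
    g2 = x + 1.0 + noise[:, 1]
    return np.stack([g1, g2], axis=1)        # shape (n, 2, d)

def min_norm_direction(G):
    """Common-descent direction for two objectives: the negative of the
    minimum-norm point of conv{g1, g2} (closed form for m = 2)."""
    g1, g2 = G
    diff = g1 - g2
    denom = diff @ diff
    t = 0.5 if denom == 0 else np.clip(diff @ g1 / denom, 0.0, 1.0)
    return -((1 - t) * g1 + t * g2)

x = np.full(3, 3.0)                          # start away from the Pareto set
n, theta, alpha = 4, 0.5, 0.2                # assumed test constant and step size
for _ in range(200):
    G = stoch_grads(x, n)
    d = min_norm_direction(G.mean(axis=0))   # direction from sample-average gradients
    # Norm-test-style adaptive sampling: if the variance of the sample mean
    # dominates the squared direction norm, double the sample size instead
    # of taking a step.
    var_of_mean = G.var(axis=0, ddof=1).sum() / n
    if var_of_mean > theta * (d @ d):
        n = min(2 * n, 4096)
    else:
        x = x + alpha * d
```

On this toy problem the iterates approach the Pareto set (the segment between the two minimizers); near Pareto stationarity the direction shrinks, the variance test starts to fail, and the sample size grows, which mirrors the variance-reduction mechanism the abstract describes.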


[Algorithm 1 and Figures 1–3 appear in the full article.]



Acknowledgements

The authors are grateful to the editor and two anonymous referees for their valuable comments and constructive suggestions, which have considerably enhanced the quality of the original manuscript. The first author was supported in part by the National Natural Science Foundation of China under grants 12001072 and 12271067, the China Postdoctoral Science Foundation Project under grant 2019M653332, the Chongqing Natural Science Foundation Project under grant CSTB2022NSCQ-MSX1318, the Group Building Scientific Innovation Project for universities in Chongqing under grant CXQT21021 and the open project of Key Laboratory under grant CSSXKFKTQ202006, School of Mathematical Sciences, Chongqing Normal University. The third author was supported in part by the Major Program of the National Natural Science Foundation of China under grants 11991020 and 11991024, the NSFC-RGC (Hong Kong) Joint Research Program under grant 12261160365 and the Chongqing Natural Science Foundation under grant cstc2019jcyj-zdxmX0016.

Corresponding author

Correspondence to Yong Zhao.

Additional information

Communicated by René Henrion.



Cite this article

Zhao, Y., Chen, W. & Yang, X. Adaptive Sampling Stochastic Multigradient Algorithm for Stochastic Multiobjective Optimization. J Optim Theory Appl 200, 215–241 (2024). https://doi.org/10.1007/s10957-023-02334-w

