
Convergence and worst-case complexity of adaptive Riemannian trust-region methods for optimization on manifolds

Published in: Journal of Global Optimization

Abstract

Trust-region methods have received considerable attention in many areas of continuous optimization. They obtain a trial step by minimizing a quadratic model within a region of a certain trust-region radius around the current iterate. This paper proposes an adaptive Riemannian trust-region algorithm for optimization on manifolds, in which the trust-region radius depends linearly on the norm of the Riemannian gradient at each iteration. Under mild assumptions, we establish liminf-type, lim-type, and global convergence results for the proposed algorithm. In addition, the proposed algorithm is shown to drive the norm of the Riemannian gradient below \(\epsilon \) within \({\mathcal {O}}(\frac{1}{\epsilon ^2})\) iterations. Numerical examples on tensor approximation problems illustrate the performance of the proposed algorithm compared with the classical Riemannian trust-region algorithm.
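
The adaptive rule above can be made concrete with a small sketch. The following Python code (an illustration, not the authors' implementation) runs a trust-region iteration on the unit sphere for the Rayleigh quotient, with radius \(\Delta _k = c_k\Vert \mathrm {grad}\,f(x_k)\Vert \); the Cauchy-point subproblem solver, the acceptance threshold, and the shrink/expand rule for the scaling factor \(c_k\) are illustrative assumptions.

import numpy as np

def adaptive_rtr_rayleigh(A, x0, c=1.0, rho_accept=0.1, tol=1e-8, max_iter=500):
    """Sketch of an adaptive Riemannian trust-region method on the unit sphere
    for the Rayleigh quotient f(x) = x'Ax. The radius is Delta_k = c_k*||grad f(x_k)||;
    the rule for updating c_k and the Cauchy-point solver are illustrative choices."""
    proj = lambda x, v: v - (x @ v) * x                         # tangent-space projection
    retract = lambda x, v: (x + v) / np.linalg.norm(x + v)      # metric-projection retraction
    f = lambda x: x @ A @ x
    grad = lambda x: 2.0 * proj(x, A @ x)                       # Riemannian gradient
    hess = lambda x, v: 2.0 * proj(x, A @ v - (x @ A @ x) * v)  # Riemannian Hessian action

    x = x0 / np.linalg.norm(x0)
    for _ in range(max_iter):
        g = grad(x)
        gnorm = np.linalg.norm(g)
        if gnorm < tol:
            break
        delta = c * gnorm                                       # adaptive trust-region radius

        # Cauchy point: minimize the quadratic model along -g within the ball of radius delta.
        gHg = g @ hess(x, g)
        t = delta / gnorm if gHg <= 0 else min(gnorm ** 2 / gHg, delta / gnorm)
        s = -t * g

        pred = -(g @ s) - 0.5 * (s @ hess(x, s))                # predicted model reduction
        x_trial = retract(x, s)
        ared = f(x) - f(x_trial)                                # actual reduction
        if pred > 0 and ared >= rho_accept * pred:
            x, c = x_trial, min(2.0 * c, 1e2)                   # accept step, relax scaling
        else:
            c *= 0.5                                            # reject step, tighten scaling
    return x, f(x)

# Example: approximate the smallest eigenpair of a random symmetric matrix.
rng = np.random.default_rng(0)
B = rng.standard_normal((20, 20))
A = (B + B.T) / 2.0
x, val = adaptive_rtr_rayleigh(A, rng.standard_normal(20))
print(val, np.linalg.eigvalsh(A)[0])  # val should be close to the smallest eigenvalue

Because the radius is proportional to the Riemannian gradient norm, it shrinks automatically as the iterates approach a stationary point, so no separate radius-update logic is needed beyond the scaling factor.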

Notes

  1. This package was downloaded from www.manopt.org.

  2. Such a condition is easily guaranteed by (A6) of Sect. 3.2.

  3. This toolbox was downloaded from http://www.sandia.gov/~tgkolda/TensorToolbox/.

  4. This package was downloaded from www.tensorlab.net.

  5. We thank one of the referees for suggesting this approach, which is more efficient than our original one. In the original version, we used the default values \(\kappa =0.1\) and \(\theta =1\) in the stopping criterion [2, Equation (10)] of tCG (its form is sketched after these notes). By tuning these parameters, we searched for the most efficient setting (i.e., the one requiring the fewest iterations for solving the trust-region subproblems) in each example. With this new approach, both Algorithms 1 and 2 have lower computational costs than in the original version.
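
For reference, the tCG stopping criterion mentioned in note 5 is residual-based, of the form \(\Vert r_j\Vert \le \Vert r_0\Vert \min (\kappa ,\Vert r_0\Vert ^{\theta })\) [2, Equation (10)]. A minimal Python version of the test (the function name is ours; the defaults are the values quoted above):

def tcg_inner_stop(r_norm, r0_norm, kappa=0.1, theta=1.0):
    """Residual-based inner stopping test of the form
    ||r_j|| <= ||r_0|| * min(kappa, ||r_0||**theta):
    kappa governs the linear factor and theta the local superlinear rate.
    The defaults are the values kappa = 0.1, theta = 1 from note 5."""
    return r_norm <= r0_norm * min(kappa, r0_norm ** theta)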

References

  1. Absil, P.-A., Baker, C., Gallivan, K.A.: Convergence analysis of Riemannian trust-region methods. Technical report, Optimization online (2006)

  2. Absil, P.-A., Baker, C., Gallivan, K.A.: Trust-region methods on Riemannian manifolds. Found. Comput. Math. 7(3), 303–330 (2007)

  3. Absil, P.-A., Mahony, R., Andrews, B.: Convergence of the iterates of descent methods for analytic cost functions. SIAM J. Optim. 16(2), 531–547 (2005)

  4. Absil, P.-A., Mahony, R., Sepulchre, R.: Optimization Algorithms on Matrix Manifolds. Princeton University Press, Princeton (2008)

  5. Baker, C., Absil, P.-A., Gallivan, K.A.: An implicit trust-region method on Riemannian manifolds. IMA J. Numer. Anal. 28, 665–689 (2008)

  6. Bandeira, A.S., Scheinberg, K., Vicente, L.N.: Convergence of trust-region methods based on probabilistic models. SIAM J. Optim. 24(3), 1238–1264 (2014)

  7. Boumal, N., Absil, P.-A., Cartis, C.: Global rates of convergence for nonconvex optimization on manifolds. IMA J. Numer. Anal. 39, 1–33 (2018)

  8. Boumal, N., Mishra, B., Absil, P.-A., Sepulchre, R.: Manopt, a Matlab toolbox for optimization on manifolds. J. Mach. Learn. Res. 15(42), 1455–1459 (2014)

  9. Breiding, P., Vannieuwenhoven, N.: A Riemannian trust region method for the canonical tensor rank approximation problem. SIAM J. Optim. 28(3), 2435–2465 (2018)

  10. Cartis, C., Gould, N.I.M., Toint, P.L.: Complexity bounds for second-order optimality in unconstrained optimization. J. Complex. 28(1), 93–108 (2012)

  11. Cartis, C., Gould, N.I.M., Toint, P.L.: On the complexity of finding first-order critical points in constrained nonlinear optimization. Math. Program. Ser. A 144(1–2), 93–106 (2014)

  12. Cartis, C., Gould, N.I.M., Toint, P.L.: On the evaluation complexity of constrained nonlinear least-squares and general constrained nonlinear optimization using second-order methods. SIAM J. Numer. Anal. 53(2), 836–851 (2015)

  13. Cho, M., Lee, J.: Riemannian approach to batch normalization. In: Advances in Neural Information Processing Systems, pp. 5225–5235 (2017)

  14. Conn, A.R., Gould, N.I.M., Toint, P.L.: Trust-Region Methods. SIAM, Philadelphia (2000)

  15. Conn, A.R., Scheinberg, K., Vicente, L.N.: Introduction to Derivative-Free Optimization. SIAM, Philadelphia (2009)

  16. de A Bortoloti, M.A., Fernandes, T.A., Ferreira, O.P., Yuan, J.: Damped Newton's method on Riemannian manifolds. J. Glob. Optim. 77(3), 643–660 (2020)

  17. de Lathauwer, L., de Moor, B., Vandewalle, J.: A multilinear singular value decomposition. SIAM J. Matrix Anal. Appl. 21(4), 1253–1278 (2000)

  18. de Lathauwer, L., de Moor, B., Vandewalle, J.: On the best Rank-1 and Rank-(\({R}_1\), \({R}_2\),...,\({R_N}\)) approximation of higher-order tensors. SIAM J. Matrix Anal. Appl. 21(4), 1324–1342 (2000)

  19. Eldén, L., Savas, B.: A Newton–Grassmann method for computing the best multilinear rank-(\(r_1\), \(r_2\), \(r_3\)) approximation of a tensor. SIAM J. Matrix Anal. Appl. 31(2), 248–271 (2009)

  20. Fan, J., Yuan, Y.: A new trust region algorithm with trust region radius converging to zero. In: Li, D. (ed.) Proceedings of the 5th International Conference on Optimization: Techniques and Applications, pp. 786–794 (2001)

  21. Gao, B., Liu, X., Chen, X., Yuan, Y.: A new first-order algorithmic framework for optimization problems with orthogonality constraints. SIAM J. Optim. 28(1), 302–332 (2018)

  22. Garmanjani, R., Júdice, D., Vicente, L.N.: Trust-region methods without using derivatives: Worst case complexity and the nonsmooth case. SIAM J. Optim. 26(4), 1987–2011 (2016)

  23. Grapiglia, G.N., Stella, G.F.D.: An adaptive trust-region method without function evaluations. Comput. Optim. Appl. 82, 31–60 (2022)

  24. Grapiglia, G.N., Yuan, J., Yuan, Y.: On the convergence and worst-case complexity of trust-region and regularization methods for unconstrained optimization. Math. Program. Ser. A 152(1–2), 491–520 (2015)

  25. Grippo, L., Palagi, L., Piccialli, V.: An unconstrained minimization method for solving low-rank SDP relaxations of the Maxcut problem. Math. Program. Ser. A 126(1), 119–146 (2011)

  26. Hamid, E., Morteza, K.: A new adaptive trust-region method for system of nonlinear equations. Appl. Math. Model. 38(11), 3003–3015 (2014)

  27. Heidel, G., Schulz, V.: A Riemannian trust-region method for low-rank tensor completion. Numer. Linear Algebra Appl. 23(6), e1275 (2018)

  28. Hu, J., Jiang, B., Liu, X., Wen, Z.W.: A note on semidefinite programming relaxations for polynomial optimization over a single sphere. Sci. China Math. 59(8), 1543–1560 (2016)

  29. Hu, J., Liu, X., Wen, Z., Yuan, Y.: A brief introduction to manifold optimization. J. Oper. Res. Soc. China 8(2), 199–248 (2020)

  30. Hu, S.: An inexact augmented Lagrangian method for computing strongly orthogonal decompositions of tensors. Comput. Optim. Appl. 75, 701–737 (2020)

  31. Huang, W., Absil, P.-A., Gallivan, K.A.: A Riemannian symmetric rank-one trust-region method. Math. Program. Ser. A 150(2), 179–216 (2015)

  32. Huang, W., Absil, P.-A., Gallivan, K.A.: A Riemannian BFGS method without differentiated retraction for nonconvex optimization problems. SIAM J. Optim. 28(1), 470–495 (2018)

  33. Ishteva, M., Absil, P.-A., Van Dooren, P.: Jacobi algorithm for the best low multilinear rank approximation of symmetric tensors. SIAM J. Matrix Anal. Appl. 34(2), 651–672 (2013)

  34. Ishteva, M., Absil, P.-A., Van Huffel, S., de Lathauwer, L.: Best low multilinear rank approximation of higher-order tensors, based on the Riemannian trust-region scheme. SIAM J. Matrix Anal. Appl. 32(1), 115–135 (2011)

  35. Jiang, B., Dai, Y.H.: A framework of constraint preserving update schemes for optimization on Stiefel manifold. Math. Program. Ser. A 153(2), 535–575 (2015)

  36. Kolda, T.G., Mayo, J.R.: Shifted power method for computing tensor eigenpairs. SIAM J. Matrix Anal. Appl. 32(4), 1095–1124 (2011)

  37. Lai, R., Osher, S.: A splitting method for orthogonality constrained problems. J. Sci. Comput. 58(2), 431–449 (2014)

  38. Łojasiewicz, S.: Ensembles semi-analytiques. IHES Notes (1965)

  39. Montanari, A., Richard, E.: Non-negative principal component analysis: message passing algorithms and sharp asymptotics. IEEE Trans. Inf. Theory 62(3), 1458–1484 (2016)

  40. Qi, L., Chen, H., Chen, Y.: Tensor Eigenvalues and Their Applications. Springer, Singapore (2018)

  41. Savas, B., Lim, L.H.: Quasi-Newton methods on Grassmannians and multilinear approximations of tensors. SIAM J. Sci. Comput. 32(6), 3352–3393 (2010)

  42. Schneider, R., Uschmajew, A.: Convergence results for projected line-search methods on varieties of low-rank matrices via Łojasiewicz inequality. SIAM J. Optim. 25(1), 622–646 (2015)

  43. Sheng, Z., Li, J., Ni, Q.: Jacobi-type algorithms for homogeneous polynomial optimization on Stiefel manifolds with applications to tensor approximations. Math. Comput. 92, 2217–2245 (2023)

  44. Sheng, Z., Yuan, G.: An effective adaptive trust region algorithm for nonsmooth minimization. Comput. Optim. Appl. 71, 251–271 (2018)

  45. Sheng, Z., Yuan, G., Cui, Z.: A new adaptive trust region algorithm for optimization problems. Acta Math. Sci. 38(2), 479–496 (2018)

  46. Sheng, Z., Yuan, G., Cui, Z., Duan, X., Wang, X.: An adaptive trust region algorithm for large-residual nonsmooth least squares problems. J. Ind. Manag. Optim. 14(2), 707–718 (2018)

  47. Shi, Z., Wang, S.: Nonmonotone adaptive trust region method. Eur. J. Oper. Res. 208(1), 28–36 (2011)

  48. Steihaug, T.: The conjugate gradient method and trust regions in large scale optimization. SIAM J. Numer. Anal. 20(3), 626–637 (1983)

  49. Steinlechner, M.: Riemannian optimization for high-dimensional tensor completion. SIAM J. Sci. Comput. 38(5), S461–S484 (2016)

  50. Toint, P.L.: Towards an efficient sparsity exploiting Newton method for minimization. In: Duff, I. (ed.) Sparse Matrices and Their Uses, pp. 57–88 (1981)

  51. Usevich, K., Li, J., Comon, P.: Approximate matrix and tensor diagonalization by unitary transformations: convergence of Jacobi-type algorithms. SIAM J. Optim. 30(4), 2998–3028 (2020)

  52. Wang, X., Yuan, Y.: Stochastic trust-region methods with trust-region radius depending on probabilistic models. J. Comput. Math. 2, 294–334 (2022)

  53. Wen, Z., Yin, W.: A feasible method for optimization with orthogonality constraints. Math. Program. Ser. A 142(1–2), 397–434 (2013)

  54. Wen, Z., Zhang, Y.: Accelerating convergence by augmented Rayleigh-Ritz projections for large-scale eigenpair computation. SIAM J. Matrix Anal. Appl. 38(2), 273–296 (2017)

  55. Xiao, N., Liu, X., Yuan, Y.: Exact penalty function for \(\ell _{2,1}\) norm minimization over the Stiefel manifold. SIAM J. Optim. 31(4), 3097–3126 (2021)

  56. Yuan, Y.: Recent advances in trust region algorithms. Math. Program. 151(1), 249–281 (2015)

  57. Zhang, J., Zhang, S.: A cubic regularized Newton's method over Riemannian manifolds (2018). arXiv:1805.05565

  58. Zhao, Z., Bai, Z., Jin, X.: A Riemannian Newton algorithm for nonlinear eigenvalue problems. SIAM J. Matrix Anal. Appl. 36(2), 752–774 (2015)

  59. Zhou, Q., Hang, D.: Nonmonotone adaptive trust region method with line search based on new diagonal updating. Appl. Numer. Math. 91, 75–88 (2015)

  60. Zhu, X.: A Riemannian conjugate gradient method for optimization on the Stiefel manifold. Comput. Optim. Appl. 67, 73–110 (2017)


Acknowledgements

We are grateful to the associate editor and the two anonymous reviewers for their useful suggestions and comments, which helped to improve the presentation of the paper. We would like to thank Dr. Jianze Li for carefully reading the original manuscript and for providing constructive comments.

Author information

Corresponding author

Correspondence to Gonglin Yuan.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

The work of the first author was supported by the Youth Foundation of Anhui University of Technology (No. QZ202114), and Anhui Provincial Natural Science Foundation (No. 2208085QA07). The work of the second author was partially supported by Guangxi Science and Technology Base and Talent Project (No. AD22080047), the Special Funds for Local Science and Technology Development Guided by the Central Government (No. ZY20198003), High Level Innovation Teams and Excellent Scholars Program in Guangxi institutions of higher education (No. [2019]52), and National Natural Science Foundation of China (No. 11661009).

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Sheng, Z., Yuan, G. Convergence and worst-case complexity of adaptive Riemannian trust-region methods for optimization on manifolds. J Glob Optim (2024). https://doi.org/10.1007/s10898-024-01378-0
