Trust Region Methods for Nonconvex Stochastic Optimization beyond Lipschitz Smoothness

Authors

  • Chenghan Xie (School of Information Management and Engineering, Shanghai University of Finance and Economics; School of Mathematical Sciences, Fudan University)
  • Chenxi Li (School of Information Management and Engineering, Shanghai University of Finance and Economics)
  • Chuwen Zhang (School of Information Management and Engineering, Shanghai University of Finance and Economics)
  • Qi Deng (School of Information Management and Engineering, Shanghai University of Finance and Economics)
  • Dongdong Ge (School of Information Management and Engineering, Shanghai University of Finance and Economics)
  • Yinyu Ye (Department of Management Science and Engineering, Stanford University)

DOI:

https://doi.org/10.1609/aaai.v38i14.29537

Keywords:

ML: Optimization, RU: Stochastic Optimization, SO: Non-convex Optimization

Abstract

In many important machine learning applications, the standard assumption of a globally Lipschitz continuous gradient may fail to hold. This paper studies the more general (L0, L1)-smoothness setting, which is particularly relevant to deep neural networks and distributionally robust optimization (DRO). We demonstrate the significant advantage of trust region methods for stochastic nonconvex optimization under such a generalized smoothness assumption. We show that first-order trust region methods recover normalized and clipped stochastic gradient methods as special cases, and we provide a unified analysis of their convergence to first-order stationary points. Motivated by the application to DRO, we propose a generalized high-order smoothness condition under which second-order trust region methods achieve a complexity of O(ε^(-3.5)) for convergence to second-order stationary points. By incorporating variance reduction, the second-order trust region method attains an even better complexity of O(ε^(-3)), matching the optimal bound for standard smooth optimization. To the best of our knowledge, this is the first work to show convergence beyond the first-order stationary condition for generalized smooth optimization. Preliminary experiments show that our proposed algorithms perform favorably compared with existing methods.
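
For context, a commonly used form of the (L0, L1)-smoothness condition (the paper's exact statement may differ in constants and in whether it is stated locally) replaces a single global Lipschitz constant with a bound that grows with the gradient norm:

    \|\nabla f(x) - \nabla f(y)\| \le \big(L_0 + L_1 \|\nabla f(x)\|\big)\,\|x - y\| \quad \text{whenever } \|x - y\| \le 1/L_1.

A sketch of why normalized and clipped stochastic gradient steps appear as special cases: the first-order trust region subproblem with stochastic gradient g_k and radius \Delta_k is linear over a ball, so it has the closed-form solution

    d_k \;=\; \arg\min_{\|d\| \le \Delta_k} \langle g_k, d \rangle \;=\; -\,\Delta_k \frac{g_k}{\|g_k\|}.

With a constant radius this is a normalized stochastic gradient step, and with an illustrative radius rule such as \Delta_k = \min(\gamma, \eta \|g_k\|) it becomes a clipped step; the specific radius rules and parameter choices analyzed in the paper may differ from these illustrative ones.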

Published

2024-03-24

How to Cite

Xie, C., Li, C., Zhang, C., Deng, Q., Ge, D., & Ye, Y. (2024). Trust Region Methods for Nonconvex Stochastic Optimization beyond Lipschitz Smoothness. Proceedings of the AAAI Conference on Artificial Intelligence, 38(14), 16049-16057. https://doi.org/10.1609/aaai.v38i14.29537

Issue

Vol. 38 No. 14 (2024)

Section

AAAI Technical Track on Machine Learning V