Linear Regularizers Enforce the Strict Saddle Property

Authors

  • Matthew Ubl, University of Florida
  • Matthew Hale, University of Florida
  • Kasra Yazdani, University of Florida

DOI:

https://doi.org/10.1609/aaai.v37i8.26194

Keywords:

ML: Optimization

Abstract

Satisfaction of the strict saddle property has become a standard assumption in non-convex optimization, and it ensures that many first-order optimization algorithms will almost always escape saddle points. However, functions exist in machine learning that do not satisfy this property, such as the loss function of a neural network with at least two hidden layers. First-order methods such as gradient descent may converge to non-strict saddle points of such functions, and there do not currently exist any first-order methods that reliably escape non-strict saddle points. To address this need, we demonstrate that regularizing a function with a linear term enforces the strict saddle property, and we provide justification for only regularizing locally, i.e., when the norm of the gradient falls below a certain threshold. We analyze bifurcations that may result from this form of regularization, and then we provide a selection rule for regularizers that depends only on the gradient of an objective function. This rule is shown to guarantee that gradient descent will escape the neighborhoods around a broad class of non-strict saddle points, and this behavior is demonstrated on numerical examples of non-strict saddle points common in the optimization literature.
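To make the idea concrete, below is a minimal Python/NumPy sketch of gradient descent with a locally applied linear regularizer in the spirit of the abstract: a linear term is added to the objective only when the gradient norm falls below a threshold. The function name, the parameter values, and the fixed choice of the linear coefficient are illustrative assumptions; the paper's actual selection rule (which depends only on the gradient of the objective) and its bifurcation analysis are not reproduced here.

```python
import numpy as np

def locally_regularized_gd(grad_f, x0, step=1e-2, delta=1e-2,
                           reg_scale=1e-3, escape_radius=0.5,
                           max_iters=50_000):
    """Gradient descent that adds a linear regularizer only "locally",
    i.e., when the gradient norm drops below the threshold `delta`.

    The fixed direction chosen for the linear term `c` below is an
    illustrative placeholder; the paper derives a selection rule for
    the regularizer that depends only on the gradient of the objective.
    """
    x = np.array(x0, dtype=float)
    c = reg_scale * np.ones_like(x)  # placeholder linear regularizer
    for k in range(max_iters):
        g = grad_f(x)
        if np.linalg.norm(g) < delta:
            # Near a (possibly non-strict) saddle: descend on the
            # regularized objective f(x) + c @ x instead of f(x).
            g = g + c
        x = x - step * g
        if np.linalg.norm(x - x0) > escape_radius:
            print(f"left the saddle neighborhood after {k + 1} steps")
            break
    return x

# f(x, y) = x^2 - y^4 has a non-strict saddle at the origin: the Hessian
# there is diag(2, 0), with no strictly negative eigenvalue, yet f
# decreases along the y-axis. Plain gradient descent started at the
# origin never moves; the locally regularized iteration escapes.
grad = lambda z: np.array([2.0 * z[0], -4.0 * z[1] ** 3])
print(locally_regularized_gd(grad, np.array([0.0, 0.0])))
```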

Published

2023-06-26

How to Cite

Ubl, M., Hale, M., & Yazdani, K. (2023). Linear Regularizers Enforce the Strict Saddle Property. Proceedings of the AAAI Conference on Artificial Intelligence, 37(8), 10017-10024. https://doi.org/10.1609/aaai.v37i8.26194

Section

AAAI Technical Track on Machine Learning III