One-Step Forward and Backtrack: Overcoming Zig-Zagging in Loss-Aware Quantization Training

Authors

  • Lianbo Ma, Northeastern University
  • Yuee Zhou, Northeastern University
  • Jianlun Ma, Northeastern University
  • Guo Yu, Nanjing Tech University
  • Qing Li, Peng Cheng Laboratory

DOI

https://doi.org/10.1609/aaai.v38i13.29336

Keywords

ML: Deep Neural Architectures and Foundation Models, ML: Deep Learning Theory

Abstract

Weight quantization is an effective technique for compressing deep neural networks so that they can be deployed on edge devices with limited resources. Traditional loss-aware quantization methods commonly replace the full-precision gradient with the quantized gradient. However, we find that the resulting gradient error leads to an unexpected zig-zagging issue during gradient descent, where the gradient direction oscillates rapidly, and this issue seriously slows model convergence. Accordingly, this paper proposes a one-step-forward-and-backtrack scheme for loss-aware quantization that obtains a more accurate and stable gradient direction to overcome this issue. During gradient descent, a one-step forward search finds the trial gradient of the next step, which is used to adjust the current-step gradient toward the direction of fast convergence. We then backtrack to the current step and update the full-precision and quantized weights using both the current-step gradient and the trial gradient. Theoretical analyses and experiments on benchmark deep models demonstrate the effectiveness and competitiveness of the proposed method; in particular, it outperforms other methods in convergence performance.
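
To make the abstract's update rule concrete, below is a minimal PyTorch sketch of one training step under the one-step-forward-and-backtrack idea. Everything beyond what the abstract states is an assumption: the toy sign-based quantizer, the straight-through estimator for gradients through quantization, the function name `one_step_forward_backtrack`, and the convex-combination blending rule with weight `beta` are all hypothetical stand-ins, since the paper's exact formulas are not given here.

```python
import torch

def quantize(w):
    """Toy 1-bit (sign) quantizer with a scale factor.
    Stand-in assumption: the paper's actual loss-aware quantizer
    is not specified in the abstract."""
    return w.sign() * w.abs().mean()

def one_step_forward_backtrack(w, loss_fn, lr=0.01, beta=0.5):
    """Hypothetical sketch of one training step.

    1. Compute the current-step gradient at the quantized weights.
    2. One-step forward search: take a trial step and compute the
       trial (next-step) gradient.
    3. Backtrack: discard the trial step and update the full-precision
       weights with a blend of the two gradients, then re-quantize.
    The blending rule (convex combination with weight `beta`) is an
    assumption, not the paper's formula.
    """
    w = w.clone().requires_grad_(True)

    # Current-step gradient through the quantized weights, using the
    # straight-through estimator (quantization treated as identity
    # in the backward pass).
    loss = loss_fn(quantize(w.detach()) + w - w.detach())
    g_cur, = torch.autograd.grad(loss, w)

    # One-step forward search: probe the gradient at the trial point.
    w_trial = (w.detach() - lr * g_cur).requires_grad_(True)
    trial_loss = loss_fn(quantize(w_trial.detach()) + w_trial - w_trial.detach())
    g_trial, = torch.autograd.grad(trial_loss, w_trial)

    # Backtrack to the current step and update the full-precision
    # weights using both the current-step and trial gradients.
    g = (1 - beta) * g_cur + beta * g_trial
    w_new = w.detach() - lr * g
    return w_new, quantize(w_new)

# Usage on a toy quadratic loss:
w, w_q = one_step_forward_backtrack(torch.randn(10), lambda v: (v ** 2).sum())
```

Intuitively, averaging the current gradient with the probed next-step gradient damps the rapid sign flips that produce zig-zagging, in the same spirit the abstract describes.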

Published

2024-03-24

How to Cite

Ma, L., Zhou, Y., Ma, J., Yu, G., & Li, Q. (2024). One-Step Forward and Backtrack: Overcoming Zig-Zagging in Loss-Aware Quantization Training. Proceedings of the AAAI Conference on Artificial Intelligence, 38(13), 14246-14254. https://doi.org/10.1609/aaai.v38i13.29336

Section

AAAI Technical Track on Machine Learning IV