Abstract
We propose a novel regularization method for hybrid quantization of neural networks, enabling efficient deployment on ultra-low-power microcontrollers in embedded systems. Our approach introduces alternative regularization functions and a uniform hybrid quantization scheme targeting {2, 4, 8}-bit weights. The method offers flexibility down to the level of individual weight matrices, incurs negligible overhead, and integrates seamlessly into existing 8-bit post-training quantization pipelines. Additionally, we propose novel schedule functions for regularization, addressing the critical yet often overlooked question of when to apply the regularizer during training and providing new insights into pacing quantization. Our method nearly halves model byte size with less than 1% accuracy loss, substantially reducing the power and memory footprint on microcontrollers. Our contributions advance model efficiency on resource-constrained devices and the emerging field of tinyML, overcoming limitations of existing approaches and offering new perspectives on the quantization process. The practical implications of our work span diverse real-world applications, including IoT, wearables, and autonomous systems.
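The abstract names three ingredients: a regularizer that pulls weights toward an n-bit quantization grid, a hybrid per-weight-matrix assignment of bit widths from {2, 4, 8}, and a schedule function that paces the regularization strength over training. The paper's exact regularization and schedule functions are not reproduced on this page, so the following minimal PyTorch sketch only illustrates the general shape of such a scheme under stated assumptions: the uniform symmetric grid, the cosine ramp, and all names (BIT_WIDTHS, quant_penalty, cosine_ramp, total_loss) are hypothetical choices, not the authors' formulation.

import math
import torch

# Hypothetical per-matrix bit-width assignment from {2, 4, 8};
# the mapping below is an illustrative example, not from the paper.
BIT_WIDTHS = {"fc1.weight": 2, "fc2.weight": 4, "out.weight": 8}

def quant_penalty(w: torch.Tensor, n_bits: int) -> torch.Tensor:
    # Mean squared distance from each weight to the nearest point of a
    # uniform symmetric n-bit grid (one plausible regularizer).
    n_levels = 2 ** n_bits
    scale = 2 * w.detach().abs().max().clamp(min=1e-8) / (n_levels - 1)
    # round() has zero gradient, so backward pulls w toward its grid point.
    nearest = torch.round(w / scale) * scale
    return ((w - nearest) ** 2).mean()

def cosine_ramp(step: int, total_steps: int, lam_max: float) -> float:
    # Schedule function ramping the regularization weight from 0 to
    # lam_max over training (an assumed pacing, not the paper's).
    t = min(step / total_steps, 1.0)
    return lam_max * 0.5 * (1.0 - math.cos(math.pi * t))

def total_loss(task_loss: torch.Tensor, model: torch.nn.Module,
               step: int, total_steps: int,
               lam_max: float = 1e-3) -> torch.Tensor:
    # Task loss plus the scheduled sum of per-matrix grid penalties.
    lam = cosine_ramp(step, total_steps, lam_max)
    reg = sum(quant_penalty(p, BIT_WIDTHS[name])
              for name, p in model.named_parameters()
              if name in BIT_WIDTHS)
    return task_loss + lam * reg

In a training loop one would backpropagate total_loss(criterion(output, target), model, step, total_steps) as usual; because the weights end training close to their assigned grids, a standard 8-bit post-training quantization pass can then round them with little accuracy loss, which is consistent with the drop-in integration the abstract claims.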
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Lê, M.T., de Foras, E., Arbel, J. (2023). Regularization for Hybrid N-Bit Weight Quantization of Neural Networks on Ultra-Low Power Microcontrollers. In: Iliadis, L., Papaleonidas, A., Angelov, P., Jayne, C. (eds) Artificial Neural Networks and Machine Learning – ICANN 2023. Lecture Notes in Computer Science, vol 14258. Springer, Cham. https://doi.org/10.1007/978-3-031-44192-9_35
Print ISBN: 978-3-031-44191-2
Online ISBN: 978-3-031-44192-9