Abstract
Deep learning methods have made great advances in low-level vision tasks such as image enhancement and target detection. However, these methods cannot be executed on mobile on-chip platforms because of their high memory occupancy and hundreds of millions of floating-point operations (FLOPs). In this paper, a lightweight convolutional neural network (CNN) for the low-light image enhancement task is proposed, with fewer than 1 M parameters, which also addresses the problem of hyperparameter setting in enhancement models. Then, a pseudo-symmetric quantization method is introduced to compress the image enhancement model, since recent quantization approaches aim at high compression ratios and are not suitable for the image enhancement task. Finally, this CNN-based low-light enhancement method is deployed on a customized Xilinx Inc. FPGA platform with a hardware accelerator designed to speed up multi-threaded data transmission. Experiments show that our method produces convincing visual and objective results and has the potential to be executed in real-world applications.
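The abstract's pseudo-symmetric quantization can be illustrated with a minimal per-tensor symmetric quantizer: weights are mapped to signed integers using a single scale derived from the tensor's maximum magnitude, and the integer range is clipped symmetrically (e.g. [-127, 127] rather than [-128, 127]). This is only a hedged sketch of the general technique, not the paper's exact scheme; the function names and API below are hypothetical.

```python
import numpy as np

def quantize_symmetric(w, num_bits=8):
    """Map a float tensor to signed integers with one per-tensor scale.

    Illustrative sketch of symmetric quantization; the paper's
    pseudo-symmetric method differs in its details.
    """
    qmax = 2 ** (num_bits - 1) - 1            # 127 for 8 bits
    scale = np.max(np.abs(w)) / qmax          # single scale per tensor
    # Clip to [-qmax, qmax]: the range is symmetric, dropping -128.
    q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float tensor from the integers."""
    return q.astype(np.float32) * scale

# Toy example: the extreme values map to +/-127 exactly.
w = np.array([-0.5, 0.0, 0.25, 0.5], dtype=np.float32)
q, s = quantize_symmetric(w)
w_hat = dequantize(q, s)
# Reconstruction error per element is at most one quantization step s.
```

For an enhancement network, a low-error quantizer like this matters more than an aggressive compression ratio, since pixel-level outputs are sensitive to small weight perturbations.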
Acknowledgements
This work was supported by the Natural Science Foundation of China (Grants 62202347 and U1803262) and by the Hubei Natural Science Foundation Youth Program (2022CFB578).
About this article
Cite this article
Wang, W., Xu, X. Lightweight CNN-Based Low-Light-Image Enhancement System on FPGA Platform. Neural Process Lett 55, 8023–8039 (2023). https://doi.org/10.1007/s11063-023-11295-0