Lightweight CNN-Based Low-Light-Image Enhancement System on FPGA Platform

Published in: Neural Processing Letters

Abstract

Deep learning methods have made great advances in low-level vision tasks such as image enhancement and target detection. However, these methods cannot be executed on mobile on-chip platforms because of their high memory occupancy and heavy floating-point operation (FLOP) counts. In this paper, a lightweight convolutional neural network (CNN) for the low-light image enhancement task is proposed; it has fewer than 1 M parameters and addresses the hyperparameter-setting problem in enhancement models. Then, a pseudo-symmetry quantization method is introduced to compress the image enhancement model, since recent quantization approaches pursue high compression ratios and are not well suited to the image enhancement task. Finally, this CNN-based low-light enhancement method is deployed on a customized Xilinx FPGA platform with a hardware accelerator designed to speed up multi-threaded data transmission. Experiments show that our method produces convincing visual and objective results and has the potential to run in real-world applications.
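To make the sub-1 M parameter budget concrete, below is a minimal PyTorch sketch of a small fully convolutional enhancement network. The depth, channel width, and residual output formulation are assumptions chosen for illustration, not the architecture proposed in the paper; the sketch only shows how comfortably such a network can fit under the stated parameter limit.

```python
# Illustrative sketch only: a small fully convolutional enhancement network.
# Layer widths, depth, and the residual output are assumptions, NOT the
# architecture described in the paper.
import torch
import torch.nn as nn


class TinyEnhanceNet(nn.Module):
    """Maps a low-light RGB image to an enhanced RGB image."""

    def __init__(self, channels: int = 32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3, 3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Predict a residual, add it to the input, and clamp to the valid range.
        return torch.clamp(x + self.body(x), 0.0, 1.0)


if __name__ == "__main__":
    net = TinyEnhanceNet()
    n_params = sum(p.numel() for p in net.parameters())
    print(f"parameters: {n_params}")        # roughly 20 k, far below 1 M
    out = net(torch.rand(1, 3, 256, 256))   # dummy low-light image batch
    print(out.shape)                        # torch.Size([1, 3, 256, 256])
```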

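The abstract does not spell out the pseudo-symmetry quantization scheme itself, so the sketch below shows only plain per-tensor symmetric 8-bit weight quantization as a baseline illustration of mapping an enhancement model's weights to low-bit integers for deployment. The function names and bit width are assumptions for illustration, not the authors' method.

```python
# Minimal sketch of per-tensor symmetric 8-bit weight quantization, shown as a
# generic baseline; it is NOT the paper's pseudo-symmetry quantization scheme.
import torch


def quantize_symmetric(w: torch.Tensor, n_bits: int = 8):
    """Quantize a float tensor to signed integers with one symmetric scale."""
    qmax = 2 ** (n_bits - 1) - 1                 # e.g. 127 for 8 bits
    scale = w.abs().max().clamp(min=1e-8) / qmax # single per-tensor scale
    q = torch.clamp(torch.round(w / scale), -qmax, qmax).to(torch.int8)
    return q, scale


def dequantize(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    # Recover an approximate float tensor for accuracy checks.
    return q.float() * scale


if __name__ == "__main__":
    w = torch.randn(32, 3, 3, 3)                 # a conv weight tensor
    q, scale = quantize_symmetric(w)
    err = (dequantize(q, scale) - w).abs().mean()
    print(f"scale={scale.item():.6f}, mean abs error={err.item():.6f}")
```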

Acknowledgements

This work was supported by the Natural Science Foundation of China (62202347 and U1803262) and by the Hubei Natural Science Foundation Youth Program (2022CFB578).

Author information

Corresponding author

Correspondence to Xin Xu.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Wang, W., Xu, X. Lightweight CNN-Based Low-Light-Image Enhancement System on FPGA Platform. Neural Process Lett 55, 8023–8039 (2023). https://doi.org/10.1007/s11063-023-11295-0

Download citation

  • Accepted:

  • Published:

  • Issue Date:

  • DOI: https://doi.org/10.1007/s11063-023-11295-0

Keywords

Navigation