
EDGAN: motion deblurring algorithm based on enhanced generative adversarial networks

The Journal of Supercomputing

Abstract

Removing motion blur, which is caused by relative motion between the camera and the photographed object, has long been an important problem in the computer vision literature. In recent years, deep learning algorithms have brought notable progress to image deblurring research. In this paper, an enhanced generative adversarial network model is proposed. The proposed model uses feature-channel weights to generate sharp images and eliminates checkerboard artifacts, and its mixed loss function enables the network to output high-quality images. The proposed approach is tested on the GOPRO and Lai datasets. On the GOPRO dataset, its peak signal-to-noise ratio reaches 28.674, compared with 27.454 for DeblurGAN, and its structural similarity measure reaches 0.969, compared with 0.939 for DeblurGAN. Furthermore, images obtained from China's Chang'e 3 lander were used to test the new algorithm; because the checkerboard artifacts are eliminated, the deblurred images have a better visual appearance. In the benchmark dataset experiments, the proposed method achieved higher performance and efficiency in both qualitative and quantitative terms. The results also provide insights into the design and development of the camera pointing system mounted on the lander to capture images of the moon and the rover during the Chang'e space mission.
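The full paper is behind the paywall, so the sketches below only illustrate the three mechanisms the abstract names; every module name, layer size, and weight in them is an illustrative assumption, not the authors' implementation. The first sketch shows feature-channel weighting in the style of squeeze-and-excitation, together with pixel-shuffle (sub-pixel) upsampling, a standard substitute for strided transposed convolution that avoids checkerboard artifacts:

```python
# Minimal PyTorch sketch (not the authors' code): a residual block with
# squeeze-and-excitation channel weighting, plus pixel-shuffle upsampling
# in place of transposed convolution to avoid checkerboard artifacts.
import torch.nn as nn

class ChannelWeight(nn.Module):
    """Squeeze-and-excitation style gate: one learned weight per feature channel."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # squeeze: global spatial context
        self.fc = nn.Sequential(                     # excitation: per-channel weights
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                 # re-weight each feature channel

class SEResBlock(nn.Module):
    """Residual block whose features are channel-weighted before the skip add."""
    def __init__(self, channels=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            ChannelWeight(channels),
        )

    def forward(self, x):
        return x + self.body(x)

def upsample_block(channels):
    """Conv to 4x channels, then rearrange to 2x resolution. Unlike a stride-2
    transposed conv, kernel footprints do not overlap unevenly here, which is
    what produces checkerboard artifacts."""
    return nn.Sequential(
        nn.Conv2d(channels, channels * 4, 3, padding=1),
        nn.PixelShuffle(2),
        nn.ReLU(inplace=True),
    )
```

The abstract does not spell out the terms of the mixed loss. Models in the DeblurGAN family typically combine a Wasserstein adversarial term with a VGG-based perceptual term and a pixel term, so a plausible hedged form, with assumed weights, is:

```python
# Hypothetical mixed loss: adversarial + perceptual + L1 pixel terms.
# The term weights (100 and 10) are illustrative assumptions.
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import vgg19, VGG19_Weights

class MixedLoss(nn.Module):
    def __init__(self, perceptual_weight=100.0, pixel_weight=10.0):
        super().__init__()
        # Frozen VGG19 feature extractor for the perceptual term
        # (inputs are assumed to be normalized as VGG expects).
        self.vgg = vgg19(weights=VGG19_Weights.IMAGENET1K_V1).features[:15].eval()
        for p in self.vgg.parameters():
            p.requires_grad = False
        self.pw, self.xw = perceptual_weight, pixel_weight

    def forward(self, fake, sharp, critic_score_fake):
        adv = -critic_score_fake.mean()              # WGAN-style generator loss
        perceptual = F.mse_loss(self.vgg(fake), self.vgg(sharp))
        pixel = F.l1_loss(fake, sharp)
        return adv + self.pw * perceptual + self.xw * pixel
```

The reported numbers use the two standard full-reference quality metrics. PSNR follows directly from the mean squared error, as below; SSIM is more involved and in practice is usually taken from a library such as skimage.metrics.structural_similarity:

```python
# PSNR for images scaled to [0, 1]; higher is better.
import torch

def psnr(img, ref, max_val=1.0):
    mse = torch.mean((img - ref) ** 2)
    return 10 * torch.log10(max_val ** 2 / mse)
```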



Acknowledgments

This work was supported by the Natural Science Foundation of Guangdong Province (2015A030310172). It was also partly supported by a grant from the Department of Industrial and Systems Engineering of The Hong Kong Polytechnic University (H-ZG3K) and a grant from Shenzhen Technology University (2018010802008).

Author information


Corresponding author

Correspondence to Li Li.


Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Zhang, Y., Ma, S.Y., Zhang, X. et al. EDGAN: motion deblurring algorithm based on enhanced generative adversarial networks. J Supercomput 76, 8922–8937 (2020). https://doi.org/10.1007/s11227-020-03189-y
