
Physical-Property Guided End-to-End Interactive Image Dehazing Network

  • Conference paper
International Conference on Neural Computing for Advanced Applications (NCAA 2023)

Part of the book series: Communications in Computer and Information Science (CCIS, volume 1870)


Abstract

Single image dehazing aims to predict the latent haze-free image from a hazy image corrupted by dust or particles suspended in the atmosphere. Although end-to-end deep dehazing methods have made great progress in recovering texture details, they usually cannot effectively preserve the true colors of images, due to the lack of an explicit constraint on color preservation. In contrast, dehazing methods based on the atmospheric scattering model restore images with relatively rich and faithful color information, thanks to their unique physical properties. In this paper, we propose to seamlessly integrate the strengths of physics-based and end-to-end dehazing methods into a unified model with sufficient interaction, and present a novel Physical-property Guided End-to-End Interactive Image Dehazing Network (PID-Net). To make full use of the physical properties, i.e., the haze density information extracted from transmission maps, we design a transmission map guided interactive attention (TMGIA) module that guides the end-to-end information interaction network via dual channel-wise and pixel-wise attention. This refines the intermediate features of the end-to-end information interaction network and, through sufficient interaction, leads to better detail recovery. A color-detail refinement sub-network further refines the dehazed images, yielding abundant color and image details and better visual effects. On several synthetic and real-world datasets, our method consistently outperforms other state-of-the-art methods in detail recovery and color preservation.
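The physical property referred to in the abstract is the atmospheric scattering model, I(x) = J(x)·t(x) + A·(1 − t(x)), where I is the observed hazy image, J the latent haze-free scene, t the transmission map, and A the global atmospheric light. The following is a minimal generic NumPy sketch of that model and its inversion, for intuition only; it is not the PID-Net implementation, and the function names are illustrative:

```python
import numpy as np

def apply_haze(J, t, A):
    """Synthesize a hazy image I = J*t + A*(1 - t).

    J: H x W x 3 clear scene radiance, t: H x W transmission map,
    A: scalar atmospheric light. t is broadcast over color channels.
    """
    t = t[..., None]  # H x W -> H x W x 1 for broadcasting
    return J * t + A * (1.0 - t)

def dehaze(I, t, A, t_min=0.1):
    """Invert the scattering model: J = (I - A*(1 - t)) / t.

    t is clamped from below so thin-haze regions do not blow up.
    """
    t = np.maximum(t, t_min)[..., None]
    return (I - A * (1.0 - t)) / t

# Tiny round-trip example on a 2x2 RGB patch with uniform transmission.
J = np.full((2, 2, 3), 0.5)   # clear scene radiance
t = np.full((2, 2), 0.6)      # transmission map
A = 0.9                       # atmospheric light
I = apply_haze(J, t, A)       # hazy observation, all values 0.66
J_hat = dehaze(I, t, A)       # recovered scene, equals J
```

End-to-end networks skip this inversion and regress J directly; PID-Net instead uses the transmission map t, which encodes haze density, to attend over the intermediate features of the end-to-end branch.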



Acknowledgments

The work described in this paper is partially supported by the National Natural Science Foundation of China (62072151, 61932009), the Anhui Provincial Natural Science Fund for the Distinguished Young Scholars (2008085J30), and the CAAI-Huawei MindSpore Open Fund.

Author information


Correspondence to Zhao Zhang.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Wang, J., Zhao, S., Zhang, Z., Zhao, Y., Zhang, H. (2023). Physical-Property Guided End-to-End Interactive Image Dehazing Network. In: Zhang, H., et al. International Conference on Neural Computing for Advanced Applications. NCAA 2023. Communications in Computer and Information Science, vol 1870. Springer, Singapore. https://doi.org/10.1007/978-981-99-5847-4_9


  • DOI: https://doi.org/10.1007/978-981-99-5847-4_9

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-99-5846-7

  • Online ISBN: 978-981-99-5847-4

  • eBook Packages: Computer Science (R0)
