
Dense Network-Based Spectral-Polarization Image Fusion: Multispectral Data Enhancement via Encoder-Decoder Approach

DOI: 10.1145/3640115.3640187
Published: 26 March 2024

ABSTRACT

Images are acquired by sensing the sunlight reflected from objects or scenes, and because solar irradiance is limited, their spatial resolution inevitably decreases. In contrast, multispectral sensors retain the spatial information of ground objects even though they capture only a few bands, so the resulting images have high spatial but insufficient spectral resolution. To overcome these limitations, we have developed a spectral and polarization image fusion technique that improves several aspects of the fused result, such as intensity, edges, textures, and salient information. The proposed method uses an encoder-decoder architecture built from dense and convolutional blocks to reconstruct the source images. Degree-of-linear-polarization (DoLP) images derived from the polarization data are used to improve the quality of the fused images, and the experimental results demonstrate their visual quality.
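The DoLP mentioned in the abstract is conventionally computed from the first three Stokes parameters, which can be estimated from intensity images captured through a linear polarizer at 0°, 45°, 90°, and 135°. The paper's exact preprocessing is not reproduced on this page, so the NumPy sketch below shows only the standard Stokes-based formulation; the function and argument names are illustrative, not the authors'.

```python
import numpy as np

def dolp_from_polarizer_stack(i0, i45, i90, i135, eps=1e-8):
    """Estimate the degree of linear polarization (DoLP) from four
    intensity images taken at polarizer angles 0/45/90/135 degrees.
    Standard Stokes formulation; names are illustrative, not the paper's.
    """
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity (S0)
    s1 = i0 - i90                        # horizontal-vertical difference (S1)
    s2 = i45 - i135                      # diagonal difference (S2)
    return np.sqrt(s1**2 + s2**2) / (s0 + eps)   # DoLP in [0, 1]
```

The encoder-decoder with dense and convolutional blocks is likewise described only at a high level here. The following PyTorch code is a minimal sketch of that general architecture, assuming DenseNet-style connectivity in the encoder and fusion by feature concatenation; the class names, channel widths, depths, and fusion rule are assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Dense block: each conv layer sees the concatenation of all
    preceding feature maps (DenseNet-style connectivity)."""
    def __init__(self, in_ch, growth=16, layers=3):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(in_ch + i * growth, growth, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
            )
            for i in range(layers)
        )

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            feats.append(conv(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)

class FusionAutoencoder(nn.Module):
    """Encoder-decoder sketch: a dense-block encoder shared across the two
    modalities and a plain convolutional decoder. Widths are placeholders."""
    def __init__(self, in_ch=1, base=16, growth=16, layers=3):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(in_ch, base, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.encoder = DenseBlock(base, growth=growth, layers=layers)
        enc_ch = base + layers * growth  # dense block output width
        self.decoder = nn.Sequential(
            nn.Conv2d(2 * enc_ch, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, in_ch, kernel_size=3, padding=1),
        )

    def forward(self, spectral, dolp):
        # Encode each modality with shared weights, then fuse by concatenation.
        f_spec = self.encoder(self.stem(spectral))
        f_dolp = self.encoder(self.stem(dolp))
        return self.decoder(torch.cat([f_spec, f_dolp], dim=1))
```

In fusion autoencoders of this family, training typically reconstructs individual source images, and at inference the encoder features of the spectral and DoLP inputs are merged before decoding; the concatenation rule above is only one simple choice.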


Published in

ICITEE '23: Proceedings of the 6th International Conference on Information Technologies and Electrical Engineering
November 2023, 764 pages
ISBN: 9798400708299
DOI: 10.1145/3640115

Copyright © 2023 ACM

    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery, New York, NY, United States


    Qualifiers

    • research-article
    • Research
    • Refereed limited
