ABSTRACT
Images are formed by sensing the sunlight reflected from objects or scenes, and because solar irradiance is limited, spatial resolution inevitably decreases. Multispectral sensors, in contrast, preserve the spatial information of ground objects even though they capture only a few bands; the resulting images therefore have high spatial but insufficient spectral resolution. To overcome these limitations, we have developed a spectral and polarization image fusion technique that improves several aspects of the fused result, such as intensity, edges, textures, and salient information. The proposed method adopts an encoder-decoder architecture containing dense and convolutional blocks to reconstruct the source images. The polarization images are used to derive degree-of-linear-polarization (DoLP) images, which help to improve the quality of the fused images. The experimental results demonstrate the visual quality of the fused images.
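The abstract refers to DoLP images derived from the polarization channels. A standard way to obtain DoLP is from four intensity images captured at polarizer angles 0°, 45°, 90°, and 135°, via the first three Stokes parameters. The sketch below assumes that common four-angle convention; the function name and the small epsilon guard against division by zero are illustrative, not taken from the paper.

```python
import numpy as np

def dolp(i0, i45, i90, i135, eps=1e-8):
    """Degree of linear polarization from four polarizer-angle images.

    Stokes parameters (four-angle convention):
        S0 = (I0 + I45 + I90 + I135) / 2   # total intensity
        S1 = I0  - I90
        S2 = I45 - I135
    DoLP = sqrt(S1^2 + S2^2) / S0, clipped to [0, 1].
    """
    i0, i45, i90, i135 = (np.asarray(x, dtype=np.float64)
                          for x in (i0, i45, i90, i135))
    s0 = (i0 + i45 + i90 + i135) / 2.0
    s1 = i0 - i90
    s2 = i45 - i135
    # eps avoids division by zero in dark pixels where S0 ~ 0
    return np.clip(np.sqrt(s1**2 + s2**2) / (s0 + eps), 0.0, 1.0)
```

Fully polarized light at 0° (I0 = 1, I45 = 0.5, I90 = 0, I135 = 0.5) yields DoLP ≈ 1, while equal intensities at all four angles (unpolarized light) yield DoLP = 0.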