ABSTRACT
Compared with images of general scenes, multi-modal medical images contain more detailed features and demand higher feature integrity. When fusing multi-modal medical images, features at different scales must therefore be extracted accurately to meet these requirements, which general convolutional neural networks (CNNs) cannot do. To address this problem, a convolutional neural network based on multi-scale feature fusion is proposed to improve the fusion quality of multi-modal medical images. Specifically, the proposed network consists of two trunks and three branches that extract features at different scales. The trunks and branches are connected by fusion modules (FMs) that fuse the multi-scale features. Finally, the fused multi-scale features are refined by multiple convolutions and concatenated with the trunk features to reconstruct the fused image. Objective and subjective evaluations show that the proposed method outperforms other state-of-the-art methods on most indexes.
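The trunk-and-branch design described above can be sketched in simplified form: each modality is processed at full resolution (trunk) and at a downsampled resolution (branch), the two modalities are fused at each scale, and the branch features are upsampled and concatenated with the trunk features for reconstruction. The sketch below is a minimal NumPy illustration of this multi-scale fusion idea, not the authors' network; the element-wise maximum fusion rule, the single averaging kernel, and the function names are all assumptions made for clarity.

```python
import numpy as np

def avg_pool2(img):
    """Downsample by 2 with average pooling (assumes even dimensions)."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample2(img):
    """Nearest-neighbour upsampling by a factor of 2."""
    return img.repeat(2, axis=0).repeat(2, axis=1)

def conv3x3(img, kernel):
    """Naive 'same' 3x3 convolution with zero padding."""
    p = np.pad(img, 1)
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(p[i:i + 3, j:j + 3] * kernel)
    return out

def fuse(a, b):
    """Stand-in fusion module (FM): element-wise maximum of two feature maps."""
    return np.maximum(a, b)

def multiscale_fuse(img_a, img_b, kernel):
    """Extract features at two scales from each modality, fuse per scale,
    then concatenate the scales back at full resolution."""
    # Full-resolution ("trunk") features for each modality, fused.
    full = fuse(conv3x3(img_a, kernel), conv3x3(img_b, kernel))
    # Half-resolution ("branch") features for each modality, fused.
    half = fuse(conv3x3(avg_pool2(img_a), kernel),
                conv3x3(avg_pool2(img_b), kernel))
    # Concatenate trunk features with upsampled branch features along
    # a channel axis, as the reconstruction stage would consume them.
    return np.stack([full, upsample2(half)], axis=0)

rng = np.random.default_rng(0)
a, b = rng.random((8, 8)), rng.random((8, 8))  # two toy "modalities"
k = np.full((3, 3), 1 / 9.0)                   # simple averaging kernel
fused = multiscale_fuse(a, b, k)
print(fused.shape)  # (2, 8, 8): trunk-scale and branch-scale channels
```

A real implementation would learn the convolution kernels and the fusion weights end-to-end; the fixed kernel and max-fusion rule here only make the data flow between scales concrete.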