Abstract
As a crucial medical examination technique, the different modalities of magnetic resonance imaging (MRI) complement one another, offering multi-angle, multi-dimensional views of the body's internal anatomy. Research on MRI cross-modality conversion is therefore of great significance, and many innovative techniques have been explored. However, most methods are trained on well-aligned data, and the impact of misaligned data has received insufficient attention. In addition, many methods transform the entire image while ignoring crucial edge information. To address these challenges, we propose a generative adversarial network based on multi-feature fusion that effectively preserves edge information while training on noisy data. Specifically, we treat images subjected to limited-range random transformations as noisy labels and use a small auxiliary registration network to help the generator adapt to the noise distribution. We further inject auxiliary edge information to improve the quality of the synthesized target-modality images. Comprehensive experiments and ablation studies demonstrate the effectiveness of the proposed method.
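The noisy-label idea in the abstract can be illustrated with a toy NumPy sketch: a limited-range random shift stands in for the paper's random transformations, and exact shift inversion stands in for the learned auxiliary registration network. The helper names (`random_small_shift`, `undo_shift`) and the use of a known shift are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def random_small_shift(img, max_shift=2, rng=None):
    """Simulate a misaligned ('noisy') label by shifting the target
    image a few pixels along each axis."""
    rng = rng or np.random.default_rng(0)
    dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
    return np.roll(np.roll(img, dy, axis=0), dx, axis=1), (int(dy), int(dx))

def undo_shift(img, shift):
    """Stand-in for the registration step: here the true shift is known,
    so it is inverted exactly; the real method learns a deformation field."""
    dy, dx = shift
    return np.roll(np.roll(img, -dy, axis=0), -dx, axis=1)

# Toy 'target modality' image with a small bright square
target = np.zeros((8, 8))
target[3:5, 3:5] = 1.0

noisy, shift = random_small_shift(target)

# An L1 loss taken directly against the misaligned label penalizes the
# generator for the misalignment itself...
naive_l1 = np.abs(target - noisy).mean()

# ...whereas registering the output to the label before the loss removes
# that spurious penalty, letting the generator learn the true mapping.
registered_l1 = np.abs(target - undo_shift(noisy, shift)).mean()
```

Because `np.roll` is exactly invertible, the registered loss here is zero; in the actual method the registration network only approximately undoes the unknown deformation, but the same principle keeps the alignment noise from corrupting the synthesis objective.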
Ethics declarations
Conflict of interest
The authors declare no competing interests.
Cite this article
Lu, X., Liang, X., Liu, W. et al. ReeGAN: MRI image edge-preserving synthesis based on GANs trained with misaligned data. Med Biol Eng Comput 62, 1851–1868 (2024). https://doi.org/10.1007/s11517-024-03035-w