M3AE: Multimodal Representation Learning for Brain Tumor Segmentation with Missing Modalities

Authors

  • Hong Liu (School of Informatics, Xiamen University, Xiamen, China; Tencent Jarvis Lab)
  • Dong Wei (Tencent Jarvis Lab)
  • Donghuan Lu (Tencent Jarvis Lab)
  • Jinghan Sun (Tencent Jarvis Lab; School of Medicine, Xiamen University, Xiamen, China)
  • Liansheng Wang (School of Informatics, Xiamen University, Xiamen, China)
  • Yefeng Zheng (Tencent Jarvis Lab)

DOI:

https://doi.org/10.1609/aaai.v37i2.25253

Keywords:

CV: Medical and Biological Imaging, CV: Multi-modal Vision, CV: Segmentation, ML: Representation Learning, ML: Unsupervised & Self-Supervised Learning

Abstract

Multimodal magnetic resonance imaging (MRI) provides complementary information for sub-region analysis of brain tumors. Numerous methods have been proposed for automatic brain tumor segmentation using the four common MRI modalities and have achieved remarkable performance. In practice, however, one or more modalities are often missing due to image corruption, artifacts, acquisition protocols, allergy to contrast agents, or simply cost. In this work, we propose a novel two-stage framework for brain tumor segmentation with missing modalities. In the first stage, a multimodal masked autoencoder is trained to reconstruct the input from a view in which both random modalities (i.e., modality dropout) and random patches of the remaining modalities are masked, yielding self-supervised multimodal representations that are robust to missing modalities; accordingly, we name our framework M3AE. Meanwhile, we employ model inversion to optimize a representative full-modal image at marginal extra cost, which substitutes for the missing modalities and boosts performance during inference. In the second stage, a memory-efficient self-distillation is proposed to distill knowledge between heterogeneous missing-modal situations while fine-tuning the model for supervised segmentation. M3AE belongs to the ‘catch-all’ genre, in which a single model applies to all possible subsets of modalities, and is thus economical for both training and deployment. Extensive experiments on the BraTS 2018 and 2020 datasets demonstrate its superiority over existing state-of-the-art methods with missing modalities, as well as the efficacy of its components. Our code is available at: https://github.com/ccarliu/m3ae.
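To make the stage-one masking strategy concrete, the sketch below combines modality dropout with random 3D patch masking on a batch of multimodal MRI volumes, the self-supervised corruption that the autoencoder then learns to undo. The function name, patch size, and masking probabilities are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def mask_multimodal(x: torch.Tensor, p_drop: float = 0.5,
                    patch: int = 16, p_patch: float = 0.5) -> torch.Tensor:
    """x: (B, M, D, H, W) batch of M-modality MRI volumes (M = 4 for BraTS)."""
    b, m, d, h, w = x.shape

    # 1) Modality dropout: zero a random subset of modalities, keeping >= 1.
    keep = torch.rand(b, m, device=x.device) > p_drop
    keep[torch.arange(b), torch.randint(m, (b,), device=x.device)] = True
    x = x * keep[:, :, None, None, None]

    # 2) Patch masking: zero random cubic patches of the remaining modalities
    #    (assumes the volume sides are divisible by `patch`).
    grid = (torch.rand(b, 1, d // patch, h // patch, w // patch,
                       device=x.device) > p_patch).float()
    mask = F.interpolate(grid, size=(d, h, w), mode="nearest")
    return x * mask

# Pretraining step (assumed): reconstruct the full input from the masked view,
# e.g. recon = autoencoder(mask_multimodal(x)); loss = F.mse_loss(recon, x)
```

For stage two, one plausible reading of the self-distillation between missing-modal situations is a consistency loss between predictions of the same network on a full-modal input and on a modality-dropped view of it. The sketch below is an assumed formulation; `model`, `alpha`, and the KL term are illustrative, not the paper's exact recipe.

```python
def distill_step(model, x_full, x_missing, y, alpha=0.5):
    logits_full = model(x_full)      # richer "teacher" view, not back-propagated
    logits_miss = model(x_missing)   # "student" view with modalities dropped
    seg_loss = F.cross_entropy(logits_miss, y)
    kd_loss = F.kl_div(F.log_softmax(logits_miss, dim=1),
                       F.softmax(logits_full.detach(), dim=1),
                       reduction="batchmean")
    return seg_loss + alpha * kd_loss
```

Sharing one network across both views avoids keeping a separate teacher model in memory, which is one plausible way to realize the memory efficiency claimed above.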

Published

2023-06-26

How to Cite

Liu, H., Wei, D., Lu, D., Sun, J., Wang, L., & Zheng, Y. (2023). M3AE: Multimodal Representation Learning for Brain Tumor Segmentation with Missing Modalities. Proceedings of the AAAI Conference on Artificial Intelligence, 37(2), 1657-1665. https://doi.org/10.1609/aaai.v37i2.25253

Section

AAAI Technical Track on Computer Vision II