
Representation Disentanglement for Multi-task Learning with Application to Fetal Ultrasound

  • Conference paper
  • In: Smart Ultrasound Imaging and Perinatal, Preterm and Paediatric Image Analysis (PIPPI 2019, SUSI 2019)

Abstract

One of the biggest challenges for deep learning algorithms in medical image analysis is the indiscriminate mixing of image properties, e.g. artifacts and anatomy. These entangled image properties lead to a semantically redundant feature encoding for the relevant task and thus to poor generalization of deep learning algorithms. In this paper, we propose a novel representation disentanglement method to extract semantically meaningful and generalizable features for different tasks within a multi-task learning framework. Deep neural networks are utilized to ensure that the encoded features are maximally informative with respect to relevant tasks, while an adversarial regularization encourages these features to be disentangled and minimally informative about irrelevant tasks. We aim to use the disentangled representations to generalize the applicability of deep neural networks. We demonstrate the advantages of the proposed method on synthetic data as well as fetal ultrasound images. Our experiments illustrate that our method is capable of learning disentangled internal representations. It outperforms baseline methods in multiple tasks, especially on images with new properties, e.g. previously unseen artifacts in fetal ultrasound.
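Below is a minimal, hypothetical PyTorch sketch of the kind of adversarial disentanglement objective described in the abstract: an encoder is trained so its features stay informative for the relevant task, while an adversarial head that tries to recover the labels of an irrelevant task from the same features is pushed towards chance-level predictions. All module names, network sizes, and the loss weight lambda_adv are illustrative assumptions and do not reproduce the authors' implementation.

```python
# Hypothetical sketch of adversarial feature disentanglement for two tasks
# (e.g. task A = anatomy classification, task B = artifact/shadow detection).
# Architectures, dimensions and lambda_adv are assumptions, not the paper's values.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Maps a grayscale image to a feature vector intended to encode one factor only."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )

    def forward(self, x):
        return self.net(x)

feat_dim, n_classes_a, n_classes_b = 64, 5, 2
enc_a = Encoder(feat_dim)                    # encoder producing task-A features z_a
head_a = nn.Linear(feat_dim, n_classes_a)    # primary classifier for task A
adv_b = nn.Linear(feat_dim, n_classes_b)     # adversary: tries to read task B from z_a

opt_main = torch.optim.Adam(list(enc_a.parameters()) + list(head_a.parameters()), lr=1e-4)
opt_adv = torch.optim.Adam(adv_b.parameters(), lr=1e-4)
lambda_adv = 0.1                             # strength of the disentanglement penalty (assumed)

def train_step(x, y_a, y_b):
    # Step 1: train the adversary to predict task-B labels from the (frozen) task-A features.
    with torch.no_grad():
        z_a = enc_a(x)
    adv_loss = F.cross_entropy(adv_b(z_a), y_b)
    opt_adv.zero_grad()
    adv_loss.backward()
    opt_adv.step()

    # Step 2: train encoder + task head to solve task A while confusing the adversary,
    # i.e. push its output distribution over task-B classes towards uniform.
    z_a = enc_a(x)
    task_loss = F.cross_entropy(head_a(z_a), y_a)
    log_p_b = F.log_softmax(adv_b(z_a), dim=1)
    confusion_loss = -log_p_b.mean()         # cross-entropy against a uniform target
    loss = task_loss + lambda_adv * confusion_loss
    opt_main.zero_grad()
    loss.backward()
    opt_main.step()
    return task_loss.item(), adv_loss.item()
```

In this alternating scheme, only the adversary is updated in step 1 and only the encoder and task head in step 2, so the features are driven to retain task-A information while discarding what the adversary needs for task B.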


Acknowledgments

We thank the Wellcome Trust IEH Award [102431], Nvidia (GPU donations) and Intel.

Author information

Corresponding author

Correspondence to Qingjie Meng.

Copyright information

© 2019 Springer Nature Switzerland AG

About this paper

Cite this paper

Meng, Q., Pawlowski, N., Rueckert, D., Kainz, B. (2019). Representation Disentanglement for Multi-task Learning with Application to Fetal Ultrasound. In: Wang, Q., et al. (eds.) Smart Ultrasound Imaging and Perinatal, Preterm and Paediatric Image Analysis. PIPPI/SUSI 2019. Lecture Notes in Computer Science, vol. 11798. Springer, Cham. https://doi.org/10.1007/978-3-030-32875-7_6

Download citation

  • DOI: https://doi.org/10.1007/978-3-030-32875-7_6

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-32874-0

  • Online ISBN: 978-3-030-32875-7

  • eBook Packages: Computer Science, Computer Science (R0)
