DeepNeuro: an open-source deep learning toolbox for neuroimaging

  • Software Original Article
  • Published in Neuroinformatics

Abstract

Translating deep learning research from theory into clinical practice has unique challenges, specifically in the field of neuroimaging. In this paper, we present DeepNeuro, a Python-based deep learning framework that puts deep neural networks for neuroimaging into practical usage with a minimum of friction during implementation. We show how this framework can be used to design deep learning pipelines that can load and preprocess data, design and train various neural network architectures, and evaluate and visualize the results of trained networks on evaluation data. We present a way of reproducibly packaging data pre- and postprocessing functions common in the neuroimaging community, which facilitates consistent performance of networks across variable users, institutions, and scanners. We show how deep learning pipelines created with DeepNeuro can be concisely packaged into shareable Docker and Singularity containers with user-friendly command-line interfaces.
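
As a concrete illustration of the kind of pipeline described above, the sketch below loads a volume, normalizes it, and runs it through a toy 3D encoder-decoder network. It is written against generic, widely used libraries (nibabel, NumPy, and tensorflow.keras) rather than DeepNeuro's own API; the file path, patch size, and miniature architecture are placeholder assumptions for illustration only.

# Minimal load -> preprocess -> predict sketch in the spirit of the pipelines described
# in the abstract; not DeepNeuro's actual API. Paths and the model are placeholders.
import nibabel as nib
import numpy as np
from tensorflow.keras import layers, models

def load_and_normalize(path):
    """Load a NIfTI volume and z-score normalize its intensities."""
    volume = nib.load(path).get_fdata().astype(np.float32)
    return (volume - volume.mean()) / (volume.std() + 1e-8)

def tiny_3d_unet(input_shape=(64, 64, 64, 1)):
    """A deliberately small 3D encoder-decoder, standing in for a full 3D U-Net."""
    inputs = layers.Input(shape=input_shape)
    x = layers.Conv3D(8, 3, padding="same", activation="relu")(inputs)
    x = layers.MaxPooling3D(2)(x)
    x = layers.Conv3D(16, 3, padding="same", activation="relu")(x)
    x = layers.UpSampling3D(2)(x)
    outputs = layers.Conv3D(1, 1, activation="sigmoid")(x)
    return models.Model(inputs, outputs)

if __name__ == "__main__":
    volume = load_and_normalize("subject_T1.nii.gz")      # placeholder input file
    patch = volume[:64, :64, :64][np.newaxis, ..., np.newaxis]
    model = tiny_3d_unet()
    segmentation = model.predict(patch)                   # untrained weights; shape check only
    print(segmentation.shape)

In DeepNeuro, comparable pipelines, together with their preprocessing steps and trained model weights, are packaged into Docker or Singularity containers so that end users can run them from a command-line interface without reimplementing this glue code.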

References

  • Abadi, M., Barham, P., Chen, J., Chen, Z., Davis, A., Dean, J., Devin, M., Ghemawat, S., Irving, G., Isard, M., & et al. (2016). Tensorflow: a system for large-scale machine learning. In OSDI, 16, 265–283.

  • Avants, B.B., Tustison, N.J., Stauffer, M., Song, G., Wu, B., & Gee, J.C. (2014). The insight toolkit image registration framework. Frontiers in neuroinformatics, 8, 44.

  • Bakas, S., Akbari, H., Sotiras, A., Bilello, M., Rozycki, M., Kirby, J.S., Freymann, J.B., Farahani, K., & Davatzikos, C. (2017). Advancing the cancer genome atlas glioma mri collections with expert segmentation labels and radiomic features. Scientific Data, 4, 170117.

  • Bergstra, J., Breuleux, O., Bastien, F., Lamblin, P., Pascanu, R., Desjardins, G., Turian, J., Warde-Farley, D., & Bengio, Y. (2010). Theano: A cpu and gpu math compiler in python. In Proc 9th Python in Science Conf, Vol. 1.

  • Bezanson, J., Edelman, A., Karpinski, S., & Shah, V.B. (2017). Julia: a fresh approach to numerical computing. SIAM review, 59(1), 65–98.

  • Brown, J.M., Campbell, J.P., Beers, A., Chang, K., Ostmo, S., Chan, R.P., Dy, J., Erdogmus, D., Ioannidis, S., Kalpathy-Cramer, J., & et al. (2018). Automated diagnosis of plus disease in retinopathy of prematurity using deep convolutional neural networks. JAMA ophthalmology.

  • Buda, M., Maki, A., & Mazurowski, M.A. (2018). A systematic study of the class imbalance problem in convolutional neural networks. Neural Networks, 106, 249–259.

  • Chang, K., Bai, H.X., Zhou, H., Su, C., Bi, W.L., Agbodza, E., Kavouridis, V.K., Senders, J.T., Boaro, A., Beers, A., & et al. (2018a). Residual convolutional neural network for the determination of idh status in low- and high-grade gliomas from mr imaging. Clinical Cancer Research, 24(5), 1073–1081.

  • Chang, K., Balachandar, N., Lam, C., Yi, D., Brown, J., Beers, A., Rosen, B., Rubin, D.L., & Kalpathy-Cramer, J. (2018b). Distributed deep learning networks among institutions for medical imaging. Journal of the American Medical Informatics Association.

  • Chang, K., Beers, A.L., Bai, H.X., Brown, J.M., Ly, K.I., Li, X., Senders, J.T., Kavouridis, V.K., Boaro, A., Su, C., & et al. (2019). Automatic assessment of glioma burden: A deep learning algorithm for fully automated volumetric and bi-dimensional measurement. Neuro-oncology.

  • Chen, T., Li, M., Li, Y., Lin, M., Wang, N., Wang, M., Xiao, T., Xu, B., Zhang, C., & Zhang, Z. (2015). Mxnet: A flexible and efficient machine learning library for heterogeneous distributed systems. arXiv:1512.01274.

  • Çiçek, Ö., Abdulkadir, A., Lienkamp, S.S., Brox, T., & Ronneberger, O. (2016). 3d u-net: learning dense volumetric segmentation from sparse annotation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 424–432. Springer.

  • Clunie, D.A. (2000). DICOM structured reporting. PixelMed publishing.

  • Collobert, R., & Weston, J. (2008). A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of the 25th international conference on Machine learning, pages 160–167. ACM.

  • Esteva, A., Kuprel, B., Novoa, R.A., Ko, J., Swetter, S.M., Blau, H.M., & Thrun, S. (2017). Dermatologist-level classification of skin cancer with deep neural networks. Nature, 542(7639), 115.

  • Fedorov, A., Beichel, R., Kalpathy-Cramer, J., Finet, J., Fillion-Robin, J.-C., Pujol, S., Bauer, C., Jennings, D., Fennessy, F., Sonka, M., & et al. (2012). 3d slicer as an image computing platform for the quantitative imaging network. Magnetic resonance imaging, 30(9), 1323–1341.

  • Gheiratmand, M., Rish, I., Cecchi, G.A., Brown, M.R., Greiner, R., Polosecki, P.I., Bashivan, P., Greenshaw, A.J., Ramasubbu, R., & Dursun, S.M. (2017). Learning stable and predictive network-based patterns of schizophrenia and its clinical symptoms. NPJ schizophrenia, 3(1), 22.

  • Gibson, E., Li, W., Sudre, C., Fidon, L., Shakir, D.I., Wang, G., Eaton-Rosen, Z., Gray, R., Doel, T., Hu, Y., & et al. (2018). Niftynet: a deep-learning platform for medical imaging. Computer methods and programs in biomedicine, 158, 113–122.

  • Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., & Bengio, Y. (2014). Generative adversarial nets. In Advances in neural information processing systems (pp. 2672–2680).

  • Gulshan, V., Peng, L., Coram, M., Stumpe, M.C., Wu, D., Narayanaswamy, A., Venugopalan, S., Widner, K., Madams, T., Cuadros, J., & et al. (2016). Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. Jama, 316(22), 2402–2410.

  • He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 770–778).

  • Herz, C., Fillion-Robin, J. -C., Onken, M., Riesmeier, J., Lasso, A., Pinter, C., Fichtinger, G., Pieper, S., Clunie, D., Kikinis, R., & et al. (2017). Dcmqi: an open source library for standardized communication of quantitative image analysis results using dicom. Cancer research, 77(21), e87–e90.

  • Hinton, G., Deng, L., Yu, D., Dahl, G.E., Mohamed, A. -r., Jaitly, N., Senior, A., Vanhoucke, V., Nguyen, P., Sainath, T.N., & et al. (2012). Deep neural networks for acoustic modeling in speech recognition: the shared views of four research groups. IEEE Signal processing magazine, 29(6), 82–97.

  • Hunter, J.D. (2007). Matplotlib: a 2d graphics environment. Computing in science & engineering, 9(3), 90–95.

  • Hussain, Z., Gimenez, F., Yi, D., & Rubin, D. (2017). Differential data augmentation techniques for medical imaging classification tasks. In AMIA Annual Symposium Proceedings, volume 2017, page 979. American Medical Informatics Association.

  • Computational Imaging and Bioinformatics Lab at the Harvard Medical School, Brigham and Women’s Hospital, and Dana-Farber Cancer Institute (2018). Modelhub.ai.

  • Jones, E., Oliphant, T., & Peterson, P. (2014). SciPy: Open source scientific tools for Python.

  • Kamnitsas, K., Baumgartner, C., Ledig, C., Newcombe, V., Simpson, J., Kane, A., Menon, D., Nori, A., Criminisi, A., Rueckert, D., & et al. (2017a). Unsupervised domain adaptation in brain lesion segmentation with adversarial networks. In International Conference on Information Processing in Medical Imaging, pages 597–609. Springer.

  • Kamnitsas, K., Ledig, C., Newcombe, V.F., Simpson, J.P., Kane, A.D., Menon, D.K., Rueckert, D., & Glocker, B. (2017b). Efficient multi-scale 3d cnn with fully connected crf for accurate brain lesion segmentation. Medical image analysis, 36, 61–78.

  • Karras, T., Aila, T., Laine, S., & Lehtinen, J. (2017). Progressive growing of gans for improved quality, stability, and variation. arXiv:1710.10196.

  • Kindlmann, G., Bigler, J., & Van Uitert, D. (2008). NRRD file format.

  • Krizhevsky, A., Sutskever, I., & Hinton, G.E. (2012). Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems (pp. 1097–1105).

  • Kurtzer, G.M., Sochat, V., & Bauer, M.W. (2017). Singularity: Scientific containers for mobility of compute. PloS one, 12(5), e0177459.

  • Larobina, M., & Murino, L. (2014). Medical image file formats. Journal of digital imaging, 27(2), 200–206.

  • LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436–444.

  • Lee, C.S., Baughman, D.M., & Lee, A.Y. (2017). Deep learning is effective for classifying normal versus age-related macular degeneration oct images. Ophthalmology Retina, 1(4), 322–327.

  • Liu, S., Liu, S., Cai, W., Pujol, S., Kikinis, R., & Feng, D. (2014). Early diagnosis of alzheimer’s disease with deep learning. In Biomedical Imaging (ISBI), 2014 IEEE 11th International Symposium on, pages 1015–1018. IEEE.

  • Mehrtash, A., Pesteie, M., Hetherington, J., Behringer, P.A., Kapur, T., Wells, W.M., Rohling, R., Fedorov, A., & Abolmaesumi, P. (2017). Deepinfer: open-source deep learning deployment toolkit for image-guided therapy. In Medical Imaging 2017: Image-Guided Procedures, Robotic Interventions, and Modeling, volume 10135, page 101351K. International Society for Optics and Photonics.

  • Menze, B.H., Jakab, A., Bauer, S., Kalpathy-Cramer, J., Farahani, K., Kirby, J., Burren, Y., Porz, N., Slotboom, J., Wiest, R., & et al. (2015). The multimodal brain tumor image segmentation benchmark (brats). IEEE transactions on medical imaging, 34(10), 1993.

  • Merkel, D. (2014). Docker: lightweight linux containers for consistent development and deployment. Linux journal, 2014(239), 2.

  • Miotto, R., Wang, F., Wang, S., Jiang, X., & Dudley, J.T. (2017). Deep learning for healthcare: review, opportunities and challenges. Briefings in bioinformatics.

  • Paszke, A., Gross, S., Chintala, S., Chanan, G., Yang, E., DeVito, Z., Lin, Z., Desmaison, A., Antiga, L., & Lerer, A. (2017). Automatic differentiation in pytorch.

  • Pawlowski, N., Ktena, S.I., Lee, M.C., Kainz, B., Rueckert, D., Glocker, B., & Rajchl, M. (2017). Dltk: State of the art reference implementations for deep learning on medical images. arXiv:1711.06853.

  • Ronneberger, O., Fischer, P., & Brox, T. (2015). U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical image computing and computer-assisted intervention, pages 234–241. Springer.

  • Simonyan, K., & Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. arXiv:1409.1556.

  • Sperduto, P.W., Berkey, B., Gaspar, L.E., Mehta, M., & Curran, W. (2008). A new prognostic index and comparison to three other indices for patients with brain metastases: an analysis of 1,960 patients in the rtog database. International Journal of Radiation Oncology, Biology, Physics, 70(2), 510–514.

  • Stonnington, C.M., Tan, G., Klöppel, S., Chu, C., Draganski, B., Jack, Jr C.R., Chen, K., Ashburner, J., & Frackowiak, R.S. (2008). Interpreting scan data acquired from multiple scanners: a study with alzheimer’s disease. NeuroImage, 39(3), 1180–1185.

  • Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., & Rabinovich, A. (2015). Going deeper with convolutions. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 1–9).

  • van der Walt, S., Schönberger, J.L., Nunez-Iglesias, J., Boulogne, F., Warner, J.D., Yager, N., Gouillart, E., Yu, T., & the scikit-image contributors (2014). scikit-image: image processing in Python. PeerJ, 2, e453.

  • van der Walt, S., Colbert, S.C., & Varoquaux, G. (2011). The numpy array: a structure for efficient numerical computation. Computing in Science & Engineering, 13(2), 22–30.

  • Winzeck, S., Hakim, A., McKinley, R., Pinto, J.A.A.D.S., Alves, V., Silva, C., Pisov, M., Krivov, E., Belyaev, M., Monteiro, M., & et al. (2018). Isles 2016 & 2017-benchmarking ischemic stroke lesion outcome prediction based on multispectral mri. Frontiers in Neurology, 9, 679.

Acknowledgements

The Center for Clinical Data Science at Massachusetts General Hospital and the Brigham and Women’s Hospital provided technical and hardware support for the development of DeepNeuro, including access to high-powered graphics processing units.

Funding

This project was supported by a training grant from the NIH Blueprint for Neuroscience Research (T90DA022759/R90DA023427) and the National Institute of Biomedical Imaging and Bioengineering (NIBIB) of the National Institutes of Health under award number 5T32EB1680 to KC. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. This study was supported by National Institutes of Health grants U01 CA154601, U24 CA180927, and U24 CA180918 to JKC. This research was carried out in whole or in part at the Athinoula A. Martinos Center for Biomedical Imaging at the Massachusetts General Hospital, using resources provided by the Center for Functional Neuroimaging Technologies, P41EB015896, a P41 Biotechnology Resource Grant supported by the National Institute of Biomedical Imaging and Bioengineering (NIBIB), National Institutes of Health.

Author information

Contributions

AB, with contributions from JB and KC, was the original creator of the DeepNeuro package and all of its component parts. KH and JP contributed to modules within DeepNeuro, as well as additional data processing features. EG conducted clinical trials at Massachusetts General Hospital, contributed to data organization for clinical datasets, and was consulted on matters pertaining to clinical trials during DeepNeuro development. KIL contributed to the evaluation of results of trained network algorithms. ST and PB conducted clinical trials for brain metastases, supporting the development of DeepNeuro’s segmentation modules. BR and JKC guided the conceptual development of the package, especially with regard to clinical end-users. All the authors read, reviewed, and approved the manuscript.

Corresponding author

Correspondence to Jayashree Kalpathy-Cramer.

Ethics declarations

Conflict of interests

JKC is a consultant/advisory board member for Infotech Soft. ERG is an advisory board member for Blue Earth Diagnostics. BR is on the advisory board for ARIA, Butterfly, Inc., DGMIF (Daegu-Gyeongbuk Medical Innovation Foundation), QMENTA, Subtle Medical, Inc., is a consultant for Broadview Ventures, Janssen Scientific, ECRI Institute, GlaxoSmithKline, Hyperfine Research, Inc., Peking University, Wolf Greenfield, Superconducting Systems, Inc., Robins Kaplan, LLC, Millennium Pharmaceuticals, GE Healthcare, Siemens, Quinn Emanuel Trial Lawyers, Samsung, Shenzhen Maternity and Child Healthcare Hospital, and is a founder of BLINKAI Technologies, Inc. PB is a consultant for Angiochem, Lilly, Tesaro, and Genentech-Roche; has received honoraria from Genentech-Roche and Merck; and has received institutional funding from Merck and Pfizer. SMT receives institutional research funding from Novartis, Genentech, Eli Lilly, Pfizer, Merck, Exelixis, Eisai, Bristol-Myers Squibb, AstraZeneca, Cyclacel, Immunomedics, Odonate, and Nektar. SMT has served as an advisor/consultant to Novartis, Eli Lilly, Pfizer, Merck, AstraZeneca, Eisai, Puma, Genentech, Immunomedics, Nektar, Tesaro, and Nanostring. The other authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Information Sharing Statement

DeepNeuro is open-source, free, and available at https://github.com/QTIM-Lab/DeepNeuro (RRID:SCR_016911). DeepNeuro makes use of the following packages via Python wrappers: 3D Slicer (RRID:SCR_005619), ANTs (RRID:SCR_004757), and dcmqi (RRID:SCR_016933). DeepNeuro depends on the following third-party Python libraries: tqdm, scikit-image, scipy (RRID:SCR_008058), numpy, pydot, matplotlib (RRID:SCR_008624), imageio, six, pyyaml, nibabel (RRID:SCR_002498), pynrrd, and pydicom (RRID:SCR_002573) (van der Walt et al. 2014; Jones et al. 2014; Hunter 2007).
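
As a rough sketch of how a Python wrapper around one of these external tools typically works, the example below shells out to the N4BiasFieldCorrection executable from ANTs through the standard library's subprocess module. The wrapper function name, argument handling, and file names are illustrative assumptions rather than DeepNeuro's actual wrapper interface; the only requirement is that the ANTs binary is installed and on the PATH.

# Illustrative wrapper around an external neuroimaging command-line tool
# (ANTs' N4 bias-field correction); not DeepNeuro's actual interface.
import shutil
import subprocess

def run_n4_bias_correction(input_path, output_path, executable="N4BiasFieldCorrection"):
    """Run ANTs' N4 bias-field correction on a NIfTI file, if the binary is available."""
    if shutil.which(executable) is None:
        raise FileNotFoundError(f"{executable} not found on PATH; install ANTs first.")
    command = [executable, "-i", input_path, "-o", output_path]
    subprocess.run(command, check=True)   # raises CalledProcessError if the tool fails
    return output_path

if __name__ == "__main__":
    corrected = run_n4_bias_correction("subject_T1.nii.gz", "subject_T1_n4.nii.gz")
    print("wrote", corrected)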

Cite this article

Beers, A., Brown, J., Chang, K. et al. DeepNeuro: an open-source deep learning toolbox for neuroimaging. Neuroinform 19, 127–140 (2021). https://doi.org/10.1007/s12021-020-09477-5
