
Deep learning for cellular image analysis

Abstract

Recent advances in computer vision and machine learning underpin a collection of algorithms with an impressive ability to decipher the content of images. These deep learning algorithms are being applied to biological images and are transforming the analysis and interpretation of imaging data. These advances are positioned to render difficult analyses routine and to enable researchers to carry out new, previously impossible experiments. Here we review the intersection between deep learning and cellular image analysis and provide an overview of both the mathematical mechanics and the programming frameworks of deep learning that are pertinent to life scientists. We survey the field’s progress in four key applications: image classification, image segmentation, object tracking, and augmented microscopy. Last, we relay our labs’ experience with three key aspects of implementing deep learning in the laboratory: annotating training data, selecting and training a range of neural network architectures, and deploying solutions. We also highlight existing datasets and implementations for each surveyed application.
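To give a concrete sense of the kind of workflow surveyed in this Review, the sketch below shows a minimal convolutional classifier for single-cell image patches built with TensorFlow's Keras API, one of the programming frameworks discussed. It is not the authors' method or any model from the Review; the input shape, layer sizes, and number of phenotype classes are placeholder assumptions chosen only to make the example self-contained and runnable.

```python
# Minimal sketch of an image-classification workflow for cellular images using
# the Keras API bundled with TensorFlow. All shapes, layer sizes, and the number
# of classes are illustrative assumptions, not values from this Review.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 3            # hypothetical number of phenotype classes
PATCH_SHAPE = (64, 64, 1)  # hypothetical single-channel cell image patches

def build_classifier():
    """Small convolutional network: conv/pool blocks followed by a dense head."""
    model = models.Sequential([
        layers.Input(shape=PATCH_SHAPE),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    # Random stand-in data; in practice these would be annotated image patches.
    x_train = np.random.rand(256, *PATCH_SHAPE).astype("float32")
    y_train = np.random.randint(0, NUM_CLASSES, size=(256,))
    model = build_classifier()
    model.fit(x_train, y_train, epochs=2, batch_size=32, verbose=1)
```

In a real experiment the random arrays would be replaced by annotated training data, and the architecture would be selected and tuned for the task at hand, as discussed in the Review.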


Fig. 1: Software 2.0 combines data annotations with deep learning to produce intelligent software.
Fig. 2: Common mathematical components of deep learning models.
Fig. 3: Image classification applied to biological images.
Fig. 4: Image segmentation applied to biological images.
Fig. 5: Augmenting microscopy images with deep learning.

Data availability

Links to the data referred to in this Review can be found in Table 2.

Table 2 Available datasets


Acknowledgements

We thank A. Anandkumar, M. Angelo, L. Cai, S. Cooper, M. Elowitz, K.C. Huang, G. Johnson, A. Karpathy, L. Keren, A. Raj, T. Vora, and R. Wollman for helpful discussions and comments. This work was supported by several funding sources, including the Allen Discovery Center (award supporting W.G.; award supporting T.K., M.C., and D.V.V.), the Burroughs Wellcome Fund Postdoctoral Enrichment Program, a Figure Eight AI for Everyone award, and the NIH (subaward U24CA224309-01 to D.V.V.).

Author information


Contributions

E.M., D.B., T.K., W.G., M.C., and D.V.V. wrote the paper.

Corresponding author

Correspondence to David Van Valen.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Moen, E., Bannon, D., Kudo, T. et al. Deep learning for cellular image analysis. Nat Methods 16, 1233–1246 (2019). https://doi.org/10.1038/s41592-019-0403-1

