Large-scale automatic reconstruction of neuronal processes from electron microscopy images

https://doi.org/10.1016/j.media.2015.02.001

Highlights

  • We provide a pipeline for automatic reconstructions of neurons from EM images.

  • The pipeline is scalable to large data sets.

  • We show successful automatic long-range reconstructions over more than 30 μm.

Abstract

Automated sample preparation and electron microscopy enable the acquisition of very large image data sets. These technical advances are of special importance to the field of neuroanatomy, as 3D reconstructions of neuronal processes at the nm scale can provide new insight into the fine-grained structure of the brain. Segmentation of large-scale electron microscopy data is the main bottleneck in the analysis of these data sets. In this paper we present a pipeline that provides state-of-the-art reconstruction performance while scaling to data sets in the GB-TB range. First, we train a random forest classifier on interactive sparse user annotations. The classifier output is combined with an anisotropic smoothing prior in a Conditional Random Field framework to generate multiple segmentation hypotheses per image. These segmentations are then combined into geometrically consistent 3D objects by segmentation fusion. We provide qualitative and quantitative evaluation of the automatic segmentation and demonstrate large-scale 3D reconstructions of neuronal processes from a 27,000 μm³ volume of brain tissue, spanning a cube of 30 μm in each dimension and corresponding to 1000 consecutive image sections. We also introduce Mojo, a proofreading tool that includes semi-automated correction of merge errors based on sparse user scribbles.

Introduction

Brain imaging modalities such as diffusion tensor MRI or functional MRI provide important information about the brain and the connectivity between brain regions (Seung, 2012). However, at a resolution of a cubic millimeter per voxel they provide little data about connectivity between individual neurons. Information about the anatomy and connectivity of neurons can provide new insights into the relation between the brain’s structure and its function (Marc et al., 2013, Helmstaedter and Mitra, 2012, Denk et al., 2012, Lee and Reid, 2011, Seung, 2009). Such information may provide insights into the physical underpinnings of common serious disorders of brain function such as mental illnesses and learning disorders (Kuwajima et al., 2013b, Penzes et al., 2011). Furthermore, information about the individual strength of synapses or the number of connections between two cells has important implications for computational neuroscience and the theoretical analysis of neuronal networks (Valiant, 2006). As the resolution of light microscopy is generally limited by diffraction, electron microscopy (EM) is a better imaging modality to resolve the brain at the level of synapses, and it thus provides insight into the anatomy and connectivity of neurons at nm resolution. To reconstruct the neuronal circuit at the level of individual cells, the field of neuroanatomy faces the challenge of acquiring and analyzing data volumes that cover a brain tissue volume large enough to allow meaningful analysis of circuits and detailed enough to detect synapses and thus the connectivity structure of the circuit. Recently, significant progress has been made in the automation of sample preparation (Hayworth et al., 2006) and automatic image acquisition (Kuwajima et al., 2013a, Bock et al., 2011, Knott et al., 2008, Denk and Horstmann, 2004) for electron microscopy. These techniques allow neuroscientists to acquire large data sets in the GB-TB range. Briggman and Bock (2012) provide an overview of different sample preparation and electron microscopy techniques used for connectomics. With a resolution of 5 nm per pixel and a section thickness of 50 nm, one cubic millimeter of brain tissue results in 20,000 sections with 40 gigapixels per image, leading to an image volume of 800 TB. For comparison, this volume corresponds to the size of one voxel in an fMRI data set. With data sets of this size, manual analysis is no longer feasible, leading to new challenges in automated analysis and visualization.
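
The arithmetic behind these figures can be checked directly. The short sketch below (assuming 8 bits per pixel, as implied by the 800 TB total; all variable names are illustrative) reproduces the estimate from the stated 5 nm pixel size and 50 nm section thickness.

```python
# Back-of-the-envelope check of the data-volume figures quoted above.
# Assumes 8 bits per pixel; all variable names are illustrative.

PIXEL_NM = 5            # lateral resolution: 5 nm per pixel
SECTION_NM = 50         # section thickness: 50 nm
EDGE_NM = 1_000_000     # 1 mm edge length, in nm

pixels_per_side = EDGE_NM // PIXEL_NM             # 200,000 pixels
pixels_per_section = pixels_per_side ** 2         # 4e10 pixels = 40 gigapixels
num_sections = EDGE_NM // SECTION_NM              # 20,000 sections
total_bytes = pixels_per_section * num_sections   # at 1 byte per pixel

print(f"{num_sections} sections, {pixels_per_section / 1e9:.0f} gigapixels each")
print(f"total volume: {total_bytes / 1e12:.0f} TB")   # ~800 TB
```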

In this paper we present a pipeline for semi-automated 3D reconstruction of neurons from serial section electron microscopy images. The pipeline is designed to handle large data sets, while reducing user interaction to the initial training of a random forest classifier on manually annotated data and computer-aided proofreading of the automatic reconstruction output. Our experiments demonstrate that the proposed pipeline yields state-of-the-art reconstruction results based on sparse annotations of only ten EM images (1024 × 1024 pixels). We provide quantitative evaluation for each step of the pipeline and an example of a reconstructed volume of 27,000 μm³, which to our knowledge is the largest volume of conventionally stained mammalian brain tissue reconstructed automatically (see Fig. 1).

Some of the work in this paper has been previously published (Kaynig et al., 2010a, Vazquez-Reina et al., 2011, Roberts et al., 2011). However, this is the first time we present the complete reconstruction pipeline and its application to large data sets. Specifically, the novel contributions of this paper are:

  • We demonstrate that interactively training a random forest classifier for membrane detection not only reduces the manual annotation effort, but also leads to significantly better cell region segmentations, measured in terms of variation of information against manually annotated data.

  • We combine the cell region segmentation of Kaynig et al. (2010a) with the segmentation fusion of Vazquez-Reina et al. (2011) into a consistent pipeline leading to long-range reconstructions of neuronal processes over 30 μm of brain tissue (up to 1000 image sections).

  • We extend the segmentation fusion approach to allow for branching structures.

  • We enable parallel processing of subvolumes via a pairwise matching scheme that merges segmented blocks into one consistent reconstruction volume.

  • We provide large-scale reconstruction results covering a volume of 27,000 μm³. To our knowledge we are the first to achieve automatic reconstructions of individual spine necks in anisotropic serial section electron microscopy data prior to manual proofreading.

  • Finally, we introduce Mojo, a semi-automated proofreading tool that utilizes sparse user scribbles, as described by Roberts et al. (2011), to correct merge errors in the 3D reconstruction.

Section snippets

Related work

Automated reconstruction of neuronal processes has received increased attention in recent years. With electron microscopy techniques acquiring large volumes automatically, automated analysis is becoming the major bottleneck in gaining new insights into the functional structure of the brain at nm scale. The task of reconstructing the full neuroanatomy, including synaptic contacts, is referred to as connectomics in the literature (Lichtman and Sanes, 2008). A number of software packages have been

Overview

We now provide an overview of our reconstruction workflow (see Fig. 2), as well as evaluation metrics for neuron segmentation and the data sets used for all experiments throughout this paper.
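
One of the evaluation metrics referred to above is variation of information, which compares an automatic segmentation against a manual annotation (lower is better; zero for identical segmentations). The following is a minimal sketch of how it can be computed from the joint label histogram of two label images; it is not the efficient implementation acknowledged later in the paper, and the function name is illustrative.

```python
import numpy as np

def variation_of_information(seg_a, seg_b):
    """VI(A, B) = H(A|B) + H(B|A), computed from the joint label histogram.
    seg_a, seg_b: integer label images of identical shape."""
    a = np.asarray(seg_a).ravel()
    b = np.asarray(seg_b).ravel()
    n = a.size
    # joint distribution over co-occurring (label_a, label_b) pairs
    pairs, counts = np.unique(np.stack([a, b]), axis=1, return_counts=True)
    p_ab = counts / n
    # marginal probabilities, looked up per pair
    labels_a, counts_a = np.unique(a, return_counts=True)
    labels_b, counts_b = np.unique(b, return_counts=True)
    p_a = dict(zip(labels_a, counts_a / n))
    p_b = dict(zip(labels_b, counts_b / n))
    pa = np.array([p_a[l] for l in pairs[0]])
    pb = np.array([p_b[l] for l in pairs[1]])
    # H(A|B) + H(B|A) written in terms of p(a,b), p(a), p(b)
    return float(-np.sum(p_ab * (np.log(p_ab / pa) + np.log(p_ab / pb))))

# e.g. variation_of_information(automatic_labels, manual_labels)
```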

Region segmentation

While the texture characteristics of cell regions in electron microscopy images can vary significantly between different animal types and staining protocols, the basic appearance of the cell boundary membranes as thin, smooth, and elongated structures remains the same. Thus, instead of segmenting interior cell regions, we focus on segmenting the cell membranes to make our approach easily adaptable to a wide range of data.
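
To illustrate this kind of membrane-centric classification, the sketch below trains a pixel-wise random forest membrane detector using scikit-learn and scikit-image with a generic filter bank. The actual feature set, interactive training loop, and CRF smoothing used in the pipeline are not reproduced here; all function names and parameters are illustrative.

```python
import numpy as np
from skimage import filters
from sklearn.ensemble import RandomForestClassifier

def pixel_features(image, sigmas=(1.0, 2.0, 4.0)):
    """Per-pixel feature stack: raw intensity plus smoothed, edge, and
    second-order responses at a few scales (a generic, illustrative filter bank)."""
    feats = [image.astype(np.float32)]
    for s in sigmas:
        smoothed = filters.gaussian(image, sigma=s)
        feats.append(smoothed)
        feats.append(filters.sobel(smoothed))    # gradient magnitude
        feats.append(filters.laplace(smoothed))  # second-order response
    return np.stack(feats, axis=-1).reshape(-1, len(feats))

def train_membrane_rf(image, labels, n_trees=200):
    """labels: sparse annotation image, 1 = membrane, 0 = non-membrane, -1 = unlabeled."""
    X = pixel_features(image)
    y = labels.ravel()
    annotated = y >= 0                           # train only on annotated pixels
    rf = RandomForestClassifier(n_estimators=n_trees, n_jobs=-1)
    rf.fit(X[annotated], y[annotated])
    return rf

def membrane_probability(rf, image):
    """Per-pixel membrane probability map for a new EM section."""
    return rf.predict_proba(pixel_features(image))[:, 1].reshape(image.shape)
```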

Region grouping across sections

The previous steps of the pipeline focus on the segmentation of neuronal processes in the 2D image plane to take advantage of the high resolution provided by the electron microscope. To extract the 3D geometry of neuronal processes, these regions need to be grouped across sections. We follow the segmentation fusion approach of Vazquez-Reina et al. (2011) that allows for globally optimal groupings of regions across sections. The term fusion refers to the option to pick the best choice of
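
The sketch below is only a greatly simplified stand-in for segmentation fusion: it links regions between two fixed, adjacent 2D segmentations by maximum pixel overlap using a linear assignment, whereas the actual method of Vazquez-Reina et al. (2011) optimizes a global objective over multiple segmentation hypotheses and many sections, and allows branching. Function and parameter names are illustrative.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def link_sections(labels_a, labels_b, min_overlap=20):
    """Link each 2D region in section A to at most one region in section B
    by maximizing total pixel overlap (a simplification of fusion)."""
    ids_a = np.unique(labels_a[labels_a > 0])
    ids_b = np.unique(labels_b[labels_b > 0])
    overlap = np.zeros((ids_a.size, ids_b.size), dtype=np.int64)
    for i, ra in enumerate(ids_a):
        vals, counts = np.unique(labels_b[labels_a == ra], return_counts=True)
        for v, c in zip(vals, counts):
            if v > 0:
                overlap[i, np.searchsorted(ids_b, v)] = c
    # maximize overlap == minimize negative overlap
    rows, cols = linear_sum_assignment(-overlap)
    return [(ids_a[i], ids_b[j]) for i, j in zip(rows, cols)
            if overlap[i, j] >= min_overlap]
```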

Semiautomatic proofreading with Mojo

Manual proofreading is necessary in order to guarantee the correct topology of the neuron reconstruction. Fig. 10 shows an example segmentation of a 2D section compared to a manual annotation. Most regions are correctly segmented, but some are split into several parts and need manual merging, while others span multiple objects and need to be manually split.

In order to minimize the user effort required to correct split and merge errors, we developed an interactive system called Mojo
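
To give a rough idea of how sparse scribbles can drive a split correction, the sketch below runs a seeded watershed on the membrane probability map inside a merged object, with one seed per connected scribble stroke. This is an illustrative approximation under stated assumptions, not the actual algorithm of Roberts et al. (2011) used in Mojo; all names are hypothetical.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed

def split_merged_object(membrane_prob, segmentation, object_id, scribbles):
    """Split one merged label using user scribbles (a boolean stroke mask):
    each connected scribble becomes a watershed seed inside the object."""
    mask = segmentation == object_id
    markers, _ = ndi.label(scribbles & mask)      # one seed label per stroke
    split = watershed(membrane_prob, markers=markers, mask=mask)
    corrected = segmentation.copy()
    next_id = int(segmentation.max())
    # keep the first piece under the old id, give the others fresh ids
    for lab in np.unique(split[split > 0])[1:]:
        next_id += 1
        corrected[split == lab] = next_id
    return corrected
```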

Parallel implementation

Our pipeline has been designed to efficiently scale to large data sets in the GB-TB range. Scalability is an important aspect in the context of connectomics. While a resolution of 5 nm is essential to allow for the identification of biological structures like vesicles or synapses, whole neuronal cells extend over several μm of brain tissue. In the following sections we describe the run-time performance and scalability of the current implementation. We use the Harvard Research Computing cluster
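
A generic single-machine sketch of this block-wise strategy is shown below: the volume is cut into overlapping sub-blocks, each block is processed independently in a worker pool, and the shared halos are where pairwise label matching between neighbouring blocks would take place. This is only an illustration of the general pattern under assumed names and parameters, not the cluster implementation described here; the per-block segmentation is left as a placeholder.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def iter_blocks(shape, block=(100, 1024, 1024), halo=(2, 64, 64)):
    """Yield overlapping sub-volume slices; the halo gives neighbouring blocks a
    shared margin in which their label maps can later be matched pairwise."""
    for z in range(0, shape[0], block[0]):
        for y in range(0, shape[1], block[1]):
            for x in range(0, shape[2], block[2]):
                yield tuple(slice(max(0, s - h), min(d, s + b + h))
                            for s, b, h, d in zip((z, y, x), block, halo, shape))

def segment_block(volume_path, sl):
    """Placeholder for the per-block work (2D segmentation plus fusion)."""
    block = np.load(volume_path, mmap_mode="r")[sl]   # read only this sub-block
    return sl, np.zeros(block.shape, dtype=np.uint32)

def segment_volume(volume_path, workers=8):
    shape = np.load(volume_path, mmap_mode="r").shape
    with ProcessPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(segment_block, volume_path, sl)
                   for sl in iter_blocks(shape)]
        results = [f.result() for f in futures]
    # pairwise matching of labels across the shared halos would go here
    return results
```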

Large-scale reconstruction results

We successfully used our pipeline to reconstruct neuronal processes in a 27,000 μm³ volume of brain tissue. This volume is more than 150 times larger than the manually annotated volume we used for quantitative evaluations, and it would take about 13,500 h to segment this volume manually, rendering a full quantitative evaluation of the large volume infeasible. To address this challenge and still provide a quantitative measure of the quality of our automatic reconstruction, we measure the number of

Conclusions

In this paper we address the automatic reconstruction of neuronal processes at nm resolution for large-scale data sets. We demonstrate state-of-the-art performance of our pipeline with respect to automatic dense reconstruction of neuronal tissue, and also for long-range reconstructions covering neuronal processes over many μm. The workflow is designed to minimize manual effort and to be easily parallelizable on computer clusters and GPUs, with most steps scaling linearly with the number of

Acknowledgment

The authors would like to thank Daniel Berger for providing manual segmentation, and Nancy Aulet for interactively annotating the training data. We would also like to thank Jan Funke for providing the Sopnet results, and Bjoern Andres for providing an efficient implementation of variation of information. This work has been partially supported by NSF Grants PHY 0938178, OIA 1125087, NIH Grant 2R44MH088088-03, Transformative R01 NS076467, the Gatsby Charitable Trust, Nvidia, Google, and the Intel

References (53)

  • Mishchenko, Y., 2009. Automation of 3D reconstruction of neural tissue from large volume of conventional serial section transmission electron micrographs. J. Neurosci. Methods.
  • Seung, H.S., 2009. Reading the book of memory: sparse sampling versus dense mapping of connectomes. Neuron.
  • Andres, B., et al. Segmentation of SBFSEM volume data of neural tissue by hierarchical classification.
  • Andres, B., Kröger, T., Briggman, K.L., Denk, W., Korogod, N., Knott, G., Köthe, U., Hamprecht, F.A., 2012b. Globally...
  • Arbeláez, P., et al., 2011. Contour detection and hierarchical image segmentation. IEEE Trans. Pattern Anal. Mach. Intell.
  • Bock, D., et al., 2011. Network anatomy and in vivo physiology of visual cortical neurons. Nature.
  • Boykov, Y., et al., 2006. Graph cuts and efficient N-D image segmentation. Int. J. Comput. Vis.
  • Boykov, Y., et al., 2004. An experimental comparison of min-cut/max-flow algorithms for energy minimization in vision. IEEE Trans. Pattern Anal. Mach. Intell.
  • Breiman, L., 2001. Random forests. Mach. Learn.
  • Cardona, A., et al., 2010. An integrated micro- and macroarchitectural analysis of the Drosophila brain by computer-assisted serial section electron microscopy. PLoS Biol.
  • Chen, C., Liaw, A., Breiman, L., July 2004. Using random forest to learn imbalanced data. Technical report, Department...
  • daCosta, N.M., et al., 2011. How thalamus connects to spiny stellate cells in the cat’s visual cortex. J. Neurosci.
  • Denk, W., et al., 2004. Serial block-face scanning electron microscopy to reconstruct three-dimensional tissue nanostructure. PLoS Biol.
  • Denk, W., et al., 2012. Structural neurobiology: missing link to a mechanistic understanding of neural computation. Nature Rev. Neurosci.
  • Fiala, J.C., 2005. Reconstruct: a free editor for serial section microscopy. J. Microsc.
  • Funke, J., Andres, B., Hamprecht, F.A., 2012. Efficient automatic 3D-reconstruction of branching neurons from EM data....