Volumetric feature extraction and visualization of tomographic molecular imaging
Introduction
Most if not all proteins in a cell are organized into cellular machines that are often built from several dozen individual proteins (Alberts, 1998). While certain cellular machines, such as the ribosome, are always built in only one, well-defined way, other cellular machines are expected to vary in their exact 3D structure while following a similar architectural principle. Cellular machines are dynamic, with transient addition or loss of protein components. Hence, two such machines can be expected to be similar in composition and architecture, but not necessarily identical in their 3D structure. Moreover, some of the most interesting cellular machines are too rare or too fragile to be isolated and purified by biochemical means, and they function only in their cellular context, requiring for example the integrity of the cytoskeleton, the plasma membrane, and extracellular matrix components. For such delicate yet biologically very important multi-protein complexes, electron tomographic imaging provides the only foreseeable way to obtain 3D structural information. All other structural techniques, such as spectroscopic, diffraction, or single-particle cryo-electron microscopic techniques, rely implicitly or explicitly on averaging over a large number of identical particles (Auer, 2000; Glaeser, 1999; van Heel et al., 2000). Electron tomography, in contrast, can provide 3D structural information on unique volumes such as whole cells. Although tomographic imaging is by no means a new technique (Hart, 1968; Hoppe et al., 1974), only recently has it received wider attention (Baumeister et al., 1999; Baumeister and Stevens, 2000; Frank, 1992; McEwen and Frank, 2001; McEwen and Marko, 2001), owing to progress in the automation of data acquisition (Dierksen et al., 1992; Koster et al., 1992), minimization of the electron dose during data collection (McEwen et al., 1995), and hardware improvements from electron microscope manufacturers.
Although still an expert technique, electron tomographic data collection is no longer the bottleneck, and user-friendly commercial packages for data collection are being offered.
While recording devices (CCDs) are becoming larger and data collection becomes faster, the bottleneck in this emerging field lies more and more in the visualization and interpretation of the tomograms. So why are tomograms so much harder to study and interpret? The answer may lie in the following co-mingled reasons. First, most tomograms exhibit a low signal-to-noise ratio, and straightforward averaging techniques cannot be employed to enhance the signal. Second, cellular machines do not reside in isolation but are embedded in their cellular context, densely surrounded by other proteins that may or may not directly interact with the machine, a situation also known as macromolecular crowding (Ellis, 2001). Third, one often does not know the exact composition and conformation of a cellular machine at the time of investigation. The poor signal-to-noise ratio usually observed in tomograms complicates automated feature extraction as well as visualization of the volume; hence, noise reduction is routinely employed as a pre-processing step. Segmentation is often necessary to obtain an unobstructed view of the machinery's architectural organization and to reduce the complexity of the scene enough to allow biological interpretation. Feature extraction is particularly challenging if the cellular machine of interest is in close contact with its cellular surroundings, and if there is no preconception of its 3D structure. In such cases, manual segmentation approaches are somewhat subjective and become less feasible even with the help of 3D data re-slicing along non-orthogonal angles to obtain a more favorable view (Harlow et al., 2001; Hessler et al., 1996; Kremer et al., 1996; Li et al., 1997; McEwen and Marko, 1999). Moreover, they are unlikely to keep up with the amount of data that can be generated by modern-day electron microscopes.
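To illustrate the kind of pre-processing involved, the sketch below applies a brute-force 3D median filter to suppress shot noise in a toy volume. This is a generic illustration in NumPy for brevity, not the paper's filtering scheme (which is implemented in C), and a production filter would use a separable or histogram-based algorithm.

```python
import numpy as np

def median_filter3d(vol, radius=1):
    """Brute-force 3D median filter over a (2r+1)^3 neighborhood.
    Illustration only; fine for small volumes."""
    r = radius
    padded = np.pad(vol, r, mode="edge")
    out = np.empty_like(vol, dtype=float)
    for z in range(vol.shape[0]):
        for y in range(vol.shape[1]):
            for x in range(vol.shape[2]):
                block = padded[z:z + 2*r + 1, y:y + 2*r + 1, x:x + 2*r + 1]
                out[z, y, x] = np.median(block)
    return out

# A flat volume with one shot-noise voxel: the median removes the outlier.
vol = np.ones((5, 5, 5))
vol[2, 2, 2] = 100.0
den = median_filter3d(vol)
print(den[2, 2, 2])   # 1.0 -- the outlier is suppressed
```

Unlike linear smoothing, the median does not blur the outlier into its neighborhood, which is why rank filters are popular for tomographic shot noise.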
Electron tomography may have its biggest impact in the emerging field of structural cell biology, whose goal is to visualize cellular compartments without prior anticipation of a machine's architectural organization. Hence, when exploring uncharted territory, we need a tool for interactive exploration of a 3D density data set. Section-by-section inspection of the raw volume, followed by segmentation and rendering, is a very time-consuming process and does not allow real-time data exploration and mining. Moreover, one may fail to recognize the architecture of a complex if one has to segment the volume one slice at a time. Rendering of whole tomograms is usually beyond the graphical capabilities of desktop computers, given typical tomogram sizes of 512 × 512 × 100 voxels. For fast interactive data exploration, it is desirable to have a visualization tool that allows simultaneous real-time, high-quality rendering of the whole tomogram at lower resolution for navigation, as well as of sub-volumes at full resolution for close inspection and analysis.
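The two-level scheme described above can be sketched as follows. The helper names `overview` and `subvolume` are hypothetical, and block-averaging stands in for whatever multi-resolution hierarchy a real viewer such as Volume Rover would maintain:

```python
import numpy as np

def overview(vol, step=4):
    """Low-resolution navigation view by block-averaging."""
    z, y, x = (s - s % step for s in vol.shape)   # crop to multiples of step
    v = vol[:z, :y, :x]
    return v.reshape(z//step, step, y//step, step, x//step, step).mean(axis=(1, 3, 5))

def subvolume(vol, center, half=16):
    """Full-resolution sub-volume around `center` for close inspection."""
    sl = tuple(slice(max(c - half, 0), c + half) for c in center)
    return vol[sl]

vol = np.random.rand(128, 128, 64)   # stand-in for a 512 x 512 x 100 tomogram
nav = overview(vol)                  # 32 x 32 x 16: 64x fewer voxels to render
roi = subvolume(vol, (64, 64, 32))   # 32 x 32 x 32 at full resolution
print(nav.shape, roi.shape)
```

The overview is cheap enough to re-render at interactive rates, while the region of interest retains every voxel of the original data.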
The ultimate aim is to interpret the densities obtained by tomographic imaging and segmentation using models of the protein components. Model building has been key to interpreting biological structures and has led to a level of insight not available from the electron density alone (Taylor et al., 1999). If the exact composition is known and the resolution is sufficient, protein structures can be fitted into the density maps, either manually using interactive 3D graphics programs (e.g., Jones et al., 1991) or semi-automatically (Volkmann and Hanein, 1999; Wriggers and Birmanns, 2001; Wriggers et al., 1999). Other approaches, such as template matching (Bohm et al., 2000), have been proposed for data exploration and analysis. The complexity of cellular 3D volumes requires some form of data reduction and simplification. Skeletonization (Lam et al., 1992; Zhou and Toga, 1999) is an additional way to simplify 3D data sets while retaining their characteristics, which is also important for comparing two complexes that are similar but not identical. Skeletons will be helpful in comparing two such cellular machines and describing their similarities and discrepancies.
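Template matching of the kind cited above (Bohm et al., 2000) can be illustrated, under heavy simplification, by exhaustive normalized cross-correlation; real implementations work in Fourier space and also search over template rotations:

```python
import numpy as np

def ncc_match(volume, template):
    """Slide a small template over every position in the volume and
    return the offset with the highest normalized cross-correlation."""
    t = (template - template.mean()) / (template.std() + 1e-12)
    tz, ty, tx = template.shape
    Z, Y, X = volume.shape
    best, best_pos = -np.inf, None
    for z in range(Z - tz + 1):
        for y in range(Y - ty + 1):
            for x in range(X - tx + 1):
                w = volume[z:z+tz, y:y+ty, x:x+tx]
                wn = (w - w.mean()) / (w.std() + 1e-12)
                score = (t * wn).mean()   # in [-1, 1]; 1 = perfect match
                if score > best:
                    best, best_pos = score, (z, y, x)
    return best_pos, best

rng = np.random.default_rng(0)
vol = rng.normal(size=(12, 12, 12))
tpl = vol[4:7, 5:8, 2:5].copy()   # "plant" the template inside the noise
pos, score = ncc_match(vol, tpl)
print(pos)                        # (4, 5, 2): the planted location
```

Normalizing both template and window makes the score invariant to local offsets and scaling of the density, which matters in tomograms with uneven contrast.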
The rest of this paper is organized as follows. Section 2 presents algorithms for fully automatic volumetric boundary segmentation and skeletonization, as applied to electron tomography imaging data. In Section 3, we present an interactive volumetric exploration tool (Volume Rover) that we have developed, which encapsulates implementations of the filtering and curve/surface feature extraction algorithms, and additionally uses multi-resolution interactive geometry and volume rendering for visualization. Finally, in Section 4, we exhibit results of our application of volumetric image processing and visualization to transmission electron tomographic 3D cell organelle data.
Gradient vector diffusion
It is sometimes more convenient to work on vector fields rather than on gray-scale intensities. A widely used vector field is the gradient vector field, which has been employed for image segmentation (Xu and Prince, 1998; Yu and Bajaj, 2002b; Yu and Bajaj, 2002c). We show in the present paper how gradient vector fields are also useful for skeleton extraction. The gradient vector field calculated from the original (and even the filtered) tomogram is often subject to noise; diffusing the vector field smooths out this noise before the field is used for segmentation or skeletonization.
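A minimal 2D sketch of the idea follows, with simple isotropic (Laplacian) relaxation standing in for the anisotropic diffusion actually used in the paper:

```python
import numpy as np

def diffuse_gradient_field(img, iters=50, mu=0.2):
    """Compute the 2D gradient field of `img`, then smooth each
    component by explicit Laplacian diffusion (isotropic sketch only;
    the paper's scheme diffuses anisotropically)."""
    gy, gx = np.gradient(img.astype(float))
    for _ in range(iters):
        for g in (gy, gx):
            lap = (np.roll(g, 1, 0) + np.roll(g, -1, 0) +
                   np.roll(g, 1, 1) + np.roll(g, -1, 1) - 4 * g)
            g += mu * lap   # explicit Euler step of the heat equation
    return gy, gx

# A noisy step edge: after diffusion the vectors near the edge point
# coherently across it instead of flickering with the noise.
rng = np.random.default_rng(1)
img = np.zeros((32, 32))
img[:, 16:] = 1.0
img += 0.2 * rng.normal(size=img.shape)
gy, gx = diffuse_gradient_field(img)
print(np.sign(gx[16, 10:22].mean()))   # 1.0: mean x-gradient points toward the edge
```

The step size `mu` must stay below 0.25 for this explicit scheme to be stable; larger values require an implicit solver.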
Surface and volume visualization
Typically, informative visualizations are based on the combined use of multiple techniques, including volume rendering, isocontouring, dynamic mesh reduction, global and local scalar and vector topology computation, feature extraction, etc. Informative visualization is thus a way to guide data-intensive computations to spatial and temporal locales of interest and significance. Informative visualization consists of two primary components: computation (rapid extraction of isosurfaces, reduced mesh representations, and volume renderings) and interactive display.
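Rapid isocontouring rests on visiting only the "active" cells that the isosurface actually crosses. A sketch of how such cells are identified (generic marching-cubes bookkeeping, not the paper's specific acceleration structure):

```python
import numpy as np

def active_cells(vol, isovalue):
    """Boolean mask of grid cells straddled by the isosurface: a cell
    is active when its 8 corner samples are not all on one side of the
    isovalue -- these are the only cells marching cubes must visit."""
    above = vol >= isovalue
    # The 8 corners of each cell, as shifted views of the corner grid.
    corners = [above[dz:dz + vol.shape[0]-1,
                     dy:dy + vol.shape[1]-1,
                     dx:dx + vol.shape[2]-1]
               for dz in (0, 1) for dy in (0, 1) for dx in (0, 1)]
    any_above = np.logical_or.reduce(corners)
    all_above = np.logical_and.reduce(corners)
    return any_above & ~all_above

# A radial distance field: only a thin spherical shell of cells is active.
z, y, x = np.mgrid[:32, :32, :32]
vol = np.sqrt((z - 16.0)**2 + (y - 16.0)**2 + (x - 16.0)**2)
mask = active_cells(vol, isovalue=10.0)
print(mask.sum(), mask.size)   # a small fraction of the 31^3 cells
```

Because the active fraction scales with surface area rather than volume, skipping inactive cells is what makes repeated isovalue changes interactive.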
Applications and results
In Fig. 6 we demonstrate the segmentation of densely grouped extracellular fibrillar proteins. Fig. 7 shows the boundary segmentation of intracellular actin filaments (an actin bundle). Fig. 8 shows the boundary segmentation of extracellular filaments being secreted from frog saccular sensory epithelium supporting cells. It is worth noting that the criterion for classifying critical points may differ from data set to data set: not all maximum (or minimum) critical points are necessarily classified as seeds.
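One simple seed-classification criterion of the kind discussed can be sketched as follows: keep only local maxima that also pass a data-dependent intensity threshold. The helper `local_maxima` and the cutoff are illustrative, not the paper's exact rule:

```python
import numpy as np

def local_maxima(vol, threshold):
    """Flag interior voxels that exceed all 26 neighbors AND a
    data-dependent intensity threshold -- one possible seed criterion;
    the cutoff must be tuned per data set."""
    center = vol[1:-1, 1:-1, 1:-1]
    is_max = np.ones(center.shape, dtype=bool)
    for dz in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                if dz == dy == dx == 0:
                    continue
                nb = vol[1+dz:vol.shape[0]-1+dz,
                         1+dy:vol.shape[1]-1+dy,
                         1+dx:vol.shape[2]-1+dx]
                is_max &= center > nb
    return is_max & (center > threshold)

vol = np.zeros((9, 9, 9))
vol[2, 2, 2] = 5.0      # strong peak: kept as a seed
vol[6, 6, 6] = 0.5      # weak peak: a local maximum, but rejected by the cutoff
seeds = local_maxima(vol, threshold=1.0)
print(np.argwhere(seeds))   # [[1 1 1]] in interior coordinates (offset by 1)
```

Changing `threshold` changes which maxima survive, which is exactly why such criteria cannot be fixed once and for all across data sets.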
Conclusion
We have presented algorithms for fully automatic boundary segmentation and skeletonization, and have successfully applied them to cell and molecular tomographic imaging data. These algorithms have been implemented in C and are part of our image processing library. We have also developed an interactive volumetric exploration and visualization tool (Volume Rover), which encapsulates implementations of the above filtering and curve/surface feature extraction algorithms, and additionally uses multi-resolution interactive geometry and volume rendering for visualization.
Acknowledgements
The research of C. Bajaj and Z. Yu was supported in part by NSF Grants ACI-9982297, CCR-9988357, and a grant from UCSD 1018140 as part of NSF-NPACI, Interactive Environments Thrust. The research of M. Auer was supported in part by NIH Grant DC00241. Thanks are also due to Anthony Thane of CCV for the continued development of the Volume Rover visualization tool, and to Dr. Ulrike Ziese and Bram Koster at the University of Utrecht for recording some of the tomographic tilt series data.
References
The cell as a collection of protein machines: preparing the next generation of molecular biologists. Cell (1998).
Euclidean skeleton via centre-of-maximal-disc extraction. Image Vision Comput. (1993).
Electron tomography of molecules and cells. Trends Cell Biol. (1999).
Macromolecular electron microscopy in the era of structural genomics. TIBS (2000).
Towards automatic electron tomography. Ultramicroscopy (1992).
Macromolecular crowding: obvious but underappreciated. Trends Biochem. Sci. (2001).
Segmentation of two- and three-dimensional data from electron microscopy using eigenvector analysis. J. Struct. Biol. (2002).
Noise reduction in electron tomographic reconstructions using nonlinear anisotropic diffusion. J. Struct. Biol. (2001).
Electron crystallography: present excitement, a nod to the past, anticipating the future. J. Struct. Biol. (1999).
A flexible environment for the visualization of three-dimensional biological structures. J. Struct. Biol. (1996).
Skeletonization via distance maps and level sets. Comput. Vision Image Understanding.
Automated microscopy for electron tomography. Ultramicroscopy.
Computer visualization of three-dimensional image data using IMOD. J. Struct. Biol.
Tinkerbell—a tool for interactive segmentation of 3D data. J. Struct. Biol.
Euclidean skeletons. Image Vision Comput.
Sterecon—three-dimensional reconstructions from stereoscopic contouring. J. Struct. Biol.
The relevance of dose-fractionation in tomography of radiation-sensitive specimens. Ultramicroscopy.
Electron tomographic and other approaches for imaging molecular machines. Curr. Opin. Neurobiol.
Hierarchic Voronoi skeletons. Pattern Recognit.
Moving object localization using a multi-label fast marching algorithm. Signal Process.: Image Commun.
Tomographic 3D reconstruction of quick-frozen, Ca2+-activated contracting insect flight muscle. Cell.
Quantitative fitting of atomic models into observed densities derived by electron microscopy. J. Struct. Biol.
A novel three-dimensional variant of the watershed transform for segmentation of electron density maps. J. Struct. Biol.
Using Situs for flexible and rigid-body fitting of multiresolution single-molecule data. J. Struct. Biol.
Situs: a package for docking crystal structures into low-resolution maps from electron microscopy. J. Struct. Biol.
Electron cryo-microscopy as a powerful tool in molecular medicine. J. Mol. Med.
3D RGB image compression for interactive applications. ACM Trans. Graph.
Visualization of scalar topology for structural enhancement. IEEE Visualization.