It's hard out there for a neuroanatomist — or at least for those who are working to reinvigorate a field that has come to be viewed as outdated and relatively 'unsexy'. “Very few people openly use the term 'neuroanatomy' at this stage for the kind of thing we're talking about, but frankly that's what it is,” says Stephen Smith of the Stanford School of Medicine in California. “It's just a kind of neuroanatomy that was impossible until now.”

High-resolution data from SBF-SEM enables long-range neuronal reconstructions, such as these cells from the inner plexiform layer of a rabbit retina. Credit: M. HELMSTAEDTER, K. L. BRIGGMAN, & W. DENK

Smith and his colleagues are part of a small community of scientists striving to pick up the mantle of Spanish neuroscience pioneer Santiago Ramón y Cajal by developing sophisticated methods for brain-wide mapping of synaptic connections and neural circuits. “Neuroscience got off to a very good start with the idea that wiring diagrams [are probably the key to understanding] brain function, but remarkably little has happened with that idea,” says Jeff Lichtman of Harvard University in Cambridge, Massachusetts.

This is now changing, and although these researchers may debate whether to call what they do 'connectomics' or 'circuit mapping' — or even 'neuroanatomy' — there's no question that ongoing strides in cell biology, imaging and computational analysis are bringing scientists closer to understanding the structural foundations of brain function.

Detail-oriented

For more than half a century, scientists have recognized the power that electron microscopy's still-unparalleled resolution could bring to the exploration of neural circuitry. Indeed, the successful assembly by Sydney Brenner and colleagues in the 1980s of the 302-neuron wiring diagram1 of the nematode worm Caenorhabditis elegans was a neuroscience tour de force made possible through reconstruction of transmission electron microscopy (TEM) images from serially collected tissue sections.

Sadly, that was pretty much it for the next 25 years, as the labour-intensive reconstruction process — which consumed more than 10 years' work from Brenner's team — was simply too demanding to deliver large-scale brain mapping. It wasn't until 2004 that Winfried Denk's team at the Max Planck Institute for Medical Research in Heidelberg, Germany, revitalized electron microscopy as a tool for high-resolution neuroanatomy.

In his serial block-face (SBF) imaging method2, samples are mounted on an ultramicrotome housed within a scanning electron microscope (SEM), which images the surface of the embedded tissue immediately before the diamond knife shaves a thin slice off the top, exposing the next layer for a subsequent round of imaging. This brings unprecedented capacity for automation to the imaging workflow, and it also sidesteps several other problems: for example, it allows data to be collected from ultrathin sections without the distortion that can arise when cut slices are imaged.
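In outline, the acquisition is a simple image-and-cut loop. The sketch below is a minimal illustration of that logic; the `scope` and `knife` objects and their methods are hypothetical stand-ins, not any real instrument's API.

```python
# Minimal sketch of the serial block-face imaging loop: image the exposed
# block face, shave off a slice, repeat. The `scope` and `knife` objects
# and their methods are hypothetical, not a real instrument API.

def acquire_volume(scope, knife, n_slices, slice_nm=25.0):
    """Collect an image stack from an embedded tissue block."""
    stack = []
    for _ in range(n_slices):
        stack.append(scope.scan_block_face())  # image the exposed surface
        knife.cut(thickness_nm=slice_nm)       # expose the next layer
    return stack                               # one image per z-step
```

Because each image is taken from the intact block face before the cut, successive images are inherently aligned, which is what removes the section-distortion problem.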

Denk's team has since learned how to give the samples a closer shave, boosting the accuracy of reconstruction. “In 2004, the thinnest slices were 40–45 nanometres, but now we're at 25 nanometres,” he says. Another major objective has been to tweak staining to optimize the labelling of neuronal processes. “We were working very hard on getting a staining technique that selectively stains the surface of cells and gets rid of the insides, so you don't get distracted by things such as mitochondria, nuclei or the endoplasmic reticulum,” says Denk.

This method is compatible with instruments from leading manufacturers such as FEI, of Hillsboro, Oregon, and Hitachi High-Technologies in Tokyo, and users can even buy integrated SBF-SEM systems, such as the 3View platform made by Gatan in Pleasanton, California, which is based on the Denk lab's design. “I think Denk's was really a landmark publication,” says Ben Lich, strategic marketing manager at FEI. At the same time, SBF-SEM is still limited in resolution by the amount of energy that can be pumped into samples safely. “Specimens are typically embedded in a resin, and under the influence of the electron beam they will crosslink and the material becomes harder to cut,” explains Lich. “If you put a lot of electrons into your material to create that image, you also do a lot of damage in terms of crosslinking and you cannot really cut it reliably.”

As an alternative, FEI is applying technology initially developed for the semiconductor industry, using a focused ion beam to precisely remove thin layers of tissue. “The advantage is that we can put a lot more charge into these blocks and create crosslinking,” says Lich, “because the focused ion beam can cut silicon or diamond — basically, we can cut anything with it.” Thus, greater resolution is possible, and FEI's DualBeam instruments can image voxels of 4 × 4 × 10 nanometres, relative to the present 20 × 20 × 25-nanometre resolution limit of SBF-SEM. However, SBF-SEM can image much greater volumes — on the scale of two to three orders of magnitude more — making focused ion beam and SBF complementary rather than competitive tools.
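The trade-off is easy to quantify from the figures quoted above: each SBF-SEM voxel spans roughly 60 times the volume of a focused-ion-beam voxel, so covering the same block at the finer resolution multiplies the voxel count, and the data volume, by about the same factor.

```python
# Back-of-the-envelope comparison of the two voxel sizes quoted above.
fib_voxel = 4 * 4 * 10       # focused-ion-beam voxel, in nm^3 (= 160)
sbf_voxel = 20 * 20 * 25     # SBF-SEM voxel, in nm^3 (= 10,000)

# Imaging the same block at focused-ion-beam resolution yields ~62x as
# many voxels (and that much more data) as SBF-SEM does.
print(sbf_voxel / fib_voxel)  # 62.5
```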

A beautiful mind

There are alternatives to electron microscopy — particularly for researchers interested in more than a static snapshot. “All the things I've studied up until now have been dynamic questions,” says Lichtman. “And you just can't do that with electron microscopy — you've got to kill it to look at it!”

Lichtman's solution was the Brainbow transgenic mouse3, which uses a site-specific DNA recombination system to randomize expression of multiple fluorescent protein genes in neurons, yielding intermediate colour combinations that distinguish each cell from its neighbours. With a broad portfolio of commercially available fluorescent proteins from which to choose — including the Living Colors proteins made by Clontech in Mountain View, California, and the TurboColors proteins from Evrogen in Moscow — Lichtman's group had many options. However, just a handful of colours proved sufficient to generate nearly 100 distinct labels. “All of the colours of the rainbow that we see are interpreted from three pigments in our retina,” he explains. “So we just inverted that, thinking that if we could just mix different amounts of three colours in different cells, we should be able to get all the visible colours of the rainbow.”
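The arithmetic behind that inversion is straightforward combinatorics: with three fluorescent proteins, each expressed at a handful of distinguishable intensity levels, the number of colour combinations grows as the level count raised to the power of three. The snippet below illustrates the idea; the level count is an illustrative assumption, not a figure from the Brainbow paper.

```python
# Illustrative combinatorics only: assume each of 3 fluorescent proteins
# can be expressed at `levels` distinguishable intensities.
def n_colours(n_proteins=3, levels=5):
    return levels ** n_proteins

print(n_colours())  # 125 -- on the order of the 'nearly 100' labels quoted
```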

Focused-ion-beam microscopy, as performed with instruments such as FEI's Helios NanoLab DualBeam, allows more energy to be used for imaging, improving the resolution. Credit: C. GENOUD, FMI/FEI

In other cases, more selective labelling is desirable, and scientists since Ramón y Cajal have pursued chemical and biological methods for exclusively targeting neurons that are functionally linked via active synapses. One promising method, being pioneered by scientists such as Lynn Enquist at Princeton University in New Jersey and Ed Callaway at the Salk Institute for Biological Studies in La Jolla, California, exploits natural infection patterns of neurotropic viruses for the fluorescent labelling of individual neural circuits4.

Callaway works with modified rabies virus, a pathogen that spreads so efficiently across mouse neurons that a single particle injected into the brain can prove lethal. His viruses are constrained via deletion of a key glycoprotein gene. “We preserved the ability to replicate and amplify, but provided a means to control the spread,” he says. “Deleting the glycoprotein gene also allows us to control the initial infection and target specific cell types.” Some investigators are applying viral tracing to entire networks of interconnected cells, but Callaway is mostly interested in targeting smaller 'neighbourhoods'. “When we get to the point where we can go into a live animal and target one cell and label every single input to that cell, that will be a huge advance,” says Callaway. “But it's clear we're far from labelling all of them. We're now labelling up to 100 inputs, but it should be 1,000.”

By definition, such methods lend themselves to 'sparse' mapping of a limited subset of neurons at a lower resolution than 'dense' strategies such as electron microscopy. But many scientists see this as a feature rather than a bug, enabling a more selective type of connectomics (see 'Whose map is it anyway?') that has the capacity to correlate circuit structure with function. For instance, such maps can be combined with neuronal activity sensors, such as the calcium-sensitive Cameleon indicator from Invitrogen in Carlsbad, California, or with light-activated ion-channel proteins, such as the channelrhodopsin and halorhodopsin molecules engineered by Karl Deisseroth's team at Stanford University in California5.

Indeed, even exquisite resolution may soon no longer be the sole domain of electron microscopy, as optical methods emerge that exploit clever workarounds to overcome the diffraction limit for fluorescence imaging, including stimulated emission depletion, stochastic optical reconstruction microscopy, photoactivation localization microscopy and structured illumination6. Lichtman and Smith are among those exploring the benefits and challenges of using such 'super-resolution' imaging to characterize circuits at the molecular level.

Filling in the details

Another important capability of fluorescence imaging is the ease with which multiple molecular targets can be labelled and readily distinguished — an essential consideration in building a useful map. “As soon as people have a black-and-white connectivity diagram,” says Smith, “they'll realize they're really stumped by not knowing what molecules are at a synapse, how the synapse is going to transmit, what its kinetics are going to be, and what's going to turn it on and off.”

Close pairing of electron microscopy and light microscopy represents a potential solution. In the array tomography technique developed by Smith's team7, for example, a resin-embedded sample is continually sliced by a diamond knife, with the sections sequentially collected on an adhesive surface, enabling them to be arrayed on a slide. These arrays can then be subjected to multiple rounds of immunofluorescence staining and, ultimately, prepared for SEM imaging. Combining imaging modalities enables data relating to expression of channels and receptors to be overlaid onto high-resolution circuit maps. “There are tricks that let us routinely work with 10–15 labels on individual specimens,” says Smith. “It's a far cry from the 20,000 genes that we'd have to image to fully unlock the brain, but it's a big step in the right direction.”

Ed Callaway's team is working towards methods to trace every cell that synapses on a target neuron (reprinted with permission from I. R. Wickersham et al. Neuron 53, 639–647; 2007).

Lichtman's team has been following a similar path, as it continues to refine its automated tape-collecting lathe ultramicrotome (ATLUM) method8. In ATLUM, an epoxy-embedded tissue sample is rotated continuously on a lathe, grazing against a diamond knife that pares away ultrathin sections, which are automatically collected on a continuous strip of adhesive tape. The resulting strips can be imaged by SEM and then retained indefinitely for further study. The next generation of this platform promises to deliver sections as thin as 20–25 nanometres along the z-axis, enabling near-seamless circuit reconstruction. “There is a diminishingly small amount of ambiguity at 30 nanometres, and I think most of those ambiguities would go away if we went down to 25 nanometres,” he says. Lichtman is also looking to integrate fluorescence imaging with ATLUM, perhaps via electron-microscopy-friendly labelling methods that target the tags used in Brainbow. “A purple axon in Brainbow might have two epitope tags, so that blue and red fluorescent proteins can both be stained by immunofluorescence with gold beads of different sizes,” he says. “So if you see an axon with equal numbers of big and little gold beads, then you know that it's a purple axon.”
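Decoding such a dual-tag label would amount to classifying each axon profile by its ratio of large to small gold beads. The toy sketch below shows the idea; the thresholds and colour names are illustrative assumptions, not part of any published protocol.

```python
# Toy decoder for the dual-size immunogold idea Lichtman describes: big
# beads mark one fluorescent-protein epitope, small beads the other.
# Thresholds and colour names here are illustrative assumptions.

def classify_axon(big_beads, small_beads):
    total = big_beads + small_beads
    if total == 0:
        return "unlabelled"
    frac_big = big_beads / total
    if 0.4 <= frac_big <= 0.6:
        return "purple"  # roughly equal numbers of both bead sizes
    return "mostly one tag"

print(classify_axon(big_beads=12, small_beads=11))  # -> purple
```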

Sebastian Seung is helping computers to recognize neurons by teaching them to look at images in the same way a human might. Credit: S. SEUNG

This would prove useful not only for integrating data from both modalities, but also as a way to 'fact-check' circuit traces. “I think that soon we'll be able to look at a cell body and predict what kind of glutamate receptors and kinases we're going to find in the terminals,” says Smith. “So if you follow your wire for millimetres, it better have a certain marker in the synaptic terminal at the end of that wire. If it doesn't, that means you made a mistake.”

Trace elements

After all the imaging is done, the fundamental problem remains of turning mountains of data into three-dimensional reconstructions and following the neuronal processes that weave through them.

Array tomography offers sufficient resolution to distinguish individual synapses, yet also allows direct molecular characterization of cells and connections. Credit: S. J. SMITH

Numerous commercial tools are available for user-guided neuronal tracing, including Imaris from Bitplane in Zurich, Switzerland, and Neurolucida, from MBF Bioscience in Williston, Vermont. Neurolucida was initially developed more than 20 years ago and is one of the most established tools for the manual charting of neurons from fluorescence or electron-microscopy images, offering a relatively straightforward interface for charting complex neuronal processes. “Using a motorized microscope stage and video camera, the software lets you map neuronal processes that are far larger than a single field of view,” explains chief scientific applications officer Geoff Greene. “When the user reaches the end of a dendrite, the software will automatically take you back to the x, y and z coordinates of the last unfinished branch, so you can travel down the alternate branch and trace it.” Denk's team has also developed an elegant tool, Knossos, for rapidly tracing neurons within their reconstructions, in which users sketch rudimentary 'skeletons' along the path taken by a given neurite.
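The 'last unfinished branch' behaviour Greene describes is, in essence, a depth-first traversal that keeps a stack of saved branch points. The sketch below captures that bookkeeping under a deliberately simplified node structure; it is not Neurolucida's actual data model.

```python
# Sketch of guided-tracing bookkeeping: follow one branch to its tip, then
# pop back to the most recent unfinished branch point. The Node class is a
# simplification, not Neurolucida's actual data model.

class Node:
    def __init__(self, xyz, children=()):
        self.xyz = xyz                  # (x, y, z) stage coordinates
        self.children = list(children)

def trace(root):
    pending = [root]                    # stack of unfinished branch points
    while pending:
        node = pending.pop()            # return to the last saved branch
        print("move stage to", node.xyz)
        pending.extend(node.children)   # remember the alternate branches

tip = Node((5, 2, 1))
fork = Node((3, 1, 1), [Node((4, 0, 1)), tip])
trace(Node((0, 0, 0), [fork]))
```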

Of course, manual tracing is wholly impractical for large-scale mapping, and the hunt is on for algorithms that can automatically define individual neurons within three-dimensional reconstructions, a process known as segmentation. Neurolucida features a module called AutoNeuron that strives to deliver rapid, machine-assisted tracing, but Greene acknowledges that this is still a work in progress. The fundamental problem is that near-perfect accuracy is required. “If you lose a wire somewhere along its length, then you lose all the connections downstream from where you lose it,” says Denk. “If I lose an axon halfway down its length, I could lose 5,000 connections.”
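Denk's arithmetic is easy to reproduce: because an axon's synapses sit on a branching tree, a single tracing error near the trunk discards everything downstream of it. The toy calculation below, with an assumed binary branching pattern, reaches thousands of lost connections within a dozen branch levels.

```python
# Toy illustration of error propagation: one break in a traced axon loses
# every synapse in the subtree below it. Branching and depth are assumed.

def downstream_synapses(branching=2, depth=12, synapses_per_segment=1):
    segments = sum(branching ** d for d in range(depth))
    return segments * synapses_per_segment

print(downstream_synapses())  # 4095 -- the scale of loss Denk describes
```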

The core of the segmentation problem, explains Sebastian Seung of the Massachusetts Institute of Technology in Cambridge, is getting computers to see the world as humans do. “To be able to trace the neurons, we have to know the boundaries of every object,” he says. “This is one of the first problems ever attacked in computer vision, back in the late 1960s — and yet today we still don't have reliable systems that do it.”

Partha Mitra thinks that scientists already have the tools they need to begin building a 'mesoscale' map of the mouse brain. Credit: P. MITRA

Rather than trying to establish strict neuron-recognition guidelines, Seung and his colleagues such as Dmitri Chklovskii at the Janelia Farm Research Campus in Ashburn, Virginia, are pursuing 'machine-learning' strategies that teach computers by example. “We have humans trace the boundaries, and create a training set,” says Seung, “then we have the computer learn how to imitate the human tracing.” This approach brings with it a number of complex challenges, including the development of metrics that enable unambiguous comparison of different tracing strategies and quantification of overall accuracy. These computer 'students' also need to know the limitations of their organic instructors. “When humans trace stuff, there's just a lot of jitter in the way they trace,” says Seung. “So we devised ways in which computers can be made to imitate humans but not take them too literally.”
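Stripped to its essentials, the strategy is supervised learning: for each pixel, predict whether a human tracer would call it a boundary. The sketch below is a deliberately minimal stand-in, a logistic-regression pixel classifier trained on raw image patches with synthetic data; real pipelines use convolutional networks and far richer features.

```python
import numpy as np

# Deliberately minimal version of learning-to-trace: fit a logistic
# regression that predicts, per pixel, whether a human would mark it as a
# boundary. Real systems use convolutional networks, not this.

def extract_patches(image, labels, size=5):
    r = size // 2
    X, y = [], []
    for i in range(r, image.shape[0] - r):
        for j in range(r, image.shape[1] - r):
            X.append(image[i - r:i + r + 1, j - r:j + r + 1].ravel())
            y.append(labels[i, j])     # 1 = human traced a boundary here
    return np.array(X), np.array(y)

def train(X, y, lr=0.1, steps=500):
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))   # predicted boundary probability
        w -= lr * X.T @ (p - y) / len(y)   # logistic-loss gradient step
    return w

rng = np.random.default_rng(0)
image = rng.random((32, 32))             # stand-in for an EM image
labels = (image > 0.8).astype(float)     # stand-in for a human trace
w = train(*extract_patches(image, labels))
```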

Even historically pure 'wet labs' are finding themselves joining the fight. Lichtman's lab routinely churns out hundreds of 16,000 × 16,000-pixel images, each of which can consume a gigabyte of space; as such, developing tools for data handling is now a daily fact of life. “In my hiring now, about a third of the people I'm looking at are people who are computer scientists,” he says, “and that would have been unthinkable five years ago.”
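The storage arithmetic behind that hiring shift is stark, as a rough calculation shows; the bytes-per-pixel figure below is an assumption that depends on bit depth.

```python
# Rough data-rate arithmetic for the image sizes Lichtman describes.
pixels = 16_000 * 16_000      # one mosaic image
bytes_per_pixel = 4           # assumption; depends on bit depth
per_image_gb = pixels * bytes_per_pixel / 1e9

print(per_image_gb)           # ~1.0 GB, matching the figure quoted
print(300 * per_image_gb)     # hundreds of images a day -> ~300 GB
```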

Thinking big

Ultimately, many of the obstacles that constrain large-scale circuit mapping boil down to maximizing throughput; for example, the rate of electron-microscopy imaging. “We'll either have to parallelize acquisition, or have a microscope that's much faster,” says Denk. “And I don't mean by a factor of five, I mean by factors of 100.”

Consistent quality of sample preparation is also a key problem, as all downstream analysis rests on this step. Accordingly, Smith's main nemesis these days is the dirt that can obliterate fine details from specimens, and his team is learning lessons in cleanliness from their neighbours in Silicon Valley. “I take much heart from the fact that today people can make a microprocessor chip with a billion transistors that each work perfectly for about $20 a chip,” he says.

In fact, many cite the semiconductor industry as a model for what will be needed for any large-scale 'Connectome Project': consistent application of an established set of optimized methods. “At some point we'll have all the tools lined up,” says Denk, “and then we'll decide to spend some real money on this to do the whole brain of some animal.”