Compressive light-field microscopy for 3D neural activity recording

Understanding the mechanisms of perception, cognition, and behavior requires instruments that are capable of recording and controlling the electrical activity of many neurons simultaneously and at high speeds. All-optical approaches are particularly promising since they are minimally invasive and potentially scalable to experiments interrogating thousands or millions of neurons. Conventional light-field microscopy provides a single-shot 3D fluorescence capture method with good light efficiency and fast speed, but suffers from low spatial resolution and significant image degradation due to scattering in deep layers of brain tissue. Here, we propose a new compressive light-field microscopy method to address both problems, offering a path toward measurement of individual neuron activity across large volumes of tissue. The technique relies on spatial and temporal sparsity of fluorescence signals, allowing one to identify and localize each neuron in a 3D volume, with scattering and aberration effects naturally included and without ever reconstructing a volume image. Experimental results on live zebrafish track the activity of an estimated 800+ neural structures at 100 Hz sampling rate. © 2016 Optical Society of America


INTRODUCTION
Brain tissue is a dense network of neurons that exchange information by means of electrical signals called action potentials. Understanding the mechanisms by which the brain processes information requires the ability to detect action potentials from many individual neurons simultaneously across large volumes of tissue. Engineered calcium-sensitive proteins [1] and voltage-sensitive dyes [2] enable optical detection of action potentials without disturbing the neuron's physiology. However, in deep layers of brain tissue, optical aberrations [3] and scattering are generally too strong to resolve individual neurons with conventional fluorescence microscopy, so more advanced methods are required. The most popular of these, two-photon microscopy [4], uses a nonlinear effect to restrict fluorescence excitation to a small spot or plane [5], which scans or hops through the volume point by point [6,7]. Light-sheet microscopy [8] achieves faster 3D acquisition by scanning in only one dimension, and confocal light-sheet microscopy [9] gives improved performance in strongly scattering tissue at the expense of photon efficiency and speed. Another variant implements light-sheet imaging with a single objective, giving practical benefits at the expense of spatial resolution [10]. All of these methods involve scanning, so frame rates are limited for large-volume imaging.
Light-field imaging [11-13] captures full-volume 3D information in a single shot. A light-field measurement includes both the position (x, y) and angle of incidence (θ_x, θ_y) of light rays reaching the sensor. In contrast, a traditional 2D sensor only captures the position of the rays. With 4D light-field information, it is possible to later adjust focus, change perspective, or retrieve 3D images in post-processing. Conveniently, any microscope can be converted into a light-field imager by making a simple and inexpensive hardware modification: a microlens array placed in front of the sensor. Traditional light-field imaging makes a ray-optics assumption that breaks down for microscopy, but can be corrected by wave-optics models [14,15]. The main advantages of light-field microscopy for 3D imaging are its fast capture speed (limited only by the camera's frame rate) and its photon efficiency, since all the photons that reach the image plane are captured. Unfortunately, these benefits come at the cost of a severe loss of spatial resolution, since the limited number of pixels on the sensor must be spread across four dimensions instead of two. Various attempts have been made to improve resolution through deconvolution [16] or additional measurements [17-19].
Light-field microscopy has already provided promising results for functional brain imaging [20], with 3D volume image reconstructions used to quantify the fluorescence levels of individual neurons. However, the number of pixels on the sensor limits the number of voxels that can be reconstructed with fidelity, and thus the number of neurons that can be monitored. Here, we incorporate sparsity-based algorithms that enable large volumes to be captured with high spatial resolution, provided that only a sparse set of neurons are active at once. We skip the step of explicitly reconstructing a 3D image and instead attempt to simply distinguish and localize each neural structure in 3D.
The prime advantage of our method over previous light-field microscopy work is that the data collection requirements scale not with the number of voxels to be reconstructed, but rather with the number of active neurons at a particular time. Hence, it may be possible in the future to use a conventional sensor for recording the activity of thousands or millions of neurons in real time. Since brain activity is not always sparse, we add a processing step that implements an independent component analysis on the raw video data to separate out temporally correlated neural activity. This results in spatially sparse components that satisfy our model, even for densely active neural experiments.
A major impediment for neural activity tracking is optical scattering. Digitally undoing the effects of multiple scattering in 3D is an ill-posed nonlinear problem that is difficult or impossible to solve [21]. Conventional light-field microscopy ignores scattering effects when reconstructing volume images, so deeper sources blur. Recently, we showed that phase-space (e.g., light-field) measurements can be robust to scattering when an appropriate wave-optical multislice forward model is used together with compressive methods for sparse (in 3D) samples [19]. Here, we extend these ideas to the problem of localizing neurons and quantifying 3D fluorescence in brain tissue.
We further exploit the fact that functional imaging need not reconstruct a visual rendering of the 3D shape of neurons, but only needs to distinguish and localize them. Our algorithm estimates the light-field signature for each neuron and maps its 3D location without ever producing a traditional image. In reality, because different structures of one neuron may be spatially distributed, our method may distinguish any active structures (e.g., neurons, axons, dendrites, astrocytes, glial cells, membranes, synaptic terminals). Aberrations and scattering effects are incorporated into the light-field signatures, so they do not degrade discrimination ability, though they may impact 3D localization accuracy. Video data can then be decomposed directly, skipping the error-prone 3D image reconstruction step and directly reaching the final goal: a quantitative measurement of fluorescence in each individual neural structure. The result is a task-based approach that is well suited for 3D in vivo functional brain monitoring. We demonstrate our method experimentally for zebrafish neural activity tracking with 800+ neural structures at 100 fps.

A. Experimental Setup
The experimental setup (shown in Fig. 1) is a fluorescence microscope that has been modified by introducing a microlens array at the imaging plane, with the sensor placed at the back focal plane of the array. By the principles of light-field imaging, both the position (x, y) and the direction of propagation (θ_x, θ_y) of light rays can be reconstructed from the 2D intensity captured by the sensor. This is because each microlens' sub-image corresponds to the local pupil plane (angular distribution of rays). Capturing a 4D light field, I(x, y, θ_x, θ_y), using a 2D sensor necessitates a tradeoff between spatial and angular sampling. Our microlens array design choice is controlled by two parameters. The pitch (microlens diameter) controls spatial resolution in the (x, y) plane; here, we use 150 μm pitch, which corresponds to d = 4 μm at the sample. The focal length of each microlens (f = 5.2 mm) controls the range of angles measured based on numerical aperture (NA_ML = 0.014), which is chosen to match the microscope output port for the case of a 40× water-immersion objective (NA = 0.5). Angular range and sampling will determine the axial resolution of the reconstruction.
We call the 2D intensity measurement at the sensor plane, I(u, t), a light-field measurement since it maps to the 4D light-field information at each time, t: I(u, t) = I(x, y, θ_x, θ_y, t). The sampling of the 4D light field on a 2D plane of pixels is given by u_x = N_p(⌊x/p⌋ + θ_x/(2 NA)) and u_y = N_p(⌊y/p⌋ + θ_y/(2 NA)), where u = (u_x, u_y) are the lateral coordinates at the sensor, p is the pitch of the microlens array (square lattice), and ⌊·⌋ is the floor function. We use a 4f relay with 1.7× magnification to record the signal I(u, t) on the sensor with a square of edge length N_p = 40 pixels under each microlens. This achieves good angular (and hence, axial) sampling by sacrificing lateral resolution, which will be improved in post-processing. The resulting field of view (at the sample) is a 200 μm square, with N_l = 50 microlenses in each direction, for a total of N_p²N_l² = 4 × 10⁶ pixels on the sCMOS sensor (Andor Zyla 4.2). Each acquired frame contains full-volume fluorescence data, with temporal resolution equal to the camera's frame rate, 1/δ_t = 100 fps.
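As an illustration of this sampling scheme, the mapping between a sensor pixel and its spatial and angular sample can be sketched in a few lines of Python/NumPy. The centered sign convention for the angular offset within each sub-image is our own illustrative assumption, not part of the published implementation.

```python
import numpy as np

# Geometry quoted in the text: 40 px per microlens, 50 microlenses per side,
# NA = 0.5 at the sample.
N_p, N_l, NA = 40, 50, 0.5

def pixel_to_lightfield(u_x, u_y):
    """Invert u = N_p*(floor(x/p) + theta/(2*NA)) for one sensor pixel.

    Returns the microlens index (spatial sample) and the ray angle
    (angular sample, centered convention) encoded by that pixel.
    """
    ix, iy = u_x // N_p, u_y // N_p                # which microlens (x, y)
    theta_x = ((u_x % N_p) / N_p - 0.5) * 2 * NA   # angle within the sub-image
    theta_y = ((u_y % N_p) / N_p - 0.5) * 2 * NA
    return (ix, iy), (theta_x, theta_y)

# A raw frame is N_p*N_l pixels on a side; reshape it into the 4D light field.
frame = np.zeros((N_p * N_l, N_p * N_l))
lf = frame.reshape(N_l, N_p, N_l, N_p).transpose(0, 2, 1, 3)  # (x, y, th_x, th_y)
```

The reshape expresses the same frame as a 4D array indexed by microlens and by angle, which is the form used in the space-angle plots of Fig. 2.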
For functional brain activity monitoring, calcium ions entering the cell through specialized channels change the conformational state of the genetically encoded calcium indicator, GCaMP6, so that the fluorescence of each neuron correlates with its action potential firing rate [1]. The timescale of calcium diffusion is faster than the response time of GCaMP6. The light-field measurement at a given time, I(u, t), can then be written as a linear superposition of the individual contributions of the N neurons. We decompose the measurement into a set of independent spatial components that change over time:

I(u, t) = Σ_j a_j(t) I_j(u),  (1)

where a_j(t) represents our ultimate goal: the time-dependent magnitude of fluorescence in the jth neuron. We call I_j(u) the corresponding light-field signature: the measurement that would result if only the jth neural structure were active. Each neuron has a unique light-field signature due to its unique location in 3D space. Conveniently, the signature naturally encodes any shape variations and effects of aberrations and scattering. We assume here that the light-field signatures do not change with time, i.e., every time the neuron fires, the light passes through the same path.
Our goal is to build up a dictionary of light-field signatures for each neuron, so that subsequent data frames can be decomposed into their constituent neural signatures. Of course, it is not feasible to sequentially activate each neuron and directly measure the dictionary; hence, we must calibrate the system using only uncontrolled data. We do this using a training video of light-field measurements in which all of the neurons of interest activate at least once during the video's capture time. This may be done either before the experiment of interest or as part of the actual experiment. From this video, we are able to extract the light-field signatures of individual neurons and their 3D positions, which together make up our dictionary. Since our extraction method requires spatial sparsity, an optional preprocessing step may be employed to exploit temporal correlations for generating sparse components from nonsparse video data.

B. Light-Field Signature Identification
Our training routine aims to extract the light-field signature for each neuron from video data of many neurons firing at random. We cannot assume that only one neuron is active in each frame, but we will assume, for now, that the active neurons in any one frame, I(u, t), are sparse in 3D (spatial sparsity condition). We can then use a compressed sensing approach to separate and localize individual neural structures in 3D from scattered and aberrated light-field measurements having multiple neurons active at once.
Every point source traces out a 2D hyperplane in the 4D light-field space (see Fig. 2). The (x, y) position where this plane crosses the (θ_x, θ_y) axes defines the source's lateral position, and the tilt of the plane defines its depth. Scattering repeatedly spreads information along the angle dimensions as light propagates, causing deeper sources to both tilt and spread. Neurons have fairly compact cell bodies, so calcium fluorescence in the cytoplasm is mainly confined to a 5 μm region around the nucleus, which behaves similarly to a point source. Using this forward model and searching for a sparse solution, it is possible to robustly estimate the 3D position of each active neuron.
We use an accelerated proximal gradient algorithm to solve the following l1-regularized optimization problem for c [19]:

ĉ(t) = argmin_{c ≥ 0} ‖I(u, t) − Î(u, t)‖² + μ‖c(·, t)‖₁,  (2)

where r_i = (x_i, y_i, z_i) discretizes the volume, μ is a hand-tuned regularization constant which enforces sparsity, and Î(u, t) = Σ_i A(r_i; u_x, u_y) c(r_i, t) is the predicted measurement, given by applying our forward model to the current estimate of the spatial distribution of sources in the sample, c(r_i, t). Our forward model, A(r_i; u_x, u_y), describes the light-field measurement from a point source located at position r_i in the scattering medium (refractive index n_r); its explicit form is given in [19], where z_0 = n_r/(√2 λσ) normalizes the distance traveled through the scattering medium. Here, the microscope is focused on the near-objective surface of the tissue, so z_i is the distance to this surface. We optimize over σ, which describes the amount of scattering (assumed to be homogeneous and Gaussian-distributed in angle). Our model is based on wave-optical diffraction and accounts for multiple scattering [19].
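For concreteness, the sparse recovery step of Eq. (2) can be sketched as a FISTA-style accelerated proximal gradient loop in Python/NumPy. The dense matrix A, the step size choice, and the fixed iteration count are simplifying assumptions for illustration; the published solver and the wave-optical forward model of [19] are more elaborate.

```python
import numpy as np

def solve_sparse_sources(A, I_meas, mu, n_iter=200):
    """FISTA-style accelerated proximal gradient for
    min_{c >= 0}  ||A c - I||_2^2 + mu * ||c||_1.

    A      : (n_pixels, n_voxels) discretized forward model
    I_meas : (n_pixels,) one light-field frame
    Returns the sparse, non-negative source weights c on the voxel grid.
    """
    L = 2 * np.linalg.norm(A, 2) ** 2 + 1e-12   # Lipschitz constant of the gradient
    c = np.zeros(A.shape[1])
    y, t = c.copy(), 1.0
    for _ in range(n_iter):
        grad = 2 * A.T @ (A @ y - I_meas)
        # prox of mu*||.||_1 plus the positivity constraint: shift, then clamp
        c_new = np.maximum(y - (grad + mu) / L, 0.0)
        t_new = 0.5 * (1 + np.sqrt(1 + 4 * t * t))
        y = c_new + (t - 1) / t_new * (c_new - c)
        c, t = c_new, t_new
    return c
```

On a small synthetic problem with a few active voxels, the loop recovers the support and amplitudes of the sources; in the paper's setting the columns of A would be forward-model responses of candidate voxel positions.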
We solve the optimization problem in Eq. (2) for c at each time frame and add the resulting light-field signatures and 3D positions to our dictionary. Each solution provides the sparse set of k neural structures active in that frame, along with their 3D positions. Each light-field signature I_j(u) corresponding to neuron j is a normalized, time-independent quantity, ∫ I_j(u) du = 1, so that the signatures, weighted by the coefficients a_j(t), reproduce the measured data. Figure 2 shows an example sparse frame from our zebrafish experiments (Fig. 4) and its extracted light-field signatures and 3D positions of detected neurons. Neurons near the surface have fairly compact light-field signatures, whereas deeper neurons spread and scatter to larger areas of the sensor, as expected.

B. Experiments
To demonstrate 3D detection and localization capabilities, we show experimental results for a simple test object with and without scattering. Our nonscattering sample is a static suspension of 1 μm fluorescent beads sparsely distributed in agarose gel (see Fig. 3). Assuming that the two-photon result is accurate, the error in our scheme is small enough to distinguish individual neurons and localize them. Next, we test our method with scattering tissue by repeating the same experiment after placing a 100 μm slice of mouse brain tissue on top of the sample. This emulates conditions that would normally prevent good depth reconstructions. To get a sense of the amount of scattering, we show 2D intensity images in Fig. 3(c). The scattered image is degraded, yet there is still structure in the 4D light-field measurement. Despite scattering, the median difference in detected positions between our algorithm and the two-photon data is 1.8 μm in the (x, y) plane and 15.5 μm along the z axis, slightly worse than the nonscattering case. For comparison, we show that traditional light-field refocusing (see Visualization 1) and threshold-based detection (see Visualization 2) fail. However, our method's detection and localization capabilities are not significantly affected by the presence of optical scattering in this case.

C. Independent Component Analysis
The detection and localization of neurons described thus far requires spatial sparsity in each video frame. Very large dictionaries of light-field signatures can be mapped out by videos with many frames, but only a sparse set may be active in each frame. More sparsity also leads to better localization (see Supplement 1). Unfortunately, our raw data is not always spatially sparse, in particular when the brain experiences periods of intense activity, such as responding to a strong stimulus.
To address this issue, we implement a preprocessing step on the video data: independent component analysis (ICA). The purpose of this optional first step is to take advantage of the temporal diversity of action potentials across many frames in order to computationally isolate time-correlated sets of neurons that can be spatially distinguished. With a large-enough training dataset, ICA provides spatially sparse light-field components. We represent this space-time separation as

I(u, t) = Σ_{n=1}^{N_k} f_n(t) I_n(u),  (7)

where N_k is the number of spatially independent components in the training dataset. The positive spatially independent components, I_n(u), are modulated in time by positive coefficients f_n(t). We consider a dataset of N_t frames, each containing a single-shot light-field measurement from the camera sensor, arranged into a matrix I with one column per frame and one row per pixel, i = 1…N_l²N_p². The coefficients of matrix I correspond to photon counts, and so must be non-negative (positivity constraint). For large values of N_t, it is fair to assume that, even despite strong activity correlations between neurons, the acquired dataset is a linear superposition of spatial components from one or more neurons. Equation (7) therefore becomes

I = S T,  (8)

with S being an N_l²N_p² by N_k matrix of positive spatial components, S_{i,n} = I_n(u_i), and T being an N_k by N_t matrix of positive temporal components, T_{n,t} = f_n(t). The number of individual components in the training data, N_k, is determined by decomposing matrix I [Eq. (8)] into singular values (see Supplement 1). Equation (8) seeks to reduce the dimensionality of the dataset by finding a locally optimal choice of S and T such that S, T ≥ 0, and is known as non-negative matrix factorization (NMF) [22]. NMF is NP-hard in general, but there exist polynomial-time local-search heuristics that are guaranteed to converge [23]. Traditional NMF is known to yield a sparse decomposition, yet does not exploit some of the characteristic properties of functional brain imaging. Here, in order to facilitate further processing of independent components, we slightly modify the standard NMF objective function. We implement a sparsity constraint known as l1 "Lasso" regularization [24,25] on the temporal components:

(Ŝ, T̂) = argmin_{S,T ≥ 0} ‖I − S T‖² + λ₁ Σ_{n,t} T_{n,t},  (11)

where λ₁ is a hand-tuned parameter which depends on the level of spontaneous neural activity in the calibration dataset. Our implementation uses an active-set approach to alternating non-negative least squares in order to find a locally optimal solution in polynomial time [26]. The optimal solution is given by Ŝ and T̂. Since temporal correlations between neurons' activity are very common, the independent component extraction does not guarantee identification of individual neurons but rather of sparse spatial components, Ŝ (see Visualization 3).
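A toy version of this l1-regularized factorization can be written with multiplicative updates, used here in place of the active-set alternating-NNLS solver of [26] purely for illustration; the regularizer enters only the update of the temporal factor T.

```python
import numpy as np

def nmf_l1_temporal(I, n_components, lam=0.1, n_iter=300, seed=0):
    """Sketch: factor I ~ S @ T with S, T >= 0 and an l1 penalty on the
    temporal components T, via simple multiplicative updates."""
    rng = np.random.default_rng(seed)
    n_pix, n_t = I.shape
    S = rng.random((n_pix, n_components))
    T = rng.random((n_components, n_t))
    eps = 1e-12
    for _ in range(n_iter):
        # the l1 term on T appears as a constant lam in the denominator
        T *= (S.T @ I) / (S.T @ S @ T + lam + eps)
        S *= (I @ T.T) / (S @ T @ T.T + eps)
    return S, T
```

For a dataset that truly is a non-negative low-rank superposition, the factorization recovers spatially sparse components up to permutation and scaling, which is all the subsequent signature-extraction step requires.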
To summarize, our method for building the dictionary of light-field signatures has two steps. First, training data video is acquired and decomposed into spatially sparse components with ICA. Second, each sparse component is decomposed into light-field signatures which represent the footprints of single neurons (Fig. 2). By acquiring enough data frames, it becomes possible to identify every active neuron in the volume of interest, as long as two neurons are not fully correlated in both space and time. Should the ICA step make the same neuron appear in multiple components, we identify and merge the signatures by evaluating the mutual distances between identified neurons and detecting overlaps within the typical size of one neuron (here we use a 4 μm mutual distance threshold). The method is naturally robust to optical scattering and aberrations, whose effects are included in the light-field signature. In fact, scattering may help to make each neuron's signature more distinguishable (Fig. 2).
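The merging rule can be sketched as a greedy pass over the extracted positions; the running-average position update and the sum-then-renormalize treatment of merged signatures are our own illustrative choices.

```python
import numpy as np

def merge_duplicates(positions, signatures, threshold=4.0):
    """Merge light-field signatures whose estimated 3D positions fall
    within one neuron's size (threshold in um; 4 um as in the text).

    positions  : (N, 3) estimated 3D positions
    signatures : (N, P) normalized light-field signatures
    """
    positions = np.asarray(positions, float)
    signatures = np.asarray(signatures, float)
    kept_pos, kept_sig, counts = [], [], []
    for p, s in zip(positions, signatures):
        for i, q in enumerate(kept_pos):
            if np.linalg.norm(p - q) < threshold:
                counts[i] += 1
                kept_pos[i] = q + (p - q) / counts[i]   # running mean position
                kept_sig[i] = kept_sig[i] + s           # accumulate signature
                break
        else:
            kept_pos.append(p); kept_sig.append(s); counts.append(1)
    kept_sig = [s / s.sum() for s in kept_sig]          # keep signatures normalized
    return np.array(kept_pos), np.array(kept_sig)
```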
We build the dictionary from the set of all extracted light-field signatures, each of which also comes with an estimated 3D position of the neuron. The dictionary of signatures for N identified neurons is denoted by {I_j(u), j = 1…N} or, equivalently, with a matrix representation, D, defined by D_{i,k} = I_k(u_i). Although the 3D position accuracy degrades with depth and scattering (see Supplement 1), the ability to distinguish neurons remains intact deep into the scattering tissue.

D. Neural Activity Reconstruction from Experimental Data
After the completion of the training step, the dictionary of light-field signatures can be used to efficiently decompose any single-shot measurement acquired by the light-field microscope (including the training data) into a linear positive combination of elements of the dictionary (see Supplement 1). The number of active neurons, N, in one frame should be smaller than the number of sensor pixels (N_p²N_l² = 4 × 10⁶). Even though a raw data frame may not necessarily show sparsity nor represent a sparse set of active neurons, the amount of fluorescence in each neuron, a_1, …, a_N, in each frame is the solution of a non-negative least squares optimization problem:

â(t) = argmin_{a ≥ 0} ‖I(u, t) − Σ_j a_j I_j(u)‖².  (12)

Experimental results for neural activity tracking with both the ICA step and the compressive detection and localization are shown in Fig. 4. A five-day-old Tg(NeuroD:GCaMP6f) zebrafish expressing GCaMP6 in the telencephalon is placed in the microscope. The generation of the transgenic zebrafish line will be the subject of a future publication [27]. The fish is live, awake, and immobilized in 2% low-temperature melting agarose. 40 independent components are extracted from 500 diverse frames in the calibration step. Each independent component is then separated into single-neuron signatures. The final dictionary contains a set of 802 light-field signatures, as well as an estimated position in 3D for each corresponding calcium source [see Fig. 4(b)]. We then record 10 s of spontaneous brain activity at 100 fps. The solution of Eq. (12) provides a quantitative measurement of fluorescence for all neurons in the field of view for which a light-field signature has been identified. Figure 4(a) shows color-coded lines representing the normalized change of fluorescence, dF/F, as a function of time, with the baseline taken as the mean fluorescence over the duration, T, of the video. We display activity in each neuron as a function of time in Fig. 4 and show a video reconstruction of the 3D activity in Visualization 4.
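The per-frame decomposition of Eq. (12) is a standard non-negative least-squares problem; a sketch using SciPy's solver follows, where the dictionary D and the frame are synthetic placeholders rather than measured data.

```python
import numpy as np
from scipy.optimize import nnls

def neural_activity(D, frame):
    """Decompose one light-field frame over the signature dictionary:
    a_hat = argmin_{a >= 0} || D a - I(u, t) ||_2.

    D     : (n_pixels, N) matrix with column k the signature I_k(u_i)
    frame : (n_pixels,) one raw light-field measurement
    Returns the per-neuron fluorescence magnitudes a_1..a_N for this frame.
    """
    a_hat, _residual = nnls(D, frame)
    return a_hat
```

Running this once per acquired frame yields the fluorescence time traces of Fig. 4 directly from the raw sensor data, with no volume reconstruction in between.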
Scattering is minimal in zebrafish (σ = 0.02), but motion is a limiting problem. In Fig. 4, artifacts appear at three time points when the zebrafish attempted to move, making the results inaccurate for the duration of muscular activity, ≈0.1 s (dark blue lines). Once the zebrafish returns to rest, the dictionary becomes valid again (the residual error drops; see Visualization 5). In this experiment, every neuron expresses GCaMP, and the resulting 802 detected active sources are most likely neurons but may also be, for example, dendrites whose size is comparable to the spatial resolution. In future applications, this issue can be solved by limiting the expression of functional markers of neural activity to a localized volume, for instance near the nucleus [28], or by capturing a two-photon structural scan before the experiment to disambiguate. Future work will explore ways to correct for motion by periodically recalibrating the dictionary or implementing motion-correction algorithms in light-field space. Further improvements may come from taking into account the specific temporal dynamics of calcium fluorescence [29] and accounting for inhomogeneous scattering.

ANALYSIS

A. Spatial Resolution
In traditional light-field microscopy, where volume image reconstruction is the goal, resolution can be obtained by measuring the size of the point-spread function [12] or the spatial bandwidth [14]. In brain tissue, resolution is further complicated by a dependence on scattering and density of neurons. Here, our method operates without ever reconstructing a volume image. In order to experimentally demonstrate our ability to resolve closely spaced neurons in scattering tissue, the experimental setup is slightly modified [see Fig. 5(a)]. We place a slice of mouse brain tissue (that does not express any particular fluorescence) above an artificial "neuron". The artificial neuron is implemented by a second objective focusing light (in the emission spectral range, λ = 532 nm) to a spot the size of a typical neuron cell body (≈10 μm). This source is controllably moved through the 3D space (x, y, z) in order to collect measurements through the tissue for each known source position. Two active neurons can be mimicked by adding the measurements from two positions of the source. This data is then input into our algorithm in order to compare and quantify localization performance. By removing the microlens array, we can also compare to conventional 2D fluorescence microscopy. The thickness of the slice is varied from 100 μm to 400 μm so as to mimic scattering from various depths.
We define the spatial resolution along a given axis (x, y, or z) as the minimum allowed separation distance, δ_x, δ_y, or δ_z, between two neurons for identification as two separate sources. This provides an upper limit on the number of neurons, N, that can be simultaneously observed in a volume, V, of brain tissue:

N = V / (δ_x δ_y δ_z).  (14)

Spatial resolution here will correspond to the ability to distinguish and localize neurons. At a minimum, two resolved neurons must produce different light-field measurements. In Figs. 5(c) and 5(d), we display the light-field measurements from two source positions (40 μm separation) simultaneously on a two-color scale, with one in green and the other in red. The two sources yield distinct light-field measurements, as expected, but also distinct 2D fluorescence images. Since our algorithm never reconstructs a 3D image, it can potentially be applied directly to 2D images without ever using the microlens array.
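As a numeric illustration of this bound, with assumed (not measured) per-axis resolution values:

```python
def max_neuron_count(volume, dx, dy, dz):
    """Upper bound from the text: N = V / (dx * dy * dz)."""
    return volume / (dx * dy * dz)

# Example: a 200 x 200 x 100 um^3 volume at an assumed 4 x 4 x 15 um resolution.
n_max = max_neuron_count(200 * 200 * 100, 4, 4, 15)  # roughly 1.7e4 neurons
```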
To compare these two situations, consider two sources, 1 and 2, and the corresponding measurements, I_1 and I_2. We compute a metric for distinguishability, D, given by

D = 1 − ∫ I_1(u) I_2(u) du / (‖I_1‖ ‖I_2‖).  (15)

By definition, D = 1 (fully distinguishable) when the recorded images give two disjoint sets of pixels, and D = 0 (not distinguishable) for the case of identical light-field signatures, as a direct consequence of the Cauchy-Schwarz inequality. Light-field measurements provide better distinguishability than 2D fluorescence images, particularly in the axial dimension. Figures 5(d) and 5(e) plot experimentally measured distinguishability as the separation distance between two sources is increased, for both the lateral and axial dimensions and with both light-field and 2D fluorescence data. To get a sense for how distinguishability relates to localization error in our algorithm, we recover 3D positions as the source is moved along y through 300 μm of scattering brain tissue, using the light-field data [see Fig. 5(f)].
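The stated limits (fully distinguishable for disjoint pixel supports, not distinguishable for identical signatures, via Cauchy-Schwarz) are realized by a normalized-correlation metric; the exact published normalization is treated as an assumption in this sketch.

```python
import numpy as np

def distinguishability(I1, I2):
    """D = 1 - <I1, I2> / (||I1|| * ||I2||).

    D = 1 when the two measurements occupy disjoint sets of pixels;
    D = 0 when they are identical (Cauchy-Schwarz equality case)."""
    I1 = np.asarray(I1, float).ravel()
    I2 = np.asarray(I2, float).ravel()
    return 1.0 - I1 @ I2 / (np.linalg.norm(I1) * np.linalg.norm(I2))
```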
In the absence of noise, accurate decomposition of a training dataset into light-field signatures is possible as soon as the light-field measurements associated with any two different neurons are not strictly identical. In theory, a single-pixel difference would be sufficient (D > 0). Practically, at full frame rate and in low-light conditions, a conservative condition for identification [30] is that the distinguishability, D, exceed the inverse of the signal-to-noise ratio (SNR) in the light-field measurements, D > 1/SNR. Here, at 100 Hz sampling rate and without significant photobleaching, fluorescence is excited with SNR ≈ 3, which sets the minimal separation distance in the focal plane, δ_x = δ_y, as well as along the optical axis, δ_z, which we defined as the spatial resolution. The experiment is repeated in several locations and for various thicknesses of brain tissue, with results summarized in Figs. 6(a) and 6(b). Overall, the light-field data provides ∼10× better localization resolution in all dimensions as compared to 2D fluorescence images parsed by the same algorithm. Light-field deconvolution methods [14] are expected to have performance somewhere in between these two.
To understand how our resolution metric relates to functional imaging capabilities, we deduce from Eq. (14) the maximal density of neurons, N/V, that can be resolved by light-field data versus 2D fluorescence data, then compare both to the density of neurons typically observed in layers I to IV of mouse brain (primary somatosensory cortex) [31] [see Fig. 6(c)]. This plot confirms experimental observations: 2D fluorescence microscopy is unable to identify neurons located below layer I in the barrel cortex. However, light-field data enables a 1000-fold improvement (tenfold improvement along each axis) in the neuron density that can be resolved as compared to 2D fluorescence, so it is a promising avenue toward neural activity tracking in all layers.

CONCLUSION
We have demonstrated compressive light-field microscopy as a path toward directly addressing the needs of neuroscience for accurate, quantitative measurement of fluorescence activity in the living brain. Our method enables single-shot capture of volumetric brain activity with neuron-scale resolution. We exploit both spatial and temporal sparsity in order to distinguish and localize individual neural structures in 3D. Because the light-field signatures are calibrated in situ, the strategy is robust to optical scattering and allows for real-time readout of brain activity without ever reconstructing a 3D image. Conveniently, it does not require careful alignment or calibration and can be implemented with inexpensive lenslet arrays. Since the data requirements scale with the number of active neurons in a single frame, not the number of voxels reconstructed, we believe that this method can scale to extremely large networks of neurons and be amenable to use with patterned stimulation, enabling functional activity mapping of the entire mouse brain cortex.
Funding. David and Lucille Packard Foundation; New York Stem Cell Foundation (NYSCF); Arnold and Mabel Beckman Foundation.

Fig. 1. Experimental setup and post-processing steps for samples tagged with engineered fluorescent proteins to track brain activity. A fluorescence microscope is fitted with a microlens array for light-field data acquisition. A training video of sparse frames is acquired or computed by ICA. (Step 1) Training: light-field measurements are processed to separate and identify individual calcium sources (neural structures) by their 3D position. The extracted "light-field signature" represents the measurement (including scattering and aberrations) that would be made if only that corresponding neural structure were active. (Step 2) Subsequent data frames are decomposed as a linear positive combination of the light-field signatures in the dictionary. The coefficients of this decomposition represent a quantitative measure of calcium-induced fluorescence in each identified neuron.

Fig. 2. Extracting light-field signatures and 3D positions of individual neural structures. (a) One of the 40 sparse light-field components. (b) Light-field slice along the red dashed line. Each distinct structure prescribes a line in the space-angle plot, whose position and tilt indicate lateral position and depth, respectively. Individual neural structures are distinguished and localized, shown here as different colors. (c) Overlay of extracted light-field signatures for multiple neural structures, each with a different color. (d) Estimated 3D positions for each of the neurons in this component.
The nonscattering sample is a static suspension (5.0 × 10³ μL⁻¹) of 1 μm fluorescent beads sparsely distributed in a 200 μm slice of agarose gel. As a proxy for ground-truth knowledge of the bead positions, we use two-photon microscopy to scan the imaging volume. We then record a single-shot light-field frame, which is shown in Fig. 3(a) along with several space-angle slices of the 4D light field. Each bead traces out a tilted line in the space-angle plot, as expected. After estimating the 3D bead positions using Eq. (2), we compare the results to our two-photon data [Figs. 3(d) and 3(e)]. Both detect the same set of beads, with a median difference in position of 1.3 μm in the (x, y) plane and 12.8 μm along the z axis.

Fig. 3. Single-shot experimental detection and 3D localization of sparsely distributed fluorescent beads, with and without scattering, as compared to two-photon microscopy scanned images. (a) Single-shot light-field measurement and several space-angle slices (along the red lines) without scattering. (b) Dataset recorded after placing a 100 μm slice of wild-type mouse brain tissue directly above the beads so as to introduce realistic scattering conditions without displacing the volume of interest. (c) 2D intensity images become blurred by scattering. (d) and (e) Comparison of localization capabilities for two-photon and our light-field microscopy, with and without scattering. (d) Estimated source positions are projected onto the (x, y) plane for visualization and (e) shown in 3D.

Fig. 4. Neural activity tracking in the telencephalon of a five-day-old live zebrafish restrained in agarose. (a) Light-field signatures were extracted for 802 neural structures, and 10 s of spontaneous activity was recorded at 100 Hz. (b) The normalized change of fluorescence, dF/F, is displayed for each neuron as a function of time. Motion is quantified by digitally tracking the first moment of the 2D image, with visible motion artifacts at t = 1.9 s, t = 4.9 s, and t = 9 s. (c) For each identified neuron, the position in 3D space is estimated, with color showing time-averaged fluorescence activity across both telencephalic lobes of the forebrain.

Fig. 5. (a) Modified experimental setup for spatial resolution measurements. Slices of mouse brain tissue with varying thickness are placed above an artificial source (created by a second microscope objective) that is intended to mimic the fluorescence in an active neuron. The artificial source can be precisely positioned at any location in 3D space. (b), (c) Comparison of distinguishability for light-field data versus 2D fluorescence data. Measurements for two source positions are displayed simultaneously with red and green color maps, for separation (b) in the (x, y) plane and (c) along the z axis. (d) Distinguishability of the two captured images as a function of separation distance between the two sources; light-field measurements outperform 2D fluorescence in the lateral plane and (e) along the optical axis. (f) We use our algorithm to estimate the position of the source through a 300 μm slice for controlled source displacements along the y axis in strong scattering.

Fig. 6. Spatial resolution analysis for our method, according to the minimal distance between two sources required for correct identification as separate neurons (a) in the lateral (x, y) plane and (b) along the optical axis through a given depth of mouse brain tissue. Fluorescence microscopy (green) and light-field microscopy (blue) are compared on the same scale and show a tenfold difference in performance along all axes. (c) Estimated maximum resolvable neuron density as a function of depth in mouse brain tissue, as compared to typical neuron density observed in the mouse barrel cortex.