Imaging Subcellular Dynamics with Fast and Light-Efficient Volumetrically Parallelized Microscopy

In fluorescence microscopy, the serial acquisition of 2D images to form a 3D volume limits the maximum imaging speed. This is particularly evident when imaging adherent cells in a light-sheet fluorescence microscopy format, as their elongated morphologies require ~200 image planes per volume. Here, by illuminating the specimen with three light-sheets, each independently detected, we present a light-efficient, crosstalk-free, and volumetrically parallelized 3D microscopy technique that is optimized for high-speed (up to 14 Hz), subcellular-resolution (300 nm lateral, 600 nm axial) imaging of adherent cells. We demonstrate 3D imaging of intracellular processes, including cytoskeletal dynamics in single-cell migration and collective wound healing for 1500 and 1000 time points, respectively. Further, we capture rapid biological processes, including trafficking of early endosomes at velocities exceeding 10 microns per second and calcium signaling in primary neurons.


INTRODUCTION
Cells continuously sense and adapt to environmental cues by modulating biochemical, mechanical, and organellar processes that span orders of magnitude in both time and space, and a critical challenge in cell biology is understanding how these cues ultimately alter phenotypic outcomes [1]. Fluorescence microscopy, due to its resolution, molecular specificity, and capacity to obtain dynamic information across populations of cells, remains pivotal to this effort [2]. However, quantitative subcellular imaging requires Nyquist sampling in both space and time [3], and many biological processes occur too rapidly to be imaged three-dimensionally throughout entire cells. For example, ciliate dynamics in the model organism Tetrahymena thermophila cannot be accurately tracked, even when imaged at 3.2 volumes per second [4]. Similarly, models for the control and locomotion of motor-driven transport remain incomplete [5], in part due to challenges in accurately measuring their rapid 3D dynamics in vivo.
To date, microscopy techniques capable of sensitively imaging intracellular features in 3D have been limited to volumetric image acquisition rates of ~3 Hz [4,6]. In part, this is due to bandwidth limitations of modern piezoelectric scanners that are used to rapidly reposition heavy microscope objectives. To mitigate these demands, several methods have been developed that operate in the absence of objective scanning. These include extended depth of focus, either through the introduction of spherical aberrations [7] or a cubic phase mask [8], oblique illumination [9,10], confocally-aligned oblique illumination [11], rapidly defocusing the detected wavefront with an electrically tunable lens [12,13], or remote focusing through wavefront engineering [14]. However, these methods typically sacrifice sensitivity (e.g., due to aberrations, incomplete use of the detection numerical aperture, and inevitably small pixel dwell times) and spatial resolution (usually on the order of a few microns), rendering them incompatible with imaging subcellular structures.
Another major challenge in improving the temporal resolution of fluorescence microscopy is that 2D image planes must be acquired serially to form a 3D volume [15]. To overcome the limitations of serial image acquisition, several optical strategies have been developed that enable simultaneous imaging of multiple planes throughout a 3D volume using refractive [16] or diffractive optics [17][18][19][20]. Nevertheless, in each case, the fluorescence signal imaged in each plane is reduced by the degree of parallelization, and most fluorescence is detected as out-of-focus blur. For example, when 3 planes are imaged simultaneously, 66% of the light in each plane is lost to adjacent planes as image blur. Further, diffractive systems suffer from additional losses due to the limited diffraction efficiencies of the beam-splitting and achromatic correction gratings (Note S1) [17]. Thus, to accommodate this drastic reduction in fluorescence intensity, such schemes inevitably require longer exposure times or increased illumination intensities, negating the advantages of parallelized image acquisition.
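The light budget of plane-splitting approaches can be made concrete with a short calculation. The sketch below is illustrative: the 0.667 grating efficiency is an assumed number, chosen only because it reproduces the 3- to 4.5-fold comparison between refractive and diffractive splitting discussed in the text.

```python
def in_focus_fraction(n_planes: int, diffraction_efficiency: float = 1.0) -> float:
    """Fraction of collected fluorescence that lands in focus on any one
    detector plane when the signal is split across n_planes simultaneously
    imaged planes. A refractive splitter has diffraction_efficiency = 1."""
    return diffraction_efficiency / n_planes

# Refractive 3-plane splitter: each plane keeps 1/3 of the light; the
# remaining ~66% appears on the other planes as out-of-focus blur.
refractive = in_focus_fraction(3)
# Diffractive splitters lose additional light in the beam-splitting and
# achromatic correction gratings (0.667 is an assumed, illustrative value).
diffractive = in_focus_fraction(3, diffraction_efficiency=0.667)

# pLSFM gives each plane its own light-sheet, so nothing is split:
advantage_refractive = 1.0 / refractive    # ~3-fold
advantage_diffractive = 1.0 / diffractive  # ~4.5-fold
```
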
Here we introduce a parallelized and photon-efficient 3D subcellular imaging scheme, which we refer to as Parallelized Light-Sheet Fluorescence Microscopy (pLSFM). For shallow sample volumes, pLSFM achieves 3-fold spatial parallelization without crosstalk between image planes, and is thus 3 to 4.5-fold less lossy than refractive or diffractive methods, respectively [16][17][18][19][20]. Unlike techniques that introduce aberrations [8,12], sacrifice numerical aperture [11], or rely on short dwell times [13,14], resolution and sensitivity are not compromised. This allowed us to image 3D subcellular processes in whole adherent cells at 14 Hz, including cytoskeletal dynamics, endosomal trafficking, and neuronal action potentials.

A. Parallelizing 3D Image Acquisition by Staggered Illumination and Detection
To achieve 3D parallelization, we tilt a cover slip mounted specimen and illuminate it with three laterally and axially displaced 1D Gaussian light-sheets (Figs. 1A and B). Because the light-sheets are staggered in sample space, fluorescence from each light-sheet can be imaged orthogonally in a light-sheet fluorescence microscopy (LSFM) format with a second objective, separated in image space with knife-edge mirrors, and independently detected on separate cameras (Figs. 1C, S1, and S2). The three planes are thus imaged independently, and a complete 3D volume is acquired by scanning the sample with a piezoelectric actuator through the three light-sheets. The illumination duty cycle for each 1D Gaussian light-sheet is 100%, which allows for both sensitive and delicate imaging of subcellular architectures [6].
Since image planes I and III are located 25 μm away from the nominal focal plane of the detection objective, spherical aberrations decrease the fluorescence intensity by ~25%. Although not used for the data shown in this manuscript, these aberrations and losses can be corrected optically (Fig. S3). Imaging is cross-talk free, and ~95% of the light is properly picked off by the knife-edge mirrors over a field of view of ~13 microns in the laser propagation direction (Note S2, Figs. S4 and S5). This corresponds to a height normal to the coverslip of ~8.5 microns for views II and III (view I is not fundamentally limited in its FOV), which is sufficiently large for imaging most adherent mammalian cell types. Imaging beyond this range is possible, albeit at the cost of a gradual increase in cross-talk and light loss. Thus, in its current implementation, pLSFM is optimized for parallelized 3D imaging of adherent cells and monolayers with elongated, thin morphologies.
The decreased image size afforded by the confined Y-dimension maximizes the image acquisition rate of scientific CMOS cameras (1603 frames per second for 128 pixels in the Y-dimension). Following registration of each sub-volume with fluorescent nanospheres (Fig. S6), the data from each camera is deconvolved, fused, and de-skewed, resulting in a ~3-fold increased image volume encompassing ~130 × 20 × 100 microns (Figs. 1D and E). The lateral resolution is close to diffraction limited (~300 nm, full width at half maximum, FWHM) for all three views after 10 iterations of Richardson-Lucy deconvolution (Fig. 1F and Table S1). The deconvolved axial resolution is ~600 nm FWHM for each view, which is determined by the thickness of the illumination beam waist and the optical transfer function of the detection objective (Fig. S7) [21]. The final 3D image (Figs. 1E and G) exhibits excellent optical sectioning, and individual image slices are free of out-of-focus image blur (Figs. 1H-K).
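The de-skewing step can be illustrated with a minimal, nearest-pixel sketch. The exact shear axis and magnitude depend on the system geometry; here we assume the scan axis makes 49 degrees with the detection axis (so axial sampling is 600 nm × cos 49° ≈ 393 nm, matching the Methods) and a back-projected pixel size of 0.1625 μm (6.5 μm camera pixels behind 40x magnification, an assumption, since any relay magnification would change this):

```python
import numpy as np

def deskew(stack, step_um=0.6, angle_deg=49.0, pixel_um=0.1625):
    """Shear-correct a sample-scanned stack (planes, y, x): shift each plane
    along x by the in-plane component of the scan step, step*sin(angle).
    Nearest-pixel only; production code would interpolate sub-pixel shifts."""
    nz, ny, nx = stack.shape
    shift_px = step_um * np.sin(np.deg2rad(angle_deg)) / pixel_um
    total = int(np.ceil(shift_px * (nz - 1)))
    out = np.zeros((nz, ny, nx + total), dtype=stack.dtype)
    for z in range(nz):
        s = int(round(z * shift_px))  # cumulative shear for this plane
        out[z, :, s:s + nx] = stack[z]
    return out
```
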

B. Sensitive Time Lapse Imaging of Cytoskeletal Dynamics
In theory, due to its light-efficient and spatially parallelized image acquisition, pLSFM improves 3D microscopy by enabling 3-fold longer dwell times for a given volumetric image acquisition rate. In practice, this factor is reduced to ~2.5-2.7 due to intentional overlap between the image sub-volumes. Assuming no overlap for simplicity, pLSFM can thus operate with 3-fold lower peak illumination intensities and 3-fold longer camera integration times compared to conventional serial 3D light-sheet acquisition. As peak illumination intensity is one of the principal determinants of photobleaching and phototoxicity [4], this provides gentler illumination without sacrificing signal-to-noise or overall imaging speed. This allowed us to image a migrating MDA-MB-231 breast cancer cell expressing GFP-Tractin for 1500 Z-stacks, comprising 306,000 individual images, without stereotypic signs of phototoxicity, which are often evident as alterations in cell morphology or initiation of cellular retraction (Figs. 2A-C, and Visualization 1) [4,22]. We also imaged vimentin dynamics in retinal pigment epithelial (RPE) cells for 1000 Z-stacks at the leading edge of a wound healing response (Figs. 2D-E, and Visualization 2). Due to the large field of view and uncompromised resolution, polarization of vimentin filament networks can be observed simultaneously in multiple cells as they coordinate collective cell migration throughout the monolayer.

C. Faster Volumetric Image Acquisition Rates without Increased Photobleaching
Alternatively, for a given camera framerate, the volumetric image acquisition rate of pLSFM increases linearly with the degree of parallelization. This allows fast timescale biological events, including filopodial dynamics in an RPE cell, to be imaged with high spatiotemporal sampling (Figs. 2F, G, and Visualization 3). Despite a 2.7-fold increase in volumetric image acquisition rate (~10% overlap between image sub-volumes), photobleaching is unaffected. To demonstrate this, we imaged microtubule plus tips (+TIPs) with EB3-mNeonGreen in a layer of confluent U2OS osteosarcoma cells (Fig. 2H, and Visualization 4), and compared imaging performance to traditional single plane LSFM using the same exposure duration and illumination intensity. To quantify photobleaching, EB3 comets were automatically detected and their intensities measured through time in 3D with u-track [23]. Indeed, pLSFM allowed us to achieve a 2.7-fold increase in the volumetric image acquisition rate, while maintaining the same level of photobleaching as single plane LSFM (Fig. 2I). Thus, our approach overcomes both technological (maximum piezo scanning speed) and photophysical (finite fluorescence photon flux) limitations of existing 3D microscopy, without compromising resolution or introducing out-of-focus image blur.

D. Endosomal Trafficking at 10 Microns per Second
To highlight the potential for rapid volumetric imaging, we used pLSFM to observe early endosomal trafficking in Rab5a-labeled cells [24]. Retrograde trafficking of early endosomal Rab5a-positive vesicles is carried out by cytoplasmic dynein, a motor protein which in vitro achieves a processive motion of ~1 micron per second [25]. In vivo, early endosomes have been observed to translocate in short but linear bursts with velocities greater than 7 microns per second [26]. However, these events have not been observed in 3D, since faithful measurement of active transport necessitates that temporal sampling increases with particle velocity, acceleration, and density [6]. In ChoK1 cells, imaged at 2.92 Hz for 500 time points, rapid vesicular dynamics were observed (Fig. 3A and Visualization 5). These included the fast translocation of vesicles from the basal to dorsal surface of the cell (Fig. 3B), which would otherwise be missed with two-dimensional widefield or confocal imaging.
This highlights the importance of imaging a cell in its entirety, as limitations in 2D imaging could lead to inaccurate biological conclusions. We further imaged vesicle translocation in an RPE cell for 400 time points at a volumetric image acquisition rate of 7.26 Hz (Fig. 3C, Visualization 6). At this volumetric image acquisition rate, using automated particle detection and tracking, we observed Rab5a-positive vesicles that exhibited retrograde transport with peak velocities beyond 10 microns per second (Fig. 3D). To our knowledge, such fast vesicular movements have never been observed in 3D, possibly due to limitations in the volumetric image acquisition rate. At this sample scanning speed, some oscillations and distortions became visually noticeable. However, 95% of the observed oscillations were below 50 nm in amplitude (Fig. S8), which is significantly smaller than the inter-particle distance (~450 nm) and therefore does not introduce ambiguities in particle tracking.
Biochemically, the activity and velocity of dynein is modulated through its association with adapter proteins and its cargo, perhaps pointing towards unknown effectors that 'supercharge' retrograde transport [5]. This clearly illustrates the potential of pLSFM to measure rapid intracellular dynamics and to study biophysical phenomena in situ at the single particle level. Further, this points to challenges in in vitro reconstitution systems, where critical or unknown components may be omitted.

E. Action Potentials in Primary Neurons Imaged at a 14 Hz Volumetric Acquisition Rate
Secondary messenger waves (e.g., Ca2+) propagate at even greater speeds through cells and organisms, thus also necessitating faster image acquisition rates for proper temporal resolution [27]. By overcoming the technical limitations associated with piezoelectric scanners (improved bandwidth at decreased scan amplitudes) and camera readout times (efficient use of CMOS parallelization), pLSFM is capable of volumetric imaging at 14 Hz, or ~2800 image planes per second. This enabled us to observe calcium signals in dissociated primary cortical neurons labeled with GCaMP6f over 400 time points, encompassing multiple neurons and their axonal and dendritic projections (Fig. 4A) [28]. Calcium waves propagated along dendrites (Fig. 4B) and dissipated asymmetrically in space and time (Fig. 4C). Because of the large field of view, individual firing events, evident as spikes in the intensity time traces, could be observed at sites distant from one another and could provide insight into dendritic multiplexing (Fig. 4D and Visualization 7). Indeed, this is true even in thick, overlapping regions, which is particularly important for quantifying signal propagation between synapses and is not possible with 2D imaging modalities (Figs. 4E and F, Visualization 7).

DISCUSSION
We have demonstrated a 3D parallelized imaging scheme that achieves for every image plane a performance that is comparable to a conventional single plane microscope. This has the important consequence that 3D imaging can be performed more delicately (i.e., with lower excitation intensities, thereby decreasing photodamage), with higher sensitivity, and with increased volumetric acquisition rates. Such improvements are necessary to image rapid subcellular processes with dimly labeled constituents, throughout the entire 3D volume of a cell. Our initial experiments revealed rapid vesicular dynamics that were previously unobservable in 3D, and point towards exciting new research opportunities. For example, compared to other 3D imaging modalities, Rab5a-vesicles were ~10 and 20-fold faster than EB3 comets and mitochondrial translocation events observed by Lattice LSFM [4] and structured illumination microscopy [29], respectively.
In principle, pLSFM can be performed with a greater number of light-sheets, allowing further improvements in parallelization. However, as light-sheets are positioned further away from the nominal focal planes of the excitation and detection objectives, aberration correction becomes increasingly important. Here, we observed a ~25% loss of peak intensity for cameras 1 and 3, which were positioned on opposite sides of the detection objective's nominal focal plane, each ~25 microns away. To minimize the effect that these aberrations had on image resolution, we deconvolved the data from each camera with its corresponding point-spread function. Since the aberrations are stationary, they can be corrected optically, which we demonstrated as a proof of principle with a deformable mirror (see Fig. S3). In future implementations, compensation of spherical aberrations could be incorporated directly into the relay optics (using similar principles as a correction collar on an objective). This is in direct contrast to previous parallelization methods, where ~66% of the remaining light is irrevocably lost to adjacent image planes as image blur (not accounting for additional losses by diffractive elements) [17].
The arrangement of the three light-sheets and camera planes in pLSFM was static and best suited for an image volume spanning 100 microns in the scan direction. While we found this arrangement well-suited for a wide range of adherent cell types, more flexibility in the spacing of the light-sheets could be beneficial. On the illumination side, light-sheets could be generated holographically with a spatial light modulator, which would allow effortless changes to the number of light-sheets, and their respective positioning. However, to maximize the detection light efficiency, refractive optics should be used to separate the different imaging planes. To accommodate variable beam positioning, the pick-up mirrors, as well as the remaining optical path would need to be mechanically adjusted in an automated fashion.
As described here, pLSFM is optimized for imaging adherent, mammalian cells in a shallow volume adjacent to a coverslip. Mechanical scanning of the coverslip, which we used to develop and demonstrate the governing principles of this method, can introduce vibrations into the sample at higher scanning frequencies. For example, frame-to-frame vibrations on the order of 20-50 nm were observed in the 7.26 Hz Rab5a data (Fig. S8). Furthermore, sample scanning becomes rate-limiting at volumetric image acquisition rates >14 Hz for piezoelectric actuators. At such fast acquisition rates, the piezo actuator response is expected to become nonlinear and distort the image volumes in the scan direction. In principle, greater scan rates could be achieved with more-advanced piezo control systems [30], but agitation of the sample and excitation of vibrational eigenmodes in the sample stage could become limiting. As such, we envisage replacing the mechanical cell movement with an optical scan of the light-sheets along the coverslip, thus keeping cells stationary [6]. To improve spatial resolution, our parallelization approach could be advantageously combined with lattice LSFM [4]. In fact, parallelization could help speed up the structured illumination mode in lattice LSFM, which so far has not been widely adopted for imaging rapid cellular dynamics due to the higher number of raw images needed for reconstruction. pLSFM could also be adapted to image larger 3D environments, including synthetic hydrogels and reconstituted extracellular matrices, by scanning staggered light-sheets in their propagation direction with selective fluorescence detection [31]. In such a scheme, some out of focus cross talk from adjacent image planes will occur, which could be suppressed with two-photon excitation [32,33].
In summary, pLSFM provides parallelized 3D fluorescence microscopy without increasing the illumination burden, or reducing the detected photon flux. As an important consequence, pLSFM greatly improves imaging speed without a concomitant increase in photobleaching or phototoxicity. Furthermore, the high duty-cycle of illumination at each plane (in contrast to digitally scanned methods), provides excellent sensitivity. Thus, pLSFM offers an unparalleled combination of gentle illumination, resolution, speed, and sensitivity. Together, these attributes make pLSFM the ideal choice for imaging fast timescale subcellular events in a complete, three-dimensional perspective.

A. Microscope Layout
An optically pumped semiconductor laser (Coherent, Obis 488 LX) is spatially filtered with a 30-micron pinhole and telescopically expanded (ThorLabs, BE05M-A) to a beam size of 9 mm (1/e²). Laser intensity is controlled with the manufacturer-provided software (Coherent Connection), and shuttering is achieved by direct digital modulation of the laser driver after preconditioning of the FPGA-derived trigger with a scaling amplifier (Stanford Research Systems, SIM983 and SIM900). The laser light is serially directed through two nonpolarizing beam-splitters (ThorLabs, BS019 and BS013), resulting in 3 beams of approximately equal power (30:35:35). Each individual beam is truncated with a variable aperture slit (ThorLabs, VA100C), focused to a 1D Gaussian beam with a cylindrical achromatic doublet (ThorLabs, ACY254-150-A), reflected with a knife-edge right-angle prism mirror (ThorLabs, MRAK25-P01), and imaged into sample space with an infinity-corrected tube lens (ThorLabs, ITL200) and a 40x water-dipping objective (Nikon Instruments, NA = 0.8). Each variable slit was adjusted such that all light-sheets have the same beam width, as verified by measuring the axial FWHM of fluorescent nanospheres. Truncation of the beam with the variable slit apertures and the small waist size in the sample plane make the beam differ from a true Gaussian beam; nevertheless, the light-sheets share some characteristics of Gaussian beams (Fig. S7). Fine adjustments of the beam intensity of each light-sheet were achieved with neutral density filters. All three beams intersected with the sample, mounted at 49 degrees, in a custom stainless steel environment chamber. Gross sample positioning is achieved with a 3D stage (ThorLabs, PT3). However, we recommend using a stage with lockable positioning, as this improves long-term stability and decreases the amplitude of external vibrations (data not shown).
Fluorescence was collected at 90 degrees with an identical 40x water-dipping objective and imaged with an infinity-corrected tube lens (ThorLabs, ITL200), creating a series of staggered image planes. The first two focal planes are reflected with knife-edge right-angle prism mirrors and relayed to separate sCMOS cameras (Hamamatsu, Flash 4.0). Sample scanning is performed with a 100-micron piezo stage (Physik Instrumente, P-621.1CD) with a step size of 600 nm. Given the 49-degree rotation of the stage relative to the optical axis, this resulted in an axial step size of 393 nm. Despite the high bandwidth of the piezo (unloaded resonant frequency of 800 Hz), the flyback time between Z-stacks is rate-limiting. To decrease the bandwidth demands of the piezo, we implemented a bidirectional scan mode where images are acquired in both the forward and reverse scan directions. At very fast acquisition rates, the response of the piezo becomes slightly phase delayed relative to the input triangular waveform. Since this phase delay is deterministic, we delayed the camera trigger pulse train by an equivalent amount such that image acquisition remained well synchronized with the actual piezo scan. The duration of these delays depended upon the camera integration time, and were 2, 2.5, 2.7, 2.8, 3.1, 3.0, and 4.5 ms for camera integration times of 20, 10, 7.5, 5, 2, 1, and 0.5 ms, respectively.
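The scan geometry and trigger timing described above can be made concrete with a short sketch. The delay table copies the values from the text; the function names and the simple "constant frame interval" timing model are our own assumptions (the real system is driven by an FPGA):

```python
import numpy as np

# Measured trigger delays from the text: camera integration time (ms) -> delay (ms)
PHASE_DELAY_MS = {20: 2.0, 10: 2.5, 7.5: 2.7, 5: 2.8, 2: 3.1, 1: 3.0, 0.5: 4.5}

# Axial sampling: a 600 nm stage step at 49 degrees to the optical axis
# projects to 600 * cos(49 deg) ~ 393 nm along the detection axis.
axial_step_nm = 600 * np.cos(np.deg2rad(49))

def camera_trigger_times(n_frames, integration_ms):
    """Trigger timestamps (ms) for one scan sweep, delayed to match the
    piezo's deterministic phase lag at this integration time."""
    delay = PHASE_DELAY_MS[integration_ms]
    return delay + integration_ms * np.arange(n_frames)
```
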

B. Distributed acquisition system
The distributed data acquisition system consists of a master computer and two remote nodes communicating via a dedicated 10G switch (NETGEAR, XS708E). To accelerate data transfer between the nodes and the master computer, each computer is configured such that its 10G Ethernet card operates in a static link aggregation mode with two Ethernet cables. The nodes and the master computer each house a FireBird PCI Express Camera Link frame grabber, allowing maximum frame rate imaging with Hamamatsu Flash 4.0 cameras. The master computer also houses the field-programmable gate array (National Instruments, PCIe-7852R) for analog and digital control of the microscope, controlled by custom LabView (National Instruments) software. The distributed acquisition system is compatible with four remote nodes, allowing simultaneous imaging with 5 cameras.
The remote nodes are 64-bit Dell Precision 5810 machines operating Windows 8.1 and equipped with an Intel Xeon Processor E5-1620 (4C, 3.5 GHz), 64 GB DDR4 RAM, an NVIDIA Quadro NVS 310 512 MB video card, a 500 GB Serial-ATA (7,200 RPM) hard drive, and an Intel X540-T2 10GbE NIC dual port network card. The master node is a 64-bit Dell Precision 5810 machine operating Windows 8.1 and equipped with an Intel Xeon Processor E5-1650 (6C, 3.5 GHz), 128 GB 2133 MHz DDR4 RAM, an NVIDIA Quadro K420 1 GB video card, 4x 512 GB solid state drives operating in a software-based RAID 0 configuration via an integrated Intel AHCI chipset SATA controller, and an Intel X540-T2 10GbE NIC dual port network card.

C. Image Deconvolution
For image deconvolution, PSFs of isolated 100 nm fluorescent beads (Polysciences Inc., Fluoresbrite Plain YG) were acquired for each camera in both the forward and reverse scans. These PSFs were then individually applied to deconvolve each image volume, depending upon the scan trajectory, using a Richardson-Lucy algorithm with 10 iterations implemented in Matlab. The PSF was kept in its raw form (skewed) and the deconvolution was performed on the raw data (before de-skewing and stitching).
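The Richardson-Lucy scheme can be sketched in a few lines. This is a generic implementation for illustration only (the actual processing used per-camera, per-scan-direction measured PSFs in Matlab); the function name and parameters are ours:

```python
import numpy as np
from scipy.ndimage import convolve

def richardson_lucy(image, psf, iterations=10, eps=1e-12):
    """Minimal Richardson-Lucy deconvolution (10 iterations, as in the text).
    Works for 2D or 3D arrays; psf is normalized to unit sum."""
    psf = psf / psf.sum()
    psf_flipped = np.flip(psf)  # mirrored PSF for the correction step
    estimate = np.full(image.shape, image.mean(), dtype=float)
    for _ in range(iterations):
        blurred = convolve(estimate, psf, mode="reflect")
        ratio = image / np.maximum(blurred, eps)  # data / model
        estimate *= convolve(ratio, psf_flipped, mode="reflect")
    return estimate
```

A sanity check of the update rule: with a delta-function PSF the algorithm must return the input image unchanged after the first iteration.
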

D. 3D Image Registration and Stitching
To register and stitch the full image volume, the relative volume positions, orientations, and scale are calibrated using dense arrays of surface-immobilized 200 nm beads (Polysciences Inc., Fluoresbrite Plain YG) imaged with the full 100 μm scan range of the sample piezo. This results in ~20 μm of overlap between the image volumes acquired with cameras 1 and 2, and cameras 2 and 3. Subsequently, cameras 1 and 3 are registered to camera 2 with Matlab (MathWorks). To ensure convergence of the estimators, the registration proceeds in three steps: a) computing the axial distance between the central volume and volumes 1 and 3, b) estimating the lateral and scaling components of the affine transform, and c) estimating the rotational component of the affine transform. Each volume is then intensity scaled to the same median intensity, and the affine transforms are combined with a deskewing transform and applied in a single image transformation to cameras 1 and 3. For camera 2, only the deskewing transform is applied. In regions with overlap between the image volumes, the average intensity was kept. These transforms were then applied to deconvolved cellular data, which were imaged with a 40 μm scan range and ~4 μm of overlap. The Matlab scripts used for data processing can be obtained by contacting the corresponding author. Unless otherwise stated, no photobleaching correction was used.
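The key design choice here, composing the registration affine with the deskew so the volume is interpolated only once, can be sketched with homogeneous 4x4 matrices. The translation-only "registration" below is illustrative (the calibrated transform also carries scale and rotation), and all numbers are placeholders:

```python
import numpy as np

def shear(zx_per_plane):
    """Deskew as a shear: x += zx_per_plane * z, for coordinates [z, y, x, 1]."""
    m = np.eye(4)
    m[2, 0] = zx_per_plane
    return m

def translation(dz, dy, dx):
    """Pure translation in homogeneous coordinates."""
    m = np.eye(4)
    m[:3, 3] = [dz, dy, dx]
    return m

# Registration of camera 1 or 3 to camera 2 (illustrative translation)
# composed with the deskew; by associativity, applying the combined matrix
# is identical to applying the two transforms sequentially, but the image
# data only pass through one interpolation step.
single_transform = shear(2.79) @ translation(0.0, 1.5, -3.0)
```

In practice the combined matrix would be handed to a resampler such as scipy.ndimage.affine_transform (which expects the inverse, output-to-input, mapping).
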

E. Image Analysis and Evaluation of Relative Detection Efficiency of Cameras
Action potentials were identified by summing all voxels over a small 3D region of interest on undeconvolved, but deskewed, data. Action potentials in Visualization 7 were highlighted by performing a temporal FFT on the data. To this end, the low-frequency components from 0-0.1 Hz were set to zero in the temporal frequency domain. In addition, the highest frequency component in the FFT (the Nyquist frequency) was set to zero, which removed some spurious pixel artefacts that arose in the backwards FFT and damped oscillations. Since the frequency step size is small (0.036 Hz), this causes only minor low-pass filtering of the data (the highest detectable frequency was reduced from 7.19 Hz to 7.15 Hz). This high-pass filtered data, which highlights rapidly occurring events such as action potentials, was pseudocolored and superimposed with data that was not high-pass filtered. Importantly, for the analysis of the data shown in Fig. 4, no such Fourier filtering was applied; instead the raw data was used. The temporal filtering was solely employed to aid the viewer in Visualization 7 by better highlighting the high-frequency events.
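The temporal filtering amounts to zeroing the 0-0.1 Hz bins and the Nyquist bin of a per-voxel temporal FFT. A minimal sketch (array layout and names are ours; the original processing was not necessarily implemented this way):

```python
import numpy as np

def highpass_traces(traces, dt, f_cut=0.1):
    """Zero temporal frequencies at or below f_cut (removing DC and slow
    baseline) and the Nyquist bin of a (time, ...) array, then invert the
    FFT. Mirrors the filtering used to highlight action potentials."""
    spec = np.fft.rfft(traces, axis=0)
    freqs = np.fft.rfftfreq(traces.shape[0], d=dt)
    spec[freqs <= f_cut] = 0   # suppress 0 .. f_cut Hz
    spec[-1] = 0               # zero the Nyquist bin
    return np.fft.irfft(spec, n=traces.shape[0], axis=0)
```

For a 400-time-point series at 14 Hz, the frequency step is 14/400 = 0.035 Hz, matching the 0.036 Hz step quoted above to rounding.
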
The light loss due to spherical aberrations on cameras 1 and 3 was measured relative to camera 2 by illuminating a solution of fluorescein with a single focused beam originating from the excitation objective. The detection objective was translated until it was centered and in focus on each camera, and the fluorescence intensities were compared. To find the nominal focal plane (which must coincide with camera 2), all three cameras were translated axially and the Z-position of the detection objective was adjusted accordingly. This was repeated until losses due to spherical aberrations on cameras 1 and 3 were equivalent. The observed intensity losses for cameras 1 and 3 of ~25% are in good agreement with those reported below in Section 4J regarding adaptive optics.
Rab5a-associated vesicles were automatically detected and tracked with a 3D version of u-track, as previously reported [6,31]. Brownian and directed motion were modeled with multiple Kalman filters. Actively transported Rab5a vesicles were identified by a high diffusion coefficient (>500 μm²/s) and lifetime (>2.75 s). The highest measured velocity amounted to 10.58 μm/s. We manually inspected the particle movement of the corresponding track in the raw data and confirmed the consistency of the track as well as the reported velocity.
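Once tracks are available, the reported peak velocity is a simple readout. A sketch, assuming tracks are given as (N, 3) arrays of x, y, z positions in microns at a fixed frame interval (the Kalman-filter tracking itself is u-track's job and is not reproduced here):

```python
import numpy as np

def peak_speed(track_um, dt_s):
    """Peak frame-to-frame 3D speed (um/s) of one track: largest Euclidean
    step between consecutive positions, divided by the frame interval."""
    steps = np.diff(track_um, axis=0)
    return np.linalg.norm(steps, axis=1).max() / dt_s
```

At the 7.26 Hz volumetric rate, a single 1.5 μm step between volumes already corresponds to ~10.9 μm/s, illustrating why such velocities are invisible at slower acquisition rates.
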

F. Animal Care
Rat primary neurons were obtained in accordance with protocols approved by the University of Texas Southwestern Medical Center Institutional Animal Care and Use Committee (IACUC).

G. Cell Culture and Labeling
Male and female embryonic day 18 (E18) primary cortical neurons were prepared from timed pregnant Sprague Dawley rats (Charles River Laboratories, Wilmington, MA) as previously reported [34]. Embryonic cortices were harvested, neurons dissociated, and plated on 5 mm coverslips coated with poly-D-lysine at a density of ~7,000 neurons per coverslip. Neurons were cultured in completed Neurobasal medium (Gibco 21103049) supplemented with 2% B27 (Gibco 17504044), 1 mM glutamine (Gibco 25030081), and penicillin streptomycin (Gibco 15140148) at 37°C in a 5% CO2 environment. Neurons were infected with purified and concentrated lentivirus encoding GCaMP6f after 4 days in vitro.

H. Photobleaching Measurements
To quantify the rate of photobleaching, confluent U2OS cells stably expressing mNeonGreen-EB3 from a truncated CMV promoter were volumetrically imaged. For multiplane imaging with three cameras, 61 images were acquired per camera, with an integration time of 15 ms and a total laser power of 0.57 mW in the pupil plane (~0.19 mW per light-sheet). The final stitched data had ~5% overlap between adjacent image volumes.
For single plane imaging, 166 images were acquired on a single camera with an integration time of 15 ms and a laser power of 0.19 mW in the pupil plane. Thus, both modes imaged the same volume for 50 Z-stacks, with the same camera integration time and approximately equal signal-to-noise ratios, but the multiplane volumetric image acquisition rate was 3-fold faster due to the parallelized detection. Data were deskewed into their proper Euclidean reference frame, and EB3 particles were detected automatically in a parameter-free manner, allowing both the intensity and number of detected particles to be evaluated through time. The experiment was replicated 6 and 8 times for pLSFM and single-plane SPIM imaging, respectively. Only data that remained in focus throughout the experiment were analyzed. Data were not randomized, but handled identically with the same analytical pipeline.
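One common way to compare such bleaching curves is to fit a single-exponential decay to the per-stack mean comet intensity. The text does not state the fitting procedure used, so the log-linear least-squares fit below is a sketch of one reasonable choice:

```python
import numpy as np

def bleach_rate(intensities):
    """Photobleaching rate from a trace of mean particle intensities per
    Z-stack, assuming I(t) = I0 * exp(-k * t). Fits log(I) = log(I0) - k*t
    by least squares and returns k (per stack)."""
    t = np.arange(len(intensities))
    slope, _ = np.polyfit(t, np.log(intensities), 1)
    return -slope
```

Comparable rates for the pLSFM and single-plane traces, despite the ~3-fold faster volumetric acquisition, would reproduce the conclusion of Fig. 2I.
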

I. Image Vibration Analysis
To measure and assess the impact of vibrations, we performed frame-to-frame registrations of small 3D regions of interest. Regions were selected in disparate parts of the cellular volume and tracked throughout the entire image sequence. The measurement was performed using the imregister function in Matlab (MathWorks) configured with a translational motion model. Fig. S8 presents vibrations in a stitching region, as well as the region used to illustrate our tracking results. The 95th percentile of displacement does not exceed 50 nm in either sub-volume. However, the maximum displacement can reach up to 100 nm (Fig. S8B-H). To assess the impact of these vibrations, we measured the minimum inter-particle distance for each particle detected in the cell. Figs. S8E and S8I show that the maximum vibration measured is 5-10x smaller than the mean inter-particle distance, hence creating very little ambiguity in the reconstruction of trajectories.
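An equivalent translational registration can be sketched with phase correlation (the original analysis used Matlab's imregister; this Python version recovers integer-pixel shifts only, and subpixel refinement would be required to resolve the 20-50 nm displacements reported here at ~100 nm pixels):

```python
import numpy as np

def frame_shift(ref, mov):
    """Integer-pixel (row, col) shift of mov relative to ref via phase
    correlation: the peak of the inverse FFT of the normalized cross-power
    spectrum, with wrap-around shifts mapped to negative values."""
    cross = np.conj(np.fft.fft2(ref)) * np.fft.fft2(mov)
    cross /= np.maximum(np.abs(cross), 1e-12)  # keep phase only
    corr = np.fft.ifft2(cross).real
    peak = np.array(np.unravel_index(np.argmax(corr), corr.shape), dtype=float)
    dims = np.array(corr.shape)
    wrap = peak > dims / 2
    peak[wrap] -= dims[wrap]
    return peak
```
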

J. Adaptive Optics
To correct the spherical aberrations present in image planes 1 and 3, a modified microscope was built that included a deformable mirror (Mirao52e, Imagine Optics) conjugate to the back-pupil plane of the detection objective, but dispensed with the pick-off mirrors in the detection path (see Figure S3A). The illumination was changed from a light-sheet to a collimated beam to decrease sample position-dependent fluctuations in fluorescence intensity. The detection objective was equipped with a piezoelectric positioner (P-726.1CD, Physik Instrumente) for Z-stepping, and the sCMOS camera could be translated along the optical axis to refocus at planes away from the nominal focal plane. Fluorescence from the intermediate image plane was relayed to the deformable mirror and camera with a pair of 350 mm focal length lenses. An interaction matrix for the mirror was established using a Shack-Hartmann sensor (Haso 4, Imagine Optics) and a collimated 632 nm laser beam, which was injected in the space between the detection objective and the tube lens, and propagated along the optical axis defined by the detection objective. A flip mirror allowed switching between the Shack-Hartmann sensor and the sCMOS camera. Using the Shack-Hartmann sensor, a flat mirror shape was optimized using an adaptive optics feedback loop.
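The flat-mirror optimization described above is a standard integrator loop driving the Shack-Hartmann slope vector to zero through the pseudo-inverse of the interaction matrix. A minimal sketch of this control law follows (function names and the linear simulator in the check are ours, not the Imagine Optics control software):

```python
import numpy as np

def flatten_mirror(measure_slopes, interaction_matrix, n_iter=20, gain=0.5):
    """Closed-loop mirror flattening with integrator control.

    measure_slopes(v) returns the Shack-Hartmann slope vector for mirror
    command v; interaction_matrix holds slopes per unit actuator command.
    The update steps each command toward the slope-nulling solution.
    """
    control = np.linalg.pinv(interaction_matrix)
    v = np.zeros(interaction_matrix.shape[1])
    for _ in range(n_iter):
        v -= gain * (control @ measure_slopes(v))
    return v

# Synthetic check: a linear sensor model with a known "flat" command
rng = np.random.default_rng(1)
im = rng.standard_normal((40, 10))       # 40 slope signals, 10 actuators
v_flat = rng.standard_normal(10)         # command that nulls the slopes
v_converged = flatten_mirror(lambda v: im @ (v - v_flat), im)
```

With a gain below 1, the residual slope error contracts geometrically per iteration, which is why a modest number of loop iterations suffices in practice.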
The sample consisted of 200 nm fluorescent nanospheres immobilized on a coverslip, which was mounted parallel to the focal plane of the detection objective. First, the nominal focal plane was found by translating the camera and objective along the optical axis (the beads remained stationary) until spherical aberrations were minimal and the bead intensity was maximal. From this position, the objective was offset by ±25 microns (with respect to the Z-axis shown in Figure S3) and the camera was translated to bring the beads back into focus. These two settings recreated the spherical aberrations encountered by image planes 1 and 3. We then applied different amounts of spherical aberration to the mirror to maximize the intensity for these two offset image planes. The intensity was measured in a 3D stack, and the peak intensity of each setting for 20 beads was compared to imaging at the nominal focal plane with a flat mirror shape. For an offset of +25 microns (corresponding to image plane 3), compensation of spherical aberrations improved the bead intensity from 78% to 98% compared to the nominal focal plane. For an offset of −25 microns (corresponding to image plane 1), compensation of spherical aberrations improved the bead intensity from 76% to 90% compared to the nominal focal plane.
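The "different amounts of spherical aberration" applied to the mirror are conventionally parameterized by the primary spherical Zernike mode. The sketch below samples that mode over a unit pupil (the ±0.5 amplitudes are illustrative, and mapping the surface onto the Mirao52e's 52 actuators is hardware-specific and omitted):

```python
import numpy as np

def primary_spherical_surface(amplitude, grid=33):
    """Primary spherical Zernike mode Z(4,0) = sqrt(5)*(6*rho^4 - 6*rho^2 + 1),
    scaled by `amplitude` and sampled on a unit pupil.

    Points outside the pupil (rho > 1) are set to NaN.
    """
    y, x = np.mgrid[-1:1:grid * 1j, -1:1:grid * 1j]
    rho2 = x ** 2 + y ** 2
    surface = amplitude * np.sqrt(5.0) * (6.0 * rho2 ** 2 - 6.0 * rho2 + 1.0)
    surface[rho2 > 1.0] = np.nan
    return surface

# Opposite-sign modes for the two defocus offsets explored above
shape_plus = primary_spherical_surface(+0.5)   # e.g., toward image plane 3
shape_minus = primary_spherical_surface(-0.5)  # e.g., toward image plane 1
```

Sweeping `amplitude` while recording bead peak intensity at each offset plane reproduces the optimization procedure described above.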
We assume that the difference in compensation efficiency arises from the divergence and convergence of the fluorescent light for the offset focal planes. As such, for an implementation of adaptive optics in a pLSFM system, the magnification between the objective pupil and the mirror would need to be matched for each beam path individually.

Supplementary Material
Refer to Web version on PubMed Central for supplementary material.