Spatially modulated illumination allows for light sheet fluorescence microscopy with an incoherent source and compressive sensing

Abstract: Light sheet fluorescence microscopy has become one of the most widely used techniques for three-dimensional imaging due to its high speed and low phototoxicity. Further improvements in 3D microscopy require limiting the light exposure of the sample and increasing the volumetric acquisition rate. We hereby present an imaging technique that allows volumetric reconstruction of a fluorescent sample using spatial modulation of a selective illumination volume. We demonstrate that this can be implemented with an incoherent LED source, avoiding the shadowing artifacts typical of light sheet microscopy. Furthermore, we show that spatial modulation allows the use of Compressive Sensing, reducing the number of modulation patterns to be acquired. We present results on zebrafish embryos which prove that selective spatial modulation can be used to reconstruct relatively large volumes without any mechanical movement. The technique yields an accurate reconstruction of the sample anatomy even at significant compression ratios, achieving a higher volumetric acquisition rate and reducing photodamage to biological samples.


Introduction
Selective Plane Illumination Microscopy (SPIM), also known as Light Sheet Fluorescence Microscopy (LSFM), is an optical imaging technique which has been increasingly used in biological applications ranging from molecular biology to whole-mount tissue analysis. The basic idea behind SPIM consists in confining the fluorescence excitation to a single plane (or light sheet), e.g. by focusing a laser beam with a cylindrical lens, and acquiring the emitted fluorescence signal along an orthogonal detection path, as shown schematically in Fig. 1(a) [1]. This confinement of light confers optical sectioning capability to the microscope. The main advantages of this technique are its high volumetric acquisition rate and low phototoxicity, which make it an ideal tool for rapid 3D and 4D imaging [2].
Selective Volume Illumination (SVI) microscopy has recently been developed with the goal of further increasing the acquisition rate [3]. As in SPIM, SVI uses mutually perpendicular illumination and detection paths. In this approach, however, rather than illuminating the sample in a single plane, one applies the excitation across a confined volume. In Truong et al. [3], Light Field Microscopy [4] was used to collect the light from the volume and reconstruct the 3D sample. In SVI, the illumination volume is chosen so as to cover the part of the sample which is of interest to the experiment, minimizing the background generated by fluorescent or diffusive regions outside that part. Here, we report a microscopy setup that exploits spatially modulated Selective Volume Illumination (sm-SVI). Using a spatial modulator, we structure the illumination light along the detection direction (Fig. 1(b)). We demonstrate that we can reconstruct the sample in three dimensions by solving an inverse problem based on the data generated by several spatially modulated patterns. We show that this approach is compatible with Light Emitting Diode (LED) illumination, yielding a resolution comparable with that achievable with SPIM. LEDs are suitable sources for fluorescence microscopy thanks to their colour availability, low cost and stability. However, they are not well suited to SPIM, because their low spatial coherence limits the possibility of focusing the light into a single plane. Here we show not only that LEDs can be used to optically section the sample, but also that the use of incoherent light reduces the unwanted speckle patterns and shadowing effects typical of SPIM [5], providing an alternative to the methods shown in [6-8].
Furthermore, we demonstrate that this technique can be combined with Compressive Sensing (CS) [9]. Compressive Sensing is a signal processing technique that allows a signal to be recovered from a set of undersampled data, below the Shannon-Nyquist limit [10,11]. By its construction, CS finds applications in many imaging fields, such as ultrafast imaging [12], lensless imaging [13], and single-pixel imaging and holography [14], among others. It has been used in fluorescence microscopy [15,16] and in lattice light sheet fluorescence microscopy [9]. Here we exploit CS to demonstrate that the use of sm-SVI can reduce the amount of acquired data and consequently the light dose delivered to the sample. Therefore, CS together with an LED source makes the presented technique particularly attractive for biology, where non-invasive investigation tools are required.

Selective volume illumination is achievable with an LED source
For any light source, the number of collectable photons is determined by its étendue, which is at best preserved along the optical path in a lossless system. For a laser source, thanks to the low divergence of the output beam, the emitted photons can be collected and focused onto a surface which is fundamentally limited only by diffraction, reaching high intensities over small areas. Conversely, photons coming from an incoherent source such as an LED are emitted over a wide angle from a relatively large emitting surface, so that only a small percentage can be gathered and refocused. For this reason, it is normally unfeasible to use an LED in a SPIM setup, which requires the light to be focused tightly along one direction [1,2].
However, when illuminating a volume several micrometres thick, LED illumination can provide the intensity required for fluorescence imaging. This is achieved in the setup depicted in Fig. 1(b) and Fig. 1(c), which is configured with mutually orthogonal illumination and detection pathways. In this work we denote the illumination axis by x and the detection axis by z.
The first part of the system consists of an LED source used in combination with a Köhler illuminator to create a uniform spot on a Digital Micromirror Device (DMD). The DMD displays patterns which modulate the light spatially. Two paired objective lenses are used to reproduce the pattern on the sample; depending on the illumination numerical aperture (NA_ILL), the persistence length of the pattern can be varied from a few tens of micrometres to millimetres. The pattern, together with the two objective lenses, limits the illumination light to a thickness ∆z and modulates the light along the z direction. A detection objective lens is placed perpendicular to the illumination direction and, in combination with a tube lens, forms the image of the sample on the detector (a CMOS camera), collecting the photons emitted from the entire thickness ∆z.
In a typical configuration (see Materials and Methods), starting from the initial LED power of 3.5 W, the intensity at the sample is approximately 500 mW/cm², which is one order of magnitude lower than that used in SPIM [1,17]. However, the illumination volume is wider (typically 100-150 µm thick, 10 times wider than a SPIM light sheet), so the number of photons collected per image is comparable: high-power diodes make it possible to perform SVI measurements with a fluorescence intensity at the detector comparable to SPIM.

Axially modulated illumination enables volumetric reconstruction
Spatially modulated light has been widely used in SPIM. Structuring the light in the detection plane is a powerful method to improve the axial resolution of a microscope [18] as well as its lateral resolution [19]. This concept has been adopted to increase resolution and contrast in SPIM [20][21][22][23] even beyond the diffraction limit [24].
Sinusoidal illumination, modulated along the detection axis, has been used to encode the axial profile of a sample in the Fourier domain [25]. Our setup, taking advantage of a programmable spatial light modulator, allows one not only to adopt a similar approach, but also to generalize it by projecting any kind of pattern set (e.g. Hadamard, Wavelet, Fourier, etc.). We present the results obtained using Walsh-Hadamard (WH) patterns [26], chosen both for their compatibility with fast DMD modulation and for their 50% fill factor. The DMD modulates the incoming light according to a binary pattern, encoded in two different states: ON and OFF. Since the Hadamard functions have entries ±1, a rescaled version of these can easily be encoded (see Materials and Methods). Reconstruction of the sample image is possible by solving an inverse problem, as commonly done in single-pixel camera applications [26,27]. Each pixel of the detector collects a signal that is the line integral of the fluorescence emitted by the sample along the optical axis z, within the illuminated thickness ∆z, which must be comparable to or smaller than the depth of field of the detection optics. This signal varies depending on the projected pattern, which is spatially modulated along the same axis. In order to recover the fluorescence concentration χ in each voxel (x,y,z) of the sample, we acquire N images ξ_i(x,y), with i = 1,…,N, corresponding to the different illumination patterns. For each position (x̄,ȳ) we solve the linear problem

ξ(x̄,ȳ) = A χ(x̄,ȳ),    (1)

where ξ(x̄,ȳ) is the column vector of the intensities ξ_i(x̄,ȳ), A is the N×N measurement matrix whose rows are the WH patterns, and χ(x̄,ȳ) is the fluorescence concentration profile along z at the position (x̄,ȳ). This problem can be efficiently solved in parallel for each voxel by using the standard WH routines available in MATLAB® (overall reconstruction time ∼3 s).
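As an illustration, the per-pixel inversion of Eq. (1) can be sketched in a few lines of Python (a hypothetical numerical example of ours, not the authors' MATLAB implementation). Since the Hadamard matrix is orthogonal, the inversion reduces to a rescaled transform:

```python
import numpy as np
from scipy.linalg import hadamard

N = 64                      # number of Walsh-Hadamard patterns, as in the paper
H = hadamard(N)             # N x N measurement matrix A with entries +/-1

# Hypothetical fluorescence profile chi(z) seen by one detector pixel (x̄, ȳ)
rng = np.random.default_rng(0)
chi = rng.random(N)

# Each acquired image supplies the pattern-weighted line integral along z
xi = H @ chi                # xi_i = sum_z A[i, z] * chi[z]

# Because H @ H.T = N * I, the inverse problem is solved by a rescaled transform
chi_rec = (H.T @ xi) / N

assert np.allclose(chi_rec, chi)
```

In practice this inversion is repeated independently for every (x̄,ȳ) pixel of the camera, which is why the full-volume reconstruction parallelizes so easily.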
No point-spread-function deconvolution was applied to the data, either in detection or in illumination, although this could potentially improve the results and enlarge the reconstruction volume. We used N = 64 patterns in order to structure the light within the thickness ∆z = N · e = 134 µm, where e = 2.1 µm is the DMD pixel size projected at the sample. In particular, we discuss the results obtained at a detection magnification of 4X and numerical aperture NA_DET = 0.13, a common configuration for studying so-called mesoscopic samples, ranging in size from hundreds of micrometres to a few millimetres.
We tested the method on a sample consisting of fluorescent nano-beads (500 nm average diameter, emission peak λ = 582 nm) embedded in a solid gel matrix (1.5% phytagel in distilled water). An example of bead reconstruction is shown in Fig. 2. In the centre of the field of view the reconstruction is limited by diffraction. The lateral resolution (xy plane) is set by the detection objective lens: δr = λ/(2·NA_DET), with NA_DET = 0.13. The axial resolution is dominated by the pixel size of the DMD convolved with the illumination profile (the illumination NA was limited to NA_ILL = 0.1), considering that the detection depth of field is an order of magnitude larger than the effective pixel size. In the centre of the field of view, the measured lateral resolution was δr = 2.3 ± 0.2 µm and the axial resolution was δz = 2.9 ± 0.2 µm (Fig. 2(b), (c)), compatible with the theoretical values. This resolution is achievable only in part of the volume (Fig. 2(b)): moving along the x direction, away from the image centre, the axial resolution degrades because of the defocusing of the illumination pattern; conversely, moving along the z axis, the transverse resolution is affected by the defocusing of the detection lens. For the given illumination and detection geometry, these two effects are independent of the y position within the sample. In any case, a good resolution is preserved over a relatively large volume: the resolution stays within √2·δr laterally and √2·δz axially (Fig. 2(d)) across the whole region ∆X = 200 µm × ∆Y = 3.2 mm × ∆Z = 120 µm (here ∆Y = 3.2 mm is the size of the camera field of view along the y axis, red rectangle in Fig. 2(b)), where ∆X and ∆Z are comparable with the illumination and detection depths of field.
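As a quick consistency check (our own arithmetic, using the bead emission wavelength given above), the diffraction-limited lateral resolution can be evaluated directly:

```python
lam_um = 0.582          # bead emission peak, 582 nm, expressed in micrometres
NA_det = 0.13           # numerical aperture of the detection objective

# delta_r = lambda / (2 * NA_DET), the diffraction-limited lateral resolution
delta_r = lam_um / (2 * NA_det)
print(f"theoretical lateral resolution: {delta_r:.2f} um")
```

The result, about 2.24 µm, agrees with the measured δr = 2.3 ± 0.2 µm within the stated uncertainty.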
This volume is well suited to specimens that are elongated in one direction (y). We anticipate that, in order to use this technique in many biological applications involving bigger samples (e.g. imaging of large chemically cleared organs [28]), methods to extend the depth of field in both detection and illumination will be required. To this end several solutions could be adopted, from translating the sample along the optical axis (as normally done in SPIM) to scanning the detection path [29], tiling the illumination [22,30,31] or engineering the detection point spread function [32,33]. Furthermore, it is worth noting that we could also modulate the light in the perpendicular direction (axis y) [22], although the possible advantages of this modulation, shown in [20], are not discussed in the present paper.
Nevertheless, the imaging volume ∆X · ∆Y · ∆Z is suitable for studying zebrafish (Danio rerio) embryos, as at least half of the specimen is covered without the need to translate the sample or the detection optics.

Three-dimensional imaging of zebrafish
To evaluate the quality of the 3D reconstructions we imaged living 4-days-post-fertilization transgenic zebrafish embryos, Tg(α-actin:GFP) and Tg(kdrl:GFP), expressing green fluorescent protein (GFP) in the skeletal muscles and in the endothelial cells, respectively. The samples were acquired with sm-SVI and with a standard laser-based SPIM [17], using the same detection objective lens (NA_DET = 0.13) for comparison. The powers of the laser and the LED were set to deliver the same total optical energy to the volume during the measurement.
By comparing the results achieved with the two techniques, we observe that the shadowing artifacts are drastically reduced in sm-SVI (Fig. 3). This improvement results from the incoherence of the light source, which eliminates interference effects within the sample.
A whole embryo is displayed in Fig. 4, in which the optical sectioning of the reconstruction is sufficient to distinguish the muscular fibres along the entire length of the zebrafish, in both lateral and transverse sections.
The acquisition of the Tg(kdrl:GFP) zebrafish (Fig. 5) confirms that the technique can provide an adequate resolution for the observation of a large portion of the embryonic vascular network: the vessels are distinguishable in the brain and along the trunk-tail region of the zebrafish with almost isotropic resolution.
A requirement of sm-SVI is a static sample, which must not move for the entire duration of the pattern acquisition. For this reason, the zebrafish were anesthetized in 0.1% tricaine and restrained in fluorinated ethylene propylene (FEP) tubes [34]. Looking at Fig. 5, we observe that the inevitable movements of the beating heart did not significantly affect the measurement results.
The fluorescence signal emitted by the two transgenic lines has different spatial features: the fluorescence of Tg(α-actin:GFP) is relatively uniform within each zebrafish somite, while the fluorescence of Tg(kdrl:GFP), arising from the zebrafish vasculature, is spatially sparse. In both cases we conclude that sm-SVI is a suitable tool for the reconstruction of a relatively large portion of zebrafish embryos, avoiding mechanical translation or scanning of the sample and presenting reduced speckle artifacts thanks to the LED illumination.

Compressive sensing reduces the amount of acquired data
Spatial modulation makes it possible to exploit Compressive Sensing (CS) strategies devoted to reducing the number of measurements [9,35]. In fact, when the signal to be recovered χ has a sparse representation in a certain basis W, there is a high probability of lossless recovery when the measurement matrix A is incoherent with the basis W [36]. The choice of the sensing and sparsifying bases is fundamental for both compression and recovery. The problem (1) is recast in the following form (the x̄,ȳ dependence has been omitted for simplicity):

χ̂ = W ψ̂,  with  ψ̂ = argmin_ψ ||A W ψ − ξ||²₂ + λ R(ψ),    (2)

where ψ is the representation of the signal χ in the basis W, R is a penalty functional enforcing some desired characteristic of the solution, and λ is the hyperparameter weighting the penalty with respect to the data-fidelity term. As an example, when χ is assumed to be sparse under the W representation, a typical choice is R(ψ) = ||ψ||₁, which has been proved to enforce sparsity. As a particular case, W = I if the image is sparse in the pixel basis. Within this framework, another penalty term, which enforces sparsity in the image gradient, has been proposed:

R(χ) = Σ_i ||D_i χ||_p,  p = 2 or 1,    (3)

where D_i χ is the discrete gradient of χ at voxel i; this is the L₂ or L₁ norm of the Total Variation (TV) (isotropic or anisotropic case, respectively) [37]. The TV is suitable for images presenting sharp features, as in the case of the Tg(kdrl:GFP) zebrafish embryos expressing fluorescence in the vascular tree. For illumination we generated scrambled Hadamard (SH) patterns [38], obtained by randomly permuting rows and columns of a Hadamard matrix. The inverse problem (2), with W = I and the regularization given by the L₂ (isotropic) form of (3), was solved as a whole (not pixelwise) by adapting the well-known TV minimization by Augmented Lagrangian and ALternating direction ALgorithms (TVAL3) [39] to 3D images.
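The pixel-basis case (W = I) of the recovery problem can be illustrated with a small, self-contained sketch. The example below is ours, not the TVAL3 code used for the actual reconstructions: it senses a hypothetical sparse axial profile with a random subset of scrambled Hadamard patterns and recovers it with an L1 penalty via iterative soft-thresholding (ISTA), a simpler solver than TVAL3 but based on the same compressed-sensing principle.

```python
import numpy as np
from scipy.linalg import hadamard

rng = np.random.default_rng(1)
N, C = 64, 2                 # axial profile length and compression ratio C
M = N // C                   # number of acquired patterns, M = N / C

# Scrambled Hadamard sensing: randomly permute rows and columns of a
# Hadamard matrix, then keep only the first M (randomly ordered) rows.
H = hadamard(N)
SH = H[rng.permutation(N)][:, rng.permutation(N)]
A = SH[:M] / np.sqrt(N)      # rows are orthonormal after this rescaling

# Hypothetical sparse fluorescence profile along z (a few bright voxels)
chi = np.zeros(N)
chi[rng.choice(N, 4, replace=False)] = rng.uniform(0.5, 1.5, 4)
xi = A @ chi                 # undersampled measurements (M < N)

# ISTA for min_x 0.5*||A x - xi||^2 + lam*||x||_1  (W = I sparsity penalty)
def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x, lam = np.zeros(N), 1e-3   # unit step size is valid because ||A||_2 = 1
for _ in range(1000):
    x = soft(x - A.T @ (A @ x - xi), lam)

# The sparse profile is recovered from half the measurements
assert np.linalg.norm(x - chi) / np.linalg.norm(chi) < 0.1
```

The rescaled scrambled Hadamard rows are orthonormal, so the gradient step with unit step size is stable; with the small λ used here, the LASSO solution is essentially unbiased and the 4-sparse profile is recovered from M = 32 of the 64 patterns.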
The measurements consisted of the acquisition of M = N/C images under structured volumetric illumination with SH patterns drawn randomly from the complete set, C being the data compression ratio.
We reconstructed Tg(kdrl:GFP) zebrafish embryos starting from different measurement sets at compression factors of 2, 4 and 8. Throughout the entire volume, the differences between the reconstruction obtained using the full dataset and that at compression C = 2 are negligible (Fig. 6(a), (b)). When compression C = 4 is applied, some structures appear more blurred, particularly along the axial direction (see for example the axial blurring of the vessel indicated by the green arrow), but the full vascular network is visible. This effect is further emphasized at a higher compression rate (C = 8): the regions where multiple vessels cross are not resolved and contributions from different axial positions are mixed (see for example the blue arrow indicating the intersomitic vessel emerging from the posterior caudal vein). Yet, even at C = 8 the reconstruction preserves the major spatial features of the vascular system. Remarkably, this result is obtained using only M = 8 acquisitions.

We repeated this analysis on fluorescent beads. A reconstructed region is shown in Fig. 7, sectioned along the xy and xz planes. The results on beads confirm what was observed in the zebrafish samples: the quality of the reconstruction is preserved at low and medium compression rates (C = 2 and C = 4), with some background noise appearing, especially in the xz sections. More artifacts become visible at a high compression rate (C = 8), and some beads are reconstructed in the wrong axial position or appear duplicated. This can be explained by the fact that a very small number of patterns (8) was used for the reconstruction, so that some axial positions are not adequately sampled by the illumination set.
These results indicate that CS is a suitable tool to reduce the number of acquired images, which on the one hand reduces the light exposure of the sample (and consequently the phototoxicity) and on the other hand offers new possibilities to shorten the acquisition time.

Conclusions
To sum up, we reported a microscopy scheme that enables 3D reconstruction of fluorescent samples in a selective volume illumination microscope by axially modulating the excitation light. Light modulation was made possible by using a spatial light modulator (a DMD) and an incoherent LED, with the advantage of reducing speckle artifacts. LEDs offer further advantages over lasers in terms of higher stability and can potentially be a valuable solution for multicolour imaging, at a much lower cost.
The spatial modulation enables a compressive sensing reconstruction of the sample which preserves anatomical details even at significant compression ratios. The present proof of principle on living zebrafish embryos opens the possibility of reducing the number of acquisitions for 3D imaging and, consequently, the acquisition time and the phototoxic effects induced in the sample.
Presently, the technique has the disadvantage of requiring a motionless sample for the entire duration of the measurement. Since the typical measurement time was 6.4 s, the method can be applied to study fixed or anesthetized specimens, of interest for developmental biology. Further optimization of the LED illumination and DMD modulation will be required in order to study fast biological processes. Advanced deconvolution algorithms well suited to the presented method, together with methods to extend the depth of field in illumination and detection, could allow us to extend the approach to a variety of biological applications.

Materials and Methods

Optical setup
As excitation source, a high-power LED (Thorlabs SOLIS-470C) emitting at 470 nm was used. The light creates a ca. 10 mm diameter uniform spot on the DMD through a custom-made Köhler illuminator (Fig. 1(c)). To produce a suitable, uniform light spot, we used an aspheric lens (Thorlabs LB1723-A, f1 = 60 mm in Fig. 1(c)) as collector, together with two lenses of f2 = 200 mm (Thorlabs LA1979-A) and f3 = 50 mm (Thorlabs LA1131-A) as field and condenser lens, respectively. The DMD (Texas Instruments DLP LightCrafter 6500) is made of 1920 × 1080 square mirrors with a side of 7.56 µm. The device allows ON-OFF single-mirror transitions at a maximum frequency of 9523 Hz. The DMD is mounted on a moving platform enabling three-axis manual translation and tilt for optical alignment. The excitation path is tilted by approximately 12° with respect to the DMD normal, so that an ON-state micromirror reflects the light towards the sample chamber. Because the DMD micromirrors tilt along their diagonal, the whole array is rotated by 45°, so that both the incident and the reflected radiation belong to the same plane (i.e. parallel to the optical table). Once reflected by the DMD, the light is collected by an infinity-corrected objective lens (2X Mitutoyo Plan Apo Infinity Corrected Long, 46-142); it then passes through a band-pass excitation filter and impinges on a second objective lens at higher magnification (5X Mitutoyo Plan Apochromat Objective, 46-143). The presented setup showed light losses mostly due to the size of the modulated region, which was almost ten times smaller than the 10 mm light spot on the DMD.
This configuration led to an overall magnification of 0.4, which resized the imaged DMD pixel to 3 µm. Since the DMD is rotated by 45° with respect to the illumination plane, the spacing between two lines is given by the semi-diagonal of the imaged pixel: e = 2.1 µm (previously indicated as the pixel size at the sample). The fluorescence signal emitted by the sample is collected by a 4X objective lens (4X Nikon CFI Plan Fluor, 0.13 NA, 17.2 mm WD), filtered by a GFP emission filter and then imaged onto a CMOS sensor (Hamamatsu ORCA-Flash4.0) by a tube lens (Nikon MXA20696). The camera acquisition was triggered by a signal from the DMD generated at each pattern update.
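The line spacing e follows from simple geometry (our own check): with the DMD array rotated by 45°, adjacent mirror rows project onto the modulation axis at the semi-diagonal of the imaged pixel.

```python
import math

imaged_pixel_um = 3.0                    # DMD pixel projected at the sample
e = imaged_pixel_um * math.sqrt(2) / 2   # semi-diagonal of a 3 um square pixel
print(f"e = {e:.1f} um")                 # matches the 2.1 um quoted in the text
```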
The LED power at the sample was 2 mW, projected onto a rectangular area of 3.2 mm × 0.13 mm, which corresponds to an intensity of ca. 500 mW/cm². The acquisition time for sm-SVI was 100 ms per pattern. The SPIM measurement shown in Fig. 3 was performed with the setup described in [17], with a laser power of 2 mW projected onto an area of 3.2 mm × 0.015 mm. The acquisition time for SPIM was 50 ms per plane.
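These figures are mutually consistent, as a quick check of our own shows:

```python
power_mW = 2.0                 # LED power at the sample
area_cm2 = 0.32 * 0.013        # illuminated rectangle, 3.2 mm x 0.13 mm, in cm

intensity = power_mW / area_cm2
print(f"intensity ~ {intensity:.0f} mW/cm^2")   # ca. 500 mW/cm^2
```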

Modulation patterns
The DMD modulates the light along the detection optical axis (z) and, optionally (not used here), along the vertical axis y, while the light propagates along the x axis (Fig. 1(c)). The pattern has a persistence length, which is set by the illumination numerical aperture (adjustable with a diaphragm at the back of the illumination objective lens). In the typical measurement settings, the persistence length was approximately 200 µm.
In both of the pattern sets used, the Walsh-Hadamard and Scrambled Hadamard functions take positive and negative values, while the illumination intensity can only be positive. In order to encode these functions, either (i) a pair of positive measurements or (ii) a single positive measurement can be acquired. In the first case the signed measurement is recovered by subtraction of the pair, while in the second it must be shifted and rescaled according to the measurement corresponding to a constant illumination [38]. We adopted the first strategy, which has been widely proved to be more robust for background subtraction [40]. Examples of the patterns used and the corresponding raw images are available in Visualization 1 and Visualization 2, respectively.
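Strategy (i) can be sketched as follows (a minimal illustration of ours in Python, with a hypothetical fluorescence profile; in the actual setup the two binary patterns are displayed on the DMD and two camera frames are acquired):

```python
import numpy as np
from scipy.linalg import hadamard

N = 64
H = hadamard(N)                     # signed patterns with entries +/-1

# Split each signed pattern into two binary (ON/OFF) DMD patterns
P_pos = (H > 0).astype(float)       # mirrors ON where the pattern is +1
P_neg = (H < 0).astype(float)       # mirrors ON where the pattern is -1

# Hypothetical fluorescence profile along z at one camera pixel
rng = np.random.default_rng(2)
chi = rng.random(N)

# Two positive acquisitions per pattern; subtraction restores the signed value
xi = P_pos @ chi - P_neg @ chi
assert np.allclose(xi, H @ chi)     # equivalent to measuring with +/-1 patterns
```

Because the background contributes equally to both positive acquisitions, it cancels in the subtraction, which is why this strategy is the more robust of the two.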

Compressive Sensing inverse problem
To work with the TV algorithm, which correlates voxels along the three dimensions, we need to treat the inverse problem as a whole. Thus, we reformulate the problem as follows:

χ̂ = argmin_χ ||A(χ) − ξ||²₂ + λ Σ_i ||D_i χ||₂,    (4)

where χ is now the whole fluorescence distribution over all voxels, ξ is the whole set of M < N measurements (shown in Visualization 2), and A(·) is the measurement operator acting on the whole dataset. The isotropic TV (L₂ norm) was introduced to regularize the image in every direction. We solved problem (4) using the TVAL3 algorithm proposed by Li [41], adapted to 3D images. The reconstructions were carried out on a workstation with dual Intel Xeon processors (10 cores, 2.35 GHz) and 64 GB of RAM. On a region of interest of 1523 × 457 × 64 voxels, the reconstruction process took about 15 min.
Zebrafish

All the experiments were conducted minimizing stress and pain, with the use of an appropriate anesthetic. Zebrafish AB strains obtained from the Wilson lab (University College London, London, UK) were maintained at 28°C on a 14 h light/10 h dark cycle. The transgenic lines Tg(α-actin:GFP) and Tg(kdrl:GFP) were used for imaging. Embryos were collected by natural spawning, staged according to Kimmel and colleagues, and raised at 28°C in fish water (Instant Ocean, 0.1% Methylene Blue) in Petri dishes, according to established techniques. After 24 hpf, 0.003% 1-phenyl-2-thiourea (PTU, Sigma-Aldrich, Saint Louis, MO, USA) was added to the fish water to prevent pigmentation. Embryos were washed, dechorionated and anaesthetized with 0.016% tricaine (ethyl 3-aminobenzoate methanesulfonate salt; Sigma-Aldrich) before the acquisitions. During imaging, the fish were restrained in fluorinated ethylene propylene (FEP) tubes [34].