Single-Pixel Fluorescent Diffraction Tomography

Optical diffraction tomography is an indispensable tool for studying objects in three dimensions due to its ability to accurately reconstruct scattering objects. Until now this technique has been limited to coherent light because spatial phase information is required to solve the inverse scattering problem. We introduce a method that extends optical diffraction tomography to imaging spatially incoherent contrast mechanisms such as fluorescent emission. Our strategy mimics the coherent scattering process with two spatially coherent illumination beams. The interferometric illumination pattern encodes spatial phase in temporal variations of the fluorescent emission, thereby allowing incoherent fluorescent emission to mimic the behavior of coherent illumination. The temporal variations permit recovery of the propagation phase, and thus the spatial distribution of incoherent fluorescent emission can be recovered with an inverse scattering model.


Introduction
In fluorescence microscopy, light emitted from the specimen is spatially incoherent. Consequently, 3D imaging techniques require some form of spatial gating to map detected photons to the location from which they were emitted. This spatial gating is often achieved through some combination of confining the illumination and detection volumes. Examples of such strategies include selective plane illumination microscopy (SPIM) [10], where the illumination volume is restricted to a thin axial plane, or laser-scanning confocal microscopy, where both the illumination and detection volumes are restricted to a diffraction-limited spot in 3D [23]. These strategies allow each detected photon to be mapped to a 3D location in the specimen from which it was emitted, and often involve high numerical aperture (NA) optics that tightly focus the illumination light and/or restrict the volume over which light is detected.
Other strategies for 3D imaging rely on inverting a quantitative model of the illumination, emitted, and collected light to estimate the concentration of fluorescent emitters from detected intensity images. These computational imaging methods, such as optical projection tomography (OPT), require axially scanning or rotating the object to collect a set of data to be reconstructed [16]. Conventional fluorescent imaging methods suffer from limitations such as photobleaching and anisotropic spatial resolution between the axial and transverse directions [18]. While SPIM can partially mitigate the effects of photobleaching, anisotropic spatial resolution is a persistent problem [16]. In all of these methods, tissues must be optically cleared to reduce distortions from optical scattering to suitably low levels [16]. A more stringent restriction on SPIM and OPT microscopes is that the spatial resolution is coupled to the size of the object [16], leading to decreased spatial resolution for an increased imaging region.
Coherent imaging strategies enable 3D imaging by making use of the direction of the scattered light. Emil Wolf recognized that the inverse scattering problem for coherent light propagating in an object can be solved by recording the complex, spatially coherent, scattered field [24]. Directional scattering allows the recording of spatial frequency components of an object by exploiting knowledge of the complex amplitude of light scattered in a particular direction when the object is illuminated by a spatially coherent input wave. This concept is illustrated schematically in Fig. 1(a), where the illumination field, E_0, and scattered field, E_1, have corresponding wavevectors k_0 and k_1, respectively.
Interferometric techniques, e.g., holography, are able to record the complex scattered field that, within the Born approximation, can be mapped to an arc of spatial frequency information defined by the Ewald sphere by applying the Fourier diffraction theorem [24], shown in Fig. 1(b). The position in spatial frequency space is given by the wavevector difference, ∆k = k_1 − k_0. This sparse spatial frequency information is encoded by the complex scattered field obtained in a single scattered field measurement. More complete object information can be acquired by introducing a relative rotation between the illumination and the object to fully sample the object spatial frequency distribution, yielding optical diffraction tomography (ODT).
Optical holography, and thus ODT, normally relies on spatially coherent light to interferometrically record the complex scattered field, allowing interior spatial frequency information to be acquired. Coherent scattering data can be inverted to solve the scattering problem for variations in refractive index of the specimen. Using coherent illumination allows object position to be encoded in the complex scattered field. The phase is critical since it encodes the axial location of the scatterer, and it is this phase that is required to enable diffraction tomography to be extended to incoherent fluorescent light. ODT uses a rotation of the object or illumination wave to capture a sequence of scattered fields that fill out the object spatial frequency information. Then computational imaging tools are applied to invert the information recorded in order to recover the object spatial frequency distribution, and thus the object spatial information. ODT has the advantage of not being constrained to imaging objects in the Rayleigh range of the illumination beam, as the light is allowed to diffract before encountering the optical detector.
Such a coherent imaging method is conventionally thought to be impossible with fluorescent light because, in the case of incoherent emission, phase is lost due to the random emission of the molecular emitters and this unstable phase obscures the relationship between the location of the emitter and the propagation direction and phase. If fluorescence is to be imaged in a similar manner as ODT, it is necessary to encode the coherent illumination propagation phase onto fluorescent light so that the phase can be recovered. It is possible to exploit the fact that an incoherent emitter is coherent with itself to encode spatial location of the emitter [20,15], but a general adaptation of coherent-like imaging methods to incoherent light remains elusive.
In this Letter, we introduce the first method that uses fluorescent light emission for diffraction tomographic imaging. As fluorescent light is spatially incoherent, it is necessary to mimic the process of coherent scattering to enable optical diffraction tomography with fluorescence. We mimic spatially coherent scattering by transferring the phase difference from a pair of spatially coherent illumination beams [7] into a time variation of the fluorescent emission brightness, enabling ODT with fluorescent light recorded on a single-element optical detector. By mimicking the incident and scattered fields in the illumination of a fluorescent object, we are able to perform optical diffraction tomography using fluorescent light. We refer to this method as Fluorescence Diffraction Tomography (FDT).
The FDT concept is illustrated in Fig. 1(c) and (d). A pair of illumination beams substitutes for the incident and scattered waves in coherent scattering. The reference wave, E_0, in Fig. 1(c), plays the role of the incident wave in coherent scattering and interferes with an illumination plane wave, E_1, that represents the scattered wave. To map out the equivalent information as in coherent scattering, the incident direction of E_1 is scanned in time, producing a modulation of the illumination intensity that depends on the relative phase of the two illumination beams, ∆k(t) · x, where x = (x, z) is the spatial coordinate vector in the x − z plane. The difference wavevector, ∆k(t) = k_1(t) − k_0, behaves as the scattering vector in coherent scattering, defined as the difference between the wavevector of the scattered field, k_1, and that of the incident wave, k_0 = k (0, 1), where k = 2π/λ is the wavenumber of the illumination.
The collected fluorescence recorded with a single-pixel detector serves as the FDT time signal. These measurements imprint the relative phase of the two spatially coherent illumination beams into an intensity modulation in space and time that allows the detected incoherent fluorescent light power to be treated as if it came from a coherent source. Because fluorescent light is incoherent, the detected fluorescent light power is equivalent to the overlap integral between the spatial distribution of the fluorophore concentration (our object) and the illumination intensity. Each measurement at time t samples the complex amplitude of the object spatial frequency distribution at the difference spatial frequency wavevector, ∆k(t). The result is that for each incident angle of E_1(t), a spatial frequency projection is recorded that exactly mimics the complex spatial frequency information traditionally obtained through coherent scattering measurements; compare Figs. 1(b) and (d).
The key aspect of FDT is coherent transfer mediated by the modulated illumination intensity. The intensity arises from the interference of the reference and scanned fields, i.e., I_ill(x, t) ∝ |E_0 + E_1(t)|², where θ(t) denotes the incidence angle of E_1 with respect to the k_z axis, or equivalently the angle between k_1 and k_0, at time t. The model for the interference is written as I_ill(x, t) = 1 + µ(t) cos[∆Φ(x, t)], where µ(t) is the fringe visibility and ∆Φ is the phase difference between the reference and scanned fields in Fig. 1(c) [4,3]. The phase difference between the illumination fields, ∆Φ(x, t) = ω_c t + ∆k(t) · x, imparts a temporal modulation pattern at each spatial position in the x − z plane. Here, ω_c is a carrier frequency in the modulation, which is critical for isolating the complex phase information in the time signal [4,7], and ∆k(t) · x is the spatial phase variation that encodes the location of the object in the x − z plane.
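The modulation model above can be sketched numerically. The snippet below (all parameter values are hypothetical, chosen only for illustration) evaluates I_ill(x, t) = 1 + µ(t) cos[ω_c t + ∆k(t) · x] at two lateral positions and confirms that different positions produce distinct temporal modulations, which is how spatial position is encoded in the time signal.

```python
import numpy as np

# Illustrative (hypothetical) parameters.
wavelength = 532e-9            # illumination wavelength (m)
k = 2 * np.pi / wavelength     # wavenumber
NA = 0.9                       # numerical aperture of illumination optics
T = 1.0                        # half of the total collection time (s)
omega_c = 2 * np.pi * 50.0     # carrier frequency (rad/s), assumed value
mu = 1.0                       # fringe visibility (assumed unity)

t = np.linspace(-T, T, 2001)
dk_x = k * NA * t / T                               # transverse difference frequency
dk_z = k * (np.sqrt(1 - (t * NA / T) ** 2) - 1)     # axial difference frequency

def illumination(x, z):
    """I_ill(x, t) = 1 + mu*cos(omega_c*t + dk(t).x) at a fixed point (x, z)."""
    phase = omega_c * t + dk_x * x + dk_z * z
    return 1.0 + mu * np.cos(phase)

# Two lateral positions acquire different instantaneous modulation
# frequencies: d(phase)/dt = omega_c + (k*NA/T)*x to leading order in x.
I_a = illumination(5e-6, 0.0)
I_b = illumination(-5e-6, 0.0)
assert not np.allclose(I_a, I_b)   # position is encoded in the time signal
```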
The elements of ∆k(t) = (∆k_x(t), ∆k_z(t)) are difference spatial frequencies. In our work, we set ∆k_x(t) = k_c t/T, corresponding to sin θ(t) = t NA/T, where k_c = k NA is the coherent imaging cutoff spatial frequency for the illumination optics and t ∈ [−T, T], with 2T denoting the total collection time. For this choice, we have ∆k_z(t) = k (√(1 − (t NA/T)²) − 1). The spatio-temporal intensity modulation encodes the relative spatial phase of the illumination fields as a temporal modulation of the emitted fluorescent power from the object, thereby transferring coherent propagation behavior to fluorescent emission [4].
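As a quick consistency check on this parameterization, the sketch below (hypothetical parameter values) verifies that the scanned wavevector k_1 = ∆k + k_0 stays on the Ewald sphere, i.e., that ∆k(t) traces an arc of radius k centered at −k_0.

```python
import numpy as np

# Hypothetical parameters for illustration.
wavelength = 532e-9           # illumination wavelength (m)
NA = 0.9                      # numerical aperture of illumination optics
k = 2 * np.pi / wavelength    # wavenumber
T = 1.0                       # half of the total collection time (s)

t = np.linspace(-T, T, 1001)

# Difference-wavevector components as defined in the text:
#   sin(theta(t)) = t*NA/T, so dk_x = k*sin(theta) = k_c * t / T.
dk_x = k * NA * t / T
dk_z = k * (np.sqrt(1.0 - (t * NA / T) ** 2) - 1.0)

# k_1 = dk + k_0 with k_0 = k*(0, 1) must stay on the Ewald sphere
# (a circle of radius k in the k_x - k_z plane).
k1_norm = np.hypot(dk_x, dk_z + k)
assert np.allclose(k1_norm, k)

# The arc's transverse extent is bounded by the coherent cutoff k_c = k*NA.
assert np.isclose(np.abs(dk_x).max(), k * NA)
```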
The time trace is generated by detecting the collected fluorescent emission as the scanning field sweeps through the range of incident angles supported by the NA of the illumination objective [4,7,3]. The temporal signal S(t, φ) is the projection of the spatial distribution c(x) of the fluorophore concentration onto the illumination intensity at incidence angle φ:

S(t, φ) = ⟨ I_ill(R_φ x, t) c(x) ⟩_x, (1)

where the Dirac integral notation, ⟨·⟩_x = ∫ · dx, denotes the spatial integration over x and z performed by the single-pixel detector, and R_φ is a rotation matrix by φ that yields the coordinate transform x → R_φ x. We have not included a measurement noise term ε(t) in writing Eq. (1) to keep subsequent equations simple in form, but it is understood that an additive noise term is always present. An equivalent representation of Eq. (1) is its complex-valued form, obtained with Euler's identity by retaining only a single sideband of the sinusoidal term in the illumination pattern:

S̃^(1)(t, φ) = ⟨ Ψ_φ(x, ∆k(t)) c(x) ⟩_x, (2)

where Ψ_φ(x, ∆k(t)) = exp(i ∆k(t) · R_φ x) is the complex Fourier kernel in rotated coordinates R_φ x and we have assumed µ(t) = 1 for simplicity. The procedure for obtaining this representation from the data is illustrated in supplementary material Fig. S2(a,b) [4]. The value of the complex-valued representation in Eq. (2) is that each time sample corresponds to a complex amplitude of the object spatial frequency distribution, equivalent to the data in an individual ODT line image; this establishes Eq. (2) as the forward model for FDT, which is equivalent to the Fourier diffraction theorem [24,2]. We refer to Eq. (2) as the forward model for FDT with forward operator D. The spatial distribution of the data collected for each angle is referred to as a sinogram, which is computed from a Fourier transform of the single-sideband time signal, s(R_φ x) = F{S̃^(1)(t, φ)}.
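A minimal numerical sketch of this forward model: for a "concentration" made of two point emitters (hypothetical positions and weights), each time sample of the single-sideband signal is the sum of Fourier kernels evaluated at the emitters, and the t = 0 sample (where ∆k = 0) reduces to the total fluorophore content, independent of φ.

```python
import numpy as np

# Sketch of the single-sideband forward model for point emitters
# (hypothetical parameters; mu(t) = 1, no noise).
wavelength = 532e-9
k = 2 * np.pi / wavelength
NA = 0.9
T = 1.0

t = np.linspace(-T, T, 801)
dk = np.stack([k * NA * t / T,
               k * (np.sqrt(1 - (t * NA / T) ** 2) - 1)], axis=1)  # (Nt, 2)

# Fluorophore "concentration": two point emitters at positions x_m = (x, z).
positions = np.array([[2e-6, 1e-6], [-3e-6, -2e-6]])
weights = np.array([1.0, 0.5])

def forward(phi):
    """S~^(1)(t, phi) = sum_m c_m exp(i dk(t) . R_phi x_m)."""
    R = np.array([[np.cos(phi), -np.sin(phi)],
                  [np.sin(phi),  np.cos(phi)]])
    rotated = positions @ R.T                      # R_phi x_m
    return (weights * np.exp(1j * dk @ rotated.T)).sum(axis=1)

S = forward(np.deg2rad(45.0))
assert S.shape == t.shape
# At t = 0 the two beams coincide (dk = 0) and the "DC" sample is the
# total fluorophore content, independent of phi.
assert np.isclose(S[len(t) // 2], weights.sum())
```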
The least-squares estimate (minimum L²-norm error) of the fluorescent concentration, denoted by ĉ(x), is then computed by applying the inverse operator D⁻¹:

ĉ(x) = D⁻¹ S̃^(1) = ⟨ γ(t) Ψ†_φ(x, ∆k(t)) S̃^(1)(t, φ) ⟩_{t,φ}, (3)

where † denotes the adjoint (complex conjugation for the Fourier kernel). The kernel of the inverse operator, Ψ̃_φ, forms a biorthogonal system with the kernel of the forward operator, Ψ_φ, in the limit of high NA, as shown in the Supplementary Material. In the noiseless case, this biorthogonality leads to a perfect reconstruction of the object. The dual kernel is Ψ̃_φ(x, ∆k(t)) = γ(t) exp(−i ∆k(t) · R_φ x), where γ(t) is the magnitude of the determinant of the Jacobian of the map t → ∆k(t). The estimate in Eq. (3) is formally equivalent to the ODT backpropagation reconstruction [2].

The FDT microscope was experimentally implemented using a spinning modulation mask that behaves as a grating with a time-varying spatial frequency [8]. The mask is illuminated by a line focus that is image relayed to the object region. At a snapshot in time, the mask appears as a static grating, creating a zero-order beam, E_0, as well as positive, E_1(t), and negative diffracted orders. The negative diffracted order is blocked by a spatial filter, leaving the zero- and positive-order beams to be image relayed to the object plane [4]. Interference between the beams produces the desired spatio-temporally modulated illumination intensity pattern. The object is rotated a full 360 degrees, and at each rotation angle a full time trace is acquired. Rotation in the spatial domain also rotates the spatial frequency arc, Fig. 1(d). Once data from all illumination angles have been acquired, the full (k_x − k_z) frequency plane has been sampled, so that we may estimate the object with isotropic spatial resolution. See the supplementary information for details on the experimental apparatus and reconstruction algorithm.
FDT imaging was demonstrated experimentally using an object fabricated from cotton fibers stained with fluorescein. The stained fibers were mounted on an eight-axis stage, Fig. S1(c). The mounting stage allowed full 360 degree rotation of the sample as well as precise positioning of the sample in the microscope focus. Fig. 3 shows a 3D reconstruction of the fluorescein-stained fibers using alpha blending from Volume Viewer in ImageJ. The image was generated with 200 evenly spaced x − z slices obtained by scanning along y. Each slice reconstruction used 360 measured time signals evenly spaced over φ = [−180, 180) degrees, averaged over 10 time traces. Due to mechanical instability of the y-axis stage, each x − z slice was shifted to align adjacent slices and avoid object discontinuities in the 3D reconstruction. The sub-images in Fig. 3 are slices from the 3D reconstruction, and the colored frames correspond to the rectangular boxes in the 3D image. An absorption contrast image was simultaneously acquired with the fluorescence; however, for brevity this image is not shown in the main text (see Fig. S3).
There are several differences between standard optical diffraction tomography (ODT) and FDT that should be noted. While ODT and FDT both obtain complex spatial frequency values that follow the arc of spatial frequency information governed by diffraction, as shown in Figs. 1(b) and (d), the physical origins of these data are remarkably different. ODT relies on the spatial coherence of the light scattered by the spatial variation in the refractive index of the object; as a result, the ODT projection operation differs from that of FDT by a complex scaling constant of −2i∆k_z. In contrast, FDT records information from the spatial variation of the fluorophore concentration through the interference of two spatially coherent illumination beams. Each method samples complex amplitudes that lie on the Ewald sphere, which naturally leads to recording data in the k_x − k_z spatial frequency plane by relative rotation of the object and illumination beams to spatially resolve the object in the x − z plane. However, in the derivation of the Fourier diffraction theorem for FDT, the only assumption made was illumination by plane waves; there is no need to invoke the Born or Rytov approximations. Therefore, FDT does not share the object size or object variation limitations of standard ODT [13].
FDT mitigates the coupling between object size and spatial resolution typically seen in fluorescence imaging. By comparison, in optical projection tomography, where the fluorescent light is detected with a camera, the object is restricted to the region of good focus (the Rayleigh range) to avoid background blur from out-of-focus light [16,1]. This is the source of the coupling between spatial resolution and object size conventionally seen with incoherent imaging modalities. In FDT, the incoherent light emission may be treated as a coherent source, allowing the object to extend over a much larger region not constrained by the Rayleigh range. FDT therefore removes the need to reduce the numerical aperture of the illumination as the object size increases.
In summary, we introduced a new tomographic imaging technique, Fluorescence Diffraction Tomography (FDT), that extends optical diffraction tomography to incoherent contrast mechanisms, such as fluorescence and Raman scattering. We developed the theory for both the forward and inverse models. The forward model uses CHIRPT illumination and detection as a projection of spatial frequencies onto the sample [4,7,22,3]. The projection uses modulation transfer to encode the spatial phase of the illumination, allowing phase transfer to incoherent sources. We demonstrated FDT reconstruction with dual functions that are biorthogonal to the intensity illumination of the rotated Fourier elements in the forward model. Additionally, we showed experimentally that FDT works for both coherent and incoherent contrast mechanisms.
In principle it can be used for any contrast mechanism including nonlinear mechanisms. We expect this technique will expand the range of samples that can be imaged and provide richer information as it is easy to co-register multiple contrast distributions simultaneously.

Funding
We acknowledge funding support from the National Institutes of Health (NIH) (R21EB025389, R21MH117786). J. Squier is supported by the National Science Foundation (NSF) (1707287).

Disclosures
The authors declare no conflicts of interest.

Experimental Setup
The experimental setup for Fluorescent Diffraction Tomography (FDT) is shown in Fig. 4. A continuous-wave (CW) laser (Lighthouse, Sprout), wavelength λ = 532 nm, is collimated and brought to a line focus with a cylindrical lens on a spinning modulator disk, Fig. 4(a). The modulator is a transmission mask designed to impart a unique modulation frequency as a function of disk radius [8,4,7,6]. As the disk spins at a constant angular velocity, the transmission pattern presents a time-varying grating producing diffracted orders. A slit spatial filter is placed in the back focal plane of a 2f optical system, see Fig. 4(b), and selects only the zero and first diffracted orders [4] to produce a stationary reference beam and an angle-scanning beam, which act as the incident field and scattered field of a coherent scattering experiment, respectively [24]. The filtered beams are image relayed to the sample region with a 4f imaging configuration with a tube lens, f_tube = 250 mm, and objective lens, f_obj = 35 mm. The sample was mounted on a rotation stage (Newport, URB100CC) to allow full 360 degree rotation in the x − z plane, Fig. 4(c). The transmitted light was collected by a 0.25 NA aspheric lens (New Focus, 5725-A) and image relayed to a photodiode detector (Thorlabs, DET100A). The fluorescence was collected in the epi-direction and image relayed to a PMT (Hamamatsu H9305). The fluorescence was separated from the illumination light with a dichroic beamsplitter (Semrock, FF562-Di03) and an interference filter (Semrock, FF01-593/40).
The objective lens, a 35 mm focal length achromatic lens (Thorlabs, AC254-035-A), was chosen instead of a typical high-NA objective to accommodate the ∼30 µm transverse wobble of the mounting stage, which can lead to significant reconstruction distortions. In order to correct the transverse wobble, a large field of view (FOV), ∼260 µm, was used to ensure that the sample stayed in the central region of the FOV, so that the sample image could be shifted laterally in post-processing to remove the effect of the wobble.
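The lateral re-alignment described above can be implemented in several ways; one common choice is phase correlation between adjacent slices. The sketch below is not necessarily the authors' exact procedure (the function name and test images are ours); it recovers a known integer shift between two images.

```python
import numpy as np

# Hedged sketch: estimate the lateral shift between adjacent slices by
# phase correlation, one plausible way to implement the post-processing
# shift correction described above.
def lateral_shift(a, b):
    """Estimate the integer (row, col) shift that maps image b onto image a."""
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real   # normalized cross-power
    idx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap circular indices to signed shifts.
    return tuple(i if i <= s // 2 else i - s for i, s in zip(idx, corr.shape))

rng = np.random.default_rng(0)
ref = rng.random((64, 64))
shifted = np.roll(ref, (3, -5), axis=(0, 1))   # simulate stage wobble
assert lateral_shift(shifted, ref) == (3, -5)
```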

Mathematical description of FDT image formation
The forward operator D can be viewed as a map from L²(R²), where the concentration function c(x) lives, to L²(R × (−π, π]), where S̃^(1)(t, φ) lives:

S̃^(1)(t, φ) = D c = ⟨ Ψ_φ(x, ∆k(t)) c(x) ⟩_x. (4)

The kernel of this operator is the Fourier kernel in rotated spatial coordinates R_φ x:

Ψ_φ(x, ∆k(t)) = exp(i ∆k(t) · R_φ x). (5)

The forward operator in (4) is an inner product in L²(R²). The adjoint operator is given by

(D† S̃^(1))(x) = ⟨ Ψ†_φ(x, ∆k(t)) S̃^(1)(t, φ) ⟩_{t,φ},

with the kernel Ψ†_φ(x, ∆k(t)) = exp(−i ∆k(t) · R_φ x).

Applying the adjoint operator to S̃^(1)(t, φ) is equivalent to performing correlation processing (matched filtering) on the data and furnishes an estimate of c(x). However, this estimate in general does not enjoy any sense of optimality.
The least-squares (minimum L²-norm error) estimate is obtained by building the inverse operator D⁻¹:

ĉ(x) = D⁻¹ S̃^(1) = ⟨ Ψ̃_φ(x, ∆k(t)) S̃^(1)(t, φ) ⟩_{t,φ}. (6)

The kernel for D⁻¹, called the dual kernel, is given by

Ψ̃_φ(x, ∆k(t)) = γ(t) exp(−i ∆k(t) · R_φ x),

where γ(t) = k|∆k_x(t)|/√(k² − [∆k_x(t)]²) is the magnitude of the determinant of the Jacobian of the coordinate transformation t → ∆k(t).
As the numerical aperture NA goes to 1, this dual kernel forms a biorthogonal system with the forward kernel, and in the limit we have

Q(x, x') = ⟨ Ψ̃_φ(x, ∆k(t)) Ψ_φ(x', ∆k(t)) ⟩_{t,φ} → δ²(x − x'), (7)

where δ²(·) denotes a bivariate Dirac delta. To see this, let us define ∆x = x − x'. Carrying out the integral over the rotation angle φ, Q can be written as

Q = ∫ γ(t) J₀(‖∆k(t)‖₂ ‖∆x‖₂) dt,

where J₀(·) is the Bessel function of the first kind of order zero and ‖·‖₂ denotes the 2-norm. Substituting for ∆k(t) and defining the normalized difference wavenumber ∆κ = ‖∆k‖₂/k, the integral Q can be expressed in the simplified form

Q = ∫₀^{∆κ_c} J₀(k ∆κ ‖∆x‖₂) ∆κ d∆κ,

where ∆κ_c = ∆κ(1) = √(2 (1 − √(1 − NA²))). The value of the above integral is

Q = (∆κ_c / (k ‖∆x‖₂)) J₁(k ∆κ_c ‖∆x‖₂).

The biorthogonality property in (7) is satisfied when NA → 1, or equivalently when ∆κ_c → √2, whereupon Q → δ²(x − x'). However, physical constraints imposed by the experimental system limit the maximum value of ∆κ_c. In the experiment, the illumination light propagating within the region of the object is described by the Helmholtz equation. The dispersion relationship of the Helmholtz equation requires that a plane wave propagating with transverse spatial frequency k_x carries an axial spatial frequency of k_z = √(k² − k_x²). This condition sets the axial difference spatial frequency to ∆k_z = √(k² − k_x²) − k. These restrictions enforce a maximum value of the difference wavevector norm of ∆κ_c = √2 for NA = 1. The solution to the integral (7) is plotted in Fig. 5; the horizontal axis is k‖∆x‖₂. In the limiting case, and in the absence of noise, this biorthogonality property results in exact reconstruction of the object through the inverse operator. For smaller numerical apertures, the width of Q limits the reconstruction resolution.
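The closed form above rests on the standard Hankel-type identity ∫₀^a J₀(b u) u du = (a/b) J₁(a b). A quick numerical check, with hypothetical stand-in values for ∆κ_c and k‖∆x‖₂:

```python
import numpy as np
from scipy.special import j0, j1
from scipy.integrate import quad

# Numerical check of the closed form used above:
#   int_0^a J0(b*u) * u du = (a/b) * J1(a*b),
# which follows from d/dv [v*J1(v)] = v*J0(v).
a, b = 1.2, 3.4   # hypothetical values of dkappa_c and k*||dx||_2
numeric, _ = quad(lambda u: j0(b * u) * u, 0.0, a)
closed = (a / b) * j1(a * b)
assert np.isclose(numeric, closed, atol=1e-10)

# As a grows, the kernel narrows (width ~ 1/a), approaching a delta,
# which is the biorthogonality limit discussed in the text.
```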
Figure 5: Plots of the biorthogonality relationship given in (7) for ∆κ_c = 0.5, 5, and 50, as a function of k‖∆x‖₂. In the limit ∆κ_c → ∞, the dual and forward model kernels become biorthogonal.

Data Processing
Equations (2) and (3) in the Letter describe the forward and inverse models for Fluorescence Diffraction Tomography (FDT). Here we describe the detailed signal processing steps required for processing experimental data and using that data for image reconstruction.
As a single scan is taken over one rotation of the modulator mask, the time trace, modeled by Eq. (1), is generated by collecting the signal light on a single-pixel photodetector (Fig. 6(a)). The complex demodulated sideband given in Eq. (2) is isolated by first taking a Fourier transform of the time trace. The carrier frequency, ω_c, causes the spatial distribution of the projection, s_φ(x_φ), to be centered at +ω_c, with a conjugate image centered at −ω_c, as shown in Fig. 6(b) [4,5]. The carrier frequency plays a role analogous to the off-axis reference beam in holography, avoiding the twin image problem [14].
In order to recover the complex object information, the positive single sideband is isolated by applying a bandpass filter in the frequency domain, shown as a red dotted line in Fig. 6(c). Once the bandpass filter has been applied, the signal is converted back into the time domain by taking the inverse Fourier transform. This operation results in a complex time signal that contains the carrier frequency, i.e., the rapid oscillation of the real part of the time signal shown in Fig. 6(d). This complex temporal data is then demodulated by the carrier frequency to bring the line-image information to baseband (Fig. 6(e)). In total, these operations provide the complex single sideband given by Eq. (2). At this point in the reconstruction workflow, known optical aberrations can be corrected, such as spherical aberration and the so-called wobble phase imparted by an imperfectly mounted disc [4,11,7,5].
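The processing chain just described (Fourier transform, positive-sideband bandpass, inverse transform, carrier demodulation) can be sketched on a synthetic trace. The carrier and signal frequencies below are hypothetical stand-ins for the experimental values.

```python
import numpy as np

# Sketch of the single-sideband processing chain on a synthetic time trace.
fs = 10_000.0                        # sampling rate (Hz), assumed
t = np.arange(0, 1.0, 1.0 / fs)
f_c = 1_000.0                        # carrier frequency (Hz), assumed
f_sig = 37.0                         # one "line image" frequency component

# Real-valued detector trace: 1 + cos(2*pi*(f_c + f_sig)*t), cf. Eq. (1).
trace = 1.0 + np.cos(2 * np.pi * (f_c + f_sig) * t)

# 1) Fourier transform; 2) bandpass the positive sideband around +f_c;
# 3) inverse transform to obtain a complex signal; 4) demodulate the carrier.
F = np.fft.fft(trace)
freqs = np.fft.fftfreq(t.size, 1.0 / fs)
F[(freqs < f_c - 500) | (freqs > f_c + 500)] = 0.0   # keep +f_c sideband only
complex_sig = np.fft.ifft(F)                         # single-sideband signal
baseband = complex_sig * np.exp(-2j * np.pi * f_c * t)

# The demodulated signal oscillates at f_sig only.
spectrum = np.abs(np.fft.fft(baseband))
peak = abs(np.fft.fftfreq(t.size, 1.0 / fs)[np.argmax(spectrum)])
assert np.isclose(peak, f_sig)
```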
The demodulated time signal is downsampled to reduce the data size, thereby speeding up the reconstruction algorithm and reducing the data burden. Since the time signal is already band limited, it is not necessary to apply a lowpass filter in the downsampling operation. Fig. 6(e) shows the real part of the demodulated single-sideband signal, S̃^(1)(t), which is used in the FDT reconstruction algorithm (main text, Eq. (3)). Scanning the object over φ ∈ (−180, 180] degrees, processing the time traces as described above, and Fourier transforming the downsampled, demodulated, single-sideband time trace results in the sinogram shown in the main text, Fig. 2(a).
Each time point in S̃^(1)(t) is a measurement of the magnitude and phase of the object spatial frequency representation at the instantaneous projected spatial frequency pair (∆k_x(t), ∆k_z(t)) [4,7,5]. The inlaid figures in Fig. 6(e), above the time trace, show the illumination intensity at a snapshot in time, which is the interference between the reference beam, E_0, and the scanning beam, E_1(t), at a crossing angle, θ(t), which produces a spatial frequency ∆k(t). The illumination intensity excites the fluorescent concentration distribution at the given spatial frequency. The emitted fluorescent light is collected by a single-pixel detector, performing a spatial integration along the spatial coordinates x and z, modeled by Eq. (1). Note that in this work we assume the detector is infinite in extent for simplicity in our forward model. This large-detector approximation is a good one for the experimental system described here, although there are cases in which the finite size of the detector and the point spread function of the detection must be accounted for [3].
The measured spatial frequency information is mapped into the object spatial domain by the inverse operator (6). The action of the inverse operator is illustrated by the inlaid figures below the time trace in Fig. 6(e), which show how the measured spatial frequency information is mapped into the object spatial distribution with (6). As the disc rotates, each measured spatial frequency component is obtained from an arc that traverses the Ewald sphere over the range of transverse spatial frequencies supported by the NA of the illumination objective. In this way, all spatial frequencies supported by the illumination objective are sequentially scanned as the field E_1 passes through the full spatial frequency support of the imaging system [4,6]. The green and orange points illustrate high and low spatial frequencies, respectively.
Because we sample discrete time points, it is appropriate to express the inverse operator, (6), as a Riemann sum. The time points, t_i, are samples on a regularly spaced grid with time step ∆t. A fixed set of rotation angles, φ_j, indexed by j, is acquired over an angular span of 2π with uniform angular spacing ∆φ. This discrete representation leads us to rewrite Eq. (3) in the main text as

ĉ(x) = Σ_j Σ_i Ψ̃_{φ_j}(x, ∆k(t_i)) S̃^(1)(t_i, φ_j) ∆t ∆φ, (8)

which can be written explicitly as

ĉ(x) = Σ_j Σ_i γ_i exp(−i [∆k_{x_i} x_{φ_j} + ∆k_{z_i} z_{φ_j}]) S̃^(1)(t_i, φ_j) ∆t ∆φ, (9)

where γ_i = k|∆k_{x_i}|/√(k² − [∆k_{x_i}]²) is the magnitude of the determinant of the Jacobian, ∆k_{x_i} and ∆k_{z_i} are the difference wavenumbers in x and z, respectively, and x_{φ_j} and z_{φ_j} are the components of the rotated coordinate vector R_{φ_j} x. From (9), we see that the reconstruction algorithm applied to discrete data is performed in the spatial domain by weighting the dual-operator kernel functions, sampled on the discrete reconstruction grid, by the complex, demodulated, single-sideband time-signal samples.
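The discrete sum in this form can be exercised directly. The sketch below (hypothetical parameters, a single point emitter, coarse sampling) builds the forward data per angle and applies the γ-weighted conjugate-kernel sum; the reconstruction peaks at the emitter position.

```python
import numpy as np

# Sketch of the discrete (Riemann-sum) inverse for one point emitter.
wavelength = 532e-9
k = 2 * np.pi / wavelength
NA = 0.9
Nt, Nphi = 101, 60

t = np.linspace(-1, 1, Nt)
dk_x = k * NA * t
dk_z = k * (np.sqrt(1 - (NA * t) ** 2) - 1)
gamma = k * np.abs(dk_x) / np.sqrt(k ** 2 - dk_x ** 2)   # Jacobian weight
phis = np.linspace(-np.pi, np.pi, Nphi, endpoint=False)

src = np.array([1.5e-6, -1.0e-6])                        # emitter position (m)

# Reconstruction grid in the x-z plane (0.1 um spacing).
ax = np.linspace(-3e-6, 3e-6, 61)
X, Z = np.meshgrid(ax, ax, indexing="ij")
recon = np.zeros_like(X, dtype=complex)

for phi in phis:
    c, s = np.cos(phi), np.sin(phi)
    # Forward data for this angle: S(t_i, phi) = exp(i dk . R_phi x_src).
    xr, zr = c * src[0] - s * src[1], s * src[0] + c * src[1]
    S = np.exp(1j * (dk_x * xr + dk_z * zr))
    # Dual-kernel sum: conjugate Fourier kernel weighted by gamma_i.
    Xr, Zr = c * X - s * Z, s * X + c * Z
    phase = dk_x[:, None, None] * Xr + dk_z[:, None, None] * Zr
    recon += (gamma[:, None, None] * np.exp(-1j * phase)
              * S[:, None, None]).sum(axis=0)

peak = np.unravel_index(np.argmax(np.abs(recon)), recon.shape)
est = np.array([ax[peak[0]], ax[peak[1]]])
assert np.linalg.norm(est - src) < 0.5e-6   # peak lands near the emitter
```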

Absorption Contrast
FDT can be used with many contrast mechanisms (we demonstrate fluorescence and absorption), regardless of whether the light emerging from the specimen is coherent or incoherent. While in the main text we primarily focused on FDT for fluorescent light, we also demonstrate the ability to simultaneously image light lost to absorption or scattering in transmission. Here, we demonstrate the absorption contrast mechanism, which was acquired simultaneously with the fluorescence on a separate single-pixel detector. The absorption signal was acquired by collecting the transmitted excitation light, 532 nm, on a photodiode (Thorlabs, DET100A). Fig. 7 shows the 3D reconstruction of the absorption contrast. The left-hand column shows slices along the x − y, y − z, and x − z planes, in descending order.

FDT Comparison with Backprojection
In the introduction of the main text, we mention that one of the major limitations of Optical Projection Tomography (OPT) and Selective Plane Illumination Microscopy (SPIM) is that both techniques exhibit coupling between the object size and the spatial resolution [9,1,10,16]. That is, the object being imaged may not be larger than roughly twice the Rayleigh range of the line focus [16]. The reason for this limitation is that both OPT and SPIM assume that the illumination light is approximately planar in the object region so that diffraction is negligible. This assumption is only valid inside the Rayleigh range of a beam focus (or the focal region of the point spread function). Therefore, larger objects require a thicker line focus of illumination for SPIM, to extend the Rayleigh range over the object; in the case of OPT, a lower NA objective is used so that the PSF is approximately collimated throughout the object thickness [16]. If this assumption is violated, an out-of-focus blur results in the final reconstructed image. It is worth noting that several SPIM approaches have appeared that use virtual light sheets with PSF-engineering approaches to create diffraction-free light sheets that circumvent this coupling [17].
With FDT, we do not make an assumption of planar illumination, since the propagation phase is directly encoded in the measured temporal data. This allows FDT to numerically refocus the entire volume measured by the illumination beams, in a manner similar to holography. In previous work we demonstrated that the technique underlying FDT, CHIRPT microscopy, not only enables this holographic refocusing of fluorescent images, but also exhibits a depth of field up to 83× that of conventional imaging at the same NA without sacrificing spatial resolution (∼440 µm) [7]. The DOF was measured to extend beyond 1 cm with the same optical components, but at the cost of a loss of spatial frequency support. In the reconstruction presented here, it is not necessary to explicitly backpropagate or numerically refocus the line image [7]. Instead, the reconstruction algorithm uses the dual operator to directly synthesize the object in x − z. The sum over the kernel functions of the inverse (dual) operator, weighted by the complex demodulated values of the time trace recorded for each rotation angle φ, directly gives the backpropagated object distribution for that measurement angle. The encoded propagation phase allows objects to reside well outside the Rayleigh range of the focus of the illumination light sheet, so FDT effectively decouples the spatial resolution from the maximum object size.

Figure 8: Comparison between computed tomography using fluorescence intensity and Fluorescence Diffraction Tomography with phase encoding. Panel (a) shows the 2D spatial frequency support of computed tomography measured at a 45 degree angle. Panel (b) is the 2D FFT of (a), resulting in a 2D image of the object. Panel (c) shows the frequency support when the full sinogram is used in the computed tomography reconstruction. Panel (d) shows the 2D reconstructed object when diffraction is not accounted for in the reconstruction.
Panel (e) illustrates how FDT, using modulation transfer, can extract the complex phase information, even from fluorescence, to map the measured spatial frequencies onto the Ewald sphere. Panel (f) shows the 2D object reconstruction using only one line image at a 45 degree angle. Panel (g) shows the full spatial frequency support when the full sinogram is used in the FDT reconstruction. Panel (h) shows the 2D reconstructed object generated by taking the 2D FFT of panel (g).
In order to illustrate the effect that encoding the propagation phase has on the reconstruction, and the effect it has on the maximum aberration-free field of view, we perform two simulations. We start by simulating a sinogram using fluorescence as the contrast mechanism. The sinogram was generated using an illumination wavelength λ = 532 nm, a numerical aperture NA = 0.90, a field of view FOV = 20 µm, and 360 evenly spaced illumination angles ranging over θ = [−180, 180) deg; the resulting sinogram is shown in the main text, Fig. 2(a).
Using the sinogram, two reconstruction strategies were tested. In the first, we reconstructed the object without the phase information obtained from the complex spatial frequency values, by using the magnitude of the sinogram; this is equivalent to the information that would be obtained in OPT if the object size were much larger than the Rayleigh range. We reconstructed these data with the filtered backprojection algorithm using an inverse Radon transform (the iradon() function in Matlab). The inverse Radon transform is relevant for line-projection tomographies [19,21]. The results of this reconstruction strategy are shown in Figs. 8(a-d). Fig. 8(a) is the 2D frequency support measured at θ = 45 deg. Notice that the frequency support lies on a straight line in the (k_x − k_z) plane at a 45 degree angle, corresponding to the illumination angle, in agreement with the Fourier slice theorem [12]. Fig. 8(b) is the resulting 2D object reconstruction in the x − z plane, generated by taking the 2D inverse Fourier transform of Fig. 8(a). The reconstructed object does not exhibit diffraction, i.e., the objects appear as uniform lines of constant magnitude rotated by 45°, in accordance with the assumptions made by OPT. Figure 8(c) is the frequency support using the full sinogram. The resulting 2D x − z object reconstruction, Fig. 8(d), was generated by taking the 2D inverse Fourier transform of Fig. 8(c). A radially dependent azimuthal blurring is evident in the reconstructed object. This result is expected, since the Rayleigh range of the simulated illumination beam was ∼0.365 µm while the reconstructed field of view was 20 µm.
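The difference between the two frequency supports can be made concrete: for one measurement angle, projection tomography samples a straight line through the origin (Fourier slice theorem), while FDT samples an arc of the Ewald sphere. The sketch below (hypothetical parameters) computes both supports at 45 degrees and their deviation, which grows with NA; neglecting this curvature is what blurs the OPT-style reconstruction away from the focal region.

```python
import numpy as np

# Compare the single-angle frequency support of projection tomography
# (straight slice) with that of FDT (Ewald arc). Hypothetical parameters.
wavelength = 532e-9
k = 2 * np.pi / wavelength
NA = 0.9
u = np.linspace(-NA, NA, 201)          # sin(theta) across the aperture

# Projection tomography at 45 deg: support on the line k_z = k_x.
kx_line = k * u * np.cos(np.pi / 4)
kz_line = kx_line.copy()

# FDT at the same angle: the arc dk = (k*u, k*(sqrt(1-u^2)-1)), rotated by 45 deg.
dkx = k * u
dkz = k * (np.sqrt(1 - u ** 2) - 1)
c, s = np.cos(np.pi / 4), np.sin(np.pi / 4)
kx_arc, kz_arc = c * dkx - s * dkz, s * dkx + c * dkz

# Normalized distance of each arc point from the straight slice k_z = k_x.
deviation = np.abs(kz_arc - kx_arc) / (k * np.sqrt(2.0))
assert np.isclose(deviation[u.size // 2], 0.0)   # the arc touches the line at DC
assert deviation.max() > 0.5                     # and curves strongly away at high NA
```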