Structured illumination temporal compressive microscopy

We present a compressive video microscope based on structured illumination with an incoherent light source. The source-side illumination coding scheme allows the emission photons to be collected by the full aperture of the microscope objective, and is thus suitable for the fluorescence readout mode. A two-step iterative reconstruction algorithm, termed BWISE, has been developed to address the mismatch between the illumination pattern size and the detector pixel size. Image sequences with a temporal compression ratio of 4:1 were demonstrated. © 2016 Optical Society of America

OCIS codes: (110.1758) Computational imaging; (110.3010) Image reconstruction techniques; (180.2520) Fluorescence microscopy.

References and links
1. D. Buonomano, "The biology of time across different scales," Nat. Chem. Biol. 3(10), 594–597 (2007).
2. C. Hyeon and J. N. Onuchic, "Mechanical control of the directional stepping dynamics of the kinesin motor," Proc. Natl. Acad. Sci. USA 104(44), 17382–17387 (2007).
3. M. Kim, M. Pan, Y. Gai, S. Pang, C. Han, C. Yang, and S. K. Y. Tang, "Optofluidic ultrahigh-throughput detection of fluorescent drops," Lab Chip 15(6), 1417–1423 (2015).
4. P. Llull, X. Liao, X. Yuan, J. Yang, D. Kittle, L. Carin, G. Sapiro, and D. J. Brady, "Coded aperture compressive temporal imaging," Opt. Express 21(9), 10526–10545 (2013).
5. M. G. Gustafsson, "Surpassing the lateral resolution limit by a factor of two using structured illumination microscopy," J. Microsc. 198, 82–87 (2000).
6. J. Mertz, "Optical sectioning microscopy with planar or structured illumination," Nat. Methods 8, 811–819 (2011).
7. J. W. Goodman, Introduction to Fourier Optics, 3rd ed. (Roberts and Company Publishers, 2005).
8. X. Yuan, P. Llull, X. Liao, J. Yang, D. J. Brady, G. Sapiro, and L. Carin, "Low-cost compressive sensing for color video and depth," in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (Institute of Electrical and Electronics Engineers, Columbus, Ohio, 2014), pp. 3318–3325.
9. Y. Hitomi, J. Gu, M. Gupta, T. Mitsunaga, and S. K. Nayar, "Video from a single coded exposure photograph using a learned over-complete dictionary," in Proceedings of IEEE Conference on Computer Vision (Institute of Electrical and Electronics Engineers, Barcelona, Spain, 2011), pp. 287–294.
10. D. Reddy, A. Veeraraghavan, and R. Chellappa, "P2C2: Programmable pixel compressive camera for high speed imaging," in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (Institute of Electrical and Electronics Engineers, Colorado, 2011), pp. 329–336.
11. J. Bioucas-Dias and M. Figueiredo, "A new TwIST: two-step iterative shrinkage/thresholding algorithms for image restoration," IEEE Trans. Image Process. 16(12), 2992–3004 (2007).
12. X. Yuan, H. Jiang, G. Huang, and P. Wilford, "Lensless compressive imaging," arXiv:1508.03498 (2015).
13. X. Yuan, H. Jiang, G. Huang, and P. Wilford, "Compressive sensing via low-rank Gaussian mixture models," arXiv:1508.06901 (2015).
14. A. Beck and M. Teboulle, "A fast iterative shrinkage-thresholding algorithm for linear inverse problems," SIAM J. Img. Sci. 2(1), 183–202 (2009).
15. S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, "Distributed optimization and statistical learning via the alternating direction method of multipliers," Found. Trends Mach. Learn. 3(1), 1–122 (2011).
16. X. Liao, H. Li, and L. Carin, "Generalized alternating projection for weighted-ℓ2,1 minimization with applications to model-based compressive sensing," SIAM J. Img. Sci. 7(2), 797–823 (2014).
17. J. Yang, X. Yuan, X. Liao, P. Llull, G. Sapiro, D. J. Brady, and L. Carin, "Video compressive sensing using Gaussian mixture models," IEEE Trans. Image Process. 23(11), 4863–4878 (2014).
18. T. M. Squires and S. R. Quake, "Microfluidics: Fluid physics at the nanoliter scale," Rev. Mod. Phys. 77(3), 977–1026 (2005).
19. W. Jager, Y. Horiguchi, J. Shah, T. Hayashi, S. Awrey, K. M. Gust, B. A. Hadaschik, Y. Matsui, S. Anderson, R. H. Bell, S. Ettinger, A. I. So, M. E. Gleave, I. Lee, C. P. Dinney, M. Tachibana, D. J. McConkey, and P. C. Black, "Hiding in plain view: genetic profiling reveals decades old cross contamination of bladder cancer cell line KU7 with HeLa," J. Urol. 190(4), 1404–1409 (2013).
20. J. H. Zhou, C. J. Rosser, M. Tanaka, M. Yang, E. Baranov, R. M. Hoffman, and W. F. Benedict, "Visualizing superficial human bladder cancer cell growth in vivo by green fluorescent protein expression," Cancer Gene Ther. 9(8), 681–686 (2002).
21. T. C. George, D. Basiji, B. E. Hall, D. H. Lynch, W. E. Ortyn, D. J. Perry, M. J. Seo, C. Zimmerman, and P. J. Morrissey, "Distinguishing modes of cell death using the ImageStream multispectral imaging flow cytometer," Cytometry A 59, 237–245 (2004).
22. V. Studer, J. Bobin, M. Chahid, H. S. Mousavi, E. Candes, and M. Dahan, "Compressive fluorescence microscopy for biological and hyperspectral imaging," Proc. Natl. Acad. Sci. USA 109(26), E1679–E1687 (2012).

#253396 Received 5 Nov 2015; revised 25 Jan 2016; accepted 26 Jan 2016; published 3 Feb 2016
(C) 2016 OSA 1 Mar 2016 | Vol. 7, No. 3 | DOI:10.1364/BOE.7.000746 | BIOMEDICAL OPTICS EXPRESS 746


Introduction
Temporal microscopy imaging is a powerful tool for the study of cell dynamics. Many fundamental processes, such as neural activities and molecular motions, occur on the time scale of milliseconds and sub-milliseconds [1,2], requiring a frame rate >100 Hz. High-speed fluorescence imaging systems could also greatly improve the throughput of fluorescence bioassays and microfluidics-based analysis devices [3]. CCD or CMOS imagers with fast readout electronics are widely deployed in these systems. Typical imagers in consumer electronics have a frame rate of 30 Hz. Sensors with readout speeds from ∼200 to 1000 Hz are required for high-speed imaging applications, but they cost an order of magnitude more than typical sensors. Code-division-multiple-access video compression implemented on the detection side was recently demonstrated in a camera setting [4]. Though a compression ratio of ∼10 was demonstrated, such a setup reduces the collection efficiency by ∼50% due to the coded aperture mask, and is therefore not suitable for fluorescence readout.
Structured illumination, as a coding mechanism, has been explored extensively in microscopy. Two notable examples are lateral super-resolution beyond the diffraction limit and the enhancement of depth-sectioning capability [5,6]. In the two-dimensional super-resolution microscope setup, a periodic illumination pattern induces a frequency shift in the Fourier domain, channeling the high spatial frequency components into the detectable range.
In this paper, we demonstrate a temporally compressive microscopy setup based on structured illumination. Analogous to super-resolution in the spatial domain, a temporally varying illumination is deployed in our system, which channels high temporal frequency components into low frame rate detection. To implement such a system, one only needs to insert a mask in the illumination path, which requires minimal modification to a conventional microscope. The source-side illumination coding scheme is suitable for low photon-budget applications: the emission photons can be collected by the full aperture. This paper is organized as follows. We first introduce the forward model of temporal compressive measurement and revisit the concept of structured illumination in microscopy in Section 2. We then describe the reconstruction algorithm and discuss the effect of the illumination feature size on reconstruction in Section 3. In Section 4 we show the simulated reconstruction results and determine the optimal feature size of the illumination pattern. Finally, we present our experimental results and conclude the paper with potential applications for the system in Section 5.

Imaging forward model
Let the time-varying reflection or fluorescence signal from the object be f(x, y, t). The microscope image sampling, which is determined by the detector pixel size after the object magnification, usually satisfies the Nyquist criterion, and the spatial bandwidth is limited by the numerical aperture (NA) of the microscope objective. The point spread function (PSF) is h(x, y), and the structured illumination imposed on the sample is S(x, y, t). The measurement at the detector coordinates (x', y') and time point t_i, described by g(x', y', t_i), can be expressed as:

g(x', y', t_i) = \int_{t_i}^{t_i + \Delta_t} \left[ f(x, y, t)\, S(x, y, t) \right] \ast h(x, y)\, dt, \quad (1)

where \Delta_t is the frame period of the camera and \ast denotes spatial convolution. We assume the time-varying illumination can be discretized in time. The time period between two steps is τ, and each frame period can be divided into N_T periods, i.e. \Delta_t / τ = N_T. The illumination pattern at time t_i + kτ is S_k(x, y). Each captured frame can thus be considered as a compressed measurement of N_T scenes from the object. Equation (1) can then be written as:

g(x', y', t_i) = \sum_{k=0}^{N_T - 1} \left[ f_k(x, y)\, S_k(x, y) \right] \ast h(x, y), \quad (2)

where f_k(x, y) = \int_{t_i + kτ}^{t_i + (k+1)τ} f(x, y, t)\, dt is the k-th scene within the i-th measurement frame. We can discretize f_k(x, y) as:

(\mathbf{F}_k)_{p,q} = f_k(x_p, y_q), \quad p = 1, \ldots, m, \; q = 1, \ldots, n, \quad (3)

which consists of m × n pixels in total. Let the vectorized form of F_k be f_k; the vectorized measurement g can then be expressed by the following forward model:

\mathbf{g} = \sum_{k=1}^{N_T} \mathbf{H} \mathbf{S}_k \mathbf{f}_k, \quad (4)

where S_k is the structured illumination matrix of the k-th illumination pattern, and H is the PSF matrix of the objective, serving as a blur kernel. Figure 1 illustrates the forward model.
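As a concrete illustration of the coded-exposure forward model above, the sketch below simulates one compressed camera frame from N_T coded sub-frames. Everything here is an illustrative assumption, not the paper's actual parameters: a 3×3 box blur stands in for the objective PSF h, and a random binary pattern shifted by one pixel per sub-frame stands in for the translated illumination mask.

```python
import numpy as np

def box_blur(img):
    # Crude 3x3 box blur standing in for convolution with the PSF h(x, y);
    # the real system has the objective's diffraction-limited PSF here.
    out = np.zeros_like(img)
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            out += np.roll(np.roll(img, dx, axis=0), dy, axis=1)
    return out / 9.0

def coded_measurement(frames, masks):
    """One compressed camera frame: sum over sub-frames of h * (S_k . f_k)."""
    g = np.zeros(frames.shape[1:])
    for f_k, s_k in zip(frames, masks):
        g += box_blur(s_k * f_k)  # code each sub-frame, blur, integrate over exposure
    return g

# Toy example: N_T = 4 sub-frames of a translating bright spot,
# coded by a binary mask that shifts by one pixel per sub-frame.
rng = np.random.default_rng(0)
frames = np.zeros((4, 32, 32))
for k in range(4):
    frames[k, 16, 8 + 4 * k] = 1.0                  # object moving left to right
wide = (rng.random((32, 36)) > 0.5).astype(float)   # random binary illumination pattern
masks = np.stack([wide[:, k:k + 32] for k in range(4)])
g = coded_measurement(frames, masks)                # single low-frame-rate measurement
```

Because the periodic box blur conserves energy, the measurement integrates exactly the coded photon counts of the four sub-frames, mirroring how one detector exposure accumulates all N_T coded scenes.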

Structured illumination
In this section, we relate the structured illumination for spatial super-resolution to compressive temporal imaging. For simplicity of the discussion, we limit the system model to one dimension in space and one dimension in time. The structured illumination pattern is translated linearly at a constant speed s during a single-frame acquisition, i.e. S(x, t) = S(x − st). The scene from the object is f(x, t). Equation (1) then becomes:

g(x') = \int_{0}^{\Delta_t} \left[ f(x, t)\, S(x - st) \right] \ast h(x)\, dt, \quad (5)

where h(x) is the point spread function of the microscope objective. The measurement in the Fourier domain, ĝ(u, v), can be expressed as

\hat{g}(u, v) = \hat{H}(u) \int \hat{f}(u - w,\, v - sw)\, \hat{S}(w)\, dw, \quad (6)

where u and v are the spatial and temporal frequency variables, respectively; \hat{f} and \hat{S} are the Fourier transforms of the object function f and the structured illumination pattern S, respectively; and \hat{H} is the optical transfer function (OTF) of the imaging system, i.e. the Fourier transform of the point spread function h(x). The ideal normalized OTF can be calculated by [7]

\hat{H}(u) = \frac{\int w\!\left(u' + \tfrac{u}{2}\right) w^{*}\!\left(u' - \tfrac{u}{2}\right) du'}{\int |w(u')|^2\, du'}, \quad (7)

where w(u) is the amplitude transfer function:

w(u) = \begin{cases} 1, & |u| \le \mathrm{NA}/\lambda, \\ 0, & \text{otherwise}, \end{cases} \quad (8)

with λ representing the wavelength. The transfer function is a low-pass filter, and the pass-band is limited by the wavelength and the NA. We focus our discussion and simulation on a diffraction-limited system; for the case with optical aberrations, the OTF can simply be calculated by including the wavefront aberrations.
Without the structured illumination S(x), the measurement temporal bandwidth is limited by the frame rate 1/∆_t of the imager. The spatial bandwidth is limited by the bandwidth of the objective's OTF, ∆_u. Thanks to the structured illumination Ŝ on the object, the high temporal and spatial components are aliased via the convolution ∫ f̂(u − w, v − sw) Ŝ(w) dw, and the recovery of the high spatial or temporal components becomes feasible. The expanded bandwidth in the spatial domain is determined by the band limit of the structured illumination, ∆_s. In the spatial super-resolution microscope [5], the structured illumination channels the high spatial frequency components into the bandwidth of the objective's OTF. As the illumination pattern is projected onto the object through the microscope objective, the band limit of the structured illumination Ŝ is also ∆_u, and the spatial resolution of the reconstructed image can be extended up to 2∆_u.
The temporal resolution, however, can be extended to s∆_s. Unlike spatial resolution enhancement, the spatial bandwidth of the structured illumination does not limit the temporal resolution, as long as the translation speed s is sufficiently fast. To expand the temporal bandwidth from 1/∆_t to 1/τ, the translation speed needs to be on the order of (τ∆_s)^{−1}. This implies that in order to resolve two consecutive scenes, the structured illumination needs to translate by one feature size of the illumination pattern. Instead of enhancing the spatial resolution, our system applies temporally varying structured illumination to achieve a higher image acquisition rate. Here, we assume that the object does not contain frequency components beyond the bandwidth of the microscope objective.
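The ideal OTF of Eqs. (7)-(8) can be evaluated numerically as the normalized autocorrelation of the amplitude transfer function. The sketch below does this in 1D for NA = 0.5 and λ = 0.488 µm, the values used later in the simulations; the frequency grid spacing and size are arbitrary choices.

```python
import numpy as np

def otf_1d(na=0.5, wavelength=0.488, n=2048, du=0.005):
    """Ideal 1D incoherent OTF: normalized autocorrelation of the
    amplitude transfer function w(u) (a rect of half-width NA/lambda).
    Spatial frequencies are in cycles/um."""
    u = (np.arange(n) - n // 2) * du                   # frequency axis
    w = (np.abs(u) <= na / wavelength).astype(float)   # amplitude transfer function
    H = np.correlate(w, w, mode="same")                # autocorrelation of the pupil
    return u, H / H.max()                              # normalize so H(0) = 1

u, H = otf_1d()
cutoff = 2 * 0.5 / 0.488   # incoherent cutoff 2*NA/lambda, about 2.05 cycles/um
```

The resulting curve is the familiar triangle-shaped low-pass filter: unity at zero frequency, falling to zero at the incoherent cutoff 2 NA/λ, beyond which the shifted pupils no longer overlap.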

Reconstruction algorithm: BWISE
Diverse algorithms have been proposed and used for video compressive sensing [4,8,9,10]. However, most of these algorithms are based on models that do not consider the blur kernel H. These algorithms also do not take into account the difference between the feature size of the spatial coding and the pixel size of the sensor. This difference is negligible for the detection-side coding scheme. Since the pattern size can be many times larger than the sensor's effective pixel size in the structured illumination setup, the existing algorithms would fail to recover the high spatial frequency components beyond the bandwidth of the illumination pattern, leading to inferior resolution in the reconstruction. Therefore, we have developed a new algorithm integrating both considerations, which are critical to the success of the reconstruction. Roughly, these compressive sensing reconstruction algorithms are built on the sparsity of the video in certain domains. For instance, the wavelet and discrete cosine transformation (DCT) are used in [4,10], and dictionary learning is used in [9]. In this work, we focus on total variation (TV) based methods, with the significant improvement described below: the Block-WIse Smooth Estimator (BWISE), which has been demonstrated to be effective in solving our problem.
Let f = [f_1^T, ..., f_{N_T}^T]^T and A = [HS_1, ..., HS_{N_T}]; the forward model in Eq. (4) can then be rewritten as

\mathbf{g} = \mathbf{A} \mathbf{f}. \quad (9)

The reconstruction problem can be formulated as

\hat{\mathbf{f}} = \arg\min_{\mathbf{f}} \|\mathbf{g} - \mathbf{A}\mathbf{f}\|_2^2 + \tau R(\mathbf{f}), \quad (10)

where R(f) is a regularizer that can impose the sparsity of the signal in a basis such as the wavelet or DCT, or a TV operator [11]. The regularizer penalizes characteristics of the estimated f that would result in poor reconstructions. τ is the Lagrange parameter balancing the measurement error (the first term in Eq. (10)) and the regularizer. The TV regularizer is specified as

R(\mathbf{f}) = \sum_{k=1}^{N_T} \mathrm{TV}(\mathbf{F}_k), \quad (11)

where

\mathrm{TV}(\mathbf{F}) = \sum_{p,q} \sqrt{ (F_{p+1,q} - F_{p,q})^2 + (F_{p,q+1} - F_{p,q})^2 } \quad (12)

is performed on each frame and hence penalizes estimates with sharp spatial gradients.
Several iterative compressive reconstruction algorithms can be categorized as two-step iterative methods, comprising [12,13]: (i) projecting the measurement data to the desired videos/images; and (ii) denoising the results obtained in step (i). For Step (i), various algorithms have been used; the most popular methods are the iterative shrinkage-thresholding (IST) algorithm [14], the ADMM [15], and the generalized alternating projection (GAP) algorithm [8,16]. Our algorithm also falls into this two-step iterative regime. For Step (i), both ADMM and GAP need a matrix inversion [13], while IST is easier to implement, requiring only matrix multiplications. Introducing a step size parameter α, we have the following two-step iterative alternating projection algorithm. For the k-th iteration:

\tilde{\mathbf{f}}^{(k)} = \mathbf{f}^{(k)} + \alpha \mathbf{A}^{\top} \left( \mathbf{g} - \mathbf{A} \mathbf{f}^{(k)} \right), \quad (13)

\mathbf{f}^{(k+1)} = \mathrm{Denoise}\!\left( \tilde{\mathbf{f}}^{(k)} \right). \quad (14)

Equations (13) and (14) are performed iteratively until the termination criteria are satisfied.
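The two-step regime can be sketched in a few lines. In the toy problem below, a small random sensing matrix, the step size, and a simple soft-threshold denoiser are all illustrative stand-ins (the paper's Step (ii) is the block-TV denoiser described next); the point is the alternation between a gradient step toward the measurements and a denoising step.

```python
import numpy as np

def two_step_recover(g, A, denoise, alpha, n_iter=300):
    """Generic two-step iterative reconstruction:
    (i) gradient/projection step toward the measurements,
    (ii) denoising of the intermediate estimate."""
    f = A.T @ g                                  # simple back-projection initialization
    for _ in range(n_iter):
        f = f + alpha * (A.T @ (g - A @ f))      # Step (i): IST-style gradient step
        f = denoise(f)                           # Step (ii): plug-in denoiser
    return f

# Toy usage on a 1D problem; soft thresholding stands in for a real denoiser.
rng = np.random.default_rng(1)
n, m = 64, 48
truth = np.zeros(n)
truth[5:25] = 1.0                                # sparse "scene"
A = rng.standard_normal((m, n)) / np.sqrt(m)     # random sensing matrix (illustrative)
g = A @ truth                                    # compressed measurement
soft = lambda x: np.sign(x) * np.maximum(np.abs(x) - 0.01, 0.0)
f_hat = two_step_recover(g, A, soft, alpha=0.2)
```

The step size α must be small enough relative to the spectral norm of A for the gradient step to converge; α = 0.2 is a conservative choice for this matrix.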
For Equation (14), sophisticated patch-based algorithms have been developed for video compressive sensing [17] and have achieved excellent results. However, these algorithms are usually time-consuming. The TV based algorithm [11], on the other hand, usually delivers decent results [4] in a shorter time.
In optical microscopy applications, a reconstruction directly applying the TV regularizer can lead to the loss of fine details. The proposed algorithm, BWISE, imposes a block-wise TV regularization rather than the pixel-wise TV regularization, and the size of the block depends on the feature size of the structured illumination. Mathematically, the conventional TV denoising is performed after imposing the pixel-wise differentiation operator, D. In BWISE, the TV denoising is performed after the block-wise differential operation. Thus the denoising in BWISE is a joint global-local method. It is worth noting that the TV regularizer should only be applied spatially within each frame, as in Equation (12), rather than to the entire 3D data cube, which allows us to reconstruct the motions between frames. Here we use D to denote the block-wise differential operation, and by introducing z = Df, the iterative clipping algorithm for block-TV denoising becomes:

\mathbf{f}^{(j)} = \tilde{\mathbf{f}} - \mathbf{D}^{\top} \mathbf{z}^{(j)}, \qquad \mathbf{z}^{(j+1)} = \mathrm{clip}\!\left( \mathbf{z}^{(j)} + \tfrac{1}{\beta} \mathbf{D} \mathbf{f}^{(j)},\, \gamma \right), \quad (15)

where z^{(0)} = 0, γ is the thresholding parameter used in the clipping function, β ≥ maxeig(DD^⊤), and the clipping function clip(·) is defined as:

\mathrm{clip}(b, T) = \begin{cases} b, & |b| \le T, \\ T\, \mathrm{sign}(b), & \text{otherwise}. \end{cases} \quad (16)

The BWISE algorithm is composed of Equation (13) and Equations (15)-(16), where Equations (15)-(16) play the role of the denoising step in Equation (14) (Step (ii) above) and Equation (13) plays the role of Step (i).
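To make the clipping-based denoising step concrete, here is a minimal 1D version using the pixel-wise first-difference operator rather than BWISE's block-wise D; for the first-difference operator, β = 4 bounds maxeig(DDᵀ). The test signal and parameters are illustrative.

```python
import numpy as np

def clip(b, t):
    """Pass values through inside [-t, t], saturate outside."""
    return np.clip(b, -t, t)

def Dt(z):
    """Adjoint of the 1D first-difference operator D (zero boundary):
    (D^T z)_j = z_{j-1} - z_j."""
    return -np.diff(z, prepend=0, append=0)

def tv_denoise_1d(y, gamma, n_iter=200):
    """Iterative clipping for 1D TV denoising: alternate f = y - D^T z
    with a clipped dual update; beta >= maxeig(D D^T) = 4 here."""
    beta = 4.0
    z = np.zeros(len(y) - 1)
    for _ in range(n_iter):
        x = y - Dt(z)                            # primal update
        z = clip(z + np.diff(x) / beta, gamma)   # clipped dual update
    return y - Dt(z)

# Denoise a noisy step edge: TV preserves the edge while flattening the noise.
rng = np.random.default_rng(0)
clean = np.concatenate([np.zeros(50), np.ones(50)])
noisy = clean + 0.15 * rng.standard_normal(100)
denoised = tv_denoise_1d(noisy, gamma=0.2)
```

BWISE replaces the pixel-wise difference here with a block-wise differential sized to the illumination feature, but the clipping iteration itself has the same structure.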

Experimental setup
The experimental setup employed a 490nm LED (M490L3, Thorlabs) as the light source, as most epi-illumination microscopes are equipped with an incoherent source for its cost-effectiveness. It is worth noting that our system could also use a coherent source for higher illumination efficiency, similar to the setup in [5]. The sample was a mounted microscope slide with a layer of quantum dots 525 (753769, Sigma Aldrich). The pattern of the "UCF" logo was transferred to the microscope cover slip; the thickness of the pattern is ∼2µm and the pattern size is 100µm × 50µm. The fluorescent sample was translated at a speed of 2mm/sec by a step motor (LHA-HS, Newport). We also prepared samples of fluorescent cells. The infected HeLa cell line expressing green fluorescent protein (GFP) [19,20] was seeded onto poly-L-lysine-coated 1.2cm coverslips at a density of 10,000 cells per coverslip in 0.15 ml of DMEM/10% FBS overnight before imaging. The detection path of the microscope system consisted of a 20× objective (0.5 NA, Nikon) and a tube lens with a 200mm focal length. The fluorescence filter cube has an excitation band centered at 480nm (bandwidth 40nm) and an emission band centered at 530nm (bandwidth 50nm) (41001, Chroma). We captured the video at a low frame rate of 40 frames/second, limited by the camera (GO5000USB, JAI). The mask was chrome patterned on a fused silica optical blank, with a feature size of 6.5µm (HTA photo mask), as shown in Fig. 2. The mask was mounted on a piezo actuator with a maximum stroke of 40µm (P-840.3, Physik Instrumente). The magnification of the mask on the illumination side can be adjusted by the illumination tube lens (M6Z1212, Computar). The calibration frame was acquired by averaging the frames of a moving fluorescence target with a stationary mask.

Reconstruction algorithm comparison
To demonstrate the performance of our proposed algorithm, we first synthesize video frames using a moving "UCF" logo containing high frequency components (fine slanted strips), with the ground truth shown in the first column of Fig. 3. The translation step size is 5µm between frames. Each frame has a total of 256 × 512 pixels, the size of each pixel is 0.25µm × 0.25µm, and we use NA = 0.5 with wavelength λ = 0.488µm in Equations (7)-(8) to calculate the OTF of the microscope objective. The feature size of the code is set to 2µm. We compare our algorithm, BWISE, with the following popular reconstruction algorithms: (i) TwIST [11], which exploits the pixel-wise TV but uses a different projection approach from our method; and (ii) GAP, which exploits the sparsity of the video cube in a transformation domain, here using wavelets in space and DCT in time, the same as [4]. For the proposed BWISE algorithm, we perform both pixel-wise and block-wise TV (with a block size of 8 × 8 pixels). We compare the PSNR (peak signal-to-noise ratio) of the results from the different algorithms. With fewer than 200 iterations, the reconstructed images are shown in Fig. 3. Both the pixel-wise and block-wise TV provide higher PSNRs than TwIST and GAP. Furthermore, the proposed BWISE reconstruction is able to keep the detailed features of the object, as demonstrated in Fig. 4. Though the PSNR (of the entire image) reconstructed by the block-wise TV is lower than that of the pixel-wise TV, the block-wise TV retains the high frequency components of the fine features.
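The PSNR figure of merit used for these comparisons is standard; a minimal implementation is shown below, with the peak taken from the reference image (one of several common conventions, and an assumption here since the paper does not state its exact definition).

```python
import numpy as np

def psnr(ref, est):
    """Peak signal-to-noise ratio in dB; peak taken from the reference image."""
    ref = np.asarray(ref, dtype=float)
    est = np.asarray(est, dtype=float)
    mse = np.mean((ref - est) ** 2)
    return 10.0 * np.log10(ref.max() ** 2 / mse)

# A uniform error of 0.1 against a unit-peak reference gives MSE = 0.01,
# i.e. 10*log10(1/0.01) = 20 dB.
ref = np.ones((8, 8))
est = ref - 0.1
value = psnr(ref, est)
```

Note that PSNR summarizes the whole-image error, which is why a reconstruction can score slightly lower overall while still preserving fine features better, as observed for the block-wise TV above.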

Illumination pattern size
We further conduct a simulation on another dataset: moving beads traveling in a microfluidic channel, as shown in Fig. 5. The diameter of the beads is around 20µm. The same parameters (NA = 0.5, λ = 0.488µm, pixel size 0.25µm × 0.25µm) as for the "UCF" logo dataset are used, and we aim to investigate the performance of the reconstruction under different illumination pattern sizes. We perform inversion with BWISE for feature sizes from 0.75µm to 6µm, with the PSNR results plotted in Fig. 6. Only the first and last frames of the reconstructions are shown in Fig. 5. It can be seen that the highest PSNR is achieved when the illumination pattern size equals 1.5µm, which is confirmed by the quality of the reconstructed images in Fig. 5. At other pattern sizes, the reconstructed images are blurred or dispersed. This further verifies our speculation in Section 2.2. When the feature of the illumination pattern is close to or smaller than the point spread function of the objective, the contrast of the imposed structure is low, and the coding contrast in each frame becomes weaker. We used an incoherent light source in our experiment, and the resolution of the microscope is 0.60µm, determined by the Rayleigh criterion. When the illumination feature size is 0.75µm, close to the resolution of the objective, the low contrast of the pattern results in a poor reconstruction. Though a large mask pattern size improves the illumination contrast, large patches are left unilluminated, leading to errors in the reconstruction as well. An illumination pattern about twice the size of the microscope resolution shows excellent reconstruction results.
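The feature-size argument can be checked numerically: with NA = 0.5 and λ = 0.488 µm, the Rayleigh criterion reproduces the 0.60 µm resolution quoted above, and the best-performing 1.5 µm pattern comes out at roughly 2.5× that resolution, consistent with the "about twice the microscope resolution" rule of thumb.

```python
# Rayleigh resolution of the simulated objective and its ratio to the tested
# illumination feature sizes (parameter values taken from the text; 0.61 is
# the standard Rayleigh-criterion prefactor for incoherent imaging).
wavelength_um, na = 0.488, 0.5
rayleigh_um = 0.61 * wavelength_um / na          # ~0.595 um, quoted as 0.60 um
ratios = {f: f / rayleigh_um for f in (0.75, 1.5, 6.0)}  # pattern size / resolution
```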

High frame rate reconstructions
Low frame rate measurements

The experimental results are shown in Fig. 7. Figure 7(a) shows 5 frames of the raw measurement without structured illumination. The "UCF" logo was translated at a speed of 2mm/sec, and the frame period of the imaging sensor is 25ms. The sample travels 50µm within this time period. Each letter of the logo has a dimension of roughly 50µm × 50µm. The letters are severely blurred because of the translational movement. The structured illumination temporal compression microscope is suitable for imaging microfluidic systems, especially in imaging flow cytometry and high-throughput droplet counting applications [3]. The flow speed of microfluidic systems usually ranges from 1 µm/sec to 1 cm/sec [18]; to emulate such systems, we imaged a sequence of green fluorescent protein (GFP) labeled HeLa cells on a microscope coverslip translated at a speed of 20 µm/sec. The image acquisition time is 0.5 second. For this experiment, we used a higher-NA microscope objective (Nikon, 20×, 0.75 NA) to increase the collection efficiency. It is worth noting that, due to the limited illumination irradiance, one should not compare the current imaging speed with commercial imaging flow cytometers [21]. The structured illumination demonstrated here could serve as a general method applicable to commercial imaging systems to improve their throughput.
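As a sanity check on the numbers above (2 mm/sec stage speed, 25 ms frame period, 4:1 compression), the motion blur per raw frame and per reconstructed sub-frame can be computed directly:

```python
# Motion-blur bookkeeping for the "UCF" logo experiment (values from the text).
stage_speed_um_per_s = 2000.0          # 2 mm/sec translation speed
frame_period_s = 1.0 / 40.0            # 40 frames/sec camera -> 25 ms per frame
compression_ratio = 4                  # N_T = 4 sub-frames per measurement

blur_per_frame_um = stage_speed_um_per_s * frame_period_s      # 50 um per raw frame
blur_per_subframe_um = blur_per_frame_um / compression_ratio   # 12.5 um per sub-frame
```

Since each ~50 µm letter travels 50 µm during one raw exposure, the uncoded frames are fully smeared, while each reconstructed sub-frame sees only a quarter of that motion.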

Imaging of the resolution target
In Section 2, we mentioned that the temporal resolution can be extended despite the low spatial bandwidth of the structured illumination, as long as the translation speed s is sufficiently high. Figure 9(a) shows the structured-illumination measurement of a USAF-1951 resolution target (Newport) under bright field reflection geometry. To exclude the effect of motion blur, we kept the resolution target stationary during the capturing process. The reconstructed frames are shown in Fig. 9(b). The smallest feature in the resolution target has a line width of 2.2µm, which is close to the optimal feature size of our illumination pattern.
From the zoomed-in figure and intensity profiles of the reconstructed images, Fig. 9(c), the line features can be clearly distinguished. The reader might notice the background pattern in the reconstructed images. These textures are caused by the limited variation of the illumination pattern rather than by the reconstruction algorithm. Due to the stroke limit of the piezo actuator, certain areas of the sample remain unilluminated during the four-step movement, leaving certain blocks dark in the reconstruction. An improved illumination coding scheme would reduce this background texture.

Conclusion
We have reported a temporal compressive microscope system based on structured illumination. The source-side illumination coding scheme allows the reflection/emission photons to be collected by the full aperture of the microscope objective. The time-varying structured illumination aliases the high temporal frequency components into the low frame-rate measurements. We proposed a block-wise smoothing estimator, which imposes a regularizer on blocks of the image according to the illumination feature size, rather than the size of the pixels. Via this algorithm, we can reconstruct the image with high fidelity while keeping image details. Though the simple analysis indicates that the coding size does not affect the temporal resolution, our simulation and experimental results demonstrate that the frequency content of the illumination pattern does impact the reconstruction. On the one hand, if the mask pattern is finer than the point spread function of the objective, the contrast of the illumination is reduced, resulting in a deteriorated reconstruction. On the other hand, if an over-sized mask pattern is used, the prior knowledge of the sample frequency distribution cannot recover the regions blocked by the opaque parts of the mask, leading to errors in the reconstruction. According to the simulation results, we chose an illumination pattern size of 1.5µm for the 0.5 NA microscope objective. It is worth noting that the current experimental system is limited to the imaging of 2-dimensional samples. The coding using an incoherent source loses contrast away from the focal plane of the objective. Structured 3D coding devices could be developed for temporally compressed, depth-resolved imaging in the future.
We have demonstrated a compression ratio of 4:1 in experiments. The system uses an incoherent light source and requires minimal modification to an epi-illumination microscope. Such an imaging system could expand applications in functional microscopy imaging and serve as a high-speed fluorescence readout module for microfluidic total analysis systems. The structured illumination also provides a design degree of freedom at high frame rates. Depending on the application, the illumination pattern could be specifically engineered so that the most biologically relevant information is extracted [22].

Fig. 1 .
Fig. 1. The forward model of the structured illumination microscope. On the left we show the microscope measurement g, and on the right we depict the sensing process. The mathematical formulation is given in Equation (4). H is the point spread function matrix serving as the blur kernel. {S_i}_{i=1}^{N_T} are the structured illumination matrices and {f_i}_{i=1}^{N_T} are the signal intensities from the object at different time slots. Each frame of the scene is first encoded via the structured illumination matrix, and then the measurement is convolved with the point spread function.

Fig. 2 .
Fig. 2. (a) Photo of the experimental setup. (b) Schematic of the setup. A 20× 0.5 NA objective (bottom part) is used in the system. The coded aperture mask is placed at the conjugate image plane (middle-right part) in the illumination path. The step motion of the mask is synchronized with the camera (top part) acquisition.

Fig. 3 and Fig. 4.
Fig. 3. The comparison of the reconstruction results of the simulated moving "UCF" logo dataset with different algorithms. The coded measurement is shown on the top-left. Each row (2-7) shows one frame. From the first to the fifth columns are the ground truth, the TwIST reconstruction, the GAP reconstruction, the pixel-wise TV reconstruction, and the proposed BWISE reconstruction.
Fig. 4. Detail comparison (panels: ground truth, pixel-wise TV, proposed BWISE).

Fig. 5.
Fig. 5. Examples of the reconstructed images of the simulated moving beads dataset with illumination pattern sizes of 0.75, 1.5 and 6 µm. The top row shows the coded measurements and the bottom two rows show the reconstructed Frame 1 and Frame 6.

Fig. 7 .
Fig. 7. Experimental reconstructions of the "UCF" logo with the hardware setup shown in Fig. 2. (a) Frames of the raw measurements without structured illumination at 40 frames/second, (b) Coded measurements with the structured illumination at 40 frames/second, (c) Reconstruction of high-speed frames at 160 frames/second.

Fig. 8.
Fig. 8. Fluorescence imaging of two moving HeLa cells, emulating imaging flow cytometry. (a) 20× microscope image, bright field (top) and fluorescence (bottom). (b) Low frame rate (2 fps) measurement with structured illumination. (c) Image sequence reconstruction at high frame rate (8 fps). The circles with a diameter of 30µm were added in each frame to aid visualization.

Imaging of cell samples
Figure 8 shows the reconstruction results. Due to motion blur, the fluorescence signals from the two cells are indistinguishable at the low frame rate, as shown in Fig. 8(b). In the high frame rate reconstruction shown in Fig. 8(c), the two cell nuclei can be identified. The compression ratio is 4:1. The average size of the cell nuclei is 10-15µm, and the estimated flow speed is 18 µm/sec, in agreement with the experimental setup.

Fig. 9 .
Fig. 9. Reconstruction results of a static USAF-1951 resolution target. (a) Measurement from the experimental prototype in reflection mode. (b) The 4 reconstructed frames. (c) Zoomed-in view of the boxed area in (b). Notice that the highest-resolution features can be clearly identified from the intensity profile (c2).