Recovering higher dimensional image data using multiplexed structured illumination

Abstract: Structured illumination (SI) using non-uniform intensity patterns is well-known for improving lateral resolution in microscopy. Here, we propose a multiplexed SI technique for recovering images with higher lateral resolution and with higher dimensional information at the same time. In this framework, we use unknown non-uniform intensity patterns for incoherent sample illumination and use the corresponding acquisitions for image recovery. In the first example, we use the reported framework to recover sample images with higher lateral resolution and separate different sections of the sample along the z-direction. In the second example, we recover the sample images with higher lateral resolution and separate the images at different spectral bands. The reported multiplexed-SI framework may find applications in general imaging settings where higher dimensional information is mixed in 2D image measurements. It can also be used in microscopy settings for computational sectioning and multispectral imaging.


Introduction
Higher information content in images is desired in many application areas. However, typical images are 2D and represent a mixture of higher dimensional data. Dedicated hardware is needed to separate the mixture and fit it into a higher dimensional data cube (as in 3D confocal imaging and multispectral imaging). Consider the example of fluorescence microscopy, where the emission from the sample is captured by a 2D image sensor. The captured 2D image represents a mixture of 2D data at different z-sections and a mixture of 2D data at different wavelengths. The information at different z-sections and at different spectral bands constitutes the higher dimensional data in this case.
Here, we explore a multiplexed framework for recovering sample images with higher lateral resolution and with higher dimensional information at the same time. The reported framework, termed multiplexed structured illumination (multiplexed-SI), builds upon the conventional structured illumination (SI) technique, where non-uniform intensity patterns are used for sample illumination and the corresponding acquisitions are used for image recovery [1,2]. In a typical implementation of SI, sinusoidal patterns are used to modulate high-frequency components into the passband of the objective lens. Therefore, the recorded images contain sample information that is beyond the resolution limit of the employed optics [1,2]. Along the same line, speckle patterns have been used in SI for the same purpose. Resolution improvement has been demonstrated using different reconstruction methods, including phase retrieval, optimization, and Bayesian estimation [3-13]. However, to the best of our knowledge, these techniques are mainly targeted at resolution improvement, and the acquired images have not been modeled as a mixture of higher dimensional data. Here, we propose a multiplexed framework that allows us to improve the lateral resolution and recover higher dimensional data at the same time. The reported multiplexed-SI framework may find applications in general incoherent imaging settings where higher dimensional data is mixed in 2D image measurements.

Multiplexed structured illumination
The basic idea of the reported multiplexed-SI framework is shown in Fig. 1. Similar to the concept of conventional SI, we use unknown speckle patterns for sample illumination. The captured images are then used to recover sample images with higher lateral resolution and with higher dimensional information. Figure 1(a) shows the case of recovering different z-sections of the sample and Fig. 1(b) shows the case of recovering images at different spectral bands. The forward imaging model of these two cases can be described as follows:

I_n = \sum_m \mathcal{F}^{-1}\{\mathcal{F}\{I_{obj\_m} \cdot P_{mn}\} \cdot OTF_m\},   (1)

where \mathcal{F} stands for the Fourier transform, I_n stands for the 2D image measurements, OTF_m stands for the optical transfer function (OTF) of the objective lens (a known parameter in our implementation), I_obj_m stands for the ground-truth image of the sample, and P_mn stands for the illumination patterns. In Eq. (1), the summation over subscript 'm' represents the mixture of higher dimensional data. For example, we can model the captured images I_n as a summation of red, green, and blue channels with m = 1, m = 2, and m = 3. The second example is to model the captured images as a summation of m different 2D sections along the z direction. In Eq.
(1), we assume no interaction between the different incoherent modes. For each mode 'm', we have 'n' different intensity patterns for sample illumination, and thus we have two subscripts for P_mn. In our implementation, we translate the unknown illumination pattern to 'n' different spatial positions to get the corresponding 2D image measurements. As a result, we only have 'm' unknown illumination patterns. The goal here is to recover the different modes of the object I_obj_m as well as the unknown illumination patterns P_mn (m = 1, 2, ...) from the 2D image measurements I_n. If m = 1, Eq. (1) reduces to the forward imaging model of conventional SI [3,4,13]. The recovery process is inspired by and modified from the mode multiplexing and decomposition scheme in ptychography [14-16]. It starts with initial guesses of the different modes of the object I_obj_m and the unknown illumination patterns P_mn (m = 1, 2, ...). We first define I_pm and I_tm as follows:

I_pm = I_obj_m \cdot P_mn,    I_tm = \mathcal{F}^{-1}\{\mathcal{F}\{I_pm\} \cdot OTF_m\}.

The captured intensity I_n is then imposed on the current estimates through an intensity projection,

I'_tm = I_tm \cdot I_n / \sum_m I_tm,   (2)

and the correction is propagated back to the pattern-modulated object estimate,

I'_pm = I_pm + \mathcal{F}^{-1}\{ OTF_m^{*} / \max(|OTF_m|^2) \cdot (\mathcal{F}\{I'_tm\} - \mathcal{F}\{I_tm\}) \},   (3)

followed by joint updates of the object modes and the illumination patterns,

I_obj_m \leftarrow I_obj_m + P_mn / \max(P_mn^2) \cdot (I'_pm - I_pm),   (4)

P_mn \leftarrow P_mn + I_obj_m / \max(I_obj_m^2) \cdot (I'_pm - I_pm).   (5)

Equations (2)-(5) represent 4·m equations. The updating process is repeated for all n measurements, and the entire process continues until convergence, which can be measured by the difference between two successive recoveries. In a practical implementation, we can simply terminate it after a predefined loop number, typically 10-100. We can draw connections between the above procedures and the ptychography approach [14,15]. The key part of ptychography algorithms is an operation called Fourier magnitude projection, where the magnitude of the exit-wave estimate is replaced by the square root of the measured intensity and the phase is kept unchanged. In multi-state ptychography, the summation of all coherent states' amplitudes is used in the replacement process of the Fourier magnitude projection. Here, in the case of incoherent imaging, we only consider the intensity of the images, and we use Eq. (2) as an updating process that is similar to the Fourier magnitude projection in ptychography [14,15].
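As an illustration, the forward model of Eq. (1) and one inner update pass (the intensity projection of Eq. (2) followed by ePIE-style refinements, in the spirit of Ref. [4]) can be sketched in Python. The step sizes and the omission of the pattern-translation bookkeeping are simplifications of ours, not part of the reported implementation:

```python
import numpy as np

def forward_model(objs, pattern, otfs):
    """One measurement I_n of Eq. (1): incoherent sum over modes m of the
    pattern-modulated object images, each low-pass filtered by its OTF."""
    I_n = np.zeros_like(objs[0], dtype=float)
    for obj, otf in zip(objs, otfs):
        I_n += np.real(np.fft.ifft2(np.fft.fft2(obj * pattern) * otf))
    return I_n

def update_step(objs, pattern, otfs, I_n, eps=1e-9):
    """One inner update for a single measurement I_n (illustrative step
    sizes; the per-n pattern translation of the paper is omitted)."""
    I_pm = [obj * pattern for obj in objs]
    I_tm = [np.real(np.fft.ifft2(np.fft.fft2(ip) * otf))
            for ip, otf in zip(I_pm, otfs)]
    total = sum(I_tm) + eps
    for m, otf in enumerate(otfs):
        # intensity projection: the incoherent analogue of the
        # Fourier magnitude projection in ptychography
        I_tm_new = I_tm[m] * I_n / total
        # push the correction back through the (known) OTF
        corr = np.real(np.fft.ifft2(
            (np.fft.fft2(I_tm_new) - np.fft.fft2(I_tm[m])) * np.conj(otf)
            / (np.max(np.abs(otf)) ** 2 + eps)))
        I_pm_new = I_pm[m] + corr
        # ePIE-style joint refinement of object mode and pattern
        diff = I_pm_new - I_pm[m]
        obj_old = objs[m].copy()
        objs[m] = objs[m] + pattern * diff / (np.max(pattern ** 2) + eps)
        pattern = pattern + obj_old * diff / (np.max(obj_old ** 2) + eps)
    return objs, pattern
```

Looping `update_step` over all n measurements, for a predefined number of outer loops, mirrors the iterative procedure described above.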
The rest of the equations are the same as those reported in Ref. [4]. We validate the reported approach with two simulations. In the first simulation, we assume a two-layer object whose layers are separated by 6 microns, and a 0.3 numerical aperture (NA) objective is used for image acquisition. We assume the NA of the speckle pattern is 0.9 (it can be generated by large-angle interference). We propagate the light field of the speckle to the two corresponding z-sections. We then multiply the intensity of the speckle pattern with the two object sections. We sum the resulting intensity from the two sections and low-pass filter it with the objective. Figure 2(a) shows the raw image under speckle illumination. We can see that the raw image contains information from the two sections at different z positions (Figs. 2(b1) and 2(c1)). In this simulation, we translate the speckles to 220 different positions and generate the corresponding low-resolution images. The recovered images and speckles are shown in Figs. 2(b2)-(b3) and 2(c2)-(c3). We can see that the reported framework is able to separate the two sections and improve the lateral resolution. In Fig. 2(d1), we use the mean square error (MSE) to characterize the imaging performance as a function of the noise level. We can see that the performance gradually degrades as the noise increases. In Figs. 2(d2) and 2(d3), we plot the MSE as a function of the pattern number (with a loop number of 75) and the loop number (with a pattern number of 220). We note that the sectioning effect of the conventional SI technique recovers only one section of the 3D sample. The reported approach, on the other hand, is able to recover multiple sections and improve the lateral resolution at the same time. In the second simulation, we assume the object contains three different color channels (red, green, and blue), and the captured images represent a mixture of these three channels, as described by Eq. (1).
Figure 3(a) shows the raw image under speckle illumination and the corresponding Fourier spectrum. We added 1% noise to the raw images in this simulation. For the monochromatic raw image in Fig. 3(a), we cannot see any spectral information of the sample. We then translate the speckle patterns to 220 different positions and generate the corresponding mixtures, similar to the first example. The recovered objects and speckles using the multiplexed-SI scheme are shown in Figs. 3(b) and 3(c). The recovered color combination and the ground truth of the three color channels are shown in Figs. 3(d) and 3(e).
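Two ingredients of these simulations can be sketched numerically: generating an NA-limited speckle pattern, and building the stack of translated-pattern measurements. The grid size, pixel size, and wavelength below are illustrative choices of ours rather than values from the paper, and `np.roll` stands in for the physical pattern translation:

```python
import numpy as np

def speckle_intensity(n, na, wavelength, pixel_size, seed=0):
    """NA-limited speckle: low-pass filter a random phase screen to the
    illumination NA, then take the intensity (lengths in microns)."""
    rng = np.random.default_rng(seed)
    field = np.exp(1j * 2 * np.pi * rng.random((n, n)))   # random phase screen
    fx = np.fft.fftfreq(n, d=pixel_size)                  # spatial frequencies
    FX, FY = np.meshgrid(fx, fx)
    pupil = (FX**2 + FY**2) <= (na / wavelength) ** 2     # NA-limited support
    filtered = np.fft.ifft2(np.fft.fft2(field) * pupil)
    return np.abs(filtered) ** 2                          # speckle intensity

def measurement_stack(objs, pattern, otfs, shifts):
    """The n raw measurements of Eq. (1), one per integer (dy, dx) shift
    of the single unknown pattern."""
    stack = []
    for dy, dx in shifts:
        p = np.roll(pattern, (dy, dx), axis=(0, 1))       # translated pattern
        I_n = sum(np.real(np.fft.ifft2(np.fft.fft2(o * p) * otf))
                  for o, otf in zip(objs, otfs))          # mix the m modes
        stack.append(I_n)
    return stack
```

For the first simulation one would additionally propagate the speckle field to each z-section before taking the intensity; that propagation step is omitted here for brevity.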

Experiments
We performed two experiments to validate the reported imaging scheme. The first experiment aims to separate spectral bands using the proposed multiplexed-SI. As shown in Fig. 4(a), we used a video projector to project an unknown color speckle pattern and translated it to 114 positions. We used a monochromatic CCD camera to capture the corresponding images. The recovered results are shown in Fig. 4. In Figs. 4(c4) and 4(d4), we combine the three channels and show the comparison between the color image under uniform illumination and the multiplexed-SI recovery. The high-resolution ground truth is shown in Fig. 4(f). The corresponding line traces are also shown for comparison in Fig. 4(g). Based on the dip-to-dip feature (~0.4 mm) highlighted in Fig. 4(g), the effective NA is ~0.00058, which is ~1.7 times higher than the measured NA of the imaging system. We can see that the multiplexed-SI scheme is able to recover the color image of the sample using monochromatic acquisitions and achieve resolution improvement. The second experiment aims to separate different z-sections of a two-layer sample; the results are summarized in Fig. 5, with Fig. 5(e) shown for comparison. For Figs. 5(c1) and 5(d1), we removed the other layer to capture images of the single layer (layer 2 in-focus and layer 1 out-of-focus). We can see that the proposed imaging scheme is able to recover information in the z-direction and improve the resolution. We can also see that the shadow of layer 1 appears in the layer-2 recovery in Fig. 5(d2). This effect is due to the fact that we do not model the interaction between different modes.
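As a rough consistency check on the quoted effective NA, one can invert the Abbe two-point limit d = λ/(2·NA). Both the use of the Abbe criterion and the blue-channel wavelength of ~465 nm below are assumptions of ours, not values stated in the paper:

```python
# Back-of-envelope check of the effective NA from the dip-to-dip
# feature in Fig. 4(g), assuming d = wavelength / (2 * NA).
wavelength = 465e-9   # meters (assumed blue-channel wavelength)
dip_to_dip = 0.4e-3   # meters, resolved feature from Fig. 4(g)
na_eff = wavelength / (2 * dip_to_dip)
print(f"effective NA ~ {na_eff:.1e}")
```

Under these assumptions the result is close to the ~0.00058 quoted in the text.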

Summary and discussion
In summary, we have discussed an imaging framework for recovering higher dimensional image data and improving lateral resolution at the same time. In the reported framework, unknown speckle patterns are used for incoherent sample illumination and the corresponding acquisitions are used for information recovery. The major contribution of this paper is to model the acquired images as an incoherent mixture of higher dimensional data. To the best of our knowledge, this is new to the structured illumination technique and may find broad applications in incoherent imaging settings where higher dimensional information is mixed in 2D image measurements.
There are several future directions for the reported multiplexed-SI framework. 1) In the reported framework, we did not model the interaction between different modes in the mixture. In other words, we assume the different modes are independent of each other. This assumption is valid for information at different spectral bands. For information at different z-sections, it is only valid for transparent samples, where the emission from one section is independent of the other sections. If we can model the interaction between different modes [17], we may be able to extend the reported scheme to handle diffusive samples. 2) The relationship between the number of raw image acquisitions and the number of modes we can model in the mixture is currently unknown. This relationship may depend on the information redundancy of the different modes; further research is needed. 3) In the reported framework, we assume the optical transfer functions for the different imaging modes are known. We can also add one updating step to refine the OTF in the iterative process, similar to Eqs. (4) and (5). Updating the OTF in the iterative process may be useful for handling unknown sample-induced aberrations.
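Direction 3) could, for instance, take the following form. This is a hypothetical sketch of ours in the same ePIE style as the object and pattern updates, not part of the reported framework:

```python
import numpy as np

def refine_otf(otf, I_pm, I_tm_new, I_tm, eps=1e-9):
    """Hypothetical OTF-refinement step: nudge the OTF estimate using the
    mismatch between the projected image I_tm_new and the current estimate
    I_tm, weighted by the spectrum of the pattern-modulated object I_pm."""
    spec = np.fft.fft2(I_pm)                      # spectrum of I_pm
    otf_new = otf + np.conj(spec) / (np.max(np.abs(spec)) ** 2 + eps) * (
        np.fft.fft2(I_tm_new) - np.fft.fft2(I_tm))
    return otf_new
```

When the projection leaves the image estimate unchanged, the OTF estimate is a fixed point of this update, which is the minimal sanity property one would want from such a refinement step.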