Analysis of Fourier ptychographic microscopy with half of the captured images

Fourier ptychographic microscopy (FPM) is a new computational imaging technique that can provide gigapixel images with both high resolution and a wide field of view (FOV). However, its time-consuming data-acquisition process is a critical issue. In this paper, we analyze the FPM imaging system when only half of the usual number of images is captured. Based on an image analysis of the conventional FPM system, we compare reconstructions obtained from different numbers of captured images. Simulation and experimental results show that, apart from a contrast reduction, the image reconstructed from half of the captured data shows no obvious resolution degradation compared to that reconstructed from all of the data. In particular, when the object is close to phase-only or amplitude-only, the quality of the image reconstructed from half of the captured data is nearly as good as that reconstructed from the full data.


Introduction
It is well known that in a conventional microscope, a low-numerical-aperture (NA) objective lens produces a wide-FOV image but low image resolution. FPM is a newly developed computational optical imaging technique that breaks the diffraction limit of the objective lens by combining angularly varying light-emitting diode (LED) illumination with a phase retrieval algorithm [1][2][3]. In an FPM system, a programmable LED array is usually used as the light source. After a stack of low-resolution images is captured under different illumination angles, an iterative phase retrieval process [4,5] reconstructs the object's complex field with enhanced resolution without sacrificing the wide FOV. Because FPM breaks the limitation of the space-bandwidth product (SBP) of the optical system and achieves gigapixel imaging, it has great potential in a variety of applications, such as biomedical imaging [6][7][8][9][10][11] and characterizing unknown optical aberrations of lenses [12,13].
However, FPM's high-SBP imaging capability is time-consuming, limiting its application to static objects. This limitation comes from the requirement for a large number of low-resolution images. In addition, the low illumination intensity of the LED array demands long exposure times during image acquisition. To improve the capture efficiency, a variety of techniques have been proposed; in principle, they fall into two categories. The first is to improve the FPM illumination. For example, Kuang et al. proposed using a laser instead of the LED array to enhance the illumination intensity and thereby shorten the exposure time [14]. Other approaches in this category include lighting up several LEDs simultaneously [10,[15][16][17] and non-uniform sampling of the object's spectrum [6,[18][19][20][21] to reduce the required number of images. These techniques usually change the illumination structure and require an optimal strategy to be found, which sacrifices the simplicity of the original FPM. The second category is to improve the reconstruction algorithm [22][23][24]. For example, a Wirtinger flow optimization algorithm has been proposed that saves around 80% of the exposure time [22], but it increases the computational cost compared to the original FPM [1].
In this paper, we propose an alternative approach based on an analysis of the imaging process, and we perform simulations and experiments to support it. In section 2, we present a theoretical analysis of the FPM imaging process. In sections 3 and 4, simulations and experiments are conducted to verify the analysis.

Analysis of the FPM imaging system
We suppose that, in an FPM system, a complex object with transmittance function $O(x, y) = A(x, y)\exp[j\varphi(x, y)]$ is located at the front focal plane of the microscope objective, where $A(x, y)$ and $\varphi(x, y)$ are the amplitude and phase of the object. A plane wave of wavelength $\lambda$, parallel to the optical axis, illuminates the object. Without considering the aperture size, the Fourier spectrum $H(u, v)$ at the back focal plane of the objective is written as [25,26]

$$H(u, v) = \mathcal{F}\{O(x, y)\} = \iint O(x, y)\exp[-j2\pi(ux + vy)/(\lambda f)]\,\mathrm{d}x\,\mathrm{d}y, \tag{1}$$

where $f$ is the focal length of the objective and $(u, v)$ are the spatial frequency coordinates. When the illumination angle is $(\theta_x, \theta_y)$, the Fourier spectrum becomes

$$H_{\theta_x,\theta_y}(u, v) = H(u - u_0, v - v_0), \tag{2}$$

which indicates that the Fourier spectrum is shifted to $(u_0, v_0)$ by the oblique illumination. It has been proved that $u_0 = f\sin\theta_x$ and $v_0 = f\sin\theta_y$ [27]. Because of the limited NA of the system, only part of the Fourier spectrum passes through the tube lens. Let the radius of the transmitted sub-spectrum $H^{\mathrm{sub}}_{\theta_x,\theta_y}(u, v)$ be $r$; it can then be written as

$$H^{\mathrm{sub}}_{\theta_x,\theta_y}(u, v) = H(u - u_0, v - v_0)\,\mathrm{circ}\!\left(\sqrt{u^2 + v^2}/r\right), \tag{3}$$

where $\mathrm{circ}(\sqrt{u^2 + v^2}/r)$ is the circle function, equal to 1 inside the radius $r$ and 0 outside. Supposing the tube lens has the same focal length as the objective, the image captured under a specific illumination angle can thus be written as

$$I_{\theta_x,\theta_y}(x, y) = \left|\mathcal{F}^{-1}\{H^{\mathrm{sub}}_{\theta_x,\theta_y}(u, v)\}\right|^2 = \left|\left[O(x, y)e^{j2\pi(x\sin\theta_x + y\sin\theta_y)/\lambda}\right] \otimes h(x, y)\right|^2, \tag{4}$$

where $\mathcal{F}^{-1}$ is the inverse Fourier transform, $h(x, y)$ is the inverse Fourier transform of the circle function (an Airy pattern whose profile is governed by $J_1$), and $J_1$ is the first-order Bessel function of the first kind.
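The forward model above can be sketched numerically. The following is a minimal simulation of a single low-resolution capture, assuming a square sensor grid and integer pixel shifts of the spectrum; the function name and parameter values are illustrative, not from the paper.

```python
import numpy as np

def capture_lowres(obj, wavelength, pixel_size, na, sin_tx, sin_ty):
    """Simulate one FPM capture: shift the object spectrum according to
    the illumination angle, crop it with the circular pupil of the
    objective, and record the resulting intensity."""
    n = obj.shape[0]
    # Spatial-frequency grid (cycles per metre), DC at the centre.
    f = np.fft.fftshift(np.fft.fftfreq(n, d=pixel_size))
    fx, fy = np.meshgrid(f, f)
    # Circular pupil of radius NA/lambda in frequency space.
    pupil = (fx**2 + fy**2) <= (na / wavelength) ** 2
    # Oblique illumination shifts the spectrum by sin(theta)/lambda.
    df = f[1] - f[0]
    sx = int(round(sin_tx / wavelength / df))
    sy = int(round(sin_ty / wavelength / df))
    spectrum = np.fft.fftshift(np.fft.fft2(obj))
    shifted = np.roll(np.roll(spectrum, -sy, axis=0), -sx, axis=1)
    # Low-pass the shifted spectrum and return to the image plane.
    field = np.fft.ifft2(np.fft.ifftshift(shifted * pupil))
    return np.abs(field) ** 2
```

For a purely real (amplitude-only) object, replacing (sin_tx, sin_ty) by its negative yields the complex-conjugate field, so the two intensity images coincide; this is exactly the symmetry exploited in the analysis below.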
In the case that the object is amplitude-only, i.e., $O(x, y) = A(x, y)$, Eq. (4) can be written as

$$I_{\theta_x,\theta_y}(x, y) = \left|\left[A(x, y)e^{j2\pi(x\sin\theta_x + y\sin\theta_y)/\lambda}\right] \otimes h(x, y)\right|^2. \tag{5}$$

Because $A(x, y)$ is real, the fields obtained at the symmetric angles $(\theta_x, \theta_y)$ and $(-\theta_x, -\theta_y)$ are complex conjugates of each other. From Eq. (5), it can be seen that the captured intensity image $I_{\theta_x,\theta_y}(x, y)$ equals $I_{-\theta_x,-\theta_y}(x, y)$ strictly.
In the case that the object is phase-only, i.e., $O(x, y) = \exp[j\varphi(x, y)]$, Eq. (4) can be written as

$$I_{\theta_x,\theta_y}(x, y) = \left|\left[e^{j\varphi(x, y)}e^{j2\pi(x\sin\theta_x + y\sin\theta_y)/\lambda}\right] \otimes h(x, y)\right|^2. \tag{6}$$

When the phase-only object is thin enough that $|2\pi(x\sin\theta_x/\lambda + y\sin\theta_y/\lambda)| \gg \varphi(x, y)$, Eq. (6) can be approximated as

$$I_{\theta_x,\theta_y}(x, y) \approx \left|e^{j2\pi(x\sin\theta_x + y\sin\theta_y)/\lambda} \otimes h(x, y)\right|^2. \tag{7}$$

From Eq. (7), we find that the assumption is better satisfied when $\theta_x$ and $\theta_y$ are larger, which means that the approximation holds more accurately for images captured at larger illumination angles than for those captured at smaller illumination angles.
In the case that the object is complex, Eq. (4) can be written as

$$I_{\theta_x,\theta_y}(x, y) = \left|\left[A(x, y)e^{j\varphi(x, y)}e^{j2\pi(x\sin\theta_x + y\sin\theta_y)/\lambda}\right] \otimes h(x, y)\right|^2. \tag{8}$$

Compared to the phase-only case, the amplitude $A(x, y)$ makes the difference between $I_{\theta_x,\theta_y}(x, y)$ and $I_{-\theta_x,-\theta_y}(x, y)$ complicated and hard to characterize analytically. From this theoretical analysis of the FPM imaging system, we reach a preliminary conclusion: images captured at circularly symmetric illumination angles have little intensity difference when the object is amplitude-only or phase-only. Considering that FPM uses only intensity images to reconstruct the wide-FOV, high-resolution image, we speculate that a reconstruction using half the number of low-resolution images can reach a resolution comparable to that using all of them.

Simulation verification
We performed two simulations to verify the above analysis. In both, we used a 4×, 0.1-NA microscope objective and a 15 × 15 LED array. The wavelength of the LED array was 630 nm, and the spacing between adjacent LEDs was 4 mm. The distance between the LED array and the sample was 110 mm. The camera sensor has 128 × 128 pixels with a pixel pitch of 6.5 µm.
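In this geometry, the illumination direction of each LED follows from simple trigonometry. A small helper, using the simulation parameters above (4 mm spacing, 110 mm distance); the function name is ours:

```python
import math

def led_sines(i, j, spacing_mm=4.0, distance_mm=110.0):
    """Direction sines (sin(theta_x), sin(theta_y)) of the plane wave
    arriving from the LED at grid offsets (i, j) relative to the
    central, on-axis LED of the array."""
    x = i * spacing_mm
    y = j * spacing_mm
    r = math.sqrt(x * x + y * y + distance_mm * distance_mm)
    return x / r, y / r
```

For the outermost LED of the 15 × 15 grid (offset 7), sin θx ≈ 0.25, which sets the largest spectrum shift u0 = f sin θx reached in the simulation.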
In the first simulation, we compared the images captured by a microscope under symmetric illumination angles. Figure 1 shows the amplitude and phase profiles of the object used in this simulation. The original object has 512 × 512 pixels, while the low-resolution images captured by the camera have 128 × 128 pixels. To keep the object thin, the range of the phase values was set to [0, 0.5π]. The object was placed at the focal plane of the microscope.
To compare the captured images, we plot the gray values along the horizontal center lines of the captured images and calculate the root-mean-square error (RMSE) between them. The images were compared in three cases: the object is amplitude-only, phase-only, or complex. Figure 2 shows the results; θ1 and θ2 are the illumination angles. Figure 2(a) shows the results when the object is amplitude-only: the RMSE is 0 for both θ1 and θ2, and the curves are exactly identical. Figure 2(b) shows the results when the object is phase-only: the RMSEs are 5.71 and 3.63 for θ1 and θ2 respectively, and the difference between the curves becomes smaller as the illumination angle θ1 increases. Figure 2(c) shows the results for the complex object: the RMSEs are 28.01 and 5.65 for θ1 and θ2 respectively, and there is an obvious difference between the curves at both illumination angles. The simulation results coincide with the analysis in section 2. Considering both the numerical analysis and the simulations, we conclude that the images captured under symmetric illumination angles are exactly the same when the sample is amplitude-only, almost the same when the sample is a thin phase-only object, and different when the object is complex.
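The RMSE metric used in this comparison is, under its usual definition, straightforward to compute; a minimal implementation of our own:

```python
import numpy as np

def rmse(a, b):
    """Root-mean-square error between two grey-level images of
    equal shape."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(np.sqrt(np.mean((a - b) ** 2)))
```

Identical images give an RMSE of exactly 0, matching the amplitude-only case in Fig. 2(a).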
The second simulation was performed to verify our speculation. The amplitude and phase profiles of the object used in the FPM simulation are the same as in the first simulation. The results are plotted in Fig. 3. In Figs. 3(a-c), the left and center columns show the amplitude and phase reconstructed with all (15 × 15) and half (15 × 8) of the camera images, respectively. The right column shows the intensity comparison between the left and center columns along the middle horizontal cross-section. Figures 3(a) and (b) show that the shapes and distributions of the amplitude and phase profiles are similar. We also performed a simulation with a complex object. From the results shown in Fig. 3(c), we can observe an obvious difference between using all and half of the captured images. The reason is that, for a complex object, the images captured at symmetric illumination angles are substantially different. Besides, during the iterative phase retrieval process of FPM, there is some crosstalk between the amplitude and phase reconstructions. From the simulation results, we observe that the reconstructions using half of the captured images show no obvious resolution degradation in the amplitude-only and phase-only cases.
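The recovery procedure used in such simulations is, in essence, the alternating-projection loop of the original FPM [1]. A compact sketch, assuming square grids, a known pupil, noiseless data, and the normal-incidence image sitting in the middle of the stack; the grid sizes, LED ordering, and initialization are illustrative choices of ours, not the paper's exact implementation:

```python
import numpy as np

def fpm_reconstruct(stack, centers, pupil, n_hi, n_iter=30):
    """Sequential FPM phase retrieval: for each LED, enforce the
    measured low-resolution modulus on the estimated field, then write
    the corrected patch back into the corresponding sub-region of the
    high-resolution spectrum."""
    n_lo = stack[0].shape[0]
    h = n_lo // 2
    c = n_hi // 2
    # Initialise the high-res spectrum from the normal-incidence image,
    # assumed to be the middle element of `stack`.
    S = np.zeros((n_hi, n_hi), complex)
    ic = len(stack) // 2
    S[c - h:c + h, c - h:c + h] = np.fft.fftshift(
        np.fft.fft2(np.sqrt(stack[ic]))) * pupil
    for _ in range(n_iter):
        for meas, (cy, cx) in zip(stack, centers):
            sl = (slice(cy - h, cy + h), slice(cx - h, cx + h))
            # Estimated low-res field for this illumination angle.
            field = np.fft.ifft2(np.fft.ifftshift(S[sl] * pupil))
            # Replace its modulus with the measurement, keep the phase.
            field = np.sqrt(meas) * np.exp(1j * np.angle(field))
            # Update the spectrum only inside the pupil support.
            S[sl] = S[sl] * (1 - pupil) + np.fft.fftshift(
                np.fft.fft2(field)) * pupil
    return np.fft.ifft2(np.fft.ifftshift(S))
```

Reconstructing from half of the images simply means passing shorter `stack` and `centers` lists; per the analysis of section 2, for amplitude-only or thin phase-only samples, dropping one LED of each symmetric pair removes largely redundant measurements.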

Experimental verification
We built an FPM system by replacing the light source of an Olympus IX 73 inverted microscope with a programmable LED array (32 × 32 LEDs, 4 mm spacing) controlled by an Arduino. The LEDs have a central wavelength of 629 nm and a bandwidth of 20 nm.
Two experiments were performed to verify our speculation. The first used a USAF-1951 resolution target, which can be regarded as an amplitude-only object; the second used a microscopic sample that carries phase information. Both samples were imaged with a 4×, 0.1-NA microscope objective and an sCMOS camera (PCO.edge 4.2) with a pixel size of 6.5 µm. In the first experiment, the central 15 × 15 LEDs of the array were lit sequentially, so 225 low-resolution images were captured. The exposure time was 600 ms per acquisition, and the distance between the LED array and the sample was 108 mm. The expected synthesized NA of the imaging system is 0.45. The experimental results are shown in Fig. 4. Figure 4(b) is one segment of the original captured images. Figure 4(c) shows the result reconstructed from 225 images, and Fig. 4(d) the result reconstructed from 120 images. Figure 4(e) shows the intensity profiles along the vertical red lines in Figs. 4(c) and 4(d). The two curves have almost the same shape, but the gray values of the reconstruction using all of the acquired images are higher than those using half of them. This means that, apart from image contrast, the resolutions of Figs. 4(c) and 4(d) show no significant difference. In the second experiment, the central 17 × 17 LEDs of the array were lit sequentially, so 289 low-resolution images were captured. The distance between the LED array and the sample was 113.5 mm, and the expected synthesized NA of the imaging system is 0.48. The other experimental settings were the same as in the first experiment. The results are shown in Fig. 5. Figure 5(b) is one segment of the original captured images. Figures 5(c) and 5(f) show the amplitude and phase reconstructed from 289 images; Figs. 5(d) and 5(g) show the amplitude and phase reconstructed from 153 images.
Figure 5(e) shows the intensity profiles along the horizontal red lines in Figs. 5(c) and 5(d), and Fig. 5(h) shows those along the horizontal red lines in Figs. 5(f) and 5(g). Again, the reconstructions show no obvious resolution degradation in either the amplitude or the phase; only a reduction of image contrast can be observed. Because the real object's amplitude and phase are similar in structure, the object behaves like a phase-only object. Therefore, there is no significant crosstalk between the amplitude and phase.
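As a quick sanity check on the synthesized NA values quoted above, the expected value is the objective NA plus the largest illumination direction sine, reached by the corner LED of the lit sub-array; a rough estimate assuming an on-axis sample, with a function name of our own:

```python
import math

def synthesized_na(na_obj, n_side, spacing_mm, distance_mm):
    """Synthesized NA = objective NA + largest illumination direction
    sine, reached by the corner LED of the n_side x n_side sub-array."""
    half = (n_side - 1) / 2 * spacing_mm   # centre-to-edge offset
    corner = math.hypot(half, half)        # centre-to-corner offset
    return na_obj + corner / math.hypot(corner, distance_mm)
```

With the first experiment's numbers (15 × 15 LEDs, 4 mm spacing, 108 mm distance, 0.1-NA objective) this gives ≈ 0.44, consistent with the quoted 0.45; the second experiment (17 × 17 LEDs, 113.5 mm) gives ≈ 0.47, close to the quoted 0.48.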

Conclusion
Under the assumption of thin biological samples in FPM, both the theoretical analysis and the simulations of a microscopic imaging system have shown that images captured at circularly symmetric illumination angles have little intensity difference when the sample is amplitude-only or phase-only; therefore, the corresponding FPM reconstructions show insignificant differences. In the FPM experiments, both an amplitude-only and a complex object were used for verification. The reconstructions showed no obvious resolution degradation between using the full and half sets of images; only a reduction of image contrast was observed. The degradation is also very small when the phase and amplitude of the object have similar distributions. Since many biological samples satisfy this condition, half the number of captured images can be used in these cases, roughly halving the time costs of both image capture and computational processing.